As part of the Steam Summer Festival, we released the first playable demo of Unspottable.
It was very exciting to see players from all over the world playing our game!

You can see the video describing the analytics we gathered and are analysing to learn and improve the game here:

We wrote this post in case it can help other small studios get more from analytics data.
We are in no way experts on this and are using tools we are familiar with, but there may be much better options out there!

If you have any feedback on better tools or data points we should gather, please do reach out on Twitter or Discord! :)

Data gathering in Unity

We want to gather in-game data for the events we want to track.
Unity provides tools to gather both core and custom data:

+ First, make sure the Analytics service is enabled in your project.

+ You can then use the `Analytics Event Tracker` provided by Unity to trigger events through the editor, or send your own custom JSON from code.
Unity limits the size of each event, so we broke our data down into a few different events sent at the end of each game or when the app is closed:
    AnalyticsEvent.Custom("EndOfGame", new Dictionary<string, object>
    {
        { "botsPunched", 6 },
        { "playersCount", 3 }
    });

+ Unity adds metadata on top of your custom events: timestamp, appId, userId, country ...
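To make the later steps concrete, here is a sketch of what one raw export record might look like, as a line of JSON. The field names are assumptions based on what our indexer reads (ts and custom_params appear later in this post); check your own export file for the exact layout:

```python
import json

# Hypothetical shape of one raw event line from the Unity export.
# Note that custom parameter values arrive as strings, which is why
# the indexer below casts the numeric ones.
raw_line = '''{"ts": 1593561600000, "appid": "my-app-id", "userid": "abc123",
"country": "FR", "name": "EndOfGame",
"custom_params": {"botsPunched": "6", "playersCount": "3", "gameDuration": "42.5"}}'''

record = json.loads(raw_line)
print(record["custom_params"])
```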

Getting data from the Unity dashboard

For now we manually downloaded the raw data for the week of the festival from the Unity analytics dashboard.

But this should be easy to automate using the provided REST API to get a daily dump of the analytics:
That's an improvement we'll add when we have demos available publicly for longer periods of time.
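As a starting point for that automation, here is a minimal sketch of a daily export job. The endpoint URL, auth header, and payload shape are placeholders, not the real API; check the Unity Analytics raw data export documentation for the actual parameters:

```python
import datetime

# Placeholder endpoint -- NOT the real Unity Analytics URL
EXPORT_URL = "https://example.invalid/rawdataexport"

def daily_range(day):
    """ISO timestamps covering a single day, for the export request."""
    start = datetime.datetime.combine(day, datetime.time.min)
    return start.isoformat(), (start + datetime.timedelta(days=1)).isoformat()

def build_export_request(day, api_key):
    """Assemble the pieces of a hypothetical one-day export request."""
    start, end = daily_range(day)
    return {
        "url": EXPORT_URL,
        "headers": {"Authorization": "Basic " + api_key},
        "json": {"startDate": start, "endDate": end, "format": "json"},
    }
```

A cron job could then call this once a day and feed the downloaded file straight into the indexer below.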

Elasticsearch Installation and Indexing

Elasticsearch is available for free and easy to set up:

Here we are running everything locally, but there are plenty of resources for hosted services and deployments.
Download the ES package and run the executable; ES should then be available on localhost:9200
                                     > wget
                                     > tar -xvf elasticsearch-7.8.0-darwin-x86_64.tar.gz
                                     > elasticsearch-7.8.0/bin/elasticsearch
The next thing we want to do is index our analytics file in Elasticsearch.
ES is a powerful full text search engine so there is a lot to be said about the configuration and index mappings.
For now we will keep it simple, use the ES auto-mapping feature, and just make sure that the fields we want to do calculations on are in the correct formats.
Here is an example of a basic indexer written in Python (saved as indexer.py, for example). It requires the elasticsearch Python package:
pip install elasticsearch
    import sys
    import json
    from elasticsearch import Elasticsearch

    es = Elasticsearch()

    # Fields to cast so ES auto-maps them as numbers, not strings
    int_fields = ['botsPunched', 'playersCount']
    float_fields = ['gameDuration']

    analytics_file = sys.argv[1]

    with open(analytics_file, "r") as a_file:
        for record in a_file:
            json_record = json.loads(record)
            custom_params = json_record.get('custom_params')

            if custom_params:
                for field in int_fields:
                    if field in custom_params:
                        custom_params[field] = int(custom_params[field])

                for field in float_fields:
                    if field in custom_params:
                        custom_params[field] = float(custom_params[field])

            # Index every record, not just the last one
            data = json.dumps(json_record)
            res = es.index(index="my-index", body=data)
We are just making sure that the integer and float fields are in the right format for ES to guess the mapping, so we can run calculations on them (avg, sum, max ...). If this is running smoothly, once the indexer is done with your file you should see documents in your Elasticsearch index:
                                        > python indexer.py my_analytics_file.json
                                        > curl -XGET http://localhost:9200/my-index/_search

Kibana Installation and Visualizations

Kibana is a great tool for analysing and visualising log data. Install it the same way you installed Elasticsearch:
                                     > wget
                                     > tar -xvf kibana-7.8.0-darwin-x86_64.tar.gz
                                     > kibana-7.8.0/bin/kibana

It should be accessible on localhost:5601

The last thing to do before seeing all your data is creating an index pattern that matches the Elasticsearch index.

In the Kibana interface: > Management > Index patterns > Create index pattern

Kibana should guess the type of all the document fields automatically. One thing to watch out for: since we did not index the ts (timestamp) field as a datetime, Kibana won't be able to run date queries on it.
To fix that, we can add a scripted field of type date to the index pattern, defined as: doc['ts'].value

You can now query all your analytics data in Kibana. I would suggest first heading to the Discover tab and running a few queries to familiarise yourself with the Query Language.
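A couple of starter queries to try in the Discover search bar (the field names here are illustrative, based on our EndOfGame event; check the field list in Discover for the exact names in your own index):

```
custom_params.playersCount >= 3
custom_params.botsPunched > 0 and country : "FR"
```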

Then have a go at creating visualizations as seen in the video at the top of this post.

It takes a bit of practice and knowledge of the data model to get your head around how to select the right fields and operators (Sum, Avg, Max ...) but you should be able to make cool visualizations and dashboards to keep an eye on all your key data points!