How tracking metrics can help you build fact-based models
In March we launched tracking metrics in AskAnna. In this post, I want to discuss why we built this feature and how it can help you build fact-based models.
Tracking metrics is part of our philosophy that if you work in data analytics, you should be able to reproduce your results. When you make a summary of your data, this is relatively simple. But if we take the step to data modeling where we apply data science, ML or AI tools, it becomes challenging. To be able to reproduce a result, you need* to be in control of the:
- data used
- code version
- models applied
- model configuration
- variables set
- metrics
- artifacts
- results
- run environment
When you build a model and try out different feature selections, model configurations, thresholds, et cetera, you know how challenging it can be to keep track of what you did. Maybe you used an Excel file or version control tools like GitHub or DVC. It requires some discipline, but with these tools you can develop a workflow that enables you to reproduce results.
At AskAnna, we think it should be a no-brainer to have a way of working where you can reproduce what you have done. As part of the solution, we designed tracking metrics. Our goal is to make it easy to add metric tracking to your project. You can keep it simple, but it is also designed for more advanced use cases. For example, if you run multiple models or data configurations, you can use dictionaries and labels to track and tag your metrics (see the sketch below).
* Disclaimer: many analytics projects don’t require that you keep track of all of the above. But if you want to be able to fully reproduce your results, you should at least think about keeping track of some of this analytics metadata.
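To give an idea of what this looks like in practice, here is a minimal sketch in Python. It assumes the AskAnna SDK exposes `track_metric` and `track_metrics` helpers along these lines; check the documentation for the exact names and signatures in your version.

```python
# Minimal sketch: tracking a single metric and a set of labeled metrics.
# Assumes the AskAnna Python SDK provides track_metric / track_metrics
# (see https://docs.askanna.io/metrics/ for the exact API).
from askanna import track_metric, track_metrics

# Keep it simple: track one value for this run
track_metric("r2", 0.87)

# More advanced: track a dictionary of metrics and tag them with labels,
# for example when you run multiple models or data configurations
track_metrics(
    metrics={"mae": 12.3, "rmse": 18.9},
    label={"model": "random_forest", "dataset": "2021-Q1"},
)
```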
When should I track metrics?
So, now the feature to track metrics is available in AskAnna. When should you use it? First of all, you don’t have to. But if you work on data science models, you probably check metrics like:
- Mean Absolute Error
- Root Mean Squared Error
- Mean Absolute Percentage Error
- R-squared
- F-score
- Precision
If you review these kinds of metrics while creating your initial model, you can track them as metrics linked to the run that produced them. This way, you can open a run and see the metrics directly. You don’t have to check the run’s log or reopen your Python project to recalculate these statistics. The metrics are stored and available to check at any time in the future, and you can easily share them with your team.
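As an illustration, this is roughly what that could look like for a regression model evaluated with scikit-learn. The scikit-learn calls are standard; the `track_metric` call is assumed from the AskAnna SDK, as in the sketch above.

```python
# Sketch: compute common evaluation metrics with scikit-learn and
# attach them to the current run. The track_metric helper is assumed
# from the AskAnna SDK; see the docs for the exact API.
from sklearn.metrics import (
    mean_absolute_error,
    mean_absolute_percentage_error,
    mean_squared_error,
    r2_score,
)
from askanna import track_metric

def evaluate(y_true, y_pred):
    metrics = {
        "mae": mean_absolute_error(y_true, y_pred),
        "rmse": mean_squared_error(y_true, y_pred) ** 0.5,
        "mape": mean_absolute_percentage_error(y_true, y_pred),
        "r2": r2_score(y_true, y_pred),
    }
    for name, value in metrics.items():
        track_metric(name, value)
    return metrics
```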
When you work on a data science or AI model, with every run you try to improve something. You might change a threshold setting, the feature selection, the models used or something else. You are experimenting, tweaking and tuning until you find the optimal result. But has it ever happened that, at the end of the day, you knew you had a better configuration when you started the day? If you kept a proper record of your runs, it would be easy to find that configuration again. We designed AskAnna to support you in this.
A third example of when tracking metrics is useful is when you retrain your models frequently. For every training run, you can use the tracked metrics to review how the accuracy develops over time. You can even set thresholds to prevent updating your model if a metric is too low.
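A rough sketch of such a guard, assuming an accuracy score computed for the freshly trained model; `publish_model` and `MIN_ACCURACY` are hypothetical names for your own deployment step and limit.

```python
# Sketch: only publish the retrained model if its accuracy clears a threshold.
# track_metric is assumed from the AskAnna SDK; publish_model and
# MIN_ACCURACY are placeholders for your own deployment step and limit.
from askanna import track_metric

MIN_ACCURACY = 0.90  # hypothetical minimum you are willing to accept

def maybe_publish(model, accuracy, publish_model):
    track_metric("accuracy", accuracy)
    if accuracy < MIN_ACCURACY:
        # Keep the previous model; this run is still tracked for review
        print(f"Accuracy {accuracy:.3f} below {MIN_ACCURACY}, model not updated")
        return False
    publish_model(model)
    return True
```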
Analyze your runs
Now you have a structured way to collect metadata from the runs you do. This also makes a new kind of analysis possible. There are already packages available that automatically select the optimal model configuration for the available data. But did you ever try to find patterns in the runs you did? Analyzing your runs can help you find the features and models that result in higher accuracy. Maybe these insights can even help you tune your project further. Use the facts to your advantage.
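For example, if you export the tracked metrics of your runs, a few lines of pandas are enough to start looking for patterns. The file name and columns below are hypothetical; adapt them to however you export metrics from your project.

```python
# Sketch: look for patterns across runs using exported metric data.
# "runs_metrics.json" and its columns are hypothetical; adapt them to
# however you export metrics from your AskAnna project.
import pandas as pd

runs = pd.read_json("runs_metrics.json")  # one row per run

# Which feature set and model give the highest average accuracy?
summary = (
    runs.groupby(["feature_set", "model"])["accuracy"]
    .agg(["mean", "max", "count"])
    .sort_values("mean", ascending=False)
)
print(summary.head(10))
```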
Explain a change
When the business uses your data science model, you will probably have to deal with new data, updates of the model or other tasks related to maintaining the solution. It will also happen that after an update, or when using new data, the solution’s output changes: it gives an extreme forecast, the performance is worse than before, or something else happens that you didn’t expect. With the relevant metrics tracked, you can narrow down which change in the code affected your results.
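A small sketch of that idea: compare the metrics of the latest run against a baseline run and report what moved. The two dictionaries are placeholders for metrics you would retrieve from your tracked runs.

```python
# Sketch: compare tracked metrics of a new run against a baseline run.
# Both dictionaries are placeholders for metrics retrieved from AskAnna.
baseline = {"mae": 12.3, "rmse": 18.9, "r2": 0.87}
latest = {"mae": 15.1, "rmse": 24.2, "r2": 0.79}

for name in baseline:
    delta = latest[name] - baseline[name]
    print(f"{name}: {baseline[name]:.2f} -> {latest[name]:.2f} ({delta:+.2f})")
```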
Just try it out
I hope this post gave you some insight into how tracking metrics can help you in your data analytics projects and why we built this feature. It’s also something you just need to try out. If you are curious about how it works in AskAnna, you can read more about tracking metrics in our documentation: https://docs.askanna.io/metrics/
Let’s make it easier
With version 1, we laid the foundation for how you can track metrics. But we would also like to collaborate with you. For example, if you use scikit-learn or TensorFlow, it would be more efficient to track the relevant metrics automatically. In the future, we want to make this possible. If you want it “now”, we invite you to help extend the AskAnna Python SDK. You can find our project on GitLab and GitHub.
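To give a rough idea of what such an extension could look like, here is a sketch of a helper that takes a fitted scikit-learn classifier and tracks a few standard metrics in one call. The name `track_classifier_metrics` is made up, and the `track_metric` signature is an assumption; the real integration would live in the SDK.

```python
# Sketch of a possible helper: automatically track standard classification
# metrics for a fitted scikit-learn model. track_classifier_metrics is a
# made-up name; track_metric (and its label argument) is assumed from the
# AskAnna SDK, so check the docs for the exact API.
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from askanna import track_metric

def track_classifier_metrics(model, X_test, y_test, label=None):
    y_pred = model.predict(X_test)
    metrics = {
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred, average="weighted"),
        "recall": recall_score(y_test, y_pred, average="weighted"),
        "f1": f1_score(y_test, y_pred, average="weighted"),
    }
    for name, value in metrics.items():
        track_metric(name, value, label=label)
    return metrics
```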
And we would always love to hear your ideas, whether and how you use it, or what we should improve. Simply send us an email: [email protected]