Evaluate and monitor algorithms throughout their lifecycle
ML solutions differ from standard software delivery because, in addition to everything we monitor to ensure the software is working correctly, we also want to know that the algorithm is performing as expected. In machine learning, performance is inherently tied to the accuracy of the model. Which measure of accuracy is the right one is a non-trivial question, which we won't go into here except to say that the Data Scientists usually define an appropriate performance measure.
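As a hypothetical illustration (the function name, metrics, and data shapes below are our own assumptions, not prescribed by the text), a team working on a classification problem might compare several candidate metrics on a held-out set before agreeing on the one that matters, for example with scikit-learn:

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate_candidate_metrics(y_true, y_pred, y_scores):
    """Compute several common classification metrics so the team can
    compare them and agree on which one to track for this problem."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        # ROC AUC is computed from predicted probabilities/scores, not hard labels
        "roc_auc": roc_auc_score(y_true, y_scores),
    }
```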
The performance of the algorithm should be evaluated throughout its lifecycle:
During the development of the model - measuring how well different approaches work, and settling on the right way to measure performance, is an inherent part of initial algorithm development.
At initial release - when the model has reached an acceptable level of performance, this should be recorded as a baseline and the model can be released into production.
In production - the performance of the algorithm should be monitored throughout its lifetime to detect whether it has started performing badly as a result of data drift or concept drift; a sketch of such a check follows this list.
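As one possible sketch (the file path, threshold, and function names are illustrative assumptions, not part of the original text), the baseline recorded at release can later be compared against the same metric computed on recent production data, raising an alert when the gap grows too large:

```python
import json

BASELINE_PATH = "model_baseline.json"  # hypothetical location for the recorded baseline
ALERT_THRESHOLD = 0.05                 # hypothetical tolerated drop before alerting

def record_baseline(metric_name, value, path=BASELINE_PATH):
    """Store the agreed performance measure at initial release."""
    with open(path, "w") as f:
        json.dump({"metric": metric_name, "value": value}, f)

def check_production_performance(current_value, path=BASELINE_PATH):
    """Compare the metric computed on recent production data against the baseline.
    A sustained drop may indicate data drift or concept drift."""
    with open(path) as f:
        baseline = json.load(f)
    degradation = baseline["value"] - current_value
    if degradation > ALERT_THRESHOLD:
        # In a real system this would raise an alert in the monitoring stack
        print(f"ALERT: {baseline['metric']} dropped by {degradation:.3f} from baseline")
    return degradation
```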