Data quality is fundamental for all ML products. If the data suffers from substantial quality issues, the algorithm will learn the wrong things, so we need to monitor that the values we receive for a given feature are valid.
Some common data quality issues we see are listed below (a minimal sketch of checks for them follows the list):

- Missing values - fields that are empty or null.
- Out-of-bound values - e.g. negative values, or values far below or above the plausible range.
- Default values - e.g. fields set to zero, or dates set to a system default (1 Jan 1900).
- Format changes - e.g. a field which has always been an integer becomes a float.
- Changes in identifiers for categorical fields - e.g. GB becomes UK for a country identifier.
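As a rough illustration, here is a minimal sketch of such checks, assuming the data arrives as a pandas DataFrame. The column names ("age", "country", "signup_date"), the bounds, and the 1 Jan 1900 sentinel are all hypothetical and would come from your own schema:

```python
import pandas as pd

SENTINEL_DATE = pd.Timestamp("1900-01-01")  # suspected default value
VALID_COUNTRIES = {"GB", "US", "FR"}        # expected categorical identifiers

def check_quality(df: pd.DataFrame) -> pd.Series:
    """Count, per check, how many records fail it."""
    checks = {
        "missing_age": df["age"].isna(),
        "age_out_of_bounds": ~df["age"].between(0, 120),
        "default_signup_date": df["signup_date"] == SENTINEL_DATE,
        "unknown_country": ~df["country"].isin(VALID_COUNTRIES),
    }
    # Format changes show up on the column dtype rather than per record.
    if not pd.api.types.is_integer_dtype(df["age"]):
        print(f"warning: age dtype changed to {df['age'].dtype}")
    return pd.Series({name: mask.sum() for name, mask in checks.items()})
```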
When training or retraining you need a strategy for handling data records with quality issues. The simplest approach is to filter out all records which do not meet your quality criteria, but this may remove important records. If you take this approach you should certainly inspect what is being discarded and, where possible, find ways to resolve the underlying issues.
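A sketch of that filter-but-audit approach, assuming a boolean mask built from checks like those above (the function name and the summary output are our own invention, not a fixed recipe):

```python
import pandas as pd

def filter_with_audit(df: pd.DataFrame, bad_mask: pd.Series):
    """Drop failing records, but keep and summarise the discards."""
    discarded, kept = df[bad_mask], df[~bad_mask]
    # Make the loss visible rather than silent.
    print(f"discarded {len(discarded)} of {len(df)} records "
          f"({len(discarded) / len(df):.1%})")
    return kept, discarded  # retain the discards for later investigation

# Example usage, reusing the hypothetical columns from the earlier sketch:
# kept, discarded = filter_with_audit(
#     df, df["age"].isna() | ~df["country"].isin(VALID_COUNTRIES))
```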
Other approaches are possible - for missing or incorrect fields we often follow the standard practice of imputing missing or clearly incorrect values. Where we impute a value we typically record that fact in an additional indicator column.
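For example, median imputation with an indicator column might look like this (the "age" column and the `_imputed` suffix are assumptions for illustration, not a fixed convention):

```python
import pandas as pd

def impute_with_flag(df: pd.DataFrame, col: str) -> pd.DataFrame:
    """Fill missing values in `col` and record which values were filled."""
    out = df.copy()
    out[f"{col}_imputed"] = out[col].isna()       # indicator column
    out[col] = out[col].fillna(out[col].median()) # median imputation
    return out

# df = impute_with_flag(df, "age")
```

Keeping the indicator column lets the model, and anyone debugging it later, distinguish genuinely observed values from imputed ones.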
In cases where you can’t disentangle a data error from a real entry (e.g. data sets in which 1 Jan 1900 could be a genuine date), you may have to accept losing some good data points, or set the ambiguous records aside and investigate them individually.
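One way to do that is to route ambiguous records to a manual review queue rather than dropping them outright; a sketch, again assuming a hypothetical "signup_date" column and a 1 Jan 1900 sentinel:

```python
import pandas as pd

SENTINEL_DATE = pd.Timestamp("1900-01-01")

def split_for_review(df: pd.DataFrame, date_col: str = "signup_date"):
    """Separate records that could be either real or defaulted."""
    ambiguous = df[date_col] == SENTINEL_DATE  # can't tell error from entry
    return df[~ambiguous], df[ambiguous]       # (clean, needs human review)
```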