Today, machine learning engineers and data scientists routinely prototype an ML model, train it on a training dataset, and evaluate it on a held-out test set using a mean accuracy measure. Putting such a model into production, however, raises the bar on the model’s durability and dependability: models that appear to work well in the prototype phase can run into a variety of failure modes once they face real-world data.
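For concreteness, a minimal sketch of that prototype-and-evaluate loop might look like the following, using scikit-learn on a synthetic dataset (the model, data, and metric are purely illustrative, not a prescribed setup):

```python
# A minimal sketch of the typical prototype workflow: train on one split,
# report a single aggregate accuracy number on the other. Purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A single aggregate number is usually all that gets reported at this stage.
print("mean test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```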
There are three primary sources of failure in machine learning models that data scientists frequently overlook while prototyping, and they can come back to haunt you in production: performance bias failures, model failures, and robustness failures.
Performance bias failures
The first failure mode is performance bias, which data scientists seldom evaluate during their review workflow.
Aggregate metrics such as mean accuracy can hide a model’s bias for or against specific subgroups. There is a significant body of literature examining the many sources of bias, ranging from data analysis to model architecture. We’ll briefly go over a few of the consequences of bias (a sketch of a per-subgroup check follows the list below):
- Hidden long-term implications for certain subgroups. If a model performs poorly for a fraction of newly acquired users, the underperformance compounds over time, potentially resulting in lower long-term engagement and retention.
- Unexpected discrepancies among data subsets. Underperformance on a particular slice of the data may appear inexplicable and surprising. It’s critical to surface it to the developer so that they can investigate the reason for the disparity.
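One way to surface such discrepancies is to report the metric per subgroup rather than in aggregate. Below is a hedged sketch of such a slice-based check; the DataFrame `eval_df` and the column names `label`, `prediction`, and `user_segment` are hypothetical placeholders:

```python
# A sketch of a per-subgroup evaluation. The column names and the DataFrame
# `eval_df` are hypothetical; substitute your own slice keys and predictions.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_slice(eval_df: pd.DataFrame, slice_col: str) -> pd.Series:
    """Compute accuracy separately for each value of `slice_col`,
    so that underperforming subgroups are not hidden by the mean."""
    return (
        eval_df
        .groupby(slice_col)
        .apply(lambda g: accuracy_score(g["label"], g["prediction"]))
        .sort_values()
    )

# Example usage with a hypothetical evaluation frame holding labels,
# model predictions, and a subgroup identifier for each test example:
# report = accuracy_by_slice(eval_df, "user_segment")
# print(report)  # the lowest rows reveal subgroups the aggregate metric hides
```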
Model failures
Any organization that wants to use machine learning in a commercial setting must first set up a working data pipeline. What is unique to a machine learning pipeline is that an upstream change in data processing can have hidden, detrimental downstream effects on model performance and future data collection.
This can occur in several ways. A data engineer might inadvertently change a feature’s distribution by changing how it’s computed, or introduce a bug that causes features to be represented as NaNs. A pipeline adjustment might cause your feature values to be handled as strings rather than floats.
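A lightweight guard against this kind of silent schema drift is to assert the expected types and null-freedom of the feature frame before it reaches the model. The sketch below assumes a pandas pipeline; the column names and expected dtypes are illustrative:

```python
# A sketch of catching silent schema drift before scoring. The feature names
# and expected dtypes are hypothetical; adapt them to your own pipeline.
import numpy as np
import pandas as pd

EXPECTED_DTYPES = {"age": np.float64, "income": np.float64}

def check_feature_frame(features: pd.DataFrame) -> None:
    """Fail loudly if an upstream change altered types or introduced NaNs."""
    for col, dtype in EXPECTED_DTYPES.items():
        if col not in features.columns:
            raise ValueError(f"missing expected feature column: {col}")
        if features[col].dtype != dtype:
            raise TypeError(f"{col} arrived as {features[col].dtype}, expected {dtype}")
        if features[col].isna().any():
            raise ValueError(f"{col} contains NaNs after an upstream change")

# e.g. a pipeline bug that serializes numbers as strings would be caught here:
# check_feature_frame(pd.DataFrame({"age": ["34", "27"], "income": [50_000.0, np.nan]}))
```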
End users may also provide nonsensical or erroneous values, which then flow into the features.
We’ve noticed that common machine learning tools can’t check out of the box whether the data is “valid” before producing a result. Validity can come in a variety of shapes and sizes:
- Numeric values that fall within a reasonable range.
- Predictions that are unaffected by type conversions.
- Inputs that contain every feature the model expects.
When an input feature violates these validity constraints, machine learning tools often behave inconsistently and fail to warn the user.
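As an illustration, the three validity checks above could be wired in front of the model along the following lines; the feature ranges, expected feature count, and the `model`/`X` objects are placeholders rather than any particular library’s API:

```python
# A sketch of pre-prediction validity checks. The ranges and feature count
# are illustrative placeholders, not values from a real model.
import numpy as np

FEATURE_RANGES = {0: (0.0, 120.0), 1: (0.0, 1e7)}   # column index -> (min, max)
REQUIRED_N_FEATURES = 20                             # what the model was trained on

def validate_inputs(model, X: np.ndarray) -> None:
    # 1. Numeric values within a reasonable range.
    for col, (lo, hi) in FEATURE_RANGES.items():
        if ((X[:, col] < lo) | (X[:, col] > hi)).any():
            raise ValueError(f"feature {col} has values outside [{lo}, {hi}]")

    # 2. Predictions should not change under a benign type conversion.
    preds_64 = model.predict(X.astype(np.float64))
    preds_32 = model.predict(X.astype(np.float32))
    if not np.array_equal(preds_64, preds_32):
        raise AssertionError("predictions changed under float32/float64 conversion")

    # 3. The model should not silently accept inputs with missing features.
    if X.shape[1] != REQUIRED_N_FEATURES:
        raise ValueError(f"expected {REQUIRED_N_FEATURES} features, got {X.shape[1]}")
```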
Robustness failures
Finally, the last failure mode is robustness failure. These failures aren’t only an issue in adversarial settings; the fact that bad actors can exploit them is just one side of the story. More generally, they reflect a lack of robustness in the model.
There are several reasons to be concerned about such failures. First, perturbations cause your model to make mistakes: there are regions of the input space where your model does not produce the intended output. At best, this results in a disappointing experience and a loss of faith in the system; at worst, it can be abused by an external attacker. Furthermore, robustness failures are often associated with abrupt changes in your output space.
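A simple way to get a first signal on this is to measure how often predictions flip under small input perturbations. The sketch below assumes numeric features and uses an arbitrary Gaussian noise scale; it is a smoke test, not a formal robustness guarantee:

```python
# A minimal perturbation check; the noise scale and number of trials are
# arbitrary illustrative choices, not a standard benchmark.
import numpy as np

def perturbation_flip_rate(model, X: np.ndarray, noise_scale: float = 0.01,
                           n_trials: int = 10, seed: int = 0) -> float:
    """Fraction of test points whose predicted label flips under small
    Gaussian perturbations of the input."""
    rng = np.random.default_rng(seed)
    base = model.predict(X)
    flipped = np.zeros(len(X), dtype=bool)
    for _ in range(n_trials):
        noisy = X + rng.normal(scale=noise_scale, size=X.shape)
        flipped |= (model.predict(noisy) != base)
    return flipped.mean()

# A high flip rate suggests regions of the input space where small, plausible
# perturbations push the model across a decision boundary:
# print("flip rate under noise:", perturbation_flip_rate(model, X_test))
```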
In a nutshell, performance bias failures, model failures, and robustness failures are three major factors to examine when deciding whether a machine learning model is ready for production. In general, data science tools and practices do not take these shortcomings into account.