
The Importance of Model Monitoring for Natural Language Processing

This blog post was written by Tonye Harry as part of the Deepchecks Community Blog. If you would like to contribute your own blog post, feel free to reach out to us via blog@deepchecks.com. We typically pay a symbolic fee for content that’s accepted by our reviewers.



Natural Language Processing (NLP), a field of Artificial Intelligence (AI), aims to enable computers to understand human language as it is written and spoken, with all its nuances. From simple language translators, it has evolved to power chatbots and voice assistants and to handle tasks like customer service, autocorrect, and email filtering. OpenAI's GPT-3 is being used by companies to build apps for use-cases like code and content generation, copywriting, dating app assistants, and hiring.

It is a powerful tool, but like all Machine Learning (ML) models, NLP models have to be monitored and consistently improved by data science teams to ensure optimal performance. If they are not, model decay will lead to poor results.

This article covers:

  • Why NLP models fail;
  • The importance of monitoring NLP models in production;
  • What to monitor in production; and
  • How to monitor NLP models in production.

Why NLP Models Fail

According to Gartner's research, only 53% of projects move from AI prototype or proof of concept (POC) to production, meaning almost half (47%) never make it. The lack of tools to manage production pipelines is the main reason companies find it hard to scale AI projects. Other reasons include:

    • Misalignment of Business Expectations and ML Objectives.

Not all business problems are automatically solved by ML; it is not magic. Business leaders sometimes get too enthusiastic about ML and expect it to improve or automate every process. Before an NLP project starts, all stakeholders should agree on which problem needs to be solved, the objectives, and the goals. A business might build a chatbot just because other businesses have one, expecting it to boost sales, but this can be a waste of resources if most customers prefer interacting with humans rather than chatbots when purchasing the business's products.

    • Data Issues.

Even after expectations are aligned, models can still fail early on when sourcing data to solve the problem. The data collected may not be adequate for the problem, or it may simply not be relevant. Data quality issues and data bias that plague a project at this stage make it all the more important for data science teams to clean things up before moving on with model development. Even in a straightforward NLP task, such as categorizing sentiments as “positive” or “negative,” achieving accurate labeling at scale can be challenging.

    • NLP Model Generalization.

After gathering quality, representative data, the next step is to train the NLP model so that it generalizes well to data it hasn't seen before. When the model fits the historical data too closely, it overfits; when it cannot capture the underlying patterns well enough, it underfits. These scenarios occur when a model is too complex or too simple, when there is too much variance or bias, when the training/validation/test splits are not representative of each other, or when there is data leakage. (A short sketch at the end of this list illustrates one way to spot these failure modes.)

    • ML Deployment Hurdles.

When an organization finally progresses to deployment, it has to deal with the resource demands of the project. These come in the form of the cost and proper use of the operational ML infrastructure needed to deploy, maintain, and monitor the model.
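
To make the generalization point above concrete, here is a minimal sketch that compares training and validation accuracy for a simple text classifier. The toy dataset and the gap/score thresholds are illustrative assumptions, not fixed rules.

# Minimal sketch: spotting over- or underfitting by comparing train and
# validation scores for a simple text classifier (toy data, illustrative thresholds).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = ["great product", "terrible support", "loved it", "would not recommend"] * 50
labels = [1, 0, 1, 0] * 50

X_train, X_val, y_train, y_val = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
val_acc = model.score(X_val, y_val)

if train_acc - val_acc > 0.10:          # large gap -> likely overfitting
    print(f"Possible overfitting: train={train_acc:.2f}, val={val_acc:.2f}")
elif train_acc < 0.70:                  # low score on both -> likely underfitting
    print(f"Possible underfitting: train={train_acc:.2f}, val={val_acc:.2f}")
else:
    print(f"Generalization looks reasonable: train={train_acc:.2f}, val={val_acc:.2f}")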

Model Monitoring for NLP

NLP is a difficult field. The models have to understand complex human languages, context, emotions, ambiguity, domain-specific language, and colloquialisms. Consider this statement:

“John ran the business down, but I can run it better.”

For humans, this statement is easy to understand since we are aware of its context and the meanings of the words. NLP models, on the other hand, might know the lexicon but not understand the context. This becomes a problem when dealing with large volumes of text data.

NLP is difficult because:

  • Computers still have a hard time learning about the world;
  • It is challenging for models to cover all the different ways concepts can be explained unless they have a specific definition or explanation; and
  • The data used to train NLP models has a level of bias since the data is created by humans.

Despite these challenges, organizations still create NLP models for their businesses to achieve their goals. To do this, data science teams use a monitoring tool to gain insight into the health of their models in production to maintain and improve them.

Models need to be monitored after deployment to ensure they function optimally, no matter the use-case. For NLP, context is crucial, and teams need to monitor how it changes. Data drift can reduce model performance, and bias in NLP can be difficult to resolve. These issues can cause real problems, depending on the scale at which the model is used. Monitoring NLP models in production is therefore necessary for businesses that want to gain market share in their industry.

Token-level analysis by a monitoring tool (Source)


What to Monitor in Production

Model performance degrades over time. If organizations are not proactive about correcting flaws in their models, customer dissatisfaction can lead to lost opportunities. Prioritize monitoring two aspects:

  • Drifts in Input or Output Distribution
  • Human Bias

Drifts in Input or Output Distribution

All ML models rely on datasets for training and testing. These datasets serve as a representation of the real world and, as the world evolves, either the input data in production changes or the relationship between inputs and predictions changes over time. These two situations correspond to data drift and concept drift, both of which can occur in production.

Monitoring NLP data drift and concept drift in production entails tracking and comparing the input and output distributions of production, training, and test data to see the level of drift, if any, and running statistical tests to check for concept drift. Data science teams might then need to adjust their models to account for changes in the linguistic patterns of their use-case. The ability to monitor these drifts reduces cost and errors.
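
As a generic illustration (not tied to any particular monitoring tool), the sketch below compares a simple text property between training and production with a two-sample Kolmogorov-Smirnov test, and compares predicted-class frequencies with a chi-square test. The corpora, the class counts, and the 0.05 significance level are illustrative assumptions.

# Minimal sketch: input drift via a text property (length) and output drift
# via predicted-label frequencies. Data and thresholds are illustrative.
import numpy as np
from scipy import stats

train_texts = ["the service was quick and friendly", "package arrived late"] * 100
prod_texts = ["pls fix asap!!", "ok thx"] * 100          # shorter, more informal messages

train_lengths = np.array([len(t.split()) for t in train_texts])
prod_lengths = np.array([len(t.split()) for t in prod_texts])

ks_stat, ks_p = stats.ks_2samp(train_lengths, prod_lengths)
if ks_p < 0.05:
    print(f"Input drift suspected in text length (KS={ks_stat:.2f}, p={ks_p:.3f})")

# Output drift: compare predicted-class counts from training vs. production.
train_pred_counts = np.array([120, 80])   # e.g. [negative, positive] at training time
prod_pred_counts = np.array([40, 160])    # the same classes in production

expected = train_pred_counts / train_pred_counts.sum() * prod_pred_counts.sum()
chi_stat, chi_p = stats.chisquare(prod_pred_counts, f_exp=expected)
if chi_p < 0.05:
    print(f"Output drift suspected in predictions (chi2={chi_stat:.2f}, p={chi_p:.3f})")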

Human Bias

There is bias in every dataset, and it is risky when algorithms adopt that bias while solving a problem. It can reinforce stereotypes in the solution and even raise ethical concerns in politics. Research conducted with GPT-3 showed that men are more strongly associated with occupations and competency than women. In another study, based on data collected from the internet about words associated with African Americans and white Americans, it was found that African Americans had more negative associations in word embeddings.

Algorithmic bias in NLP (source)

In the pre-processing stage, bias can be mitigated by addressing imbalances in the dataset used to train the NLP algorithm. This means making sure that the historical data is representative, and it can be complemented with adversarial training, mixing normal text data with adversarial inputs to increase the robustness and accuracy of the models. For post-processing (at the production level), earlier techniques required modifying the data or changing parts of the algorithm; more recent research looks at adjusting the output of a biased model to ensure that the final output is fair.
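
As a small illustration of the pre-processing step, the sketch below checks how labels are distributed across a hypothetical demographic attribute and upsamples the under-represented group. The column names and data are invented for the example.

# Minimal sketch: checking label balance across a demographic attribute before
# training, then upsampling the under-represented group. Data is illustrative.
import pandas as pd

df = pd.DataFrame({
    "text": ["resume text a", "resume text b", "resume text c", "resume text d"] * 25,
    "label": (["hire", "reject"] * 2) * 25,
    "group": (["group_a"] * 3 + ["group_b"]) * 25,   # hypothetical demographic attribute
})

# Share of each label within each group; large gaps hint at historical bias.
balance = df.groupby("group")["label"].value_counts(normalize=True).unstack()
print(balance)

# One simple mitigation: upsample the under-represented group before training.
counts = df["group"].value_counts()
minority = counts.idxmin()
upsampled = df[df["group"] == minority].sample(
    counts.max() - counts.min(), replace=True, random_state=0
)
df_balanced = pd.concat([df, upsampled], ignore_index=True)
print(df_balanced["group"].value_counts())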

How to Monitor NLP Models in Production

The aim of monitoring any model, especially NLP models, is to ensure that the model’s performance is of high quality. To do this, you need to:

  • Know when an NLP model fails;
  • Solve model failures;
  • Find the root cause; and
  • Act fast with automation for performance recovery.

Know When an NLP Model Fails

There are two ways data science teams can know if their models are failing:

Ground Truth

The term “ground truth” describes the actual, real-world answers to the problem an ML model is intended to solve, as captured by datasets relevant to the use-case. It allows teams to compare the accuracy of the trained NLP model against the real world.

Consider an NLP classification use-case where a labeled tweet dataset is used to train models to classify tweets about real disasters into categories like earthquake, fire, and injury, so that emergency services can respond quickly.

To manually check whether this product is performing well against ground truth, teams can use feedback from users (e.g., complaints), system signals (e.g., reviews, successful/unsuccessful API calls, latency), or hand labeling to confirm that the model is predicting the target accurately. Ground truth might not be the ultimate source of truth on model performance, so it is combined with monitoring tools. If quality feedback or access to ground truth is lacking, data pipelines and model insights can be used to detect changes in feature or label distributions.
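
For instance, once hand-labeled or user-reported ground truth becomes available for a sample of production traffic, it can be compared directly with the predictions the model served. A minimal sketch, with illustrative labels and an arbitrary alert threshold:

# Minimal sketch: comparing served predictions against collected ground truth.
# Labels and the threshold are illustrative.
from sklearn.metrics import accuracy_score, f1_score

served_predictions = ["fire", "earthquake", "injury", "fire", "earthquake"]
ground_truth_labels = ["fire", "injury", "injury", "fire", "earthquake"]

acc = accuracy_score(ground_truth_labels, served_predictions)
f1 = f1_score(ground_truth_labels, served_predictions, average="macro")

print(f"accuracy={acc:.2f}, macro-F1={f1:.2f}")
if f1 < 0.80:   # threshold agreed with stakeholders
    print("Performance below the agreed threshold -- investigate before retraining.")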

Observability

This is the capacity to gauge a system's internal conditions by looking at its outputs. With monitoring tools, teams can get to the root cause of issues with the NLP model using selected metrics (accuracy, F1 score, precision, etc.) and an alert feedback loop that detects problems and notifies them when something goes wrong. Details from ground truth can inform teams, but being able to see where the model is going wrong makes it easier to drill down to the problem.

Some tools, along with a line graph of performance over time, can show in which customer and market segments the model performs optimally in production. They can also monitor how different language models perform or drift, among other things.
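
A minimal sketch of such an alert feedback loop, assuming labeled production batches arrive periodically; the threshold and the notify() hook are placeholders for whatever alerting channel the team actually uses:

# Minimal sketch of an alert feedback loop: recompute a metric on each labeled
# production batch and raise an alert when it drops below a threshold.
from sklearn.metrics import f1_score

F1_THRESHOLD = 0.85

def notify(message: str) -> None:
    # Placeholder for a real alerting channel (email, Slack, PagerDuty, ...).
    print(f"[ALERT] {message}")

def check_batch(batch_id: str, y_true: list, y_pred: list) -> float:
    score = f1_score(y_true, y_pred, average="macro", zero_division=0)
    if score < F1_THRESHOLD:
        notify(f"Batch {batch_id}: macro-F1 dropped to {score:.2f}")
    return score

# Example: a healthy batch followed by a degraded one.
check_batch("week-1", ["pos", "neg", "pos", "neg"], ["pos", "neg", "pos", "neg"])
check_batch("week-2", ["pos", "neg", "pos", "neg"], ["neg", "neg", "neg", "neg"])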

Solving Model Failures

Knowing why your model fails makes it easier for teams to start solving the issues present in their ML system. This requires them to find the root cause and act fast so the model’s performance is not hindered.

Finding the Root Cause

NLP monitoring tools offer flexibility by allowing teams either to use classic visualizations like line and bar charts or to create custom visualizations that let them drill down and check for issues. For example, in a sentiment analysis use-case, a confusion matrix can be used to compare output predictions between training and production, providing insight into the performance and accuracy of the model.
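
A minimal sketch of that comparison, using scikit-learn and illustrative labels:

# Minimal sketch: comparing confusion matrices from training-time validation
# and from a labeled production sample in a sentiment-analysis setting.
from sklearn.metrics import confusion_matrix

labels = ["negative", "positive"]

# Validation set at training time.
val_true = ["positive", "negative", "positive", "negative", "positive"]
val_pred = ["positive", "negative", "positive", "negative", "positive"]

# A labeled sample of recent production traffic.
prod_true = ["positive", "negative", "positive", "negative", "positive"]
prod_pred = ["negative", "negative", "negative", "negative", "positive"]

print("training-time validation:\n", confusion_matrix(val_true, val_pred, labels=labels))
print("production sample:\n", confusion_matrix(prod_true, prod_pred, labels=labels))
# A jump in off-diagonal counts for one class points to where performance degraded.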

Depending on the NLP use-case, teams find their own ways to track their desired metrics. Commonly, they can:

  • Analyze training and production n-grams to see how they change between the two. Using unigrams or bigrams, teams can track the top 50 n-grams for different classes and their frequencies. With this, shifts between training and production n-grams can be seen clearly and undesirable text data can be filtered out (see the sketch after this list).
  • Analyze tokens with a token analyzer to see which parts of your text data are positively or negatively impacting the predictions and apply changes to see how the model responds.
  • Debug word embeddings by comparing embedded words in training and production.
  • Do not forget to debug your code. The problem might just be coming from somewhere in the codebase. Consider building different tests for your code and data; they may be the culprits.
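
As referenced above, here is a minimal sketch of the n-gram comparison. The corpora and the top-k cutoff are illustrative, and CountVectorizer stands in for whatever tokenization the team already uses.

# Minimal sketch: extracting top n-grams from training and production text
# and flagging n-grams that are prominent in production but unseen in training.
from sklearn.feature_extraction.text import CountVectorizer

def top_ngrams(texts, n=1, k=10):
    vectorizer = CountVectorizer(ngram_range=(n, n), stop_words="english")
    counts = vectorizer.fit_transform(texts)
    totals = counts.sum(axis=0).A1                      # total count per n-gram
    vocab = vectorizer.get_feature_names_out()
    ranked = sorted(zip(vocab, totals), key=lambda x: -x[1])
    return dict(ranked[:k])

train_texts = ["wildfire spreading near the highway", "earthquake damage reported downtown"]
prod_texts = ["huge fire emoji thread lol", "earthquake memes are trending"]

train_top = top_ngrams(train_texts, n=1, k=10)
prod_top = top_ngrams(prod_texts, n=1, k=10)

# N-grams prominent in production but unseen in training hint at a shift in language.
print("top training n-grams:", train_top)
print("top production n-grams:", prod_top)
print("new in production:", set(prod_top) - set(train_top))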

Acting Fast for Performance Recovery

Being proactive is key to preventing drift, bias, and other problems in production. Anticipate the problems you might face and gather knowledge about them so the team can take action before they happen. Time gaps between data collection and deployment should be reduced, since delays can create data drift. Feedback from users can take time to reach the team, and so can its implementation; this can be remedied by effective communication channels and collaboration, from stakeholders to data scientists, to find better ways to gather and implement feedback faster.


Also consider NLP tools like the deepchecks-nlp package, which can warn organizations about bad calls and minimize risk with validation and observability for NLP- and NLU-based systems.

# install the deepchecks NLP package

pip install deepchecks-nlp
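
A minimal usage sketch follows. It assumes the TextData object and built-in suites described in the deepchecks documentation; exact class, argument, and suite names may differ between releases, so check the docs for the version you install.

# Minimal usage sketch -- assumes the TextData object and built-in suites from
# the deepchecks documentation; names may differ between package versions.
from deepchecks.nlp import TextData
from deepchecks.nlp.suites import train_test_validation

train_ds = TextData(
    raw_text=["wildfire near the highway", "no damage reported"],
    label=["disaster", "not_disaster"],
    task_type="text_classification",
)
prod_ds = TextData(
    raw_text=["quake memes trending", "building collapsed downtown"],
    label=["not_disaster", "disaster"],
    task_type="text_classification",
)

# Run the built-in train/test validation suite (includes drift-related checks)
# and save the interactive report.
result = train_test_validation().run(train_ds, prod_ds)
result.save_as_html("nlp_validation_report.html")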

Conclusion

Monitoring NLP models ensures that they are making the right predictions and, in doing so, are solving relevant problems. Whether it is in healthcare, social media, or human resource management, NLP models need to be safeguarded from bias. One way to make this easier for data scientists is to continually improve the monitoring tools to visually detect problems and solve them as quickly as possible.

To explore all the checks and validations in Deepchecks, go try it yourself! Don’t forget to ⭐ their Github repo – it’s a big deal for open-source-led companies.
