
How to interpret machine learning results and models?

Randall Hendricks

Interpreting machine learning models ranges from relatively easy (e.g., a linear regression model) to comparatively hard (e.g., deep learning architectures like BERT), depending on the techniques and models used. Fortunately, in recent years a number of tools have emerged that facilitate the interpretation of machine learning results. Common methods for examining results and model interpretability include:
– Feature Importance (see the sketch after this list)

– Gradient Visualization

– Grad-CAM
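
As a quick illustration of the first method, here is a minimal feature-importance sketch using scikit-learn's built-in impurity-based importances. The iris dataset and random forest are my own choices for the example, not prescribed by any particular package:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Impurity-based importance: how much each feature reduces node impurity
# across the trees of the forest.
for name, score in zip(data.feature_names, clf.feature_importances_):
    print(f"{name}: {score:.3f}")
```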
Let’s take a look at some packages and tools that help us decipher even deep learning implementations:

1. ELI5

An acronym for “Explain Like I’m 5,” this package produces outputs in that same spirit. This Python-based package has two modes:
a.) Global interpretation looks at the model parameters and tries to figure out how the model behaves for a given change in its inputs.
b.) Local interpretation looks at each individual prediction and identifies which features led to a specific result or output (both modes are shown in the sketch below).
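
Here is a minimal sketch of both modes, assuming a scikit-learn logistic regression on the iris dataset (my choice of model and data for illustration):

```python
import eli5
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Global interpretation: inspect the weights the model learned overall.
print(eli5.format_as_text(eli5.explain_weights(clf)))

# Local interpretation: explain a single prediction for one sample.
print(eli5.format_as_text(eli5.explain_prediction(clf, X[0])))
```

In a Jupyter notebook, eli5.show_weights and eli5.show_prediction render the same explanations as formatted HTML instead of plain text.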

2. LIME

Short for “Local Interpretable Model-Agnostic Explanations,” LIME explains why a specific prediction or result was produced for a given input. One of its biggest advantages is that it is model-agnostic, meaning it can interpret the results of any type of model.
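
A minimal sketch of a tabular explanation, assuming a random forest on the iris dataset (again, my own setup for illustration):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME only needs the model's prediction function, which is what makes it
# model-agnostic: any predict_proba-like callable works here.
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```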

3. SHAP

SHAP uses Shapley values, which aim to give insight into the individual result for each given sample. The library is particularly good at explaining predictions from tree-based algorithms, and it even takes dependent features into account when computing importance scores.
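
A minimal sketch using the tree-specific explainer, assuming a random forest regressor on the diabetes dataset (my choice for the example):

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# One row per sample, one column per feature: per-prediction contributions.
# summary_plot aggregates them into a global view of feature impact.
shap.summary_plot(shap_values, X)
```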

4. Yellowbrick

Yellowbrick is compatible with most scikit-learn models and doesn’t require any additional parameters to work with; we can use the same parameters we would for a scikit-learn machine learning model. It is built around the concept of Visualizers, which make it easy to visualize the features in our data down to individual data points.
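
A minimal Visualizer sketch, assuming a random forest on the wine dataset (my own setup; FeatureImportances is just one of many Visualizers Yellowbrick provides):

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from yellowbrick.model_selection import FeatureImportances

X, y = load_wine(return_X_y=True)
model = RandomForestClassifier(random_state=0)

# A Visualizer wraps the estimator and follows the familiar
# scikit-learn API: fit, then show.
viz = FeatureImportances(model)
viz.fit(X, y)
viz.show()
```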

5. Alibi

Alibi is an open-source library that also works on instance-wise (individual data point) explanations of results. Unlike the other implementations, it lets us choose from a set of available “explainers” (see the Anchors sketch after this list). Some of the available explainers are:

–  Anchors

–  CEM

– Kernel SHAP
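
A minimal sketch of the Anchors explainer, assuming a random forest on the iris dataset (my choice for illustration):

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Anchors searches for IF-THEN rules that "anchor" the prediction:
# as long as the rule holds, the prediction (almost) never changes.
explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(data.data)

explanation = explainer.explain(data.data[0])
print("Anchor:", explanation.anchor)
print("Precision:", explanation.precision)
```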
