Local Interpretable Model-Agnostic Explanations (LIME)

What is LIME?

LIME is a technique for explaining the predictions of any black-box classifier, such as a neural network, decision tree, or support vector machine. It is independent of the specific architecture of an ML model, hence the name “model-agnostic.”

  • The goal of LIME is to imitate the behavior of the black-box classifier by training a simple, interpretable model on a localized subset of the input data.

To do this, LIME trains a linear model on perturbed versions of the input data, where the perturbations are chosen to stay close to the original data. The weights of this linear model are then used to draw attention to the aspects of the input data that were most influential in the black-box classifier’s prediction.
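The perturbation step can be sketched in a few lines. This is a minimal illustration, not the `lime` library’s API; the `perturb` helper and the noise scale are assumptions for a tabular instance of numeric features:

```python
import random

random.seed(0)

def perturb(instance, n_samples=5, scale=0.1):
    """Generate perturbed copies of `instance` by adding small Gaussian noise,
    so each sample stays close to the original data point."""
    return [
        [x + random.gauss(0, scale) for x in instance]
        for _ in range(n_samples)
    ]

instance = [1.0, 2.0, 3.0]
samples = perturb(instance)
print(len(samples), len(samples[0]))  # 5 samples, each with 3 features
```

Real implementations tailor the perturbation to the data type (e.g., toggling super-pixels for images or dropping words for text), but the idea is the same: sample a neighborhood around the instance.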


Localized linear regression (LLR) is an interpretable model that can be used in LIME to locally approximate the black-box model’s decision boundary.

Concretely, LLR fits a linear model to the perturbed examples in the vicinity of the instance being explained. The linear model is trained on the features of the perturbed examples to predict their class, and its weights assign each feature of the input data a relevance score. Features with high scores are the ones the black-box classifier relied on most for its prediction.

Because LLR is a simple, interpretable model, it can approximate the black-box model’s decision boundary in a small region of the input space. By emphasizing the elements of the input data that matter most to the black-box classifier’s prediction, LIME produces explanations that are straightforward to follow.
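The local fit can be illustrated with a weighted least-squares line. In this sketch (the function names and the `tanh` “black box” are assumptions), samples near the instance at x = 0 get high proximity weight, so the fitted slope recovers the model’s local behavior there:

```python
import math

def weighted_linear_fit(xs, ys, weights):
    """Closed-form weighted least squares for y ≈ slope * x + intercept."""
    sw = sum(weights)
    xm = sum(w * x for w, x in zip(weights, xs)) / sw   # weighted mean of x
    ym = sum(w * y for w, y in zip(weights, ys)) / sw   # weighted mean of y
    num = sum(w * (x - xm) * (y - ym) for w, x, y in zip(weights, xs, ys))
    den = sum(w * (x - xm) ** 2 for w, x in zip(weights, xs))
    slope = num / den
    intercept = ym - slope * xm
    return slope, intercept

# Points near x = 0 get high proximity weight; the far point barely counts.
xs = [-0.2, -0.1, 0.0, 0.1, 0.2, 3.0]
ys = [math.tanh(x) for x in xs]           # the "black-box" outputs
weights = [math.exp(-x * x) for x in xs]  # Gaussian proximity kernel
slope, intercept = weighted_linear_fit(xs, ys, weights)
print(round(slope, 2))  # close to tanh'(0) = 1
```

Even though tanh is non-linear globally, the weighted fit captures its behavior near the instance, which is exactly the local approximation LIME relies on.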


LIME algorithm

The LIME algorithm is broken down into the following phases:

  • Sample: To create a dataset of perturbed examples, LIME samples the input space around the instance to be explained. These examples are similar to the original instance, but with minor, random changes.
  • Train: LIME trains an interpretable model, such as a linear model, on the perturbed examples. The interpretable model predicts the class of each perturbed instance from the features available to it.
  • Assign: LIME uses the weights of the interpretable model to give relevance scores to the features of the input data. Each feature’s score reflects its contribution to the black-box classifier’s prediction.
  • Explain: To explain the prediction, LIME isolates the features of the input data that contributed most to the black-box classifier’s conclusion, chosen using the relevance scores from the previous step.
  • Repeat: LIME can iteratively build explanations for multiple predictions by repeating the procedure for different instances.
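The phases above can be sketched end to end. The following is a minimal from-scratch illustration, not the `lime` library’s API: `black_box` is a hypothetical classifier whose first feature matters far more than its second, and the surrogate is a weighted linear fit for two features solved by hand via the normal equations:

```python
import math
import random

random.seed(1)

def black_box(x):
    # Hypothetical black-box classifier: a probability from a non-linear
    # model in which feature 0 matters much more than feature 1.
    return 1 / (1 + math.exp(-(3.0 * x[0] + 0.3 * x[1])))

def lime_explain(instance, predict, n_samples=500, scale=0.5, kernel_width=0.7):
    # Sample: perturb the instance with Gaussian noise.
    samples = [[v + random.gauss(0, scale) for v in instance]
               for _ in range(n_samples)]
    # Weight each sample by proximity to the instance (Gaussian kernel).
    def proximity(s):
        d2 = sum((a - b) ** 2 for a, b in zip(s, instance))
        return math.exp(-d2 / kernel_width ** 2)
    pis = [proximity(s) for s in samples]
    ys = [predict(s) for s in samples]
    # Train: weighted least squares on centered features (intercept absorbed).
    sw = sum(pis)
    means = [sum(p * s[j] for p, s in zip(pis, samples)) / sw for j in range(2)]
    ym = sum(p * y for p, y in zip(pis, ys)) / sw
    xc = [[s[j] - means[j] for j in range(2)] for s in samples]
    yc = [y - ym for y in ys]
    # Normal equations for two features: solve the 2x2 system A w = b.
    a11 = sum(p * x[0] * x[0] for p, x in zip(pis, xc))
    a12 = sum(p * x[0] * x[1] for p, x in zip(pis, xc))
    a22 = sum(p * x[1] * x[1] for p, x in zip(pis, xc))
    b1 = sum(p * x[0] * y for p, x, y in zip(pis, xc, yc))
    b2 = sum(p * x[1] * y for p, x, y in zip(pis, xc, yc))
    det = a11 * a22 - a12 * a12
    w0 = (a22 * b1 - a12 * b2) / det
    w1 = (a11 * b2 - a12 * b1) / det
    # Assign/Explain: the surrogate's weights are the feature relevance scores.
    return w0, w1

w0, w1 = lime_explain([0.0, 0.0], black_box)
print(abs(w0) > abs(w1))  # feature 0 dominates the local explanation
```

In practice the `lime` package handles different data types, intercepts, and feature selection for you, but the core loop is this: sample, weight by proximity, fit, and read off the weights.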

Importance of LIME

With LIME, the reasoning behind a black-box model’s choice can be understood in plain English. LIME generates a local approximation of the model’s decision boundary around the instance being explained and then emphasizes the properties of the instance that are most relevant to the model’s choice.

LIME can be applied to healthcare, banking, and other sensitive fields where openness and interpretability of the model’s conclusion are essential.

In short, LIME explains the predictions of any black-box classifier by first training a simple, interpretable model on a limited neighborhood of the input data.

Advantages and disadvantages of LIME

Advantages

  • Local explanations. LIME generates explanations for individual predictions rather than for the model as a whole, which clarifies the reasoning behind the model’s decisions in specific cases.
  • Flexibility. LIME’s adaptability means that it can be used with many types of data, from images and text to tabular information.
  • Interpretability. Situations where transparency and interpretability are valued benefit greatly from LIME’s ability to provide explanations that are simple to understand.
  • Model-agnostic. Because it is not specific to any particular model architecture, LIME can be used to explain a wide variety of black-box machine learning models.

Disadvantages

  • Linear approximation. Since LIME uses a linear model to approximate the black-box model’s decision boundary, it may not capture the complexity of the underlying model, particularly if the boundary is non-linear.
  • Limited to neighborhoods. LIME’s explanations only apply in the immediate vicinity of the instance in question, so they may not generalize to other regions of the input space or to new data.
  • Parameter sensitivity. LIME’s performance is sensitive to the choice of parameters such as the neighborhood size and the degree of perturbation applied to the input data, both of which can be challenging to pick in practice.
  • Complexity. LIME may struggle with data types like images, where the number of features is large and the relationships between them are intricate.
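The parameter sensitivity is easy to see with the proximity kernel alone. In this small illustration (the kernel form and widths are assumptions), a narrow kernel width effectively discards all but the nearest samples, while a wide one lets distant samples influence the fit:

```python
import math

distances = [0.1, 0.5, 1.0, 2.0]  # distances of samples from the instance

def proximity(d, width):
    """Gaussian proximity kernel: weight of a sample at distance d."""
    return math.exp(-(d ** 2) / width ** 2)

for width in (0.25, 2.0):
    weights = [round(proximity(d, width), 3) for d in distances]
    print(width, weights)
```

With width 0.25 the weights fall to essentially zero beyond the closest sample, so the explanation is built from very few points; with width 2.0 even a sample at distance 2 still carries substantial weight, so the “local” fit reflects a much larger region. Neither choice is obviously right, which is the difficulty in practice.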

In conclusion, LIME provides a flexible and interpretable approach to understanding black-box model predictions, although it has limitations such as its reliance on a linear surrogate model, its sensitivity to parameter choices, and the strictly local scope of its explanations.