
Recall in Machine Learning

Machine learning model and confusion matrix

For a machine learning model, accuracy alone is not a good enough metric; you also need a confusion matrix, recall, and precision. This is important because the output may give you the wrong impression, and to avoid that you need to see how well the model actually made its predictions.

A confusion matrix, recall, and precision are necessary to judge how accurate your machine learning model really is

That’s where the confusion matrix comes in handy, especially when weighing the costs and benefits of the model’s choices. Ideally, you give your model inputs and it gives you precise and accurate outputs. The confusion matrix works for both binary and non-binary classification. Binary classification is the straightforward case, where the model predicts between two choices (yes or no, true or false, left or right). If the model predicts incorrectly, you get either a false positive or a false negative. For example, if the model predicts yes when the actual result is no, it’s a false positive. A false negative is the opposite: the model predicts no, but the actual result is yes.
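As a minimal sketch of this idea (using scikit-learn, with made-up labels purely for illustration rather than data from the example below), here is how a binary confusion matrix splits predictions into true/false positives and negatives:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth labels and model predictions (1 = yes, 0 = no)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels, ravel() returns the four cells of the matrix in the
# order: true negatives, false positives, false negatives, true positives
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}  FP={fp}  FN={fn}  TN={tn}")  # TP=3  FP=1  FN=1  TN=3
```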

Recall in Binary Classification

In binary classification (two classes), for example when you have an imbalanced classification problem, recall is calculated with the following equation:

Recall = Number of True Positives / (Number of True Positives + Number of False Negatives)

The result is a value from 0.0 to 1.0, from no recall to full recall. As a practical example, let’s take a dataset with a 1 minority to 1000 majority ratio (1:1000): 1,000 minority class examples and 1,000,000 majority class examples.

A machine learning model predicts 950 of the positive class examples correctly and the rest (50) incorrectly.

Based on that, the recall calculation for this model is:

Recall = TruePositives / (TruePositives + FalseNegatives)

Recall = 950 / (950 + 50) → Recall = 950 / 1000 → Recall = 0.95

This model has an almost perfect recall score.
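The same calculation can be reproduced with scikit-learn’s recall_score. The label arrays below are a simplified reconstruction of the example, assuming all 1,000,000 majority-class examples are predicted as negative (false positives do not enter the recall formula, so this assumption does not change the result):

```python
from sklearn.metrics import recall_score

# 1,000 minority (positive) examples: 950 predicted correctly, 50 missed.
# The majority (negative) examples are assumed to be predicted as negative.
y_true = [1] * 1000 + [0] * 1_000_000
y_pred = [1] * 950 + [0] * 50 + [0] * 1_000_000

print(recall_score(y_true, y_pred))  # 0.95
```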

Recall in Multi-class Classification

Recall, as a confusion-matrix metric, does not apply only to binary classifiers; it can also be used with more than two classes. In multi-class classification, recall is calculated as:

Recall = True Positives in all classes / (True Positives + False Negatives in all classes)

Just like in the previous example, let’s take a dataset with a 1:1000 minority-to-majority ratio and a 1:1 ratio between the two positive (minority) classes, with 1,000 examples in each minority class and 1,000,000 majority class examples.

The machine learning model predicts 850 examples correctly (which means 150 are incorrect) in the first class (class 1), and 900 correctly and 100 incorrectly in the second class (class 2).

Based on that, the recall calculation for this model is:

Recall = True Positives in all classes / (True Positives + False Negatives in all classes)

Recall = (850+900) / ((850+900) +(150+100)) → Recall = 1750 / (1750 + 250) → Recall = 1750 / 2000 → Recall = 0.875
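Here is a small sketch of the same arithmetic in Python, using the per-class counts from the example above (in scikit-learn this corresponds to recall_score with average='micro' restricted to the two minority classes):

```python
# Per-class counts from the example above (classes 1 and 2 are the minority classes)
true_positives = {"class 1": 850, "class 2": 900}
false_negatives = {"class 1": 150, "class 2": 100}

tp_total = sum(true_positives.values())   # 1750
fn_total = sum(false_negatives.values())  # 250

recall = tp_total / (tp_total + fn_total)
print(recall)  # 0.875
```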

If your end goal is to minimize false negatives in your imbalanced classification problem, then optimizing for recall is the right course of action. However, you should be wary of this, because an increase in recall will typically trigger a decrease in precision.


Precision

We discussed precision in machine learning in one of our previous blog posts, so here we will just briefly recap it and compare it to the main subject of this post: recall.

As we said, precision tells you how many of the positive predictions in the confusion matrix are actually true positives (TP). If there are no false positives (FP), precision is 100%. Needless to say, the more false positives you get, the worse that precision is going to look.

Precision = True Positives / (True Positives + False Positives)
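For illustration, here is the precision of the same hypothetical predictions used in the confusion-matrix sketch above (again with scikit-learn):

```python
from sklearn.metrics import precision_score

# Same hypothetical labels and predictions as in the confusion-matrix sketch
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Precision = TP / (TP + FP) = 3 / (3 + 1)
print(precision_score(y_true, y_pred))  # 0.75
```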

Recall vs Precision

For an imbalanced classification problem, recall and precision are both better-suited metrics than relying on the accuracy of the model alone. However, that doesn’t mean they are equally important. In a specific situation, you may want to maximize either recall or precision at the cost of the other metric. It is usually difficult to achieve both high recall and high precision at once, so there is a certain cost to pushing either of them higher. In some cases, you want to take both metrics into account and find an optimal blend by using the F1 score.

F1 = 2 * (precision*recall / (precision + recall))
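As a final sketch (again using scikit-learn and the same hypothetical predictions as above), the F1 score can be computed either from precision and recall or directly with f1_score:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

precision = precision_score(y_true, y_pred)  # 0.75
recall = recall_score(y_true, y_pred)        # 0.75

# F1 = 2 * (precision * recall) / (precision + recall)
f1_manual = 2 * (precision * recall) / (precision + recall)
print(f1_manual, f1_score(y_true, y_pred))   # 0.75 0.75
```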