Continuous Validation for Machine Learning

Validate and monitor your data and models during training,
production and new version releases
BOOK A DEMO

Deepchecks can plug into your ML pipelines wherever they are.


Stay Ahead of Tomorrow’s ML Glitches


Self-Driving Cars More Likely To Hit Black Pedestrians than White Ones

08/03/2019

@Glassdoor I love you guys, but you've offered me 3 unrelated jobs @Lemonade in the past 10 days!

19/11/2020

Amazon Prime Day Glitch Let People Buy $13,000 Camera Gear for $94

18/07/2019

Germany’s Former First Lady Sues Google For Defamation Over Autocomplete Suggestions

08/09/2012

How It Works

Phase 1: Validation of the training data and the ML model

Training Data

The training data is analyzed for issues that could undermine the training process, and statistics are collected for later use during monitoring.
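
For illustration only, here is a minimal sketch of how such a per-feature baseline could be collected; the pandas-based helper and its name are assumptions, not the Deepchecks implementation.

```python
# Hypothetical sketch: summarize each training column so production data can
# later be compared against this baseline during monitoring.
import pandas as pd

def collect_training_statistics(df: pd.DataFrame) -> dict:
    stats = {}
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            stats[col] = {
                "type": "numeric",
                "mean": float(df[col].mean()),
                "std": float(df[col].std()),
                "min": float(df[col].min()),
                "max": float(df[col].max()),
                "missing_ratio": float(df[col].isna().mean()),
            }
        else:
            stats[col] = {
                "type": "categorical",
                "frequencies": df[col].value_counts(normalize=True).to_dict(),
                "missing_ratio": float(df[col].isna().mean()),
            }
    return stats

# Persist the result (e.g. as JSON) so the monitoring phase can load it later.
```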

Model

The model is analyzed to identify its limitations and characteristics and to determine the boundaries of its confidence regions.
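
As an illustrative sketch of one way a confidence boundary could be estimated (an assumed approach, not the product's internals), a held-out set can reveal the lowest prediction confidence at which the model still meets a target accuracy.

```python
# Hypothetical sketch: find the lowest top-class probability at which the model,
# evaluated on a held-out set, still achieves the desired accuracy.
import numpy as np

def confidence_boundary(proba: np.ndarray, y_true: np.ndarray,
                        target_accuracy: float = 0.9) -> float:
    top_prob = proba.max(axis=1)                 # model confidence per sample
    top_pred = proba.argmax(axis=1)              # predicted class per sample
    order = np.argsort(-top_prob)                # most confident samples first
    correct = (top_pred[order] == y_true[order]).astype(float)
    running_acc = np.cumsum(correct) / np.arange(1, len(correct) + 1)
    meets_target = np.where(running_acc >= target_accuracy)[0]
    if len(meets_target) == 0:
        return 1.0                               # target never reached: trust nothing
    return float(top_prob[order][meets_target[-1]])
```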

Phase 2: Ongoing testing and monitoring of the production data and the ML model

Data Sources

Improved observability of the ML system is obtained by connecting to the data in its raw format, across all of the relevant data sources.

Input Data

Input data is monitored in production, before and after the various preprocessing phases, and is continuously compared both to historical data and to the corresponding data in the original training set.
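
A minimal sketch of such a comparison, assuming per-feature drift tests between the training data and a production batch (the function name and the scipy-based tests are illustrative, not Deepchecks APIs):

```python
# Hypothetical sketch: Kolmogorov-Smirnov test for numeric features,
# total-variation distance on category frequencies for categorical ones.
import pandas as pd
from scipy.stats import ks_2samp

def drift_report(train_df: pd.DataFrame, prod_df: pd.DataFrame) -> dict:
    report = {}
    for col in train_df.columns:
        if col not in prod_df.columns:
            report[col] = {"issue": "column missing in production"}
        elif pd.api.types.is_numeric_dtype(train_df[col]):
            result = ks_2samp(train_df[col].dropna(), prod_df[col].dropna())
            report[col] = {"ks_statistic": float(result.statistic),
                           "p_value": float(result.pvalue)}
        else:
            train_freq = train_df[col].value_counts(normalize=True)
            prod_freq = prod_df[col].value_counts(normalize=True)
            categories = train_freq.index.union(prod_freq.index)
            tv = 0.5 * sum(abs(train_freq.get(c, 0.0) - prod_freq.get(c, 0.0))
                           for c in categories)
            report[col] = {"total_variation": float(tv)}
    return report
```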

Model

Results stored during the pre-launch analysis of the model are used to determine the severity of the issues detected in production.
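
One simplified way to picture this, as an assumed sketch rather than the actual mechanism: thresholds derived from the pre-launch analysis translate a check's score into a severity level.

```python
# Hypothetical sketch: thresholds come from the pre-launch analysis phase.
from enum import Enum

class Severity(Enum):
    OK = 0
    WARNING = 1
    CRITICAL = 2

def severity_for(score: float, warn_threshold: float, crit_threshold: float) -> Severity:
    if score >= crit_threshold:
        return Severity.CRITICAL
    if score >= warn_threshold:
        return Severity.WARNING
    return Severity.OK
```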

Predictions

The model’s predictions are monitored for anomalies and for patterns that reveal which kinds of mistakes the model may be making.
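
As one illustrative example (not the product's internal check), prediction drift can be quantified with the Population Stability Index between a reference window of predictions and the latest production window.

```python
# Hypothetical sketch: PSI between reference and current prediction scores.
import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray,
                               bins: int = 10, eps: float = 1e-6) -> float:
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    ref_frac = ref_counts / max(ref_counts.sum(), 1) + eps
    cur_frac = cur_counts / max(cur_counts.sum(), 1) + eps
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# A PSI above roughly 0.2 is a common rule of thumb for a meaningful shift.
```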

Labels

Ground-truth labels are not required for using Deepchecks. When they are available, however, they are used to display real-time metrics and to sharpen all other alerts. They are also scanned for inconsistencies and patterns that don’t make sense.
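
A hedged sketch of what this enables once labels arrive: join them to the logged predictions, compute a rolling metric, and alert on a sustained drop. The column names, window size, and tolerance below are assumptions for illustration.

```python
# Hypothetical sketch: rolling accuracy over logged predictions joined with
# late-arriving labels; log_df needs a DatetimeIndex and 'prediction'/'label' columns.
import pandas as pd

def rolling_accuracy(log_df: pd.DataFrame, window: str = "1D") -> pd.Series:
    matched = log_df.dropna(subset=["label"]).sort_index()
    return (matched["prediction"] == matched["label"]).astype(float).rolling(window).mean()

def should_alert(acc: pd.Series, baseline: float, tolerance: float = 0.05) -> bool:
    acc = acc.dropna()
    return (not acc.empty) and acc.iloc[-1] < baseline - tolerance
```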

Why Deepchecks?

ML validation of the training data and model


Observability of ML in production


Alerting about various issues in live ML systems


Detecting mismatches between research and production environments


Quick querying of problematic production data


Recent blog posts

When You Shouldn’t Use Ensemble Learning

Using Competition to Train ML Systems

How to Create Unbiased ML Models

Subscribe to our newsletter

Do you want to stay informed?
Keep up-to-date with industry news, the latest trends in MLOps, and observability of ML systems.
