Meet Deepchecks Open Source
Deepchecks Open Source is a Python library for data scientists and ML engineers. The package includes extensive test suites for machine learning models and data, built to be flexible, extendable, and editable.
How Does It Work?
Suites are composed of checks. Each check can produce outputs for display in a notebook and/or conditions that return a pass/fail result.
Conditions can be added to or removed from a check;
Checks can be edited, added to a suite, or removed from it;
Suites can be created from scratch or forked from an existing suite (see the sketch below).
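For example, a custom suite can be assembled from individual checks. The snippet below is a minimal sketch; the exact import paths, check names, and suite methods may vary between deepchecks versions.

from deepchecks import Suite
from deepchecks.checks import TrainTestDrift, BoostingOverfit

# Compose a suite from scratch out of individual checks
my_suite = Suite("My Custom Suite",
                 TrainTestDrift(),
                 BoostingOverfit())

# Checks can later be removed by their index within the suite
my_suite.remove(0)
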
Key Features & Checks
Suites of Checks

from deepchecks.suites import full_suite  # import path may vary across deepchecks versions
suite = full_suite()
result = suite.run(train_dataset=ds_train, test_dataset=ds_test, model=rf_clf)

Methodology Issues

from deepchecks.checks import BoostingOverfit  # import path may vary across deepchecks versions
check = BoostingOverfit()
result = check.run(train_ds, validation_ds, clf)

Distribution Checks

from deepchecks.checks import TrainTestDrift  # import path may vary across deepchecks versions
check = TrainTestDrift()
result = check.run(train_dataset=train_dataset, test_dataset=test_dataset, model=model)

Performance Checks

from deepchecks.checks import SegmentPerformance  # import path may vary across deepchecks versions
check = SegmentPerformance(feature_1='work', feature_2='hours-per-week')
result = check.run(validation_ds, model)


Training
Inspect the model together with the data used for training, validation, and testing. This can typically be done in a notebook environment, with no need for production data.
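As a minimal sketch, assuming two pandas DataFrames df_train and df_test with a 'target' label column and an already-fitted model (import paths and Dataset arguments may differ between deepchecks versions):

from deepchecks import Dataset
from deepchecks.suites import full_suite

# Wrap the raw DataFrames so deepchecks knows the label column and feature types
train_ds = Dataset(df_train, label='target', cat_features=[])
test_ds = Dataset(df_test, label='target', cat_features=[])

# Run the full training-phase suite and display the report in the notebook
full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
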
Production
Check to see if the production data differs from the training data or changes over time. This is typically done in the production environment, but relies on aggregated data from the training phase.
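A sketch of such a run, where reference_ds holds the aggregated training-phase data and production_ds a recent batch of production data, both already wrapped as deepchecks Dataset objects (the names are illustrative):

# Compare a production batch against the training-phase reference data
check = TrainTestDrift()
result = check.run(train_dataset=reference_ds, test_dataset=production_ds)
drift_metrics = result.value  # the drift scores computed by the check
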
New Version Releases
Check whether a new “challenger” model performs better than the previous version, or whether it introduces unexpected behavior. This can be done by comparing model to model or data to data (both training and production data work).
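One way to sketch the model-to-model comparison is to run the same check once per model version and compare the outputs (champion_model and challenger_model are illustrative names):

# Run an identical check against the current and the candidate model
check = SegmentPerformance(feature_1='work', feature_2='hours-per-week')
champion_result = check.run(validation_ds, champion_model)
challenger_result = check.run(validation_ds, challenger_model)
# result.value holds the metrics computed by the check, so the two runs can be compared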