CI/CD

Enable automated CI/CD for your ML models by testing that each model meets your production KPIs for performance, train-test drift, and data integrity.

Deepchecks CI/CD puts a quality stamp on your models before you deploy them to production.

Why Test ML & Data in Your CI/CD?

Test Each New Version, Not Just the First One

Before deploying your machine learning model to production for the first time, you probably explored it inside and out. But what about subsequent versions, which may differ only slightly from the original model?

Set Up Processes That Enable You to Scale

Will your top team members spend their precious time thoroughly validating each new version? Will that continue to happen as you have more models and variants for each model?

Adopt Best Practices from Software Engineering

Here’s an alternative: build well-defined tests that run on your model and data each time there is a new version — just like CI/CD does in “classic software”!
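The gate pattern described above can be sketched in plain Python. This is an illustrative toy, not the Deepchecks API: the check names, metrics, and thresholds are hypothetical. Each check inspects the candidate version and returns pass/fail, and the pipeline deploys only if every check passes.

```python
# Minimal sketch of a CI/CD quality gate for ML. Each check inspects the
# candidate model version's metrics and returns True (pass) or False (fail).
# The names and thresholds below are illustrative assumptions, not a real API.

def check_accuracy(metrics, threshold=0.9):
    """Fail if the new version's accuracy drops below the threshold."""
    return metrics["accuracy"] >= threshold

def check_train_test_drift(drift_score, max_drift=0.2):
    """Fail if feature drift between the train and test sets is too large."""
    return drift_score <= max_drift

def run_suite(metrics, drift_score):
    """Run every check; the suite passes only if all checks pass."""
    checks = [
        check_accuracy(metrics),
        check_train_test_drift(drift_score),
    ]
    return all(checks)

# In a CI step, a failing assert fails the job and blocks deployment.
assert run_suite({"accuracy": 0.93}, drift_score=0.05)
```

Because the suite is just code, it runs automatically on every new model version, exactly as unit tests do for classic software.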

How Can Deepchecks Help You?

# Run the custom validation suite on the production dataset and candidate
# model; a failing assert stops the CI job before deployment.
result = my_custom_suite.run(prod_ds, model)
assert result.passed()

Some tools can help streamline specific aspects of ML validation (explainability, slicing and dicing, etc.). Deepchecks offers a one-stop shop that applies to many different use cases, and the testing assets you build during the research phase are later reused for CI/CD and monitoring.

Once you start testing your model and data as part of your CI/CD pipeline, you will improve not only the quality of your ML system but also your internal processes and the handoffs between team members.

Open Source & Community

Deepchecks is committed to keeping its ML validation package open source and community-focused.

Subscribe to Our Newsletter

Want to stay informed? Keep up to date with industry news, the latest trends in MLOps, and observability for ML systems.