ML Testing is typically conducted in the research phase and is the first component in Continuous ML Validation.
Testing is an essential part of any software development process, yet for various reasons ML testing hasn't evolved as consistently and healthily as other components of the ML tech stack. It's time to change that.
Adopt Best Practices
While most software teams have clear methodologies for testing and QA, the picture is far messier for most teams of ML practitioners.
Are you using ML testing best practices? Do you have standardized processes?
Standardize Research Reviews
Some teams don't really do ML testing at all. Other teams have well-defined research review processes, which usually involve tedious work that your top researchers don't have time for!
Set Up Processes That Enable You to Scale
You will need rigorous testing to speed up model development and maintenance, and that usually requires a lot of coding. The best way to enable scale is to use a testing framework that covers all of your models with minimal coding effort.
You Already Know You Should Start
Yes, you already know we’re right about this. You just never had a simple solution at your fingertips. Well, now you do. And it's free!
How Deepchecks Open Source Supports Testing
from deepchecks.tabular.suites import train_test_validation
validation_suite = train_test_validation(...)
validation_suite.run(train_ds, test_ds)
As a Python package, Deepchecks Open Source is highly customizable and integrates seamlessly with Jupyter or PyCharm (no external interfaces or dashboards!).