Deepchecks can plug into your ML pipelines wherever they are.
We support:
Phase 1: Validation of the training data and the ML model (see the sketch after this list)
Training Data
Model
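For illustration, here is a minimal sketch of what the pre-launch validation phase can look like with the open-source deepchecks Python package (tabular flavor). The file paths, the "target" column, and the scikit-learn model are placeholder assumptions, not part of the product description above.

```python
# Minimal sketch of Phase 1: validating the training data, the train/test
# relationship, and the model itself with the open-source deepchecks package.
# Paths, column names and the model are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import data_integrity, train_test_validation, model_evaluation

train_df = pd.read_csv("train.csv")   # placeholder path
test_df = pd.read_csv("test.csv")     # placeholder path

# Wrap the raw frames so the checks know which column is the label.
train_ds = Dataset(train_df, label="target")
test_ds = Dataset(test_df, label="target")

model = RandomForestClassifier().fit(
    train_df.drop(columns=["target"]), train_df["target"]
)

# Validate the training data, the train/test split, and the trained model.
data_integrity().run(train_ds).save_as_html("data_integrity.html")
train_test_validation().run(train_ds, test_ds).save_as_html("train_test_validation.html")
model_evaluation().run(train_ds, test_ds, model).save_as_html("model_evaluation.html")
```

Each suite produces a report listing the checks that passed, failed, or raised warnings, which can be reviewed before the model is launched.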
Phase 2: Ongoing testing and monitoring of the production data and the ML model
Data Sources
Improved observability of the ML system is obtained by connecting to the data in its raw format, across all of the relevant data sources.
Input Data
Monitoring of the input data in production, before and after the various preprocessing phases. These are constantly compared to historic data as well as to the corresponding data in the original training set.
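As a rough illustration of comparing a production batch against the original training data, here is a minimal sketch using the open-source deepchecks tabular checks. The paths are placeholders, and the check name varies slightly between releases (newer versions expose FeatureDrift, older ones TrainTestFeatureDrift).

```python
# Minimal sketch of input-data monitoring: compare a fresh production batch
# to the reference (training) data and flag drifting features.
import pandas as pd
from deepchecks.tabular import Dataset
from deepchecks.tabular.checks import FeatureDrift

# Reference data: the features the model was originally trained on (placeholder path).
reference_ds = Dataset(pd.read_csv("training_features.csv"))

# A fresh batch of production inputs, captured after preprocessing (placeholder path).
production_ds = Dataset(pd.read_csv("production_batch.csv"))

# Flag features whose distribution has drifted beyond the default threshold.
check = FeatureDrift().add_condition_drift_score_less_than()
result = check.run(train_dataset=reference_ds, test_dataset=production_ds)
result.save_as_html("input_drift.html")
```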
Model
Results stored during the pre-launch analysis of the model are used to determine the severity of different issues that are detected.
Predictions
Monitoring of the model's predictions, looking for anomalies and patterns in the types of mistakes the model may be making (see the drift sketch below).
Labels
Ground-truth labels are not mandatory for using Deepchecks. However, when they exist, they can be used to display real-time metrics and to help Deepchecks improve all other alerts. They are also scanned for inconsistencies and implausible patterns.
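For illustration, here is a minimal sketch of monitoring predictions, and then labels once they arrive, with the open-source deepchecks tabular checks. The paths and the "target" column are placeholders, and check names vary slightly across releases (older versions use TrainTestPredictionDrift and TrainTestLabelDrift).

```python
# Minimal sketch of prediction and label monitoring against the training distribution.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from deepchecks.tabular import Dataset
from deepchecks.tabular.checks import PredictionDrift, LabelDrift
from deepchecks.tabular.suites import model_evaluation

train_df = pd.read_csv("training_data.csv")                     # placeholder path
model = RandomForestClassifier().fit(
    train_df.drop(columns=["target"]), train_df["target"]
)

reference_ds = Dataset(train_df, label="target")
production_ds = Dataset(
    pd.read_csv("labeled_production_batch.csv"), label="target"  # placeholder path
)

# Has the distribution of the model's predictions shifted since training?
prediction_drift = PredictionDrift().run(reference_ds, production_ds, model)

# Once ground-truth labels arrive, check them for drift and recompute metrics.
label_drift = LabelDrift().run(reference_ds, production_ds)
performance = model_evaluation().run(reference_ds, production_ds, model)

performance.save_as_html("production_performance.html")
```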