Introduction
As we announce our $14M in funding, we're also going open source with AI/ML monitoring. In this post I'll share a bit about our journey at Deepchecks, this new milestone, and where we're headed.
At Deepchecks, we've made it our mission to make a dent in the way machine learning (ML) models are validated. Since the launch of Deepchecks for testing ML models in January 2022, we've seen an overwhelmingly positive response, with over 2,700 stars and more than 650,000 downloads. This success has made "testing ML" an integral part of the AI ecosystem. Now, we're taking it to the next level by announcing the General Availability (GA) of our open-source monitoring solution for ML models in production, starting June 2023.
Before I dive in, here's a sneak peek:

Sneak peek at what you get after you install. For those of you with a hands-on orientation who aren't too rusty, try it by following the monitoring quickstart from our GitHub (and please star ⭐ it if you like!): https://github.com/deepchecks/deepchecks
If you don't need the background, click here to skip to the actual announcement, or here to try the open-source monitoring via our GitHub.
ChatGPT: Seeing the Power of AI Democratization with Our Own Eyes
When we founded Deepchecks, we anticipated that ML model testing would become indispensable as AI democratization progressed and software engineers started building AI-based products without external assistance. We spoke about this with the first investors we ever met, and the people who joined our amazing team typically believed in it before they met us.
However, the recent explosion of LLM-driven applications has publicly confirmed our belief, and to be honest, the trajectory seems even faster than we anticipated. As more people develop AI applications without being data scientists, the demand for reliable testing and validation tools becomes increasingly central. Edge cases and cryptic model behavior are becoming the bottlenecks for turning cool demos into products, not just for ML people, but for mainstream software engineers as well.

Example of alleged ChatGPT bias (source)
Pioneering the Comprehensive ML Validation Suite
Our goal at Deepchecks is to enable continuous ML/AI validation for all. Our journey started with the open-source testing modules, and step by step we're expanding this into a one-stop solution for continuous validation, encompassing testing during the research phase, monitoring, CI/CD testing, and auditing. This wouldn't have been possible without the robust foundation established by our testing package, a project that goes back more than 1.5 years.
And this isn't the only dimension in which Deepchecks is expanding. In our pursuit of a holistic offering, we've broadened the scope of our testing package to support various data types. While we initially focused on tabular data, we've since released support for computer vision (CV) and natural language processing (NLP) models, and are also building a testing solution for LLMs/GenAI (reach out if you'd like to join our beta program!).

Example of an expansion of Deepchecks: support for testing NLP models and data, a module released just last week. Learn more at https://github.com/deepchecks/deepchecks
OK, and Now, Finally, the Grand Reveal: Open-Source Monitoring
Our testing component is fully open-source and geared toward the research phase, but we initially planned for our monitoring component to be closed (and optimized for companies already using Deepchecks testing). However, we discovered significant demand for open-source monitoring within the community, which changed our perception.
Picture a group leader at a Fortune 1000 company who wants to prioritize ML monitoring but is hampered by its complexity, so it's postponed to the next quarter. With the open-source ML monitoring component, a proactive junior ML Engineer can set up the system over the weekend and present it to the team on Monday, proving that MLOps doesn't have to be so complicated.

Illustration of an ML Engineer who spins up an open-source monitoring system after being told that monitoring is important, but for the next quarter.
This type of need exists both for companies that want a "stay free forever" solution, and for amazing teams with significant engineering abilities and scale that are looking either for an interim solution while their use case is just beginning to scale, or for the ability to set up a quick POC without sending out sensitive data or vouching, in discussions with their IT team, for a solution they haven't even tried yet.
To address this need, we're offering Deepchecks monitoring as an open-source repo that delivers a powerful and comprehensive set of features, including:
- Support for monitoring up to one model per deployment
- Root cause analysis abilities that start in the UI and continue in automatically generated Jupyter notebooks
- Basic user management
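To give a feel for the kind of check a monitoring system like this runs under the hood, here is a minimal, self-contained sketch of a data-drift score, the Population Stability Index (PSI), computed between a reference (training) sample and a production window. This is an illustrative stand-in in plain Python, not Deepchecks' actual implementation; the function name, binning choices, and thresholds below are my own assumptions.

```python
import math

def psi(reference, production, bins=10):
    """Population Stability Index between two 1-D numeric samples.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 severe.
    Bin edges are derived from the reference sample only, so the same bins
    are reused when scoring each new production window."""
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            # Index of the bucket containing x (values outside the reference
            # range fall into the first or last bucket).
            counts[sum(x > e for e in edges)] += 1
        n = len(sample)
        # Small epsilon avoids log(0) for empty buckets.
        return [max(c / n, 1e-6) for c in counts]

    ref_frac = bucket_fractions(reference)
    prod_frac = bucket_fractions(production)
    return sum((p - r) * math.log(p / r) for r, p in zip(ref_frac, prod_frac))
```

In practice, a monitoring system computes a score like this per feature on each incoming batch and raises an alert when it crosses a threshold; the root-cause-analysis notebooks mentioned above would then help explain which segments drove the drift.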
Our aim is to benefit the community without compromising our business model. This approach allows teams working with sensitive data to try our product without sharing that data with third parties, and streamlines the approval process with IT teams.
Try the open-source monitoring via our GitHub if you haven't yet.
So What Is Our Business Model?
So given that we're beginning to release open-source repos with various parts of our core logic, you may be wondering: what's the catch? What's our business model?
The answer is pretty straightforward: it's a classic open-core strategy. There are a few groups of features that aren't part of the open-source, such as advanced security/identity management, more scalable deployment options, a centralized dashboard for many models, and audit/compliance templates. A subset of these features is important for the teams we'd like to sell to, but the open-source is still powerful enough without them to be attractive for most teams.
We have quite a few interesting modules on our roadmap. One of the more interesting ones we're building is dedicated support for testing and evaluating LLMs. If this field interests you, I recommend joining LLMOps.space, a global community we're beginning to put together for LLM practitioners. See you there!
In the meantime, if you'd like to read similar stuff in the future, follow me on LinkedIn or subscribe to Deepchecks' newsletter.