
What the Proposed European Union AI Regulations Mean for You

This blog post was written by Inderjit Singh as part of the Deepchecks Community Blog. If you would like to contribute your own blog post, feel free to reach out to us via blog@deepchecks.com. We typically pay a symbolic fee for content that’s accepted by our reviewers.

With Artificial Intelligence (AI) becoming ever more integrated into our daily lives, it’s about time AI technology got its own set of regulations and guidelines. The European Union has done just that. The EU AI Act is a landmark regulation that will shape the future of artificial intelligence. In this post, we look at exactly what the act proposes and how it will affect both people and the development of AI technology in business.

What it Proposes

Types of Systems That Fall Under This Regulation

The new EU legislation on AI divides AI systems into three separate categories:

Unacceptable-risk AI Systems: any form of AI that can manipulate or exploit the general public, including remote biometric identification AI (e.g., face recognition systems used in public spaces).
High-risk AI Systems: systems that have a long-term impact on an individual's life, including creditworthiness evaluation systems and non-public biometric systems (e.g., face recognition used by law enforcement to identify suspects).
Limited and Minimal-risk AI Systems: systems that cannot have a long-term impact on an individual's life (e.g., conversational chatbots, customer segmentation filters).


Proposed Requirements for Organizations

The European regulation of AI systems proposes restrictions on models depending on their risk level (determined by the categories above).

Systems in the Unacceptable-risk category would no longer be permitted in the European Union.

High-risk Systems will be subject to a substantial set of requirements:
– Human oversight. Humans must supervise a number of developmental aspects (e.g., dataset validation, monitoring model inferences for issues such as biased decisions).

– Data governance and management. The relevant clauses scrutinize the policies, data owners, rules, and metrics used to manage data availability and access.

– Accuracy, robustness, and cybersecurity. Models' decisions should remain stable under small input perturbations, and systems must be resilient against attacks.
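One lightweight way to probe this robustness requirement is to check that a model's decision does not flip under small random perturbations of its input. The sketch below is purely illustrative; the `is_locally_robust` helper and the toy threshold "model" are our own invention, not part of the act or any particular library.

```python
import random

def is_locally_robust(predict, x, epsilon=0.01, n_trials=100, seed=0):
    """Check that small input perturbations do not flip the model's decision."""
    rng = random.Random(seed)
    baseline = predict(x)
    for _ in range(n_trials):
        # Jitter every feature by at most +/- epsilon.
        perturbed = [xi + rng.uniform(-epsilon, epsilon) for xi in x]
        if predict(perturbed) != baseline:
            return False
    return True

# Toy threshold "model": approves when the feature sum clears 1.0.
model = lambda x: sum(x) > 1.0
print(is_locally_robust(model, [0.9, 0.6]))   # far from the decision boundary
print(is_locally_robust(model, [0.5, 0.5]))   # sits right on the boundary
```

Inputs near a decision boundary fail this check, which is exactly where a reviewer would want extra scrutiny.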

– Technical documentation. The act requires organizations to maintain technical documentation for model architectures and all subsequent changes.

– Record keeping and logging. AI firms must keep mandatory logs of all requests served and inferences made by their machine learning models.
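In practice, such a requirement could be met with structured, append-only logs that capture one record per inference. The sketch below is a minimal illustration; the `log_inference` helper and its field names are hypothetical, not prescribed by the act.

```python
import io
import json
import time
import uuid

def log_inference(log_file, model_version, features, prediction):
    """Append one structured record per prediction served."""
    record = {
        "request_id": str(uuid.uuid4()),   # unique ID for later audits
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    log_file.write(json.dumps(record) + "\n")  # JSON Lines: one record per line
    return record

# Usage, with an in-memory buffer standing in for an append-only log store.
buf = io.StringIO()
rec = log_inference(buf, "credit-scorer-1.3", {"income": 52000}, "approve")
```

In a real deployment the buffer would be replaced by durable, tamper-evident storage so records survive for the retention period an auditor requires.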

– Transparency and provision of information to users. Users must be made aware of what information about them will be used to train the models and subsequently make predictions.

– Registration with an EU member state. Firms serving customers in the EU with their models are obligated to register with one of the European Union member states.

– Conformity assessment. Conformity refers to how well the underlying software complies with the set standards. Conformity assessments will be performed on AI models and data pipelines before they are released for production inference.

– Post-market monitoring. All production models must have post-production monitoring systems to ensure that model predictions do not deviate significantly from the numbers observed during training.
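As a rough illustration of post-market monitoring, one widely used drift signal is the Population Stability Index (PSI), which compares the distribution of production scores against a training-time baseline. The pure-Python sketch below is not a method the act prescribes, just one common choice; a PSI above roughly 0.25 is conventionally treated as significant drift.

```python
import math

def psi(expected, actual, bins=10, lo=0.0, hi=1.0):
    """Population Stability Index between baseline and production scores."""
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1
        # Smoothed bin shares so empty bins don't divide by zero.
        return [(c + 1e-6) / (len(xs) + bins * 1e-6) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [i / 100 for i in range(100)]          # uniform baseline
shifted = [min(s + 0.4, 0.99) for s in train_scores]  # production scores drifted upward
print(psi(train_scores, train_scores))   # ~0: no drift
print(psi(train_scores, shifted))        # large: flag for review
```

Logging the PSI per release alongside the prediction records above would cover both the monitoring and record-keeping clauses in one pipeline.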


Impact on People

The clauses in this act that require humans to review and approve data validation should, in theory, prevent people from being harmed by autonomous decisions made by AI. This instills confidence by bolstering the safeguards around AI-enabled systems. The bill will also give people better control over how their data is used to make significant decisions. Recommendation systems use a vast number of data points to suggest social media posts or videos, yet users are seldom aware of which data points led to a specific recommendation.

The initial draft requires that the data sets used to train High-risk Systems be free of errors and comprehensible enough that humans can easily understand how the AI system arrived at a specific decision. That will be challenging – it must be as if a human had verified the data sets. The PredPol predictive-policing model, for example, identified areas with minority communities as hotspots of future crime. Ensuring this kind of bias does not happen again could require hundreds of hours of work to guarantee flawless data sets. With the introduction of conformity assessments, instances of AI acting against a specific social or ethnic group should be reduced, since firms will be held liable.
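A first automated pass at the kind of bias just described is to compare selection (approval) rates across groups in the training data. The sketch below, using made-up approval records, computes a simple demographic-parity gap; the helpers are hypothetical, and this is only one of many possible fairness checks.

```python
def selection_rates(records, group_key="group", outcome_key="approved"):
    """Approval rate per group: a first-pass bias screen for a training set."""
    totals, approved = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + (1 if r[outcome_key] else 0)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rates between any two groups."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Made-up records: group A approved 80% of the time, group B only 50%.
data = (
    [{"group": "A", "approved": True}] * 80 + [{"group": "A", "approved": False}] * 20 +
    [{"group": "B", "approved": True}] * 50 + [{"group": "B", "approved": False}] * 50
)
rates = selection_rates(data)
print(parity_gap(rates))   # large gap -> flag the data set for human review
```

A check like this cannot prove a data set is unbiased, but it can triage which data sets deserve the expensive human review the draft calls for.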

Impact on Technology Development

The initial proposal stipulates fines of up to 30 million EUR or 6% of worldwide annual turnover – which, for some tech giants, can be a remarkably large sum. Monetary impact aside, the main concern is the prolonged time from when a problem is conceived to the deployment of a productionized model, owing to the requirements around human validation of training data sets. If we are building a model for financial product recommendations (drawing on social media or other financial information), the data set across many users can run into billions of data points. Today we simply train models on small random samples and test them without manually sifting through the data sets. A manual review to ascertain that such a data set is free from bias will take considerably longer, further lengthening the development-deployment cycle.

Some of the requirements in the act (at least in its current version) are impossible to fulfill today. The draft states that data sets should be flawless and human-supervised. With data sets for ML models running into billions of records, it is outright impossible for a human to go through each one and ensure the data is properly vetted.

While developing solutions, a developer or AI practitioner will need to keep the following in mind:

– The category of the model being created (whether it is high-risk or low-risk).

– Risk management, implemented as part of the overall architecture.

– Comprehensive technical documentation, including the inner workings of the models, with a key emphasis on explainability.

– Conformity assessment, as part of the data pipeline.

– Improved production monitoring, with results logged for record keeping (e.g., previously only a fraction of predictions were saved for drift review; under the act, all predictions must be recorded).

How the Industry can Prepare

Developers and AI firms should establish action plans for building their AI risk management programs. Throughout the creation and maintenance of an AI system, there needs to be a process for assessing risk mitigation measures and identifying potential risks at each stage of development. This helps guarantee such measures are built into the model from the beginning of the development cycle. Firms should also review, at each stage, whether they are compliant with the regulatory requirements. For example, after the first A/B tests we may want to add extra features that have not yet been disclosed to users – we have to make sure doing so still conforms to the rules.



While the initial proposal for this law seems to be a well-intended attempt at creating greater transparency and increasing oversight of the functions of AI, some of its components (e.g., human analysis of training data) might stifle innovation by slowing down the experimentation cycle and lengthening the feedback loop. The final decision is at least a year away – the GDPR took four years of negotiations and six years to come into force in the European Union, and its final version was significantly improved after consultations with the concerned parties. We should expect a similar process this time as well, with the legitimate concerns of the AI community addressed.
