On April 21st, 2021, the European Commission proposed new rules to regulate AI systems across the EU, as part of its Europe Fit for the Digital Age agenda.
Margrethe Vestager, the Commission’s Executive Vice-President for a Europe Fit for the Digital Age, said:
“On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”
The proposed rules take a risk-based approach, dividing use cases into the following categories:
Unacceptable risk – these systems will be banned. The category includes AI systems that manipulate human behavior and systems that enable “social scoring” by governments (such as the system used in China).
High-risk – applications and systems that can put the life and health of citizens at risk (e.g. transportation, healthcare), as well as law enforcement systems, education and employment, and essential services. These systems will be subject to strict obligations, including comprehensive risk assessment, high-quality unbiased datasets, detailed logging and documentation, appropriate human oversight, and more.
Limited or minimal risk – applications and systems that do not fall under the other categories. Systems that generate content, such as chatbots or “deep fakes”, must be transparent and disclose that the content was generated by an AI system rather than by a human.
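To make the tiers concrete, they could be modeled as a small lookup table in code. This is a hypothetical sketch: the `RiskTier` names and the example use-case mapping are our illustration, not classifications taken from the proposal itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations (risk assessment, logging, oversight)"
    LIMITED = "transparency obligations (disclose AI-generated content)"
    MINIMAL = "no additional obligations"

# Illustrative mapping of example use cases to the proposal's tiers.
USE_CASE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "medical diagnosis support": RiskTier.HIGH,
    "resume screening": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return a short summary of the obligations implied by a use case's tier."""
    tier = USE_CASE_TIERS[use_case]
    return f"{use_case}: {tier.name} risk -> {tier.value}"
```

In practice, of course, assigning a real system to a tier is a legal judgment, not a dictionary lookup.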
According to the proposal, companies that do not comply with these regulations may be fined up to 20,000,000 EUR or 4% of the company’s worldwide annual turnover, whichever is higher.
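The fine cap described above is simple arithmetic; the following one-function sketch (the function name is ours) shows how the “whichever is higher” rule plays out for companies of different sizes.

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine under the proposal:
    20,000,000 EUR or 4% of worldwide annual turnover, whichever is higher."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a company with 100M EUR turnover, 4% is only 4M, so the 20M floor applies.
# For a company with 1B EUR turnover, 4% is 40M, which exceeds the floor.
```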
Will These Regulations Apply Outside the EU?
While the EU cannot enforce its regulations on countries outside the EU, there is reason to believe that these rules may be adopted widely, much as the GDPR, which addresses data protection and privacy, has been. It is also likely that companies won’t want to give up the EU market, so it may make sense for them to comply with the regulations even where they are not enforced. Alternatively, multiple competing initiatives for government regulation of AI (for example, the Future of AI Act proposed in the US Congress) may coexist.
How Will This Affect Data Science Teams?
Currently, data science teams often focus on creating a model that is “good enough” and then launch it into production without proper monitoring or a thorough testing process. To comply with the EU rules, data science teams will need to shift their mindset toward a more disciplined process that enables control and transparency of ML models and AI systems in general. We predict that the following practices and concepts will become increasingly prevalent in the industry if the EU regulations are widely accepted.
Explainable AI (XAI)
XAI enables users and developers to understand what goes on in the model’s “head” (source)
High-risk AI systems will be required to produce not only a prediction but also an explanation of that prediction. Thus, when a wrong prediction is made, the data science team will be able to account for the error and determine its cause.
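For linear models, such an explanation can be exact: the score decomposes into one additive contribution per feature. The sketch below (function and feature names are ours) illustrates this minimal form of XAI; for complex models, practitioners typically turn to tools such as SHAP or LIME to approximate per-feature contributions.

```python
def explain_linear_prediction(weights, bias, feature_names, x):
    """Decompose a linear model's score into per-feature contributions:
    score = bias + sum(w_i * x_i). Each term w_i * x_i is that feature's
    additive contribution to the prediction."""
    contributions = {name: w * xi for name, w, xi in zip(feature_names, weights, x)}
    score = bias + sum(contributions.values())
    # Rank features by the magnitude of their contribution.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical loan-scoring example: income pushes the score up, debt down.
score, ranked = explain_linear_prediction(
    weights=[2.0, -1.0], bias=0.5, feature_names=["income", "debt"], x=[3.0, 4.0]
)
```

An explanation like `ranked` lets the team say not just “the loan was denied”, but “the loan was denied mainly because of the debt feature”, which is the kind of accounting the regulation asks for.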
Creating Unbiased ML Models
Data science teams will work on debiasing their ML models. Common approaches include removing protected attributes in the preprocessing stage, training with an additional fairness objective (e.g. adversarial debiasing), and adjusting predictions in the postprocessing stage to satisfy requirements such as statistical independence between a protected attribute (e.g. race, gender) and the target.
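As a crude illustration of the postprocessing approach, one can choose a separate decision threshold per group so that each group's positive-prediction rate lands near a common target, a simple step toward statistical (demographic) parity. The function below is our own sketch with hypothetical data; production systems would use a vetted library rather than hand-rolled thresholds.

```python
def group_thresholds(scores_by_group, target_rate):
    """For each group, pick the score threshold whose positive-prediction
    rate is closest to target_rate (postprocessing toward demographic parity)."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        candidates = sorted(set(scores))
        best = min(
            candidates,
            key=lambda t: abs(sum(s >= t for s in scores) / len(scores) - target_rate),
        )
        thresholds[group] = best
    return thresholds

# Hypothetical model scores for two groups; aim for a 50% positive rate in each.
thresholds = group_thresholds(
    {"group_a": [0.1, 0.4, 0.6, 0.9], "group_b": [0.2, 0.3, 0.5, 0.7]},
    target_rate=0.5,
)
```

Note the trade-off this makes explicit: equalizing rates across groups means applying different cutoffs to them, which is itself a policy decision that teams will need to document.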
End to End Monitoring
Companies will allocate more resources to monitoring solutions for their ML models in order to comply with the regulations. Such solutions should raise an alert when predictions may be biased or when model performance deteriorates. Monitoring systems may be developed in-house or bought as a third-party service.
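One common building block of such monitoring is a drift metric comparing live data against the data the model was trained on; the Population Stability Index (PSI) is a standard choice, with PSI above roughly 0.2 often read as significant drift. A minimal pure-Python sketch (binning scheme and thresholds are our illustrative choices):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (e.g. training-time
    scores) and a live sample, using equal-width bins over the combined range.
    Each bin contributes (a - e) * ln(a / e), which is always non-negative."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        left = lo + b * width
        right = lo + (b + 1) * width
        n = sum(left <= s < right or (b == bins - 1 and s == hi) for s in sample)
        return max(n / len(sample), 1e-6)  # floor avoids log(0) for empty bins

    return sum(
        (frac(actual, b) - frac(expected, b))
        * math.log(frac(actual, b) / frac(expected, b))
        for b in range(bins)
    )
```

A monitoring job would compute `psi` on a schedule over recent predictions and alert the team when it crosses the chosen threshold, well before accuracy metrics (which need labels) can catch the problem.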
Introducing Compliance Responsibility
Companies will appoint a senior data scientist or a team lead as responsible for the regulatory compliance of the company’s models. Perhaps practices such as red team vs. blue team exercises from the cybersecurity world will be adopted to test the fairness and robustness of ML models.
To conclude, as AI systems become increasingly central in various industries that have a large impact on our lives, there is a growing need for regulation and standards to ensure trust and fairness in AI. The EU proposal may become the global standard for such regulations. While these regulations may be a pain to deal with, we believe that companies will also gain value from having ML models that are properly monitored and tested. When done correctly, the required practices will help companies be in control of their models and detect potential errors early on. Finally, it may take a while before any such regulations are enforced, but companies that are well prepared will benefit.
If you would like your organization to be better prepared for this upcoming “tidal wave”, feel free to reach out to us via this link, and we’d love to discuss further.