The EU is actively developing strategies to drive its digital economy and foster innovation. Recognizing the immense potential of artificial intelligence (AI), Members of the European Parliament (MEPs) emphasize the need for human-centric AI legislation. They aim to establish a robust framework that prioritizes trustworthiness, enforces ethical standards, safeguards jobs, promotes the development of competitive “AI made in Europe” products, and exerts influence on global standards. On 21 April 2021, the Commission unveiled its proposal for AI regulation. Subsequently, on 14 June 2023, MEPs adopted the Parliament’s negotiating position on the AI Act. The next phase involves discussions with EU member states in the Council to finalize the details of the law. In this article, we will delve into the key components of the EU’s AI Act, examine its benefits and challenges, explore the principles governing trustworthy AI, analyze its impact on generative AI, and consider the future of the regulation.
The European Parliament’s foremost priority is to ensure the safety, transparency, traceability, non-discrimination, and environmental friendliness of AI systems used within the EU. Additionally, it seeks to establish a unified, technology-neutral definition of AI capable of encompassing both present and future AI systems. The EU AI Act holds tremendous significance as a legislative measure that can shape the future of AI within the European Union. Although the Act is currently being negotiated, it is evident that the EU is fully committed to regulating AI in a manner that safeguards individuals, society, and the environment. Once approved, these regulations will become the world’s first comprehensive rules governing AI.
The EU AI Act adopts a risk-based approach in its regulation of AI. The rules are designed to assess the level of risk associated with AI systems and impose corresponding obligations on both providers and users. While some AI systems may present minimal risk, they still require evaluation to ensure compliance with the regulations. The Act establishes a framework that addresses the varying levels of risk from artificial intelligence and outlines the necessary obligations to be fulfilled by stakeholders in relation to these risks.
Unacceptable risk systems: The EU AI Act strictly prohibits AI systems that pose unacceptable risks and are considered detrimental to individuals. These banned systems encompass various categories, including:
- Cognitive behavioral manipulation targeting individuals or specific vulnerable groups, such as voice-activated toys that encourage dangerous behavior in children.
- Social scoring, which involves classifying individuals based on behavior, socioeconomic status, or personal characteristics.
- Real-time and remote biometric identification systems, such as facial recognition technology.
- “Post” remote biometric identification systems, except in law enforcement cases for the investigation of serious crimes, and only with judicial authorization.
- Biometric categorization systems that employ sensitive characteristics like gender, race, ethnicity, citizenship status, religion, or political orientation.
- Predictive policing systems that rely on profiling, location, or past criminal behavior.
- Emotion recognition systems used in law enforcement, border management, workplace environments, and educational institutions.
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases, which violates human rights and the right to privacy.
These AI systems are deemed to pose unacceptable risks and are explicitly prohibited under the EU AI Act due to the potential harm they pose to individuals and their rights.
High-risk systems: The EU AI Act classifies AI systems that pose risks to safety or fundamental rights into two main categories:
- AI systems used in products governed by the EU’s product safety legislation, including toys, aviation equipment, cars, medical devices, and lifts.
- AI systems within eight specific domains that require registration in an EU database. These areas are:
- Biometric identification and categorization of individuals.
- Management and operation of critical infrastructure.
- Education and vocational training.
- Employment, worker management, and access to self-employment.
- Access to essential private services, public services, and benefits.
- Law enforcement.
- Migration, asylum, and border control management.
- Assistance in legal interpretation and application of the law.
All high-risk AI systems will undergo an assessment before their market introduction and will continue to be evaluated throughout their lifecycle. Furthermore, Members of the European Parliament (MEPs) have expanded the scope of high-risk areas to include potential harm to individuals’ health, safety, fundamental rights, or the environment. They have also included AI systems aimed at influencing voters in political campaigns and recommender systems employed by social media platforms with more than 45 million users, as specified by the Digital Services Act, into the high-risk category.
Generative AI: Generative AI models such as ChatGPT, Bard AI, Stable Diffusion, and Midjourney must adhere to transparency requirements under the EU AI Act. These requirements include:
- Disclosing that the content was AI-generated: Users should be aware that the output they interact with is generated by an AI system rather than a human.
- Designing the model to prevent the generation of illegal content: AI providers must ensure that their models are designed to prevent the generation of content that violates legal regulations.
- Publishing summaries of copyrighted data used for training: AI providers should provide summaries of copyrighted data utilized during the training of their models, ensuring transparency regarding the sources and types of data used.
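The first of these transparency requirements can be pictured as a simple labeling step a provider applies before returning output to a user. This is a minimal illustrative sketch only, not anything the Act prescribes; the names `AI_DISCLOSURE` and `label_output` are hypothetical.

```python
# Hypothetical sketch: attaching an AI-generated-content disclosure.
# The AI Act requires disclosure but does not prescribe any API or wording.

AI_DISCLOSURE = "Notice: this content was generated by an AI system."


def label_output(generated_text: str) -> str:
    """Prepend a human-readable disclosure notice to AI-generated output."""
    return f"{AI_DISCLOSURE}\n\n{generated_text}"


print(label_output("Here is a summary of your document..."))
```

In practice a provider might embed such a notice as metadata, a watermark, or an on-screen label; the point is only that disclosure happens before the content reaches the user.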
In addition, MEPs have imposed certain obligations on providers of foundation models, representing a rapidly advancing area in AI development. These providers must protect fundamental rights, health and safety, the environment, democracy, and the rule of law. To fulfill these obligations, they must assess and mitigate risks associated with their models, comply with design, information, and environmental requirements, and register their models in the EU database. These measures ensure that providers of foundation models are accountable for the consequences of their AI systems and actively strive to uphold a wide range of societal and legal standards.
Limited and minimal risk systems: AI systems with limited risk should comply with minimal transparency requirements to promote informed decision-making. Users should be given enough information to understand that they are interacting with AI. AI systems with minimal or no risk, such as spam filters, typically do not fall within the scope of these regulations. Nonetheless, informing users when they engage with AI systems that generate or manipulate image, audio, or video content, such as deepfakes, remains crucial. By ensuring transparency in these cases, users are empowered to make conscious choices and gain awareness of AI’s role in their interactions.
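The risk tiers described above can be summarized as a small data model. The sketch below is purely illustrative: the tier names paraphrase the Act's categories, and the example system-to-tier assignments are taken from the examples in this article, not from the legal text.

```python
# Hypothetical model of the AI Act's risk-based approach.
# Tier descriptions and example assignments paraphrase this article;
# this is an illustration, not a compliance tool.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "assessed before market entry and throughout the lifecycle"
    LIMITED = "subject to transparency obligations"
    MINIMAL = "largely outside the Act's scope"


# Example systems mentioned in the text, mapped to their likely tier.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "AI in medical devices": RiskTier.HIGH,
    "recruitment / worker-management AI": RiskTier.HIGH,
    "generative chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations(system: str) -> str:
    """Return a one-line summary of the tier and obligations for a system."""
    tier = EXAMPLE_SYSTEMS[system]
    return f"{system}: {tier.name} risk -> {tier.value}"


for name in EXAMPLE_SYSTEMS:
    print(obligations(name))
```

The design point the Act makes is visible in the structure: obligations attach to the tier, not to the individual system, so classifying a system correctly is the decisive compliance step.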
Balancing Innovation and Citizens' Rights
MEPs have introduced exemptions to the regulations for research activities and AI components distributed under open-source licenses to foster AI innovation. The new legislation also encourages the establishment of regulatory sandboxes, controlled environments facilitated by public authorities, where AI can be tested before its deployment.
MEPs are committed to enhancing citizens’ rights by enabling them to file complaints regarding AI systems and receive explanations for decisions made by high-risk AI systems that significantly affect their rights. Additionally, they have reformed the role of the EU AI Office, assigning it the responsibility of monitoring the implementation of the AI regulations.
Principles of Trustworthy AI
The EU AI Act incorporates and upholds the seven principles of trustworthy AI, which are as follows:
- Human agency and oversight: AI systems must not replace human decision-making in areas essential for human dignity.
- Technical robustness and safety: AI systems should be designed and developed to minimize the risk of harm to individuals, society, and the environment.
- Privacy and data governance: AI systems must respect individuals’ privacy rights and adhere to appropriate data governance measures.
- Transparency: Providers of AI systems are required to make certain information available to users, including details about the system’s functioning and training.
- Diversity, non-discrimination, and fairness: AI systems should not be used to discriminate against individuals based on protected characteristics and must promote diversity and fairness.
- Societal and environmental well-being: AI systems should contribute to the well-being of all individuals, both in the present and for future generations.
- Accountability: Providers of AI systems must be able to explain the system’s workings and be accountable for the decisions made by their AI systems.
The EU AI Act is a comprehensive piece of legislation for ensuring the trustworthy operation of AI systems. Its seven principles provide a framework for the development and use of AI that guarantees its safety, ethicality, and societal benefit, and the Act sets out specific requirements for high-risk AI systems within a clear regulatory structure. It is currently undergoing negotiations, but it reflects the EU’s commitment to regulating AI in a manner that safeguards individuals, society, and the environment.
The Future of the EU AI Act
The EU AI Act is still under negotiation, but it will have a significant impact on the future of AI regulation in the EU. Negotiations are expected to conclude in 2023, after which the member states will implement the Act.
The next steps for the EU AI Act include:
- The European Parliament and the Council of the European Union will continue negotiating the Act.
- The European Commission will prepare implementing acts, such as guidelines and regulations.
- The member states will implement the Act into their national laws.
However, the future of AI regulation in the EU remains uncertain. It is unclear whether the Act will be finalized in 2023, and even once it is, how member states will implement it remains to be seen. The EU AI Act is a significant piece of legislation, but it is just one piece of the puzzle: the EU will need to continue developing its AI regulatory framework to ensure that AI is used in a safe, ethical, and beneficial way.
The potential impact of the EU AI Act on the global AI landscape is significant. The Act is the first comprehensive legislation on AI and is likely to be used as a model by other countries. The Act could also lead to increased cooperation between nations on AI regulation.
Here are some of the potential benefits of the EU AI Act:
- The Act could help ensure that AI is used safely, ethically, and beneficially.
- The Act could help protect individuals from AI’s potential harms, such as discrimination and privacy violations.
- The Act could ensure that the EU is a leader in developing and using AI.
Here are some of the potential challenges of the EU AI Act:
- The Act could be complex and difficult to implement.
- The Act could stifle innovation in the AI sector.
- The Act could be challenged by companies that do not want to be subject to its requirements.
Overall, the EU AI Act is a significant piece of legislation that can shape the future of AI in the EU and around the world. The Act is still under negotiation, but it is clear that the EU is committed to regulating AI in a way that protects individuals, society, and the environment.