The European Commission welcomes the political agreement reached between the European Parliament and the Council on the Artificial Intelligence Act (AI Act), first proposed by the Commission in April 2021. The agreement marks a significant step toward establishing a comprehensive legal framework for Artificial Intelligence (AI) on a global scale.
The AI Act, designed to translate European values into a new era of technology, follows a risk-based approach, dividing AI systems into four categories: minimal risk, high risk, unacceptable risk, and specific transparency risk.
Key Highlights of the Artificial Intelligence Act:
1. Minimal Risk:
• AI systems like recommender systems or spam filters fall under minimal risk.
• These systems face no obligations under the Act; responsible innovation is encouraged on a voluntary basis.
2. High Risk:
• AI systems categorized as high-risk must adhere to stringent requirements.
• Requirements include risk-mitigation systems, high-quality data sets, detailed documentation, human oversight, and robustness.
• Examples of high-risk AI systems encompass critical infrastructures, medical devices, educational access systems, recruitment systems, law enforcement applications, and more.
3. Unacceptable Risk:
• AI systems posing a clear threat to fundamental rights will be banned.
• Prohibited applications include those manipulating human behavior, such as certain voice-assisted toys encouraging dangerous behavior in minors, 'social scoring' systems by governments or companies, and specific uses of predictive policing.
• Some applications of biometric systems, such as emotion recognition systems at the workplace and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces, are also prohibited.
4. Specific Transparency Risk:
• Users must be informed when interacting with AI systems like chatbots.
• AI-generated content, such as deep fakes, needs to be labeled.
• Providers must ensure synthetic content is marked in a machine-readable format, allowing users to detect artificially generated or manipulated content.
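The Act requires machine-readable marking of synthetic content but the transparency points above do not prescribe a specific format. As a purely illustrative sketch (the metadata schema, field names, and functions below are hypothetical, not an official standard), such a disclosure could be a simple metadata record attached to generated content:

```python
import json

def label_synthetic(content_id: str, generator: str) -> str:
    """Return a JSON metadata record flagging content as AI-generated.

    Hypothetical schema for illustration only; the AI Act mandates
    machine-readable marking but does not define this format.
    """
    record = {
        "content_id": content_id,
        "ai_generated": True,
        "generator": generator,
    }
    return json.dumps(record)

def is_ai_generated(metadata_json: str) -> bool:
    """Check the machine-readable flag in a metadata record."""
    return json.loads(metadata_json).get("ai_generated", False)
```

A downstream tool could then detect artificially generated content by inspecting the flag rather than the media itself.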
Companies failing to comply with the rules will face fines ranging from €35 million or 7% of global annual turnover (whichever is higher) for banned AI applications, to €7.5 million or 1.5% for supplying incorrect information. Proportionate caps are in place for SMEs and startups.
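The "whichever is higher" rule means the applicable cap scales with company size. A minimal sketch of that computation, using the figures quoted above (the function and example turnover values are illustrative):

```python
def fine_cap(global_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Upper bound of a fine: the higher of a fixed amount
    or a percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Banned AI applications: €35 million or 7% of turnover.
# For a company with €1 billion turnover, 7% (€70M) exceeds the fixed cap.
banned_cap = fine_cap(1_000_000_000, 35_000_000, 0.07)  # → 70000000.0

# Supplying incorrect information: €7.5 million or 1.5%.
# For €100 million turnover, 1.5% (€1.5M) is below the fixed cap, which applies.
info_cap = fine_cap(100_000_000, 7_500_000, 0.015)  # → 7500000.0
```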
Dedicated rules for general-purpose AI models ensure transparency along the value chain. Powerful models with systemic risks face additional binding obligations related to risk management, incident monitoring, model evaluation, and adversarial testing.

National competent market surveillance authorities will supervise rule implementation at the national level. The creation of a European AI Office within the European Commission will coordinate at the European level, becoming the first global body enforcing binding rules on AI.
The political agreement awaits formal approval by the European Parliament and the Council. The AI Act will become applicable two years after entry into force, with specific provisions enforced earlier.
In a bid to bridge the transitional period, the Commission will launch an AI Pact, convening AI developers globally to voluntarily implement key AI Act obligations ahead of legal deadlines.
The European Union continues its commitment to promoting trustworthy AI at international levels through forums like the G7, OECD, Council of Europe, G20, and the UN. Recent support for the G7 leaders' agreement on International Guiding Principles reinforces the EU's dedication to shaping global AI standards.
The Joint Research Centre (JRC) has played a crucial role, providing independent, evidence-based research to shape the EU's AI policies and ensure effective implementation. The AI Act stands as a testament to the EU's commitment to fostering responsible AI innovation while safeguarding fundamental rights.