
EU AI Act

Thierry Breton (European Commissioner for Internal Market)
Summary
The EU AI Act is the first comprehensive legal framework regulating artificial intelligence (AI) systems within the European Union. Published in the Official Journal on 12 July 2024 and in force since 1 August 2024, the Act sets out rules for the development, use, and market placement of AI technologies. Most of its provisions become applicable by 2 August 2026, allowing time for gradual implementation.
The Act categorises AI systems into four risk levels: unacceptable, high, limited, and minimal. Unacceptable-risk AI systems are prohibited, including those used for social scoring and, subject to narrow law-enforcement exceptions, real-time remote biometric identification in publicly accessible spaces. High-risk AI systems, commonly used in sectors such as healthcare, law enforcement, and education, are subject to strict requirements for safety, transparency, data governance, risk management, and human oversight. Limited-risk AI systems, such as chatbots or AI-generated deepfakes, must meet transparency obligations, informing users that they are interacting with AI. Minimal-risk AI systems, such as spam filters or video games, are largely exempt from regulation.
The AI Act applies to providers, deployers, and distributors of AI systems in the EU and also to those outside the EU whose AI systems or outputs are used within the Union. Providers of general-purpose AI models, such as the foundation models underlying services like ChatGPT, must adhere to specific transparency and governance rules. The Act also requires post-market monitoring and reporting of incidents for high-risk AI systems.
The AI Act is organised into 13 chapters that cover areas such as general provisions, prohibited AI practices, high-risk AI systems, transparency obligations, general-purpose AI, governance, and innovation support. Key sections include transparency requirements, risk management, data governance, record-keeping, and the establishment of regulatory sandboxes to foster innovation, particularly for SMEs.
Additionally, the European Artificial Intelligence Board is created to advise and assist in applying the Act consistently across Member States. Other bodies, such as the Advisory Forum and Scientific Panel of Independent Experts, provide technical expertise and advice on AI regulation. Member States will designate national competent authorities to ensure proper market surveillance and compliance.
The Act establishes the AI Office, responsible for monitoring compliance, issuing guidance, and coordinating enforcement across the 27 EU Member States. Penalties for non-compliance range from €7.5 million or 1% of annual global turnover up to €35 million or 7% of annual global turnover, whichever is higher, depending on the severity of the breach.
History
The EU Artificial Intelligence Act (EU AI Act) began its development in April 2021 when the European Commission officially proposed the regulation. The proposal was part of a larger initiative to regulate the rapidly evolving field of artificial intelligence (AI) in the European Union. The goal was to create a legal framework that ensures the safe and trustworthy development, use, and marketing of AI systems while promoting innovation and protecting public health, safety, and fundamental rights.
The origins of the AI Act are rooted in the European Commission's White Paper on Artificial Intelligence, published in February 2020. The paper outlined the need for a regulatory framework that could address the potential risks of AI technologies while fostering innovation. Following extensive discussions with industry stakeholders, policymakers, and the public, the Commission decided to move forward with a comprehensive AI regulation.
The AI Act was officially proposed on 21 April 2021. Its primary objective was to create a risk-based approach to AI regulation, categorising AI systems based on their potential risks to users. This includes provisions for banning certain harmful AI applications and setting requirements for high-risk AI systems used in critical sectors such as healthcare, education, and law enforcement. It also introduced transparency obligations for AI systems with limited risk and minimal oversight for low-risk AI applications.
Over the next few years, the proposed regulation went through several rounds of negotiation and revision. On 13 March 2024, the European Parliament approved the Act by a large majority. The Council of the European Union gave its unanimous approval on 21 May 2024, completing the legislative process.
On 12 July 2024, the AI Act was published in the Official Journal of the European Union, making it an official regulation. The Act came into force on 1 August 2024. However, the majority of its provisions will be enforced gradually, with full implementation expected by 2 August 2026. This timeline allows stakeholders, including AI system providers and deployers, sufficient time to comply with the new regulations.
One of the major milestones in the Act's history is the establishment of the AI Office. This body was created to oversee the implementation and enforcement of the AI Act across the 27 EU Member States. The AI Office operates under the Directorate-General for Communications Networks, Content and Technology (DG CNECT) and is responsible for monitoring compliance, coordinating enforcement efforts, and developing codes of practice for AI systems, particularly general-purpose AI models. The Office also plays a critical role in the development of regulatory sandboxes, where companies, especially small and medium-sized enterprises (SMEs), can test AI systems in a controlled environment.
The Act categorises AI systems into four risk levels—unacceptable, high, limited, and minimal. Unacceptable-risk AI systems, such as those used for real-time biometric identification in public spaces, are prohibited. High-risk AI systems, such as those used in critical infrastructure or employment, must adhere to strict safety, transparency, and oversight measures. Limited-risk AI systems are subject to transparency obligations, while minimal-risk AI systems, such as those used in video games, are not heavily regulated.
The AI Act is organised into 13 chapters, each containing articles that set out specific rules and guidelines. These chapters cover various aspects of AI regulation, including general provisions, prohibited AI practices, classification of high-risk AI systems, transparency obligations, governance, and innovation support. The Act also establishes a framework for post-market monitoring, market surveillance, and the creation of regulatory sandboxes.
The Act's enforcement mechanism includes a system of fines for non-compliance, ranging from €7.5 million or 1% of global annual turnover up to €35 million or 7% of global annual turnover, whichever is higher, depending on the severity of the violation. These penalties are designed to ensure that AI providers and deployers adhere to the regulations.
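The tiered "fixed amount or percentage of turnover, whichever is higher" cap structure can be sketched as follows. This is an illustrative sketch only (not legal advice); the tier values reflect Article 99 of the Regulation, and the function and tier names are invented for this example:

```python
# Illustrative sketch of the Act's fine caps (Article 99).
# Each tier pairs a fixed ceiling in euros with a percentage of
# worldwide annual turnover; the applicable maximum is the higher
# of the two. (For SMEs, the lower of the two applies instead;
# that case is omitted here for brevity.)
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # e.g. Art. 5 violations
    "other_obligations":    (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given breach tier."""
    fixed_cap, pct_cap = TIERS[tier]
    return max(fixed_cap, pct_cap * annual_turnover_eur)

# A company with €2 billion turnover breaching a prohibition:
# 7% of turnover (€140M) exceeds the €35M floor, so it governs.
print(max_fine("prohibited_practices", 2_000_000_000))
```

For small providers the fixed floor dominates: with €100 million turnover, 1% is only €1 million, so the cap for supplying incorrect information stays at €7.5 million.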
The EU AI Act is widely seen as a landmark piece of legislation, setting the standard for AI regulation globally. It has significant extraterritorial reach, requiring even non-EU companies to comply if their AI systems or outputs are used within the EU. The AI Office continues to work with national authorities to ensure smooth enforcement and compliance across the Union.
In 2024, the Act remains in its early enforcement stages, with the AI Office developing further guidelines, coordinating with Member States, and preparing for the full enforcement of the Act by 2026. The regulation is expected to have a major impact on the AI industry worldwide, influencing future AI governance frameworks in other regions.
Goals and Purpose
The goals and purpose of the EU AI Act are focused on ensuring the safe and responsible development, use, and regulation of artificial intelligence within the European Union. Below are the key points:
- Regulating AI Systems: The AI Act sets out clear rules for how AI systems can be developed, used, and marketed in the EU. The goal is to make sure that AI technologies are safe, transparent, and do not harm individuals or society.
- Risk-Based Approach: The Act classifies AI systems into different risk levels – unacceptable, high, limited, and minimal risk. This allows for different levels of regulation based on how likely an AI system is to cause harm.
- Protection of Fundamental Rights: A key goal is to protect people’s rights, such as privacy and safety, when interacting with AI systems. This includes making sure AI systems do not discriminate or violate human rights.
- Promoting Innovation: The Act also aims to support innovation, particularly for small and medium-sized enterprises (SMEs). It encourages AI development while ensuring safety and compliance through regulatory sandboxes, where new AI technologies can be tested under supervision.
- Supporting Transparency: The Act requires that certain AI systems, particularly high-risk ones, are transparent. This means users need to know when they are interacting with AI, how decisions are made, and the data used to train the AI systems.
- Accountability and Compliance: The AI Act ensures that AI providers, developers, and users are accountable for the systems they create or deploy. The AI Office monitors compliance and works with national authorities to enforce the rules.
- International Leadership: By setting clear standards for AI, the EU aims to be a global leader in AI governance, influencing how AI is regulated worldwide.
Impact
The EU AI Act is expected to have a significant impact on the development, use, and regulation of artificial intelligence (AI) systems in the European Union and globally. By establishing clear rules and guidelines, the Act ensures that AI technologies are used safely and responsibly, protecting the public from harmful applications. It focuses on safeguarding fundamental rights, such as privacy, safety, and non-discrimination, ensuring that AI systems do not infringe on these rights.
One of the Act's major impacts is the risk-based regulation of AI systems, categorising them into four levels: unacceptable, high, limited, and minimal risk. This allows for more stringent oversight of high-risk applications, especially in critical areas like healthcare, education, and law enforcement, while supporting innovation in lower-risk areas.
The creation of the AI Office plays a vital role in monitoring compliance, ensuring that AI providers and deployers meet the Act’s requirements. The Act also promotes innovation, particularly for small and medium-sized enterprises (SMEs), by providing controlled environments through regulatory sandboxes to test AI technologies.
Globally, the Act sets a benchmark for AI governance, influencing other regions to adopt similar frameworks. Its extraterritorial reach means non-EU companies must also comply if their AI systems are used within the EU, shaping international AI standards.