The long-awaited Artificial Intelligence Act has finally reached the end of the legislative process. On the 21st of May 2024, the European Council approved the final text of the regulation governing artificial intelligence in the European Union.
This achievement for the EU has been in the works since the regulation was first proposed in April 2021, leading up to the political agreement between the European Council and Parliament in December 2023 and, finally, the adoption of the Act by the European Parliament in March 2024 in a landslide vote.
In its announcement, the European Council describes the Act as a “ground-breaking law” and “flagship legislation”. Many have expressed their admiration for the EU’s determination in regulating such advancements in technology whilst ensuring individuals’ rights are protected. Mathieu Michel, Belgian State Secretary for Digitalisation, Administrative Simplification, Protection of Privacy and Buildings Agency, remarks:
"The adoption of the AI act is a significant milestone for the European Union. This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies.”
The AI Act in a Nutshell
The entities targeted by the Regulation cover a range of players in a typical supply chain, from AI providers (those developing AI systems) and AI deployers (those using AI systems under their authority) to product manufacturers and importers.
Extending beyond the borders of the European Union, the regulation applies to AI providers and deployers placing AI systems on the EU market, or where the output of an AI system is used in the EU, irrespective of whether the provider or deployer is established within or outside the EU.
The Act adopts a risk-based approach, with more stringent rules applying to AI systems which pose higher risks to the rights and fundamental freedoms of individuals. In an ambitious move, General Purpose AI Models also fall within the Act’s scope. Certain AI applications are prohibited outright by the AI Act, whilst other applications deemed “minimal risk” are not subject to the regulation and can be used freely. Those posing limited risk are simply subject to transparency obligations, whereby AI providers must make it clear that AI is involved in the service or tool being made available.
The Act places most emphasis on “high risk” AI systems which, although not prohibited, are subject to numerous obligations under the regulation. These include establishing risk management and quality management systems, conducting fundamental rights impact assessments, and adhering to transparency obligations.
Entry into Force
The AI Regulation is set to apply two years after it enters into force, with specific provisions applying earlier and others being afforded a lengthier period. Providers, deployers and users alike now await the Act’s publication in the Official Journal of the EU, which will start the final countdown to a new digital age for Europe.