EU AI Act

Updated December 3, 2024

The European Union Artificial Intelligence Act is the world's most prominent AI legislation.

The AI Act was published in the Official Journal of the EU on July 12, 2024 (see 'Legislative Process' below for details on how it was adopted) and entered into force on August 1, 2024. The AI Act will fully take effect two years after entry into force (i.e. on August 2, 2026), with some obligations applying earlier or later (see timelines from Future of Life Institute and International Association of Privacy Professionals).

Brief summaries of some key provisions of the AI Act are below. More detailed summaries are available from various sources.

Regulatory Framework for AI Systems

The EU AI Act regulates AI "systems" (i.e. software applications), using a definition of "artificial intelligence" based upon the one adopted by the OECD. AI systems are divided into the following categories (subject to some general exceptions, e.g. for military/defense systems); a schematic code sketch of the tiering follows the list:

  • prohibited systems -- These include systems for manipulation of human behavior, social scoring, predictive policing, emotion recognition in the workplace, and (with certain exceptions) real-time remote biometric identification by law enforcement in public spaces.
  • high-risk systems -- This is the most important regulatory category in terms of commercial impact. It includes systems that pose a "significant risk" to health, safety, or fundamental rights and are intended to be used for specified purposes including (but not limited to) education, employment, critical infrastructure, public services, law enforcement, border control and administration of justice. High-risk systems will be subject to detailed regulation, including obligations associated with risk management, data governance, monitoring and documentation / record-keeping. The strictest obligations apply to "providers" of high-risk systems, with "deployers" of high-risk systems also subject to significant obligations.
  • limited risk systems -- Systems with limited potential for manipulation are subject to transparency requirements, including informing users that they are interacting with an AI system and identifying AI-generated content.
  • minimal risk systems -- AI systems not falling into the above categories are not subject to regulation.
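
For readers who prefer a schematic view, the following minimal Python sketch models the four-tier structure. It is purely illustrative: the RiskTier and OBLIGATIONS names are invented here, and the obligation lists paraphrase the summaries above rather than the Act's text.

    from enum import Enum

    class RiskTier(Enum):
        """Illustrative model of the AI Act's four risk tiers."""
        PROHIBITED = "prohibited"        # e.g. social scoring, predictive policing
        HIGH_RISK = "high-risk"          # e.g. employment, education, law enforcement uses
        LIMITED_RISK = "limited-risk"    # transparency duties only
        MINIMAL_RISK = "minimal-risk"    # no AI Act obligations

    # Rough mapping of tiers to the headline obligations summarized above.
    OBLIGATIONS = {
        RiskTier.PROHIBITED: ["banned from the EU market"],
        RiskTier.HIGH_RISK: ["risk management", "data governance",
                             "monitoring", "documentation / record-keeping"],
        RiskTier.LIMITED_RISK: ["disclose AI interaction",
                                "identify AI-generated content"],
        RiskTier.MINIMAL_RISK: [],
    }

    for tier in RiskTier:
        print(f"{tier.value}: {OBLIGATIONS[tier] or 'no obligations'}")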

Companies will be able to demonstrate compliance with AI Act requirements (under defined circumstances) by adhering to harmonised standards that are now being developed. The European Commission Joint Research Centre published a Science for Policy Brief, Harmonised Standards for the European AI Act, in October 2024, and this October 2024 article by law firm Skadden explains the standardization process.

Some have argued that this regulatory approach may be too prescriptive and/or inflexible. One of the architects of the AI Act has suggested that it may disadvantage EU companies, and others have pointed out gaps in the AI Act's coverage that may require further legislation to remedy (such as AI that enables bioweapons).

Regulation of "General Purpose AI Models"

The AI Act defines a "general purpose AI [GPAI] model" as one that "displays significant generality and is capable of competently performing a wide range of distinct tasks", with certain exceptions. GPAI models will be subject to obligations regarding (a) technical documentation and information, (b) copyright policies and (c) summaries of training data, except that obligation (a) does not apply to GPAI models released under a free and open-source license with model weights made publicly available (unless the model poses "systemic risk", as discussed below). (AI Act, Arts. 3(63) & 53)

GPAI models posing "systemic risk" are subject to stricter obligations. Models posing "systemic risk" are those that (a) have "high impact capabilities" based upon technical evaluations, (b) are so designated by the European Commission or (c) are presumed to have high impact capabilities because they are trained using computing power greater than 10^25 floating point operations. (AI Act, Art. 51) In addition to the general obligations for GPAI models, "systemic risk" models will be subject to obligations regarding (1) model evaluation and testing, (2) risk management, (3) incident reporting and (4) cybersecurity. (AI Act, Art. 55)
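
As a rough, back-of-the-envelope illustration of the 10^25 FLOP threshold, the Python sketch below uses the common "6 x parameters x training tokens" approximation for dense-transformer training compute. That heuristic, the function name and the example model are assumptions of this sketch, not anything specified in the Act.

    # The AI Act's systemic-risk presumption threshold (Art. 51).
    SYSTEMIC_RISK_FLOPS = 1e25

    def estimated_training_flops(n_params: float, n_tokens: float) -> float:
        """Common ~6 * N * D estimate of dense-transformer training compute."""
        return 6 * n_params * n_tokens

    # Hypothetical model: 70 billion parameters, 15 trillion training tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"estimated training compute: {flops:.2e} FLOPs")
    print("presumed systemic risk" if flops >= SYSTEMIC_RISK_FLOPS
          else "below the 10^25 FLOP presumption threshold")
    # Prints ~6.30e+24 FLOPs, i.e. below the threshold.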


Obligations for GPAI models, including transparency requirements, apply from August 2, 2025, while providers of GPAI models placed on the market before that date have until August 2, 2027 to comply. The Stanford Center for Research on Foundation Models has published a detailed analysis, Foundation Models Under the EU AI Act (August 2024).

Open Source AI Models

The AI Act applies reduced obligations to open source AI models (1) for which model weights and information on model architecture and training are made publicly available and (2) that are not monetized, provided the models do not pose "systemic risk".
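
Expressed as a simple predicate, the carve-out looks roughly like the Python sketch below. The function and parameter names are invented for illustration, and the check simplifies the Act's actual licensing language.

    def open_source_exemption_applies(weights_public: bool,
                                      architecture_and_training_info_public: bool,
                                      monetized: bool,
                                      systemic_risk: bool) -> bool:
        """Schematic check of the AI Act's reduced-obligation carve-out."""
        return (weights_public
                and architecture_and_training_info_public
                and not monetized
                and not systemic_risk)

    # A freely licensed model with published weights and training details:
    print(open_source_exemption_applies(True, True, False, False))  # True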


Regulatory / Responsible Bodies

The AI Act establishes new institutions:

  • an AI Office within the European Commission (advised by a scientific panel of independent experts), which became operational on June 16, 2024 and initially has approximately 140 employees in the following units:
    1. Regulation and Compliance, to ensure uniform application of the AI Act
    2. AI Safety unit, to identify risks and mitigation measures of highly capable general-purpose models
    3. Excellence in AI and Robotics, to support R&D
    4. AI for Societal Good, to engage in beneficial AI projects
    5. AI Innovation and Policy Coordination
  • an AI Board comprising representatives of EU member states (advised by an advisory forum of companies, academics and NGOs), which met for the first time in September 2024.

Each EU member state must appoint one or more national authorities with responsibility for implementation of the AI Act and associated market surveillance.


Law firm Freshfields produced a useful, concise summary of the EU and national regulators responsible for enforcing the AI Act in July 2024.

Implementation

The European Commission has begun to take a variety of specific implementation actions for the AI Act.

The European Parliament Internal Market and Consumer Protection (IMCO) committee and the Civil Liberties, Justice and Home Affairs (LIBE) committee have established a joint working group on implementation of the AI Act.


A summary of 130 European Commission / EU AI Office implementation actions for the AI Act is in this July 2024 blog by European Parliament staffer Kai Zenner.


Legislative Process

Key steps in adoption of the EU AI Act were:


  • May 2021 -- The European Commission proposed a draft of the AI Act.
  • June 2023 -- The European Parliament proposed amendments to the AI Act, including fairly strict provisions to address "foundation models" (i.e. large language models, diffusion models and similar), which had risen rapidly to prominence since the legislation was proposed.
  • December 2023 -- Political agreement on the key provisions of the AI Act was reached in "trilogue" negotiations among the European Commission, the Council of the European Union (comprising representatives of EU member states) and the European Parliament. The agreement retains provisions on foundation models, but these are substantially less restrictive than those proposed by the European Parliament. Following the political agreement, the European Commission published Q&As on the AI Act.
  • February 2024 -- The Committee of Permanent Representatives (acting for the EU Council) endorsed a proposed final draft of the AI Act. The European Parliament committees on Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) then endorsed the proposed text, and the European Parliament published an updated version of the AI Act.
  • March 2024 -- The European Parliament gave final approval to the AI Act on March 13, 2024.
  • April 2024 -- The European Parliament released a final, corrected version of the AI Act, addressing minor drafting errors.
  • May 2024 -- The AI Act received final approval from the Council of the EU on May 21, 2024.
  • July 2024 -- The AI Act was published in the Official Journal of the EU on July 12, 2024.