The AI Act

After a remarkable journey that began in April 2021 with the European Commission's proposal, this long-awaited moment has finally arrived: the AI Act came into force on 1 August 2024.

The need for regulation

The EU’s AI Act is the Union’s response to the transformative impact of artificial intelligence across sectors. While AI has driven significant advances in many fields, its rapid evolution has also introduced challenges, including the proliferation of misinformation, privacy concerns, and potential biases in decision-making processes.

To address the risks posed by both current and future advances in AI, the European Union has embarked on an ambitious effort to construct a comprehensive regulatory framework. The AI Act represents the world's first all-encompassing legal framework designed to govern artificial intelligence.

The AI Act adopts a risk-based approach, categorising AI systems according to their potential impact. High-risk applications, such as those employed in healthcare or critical infrastructure, are subject to stringent requirements, including thorough testing and certification procedures. In contrast, minimal-risk systems face less stringent oversight. The legislation explicitly prohibits AI systems that pose significant threats to individuals or society, including those designed to manipulate human behaviour or perpetuate discrimination.

By establishing clear guidelines while encouraging innovation, the AI Act aims to foster a trustworthy environment for AI development and deployment. It offers support to startups and small and medium-sized enterprises (SMEs), positioning the EU as a global leader in ethical AI practices.

Risk-based approach

As mentioned above, a central feature of the AI Act is its risk-based approach to regulating AI systems, imposing varying degrees of requirements depending on the potential harm they pose.
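To make the tiered structure concrete, the short Python sketch below models the taxonomy described in the sections that follow. The tier names mirror the Act's categories, but the one-line obligation summaries and the example classifications are simplifications drawn from this article, not the legal text.

    from enum import Enum

    class RiskTier(Enum):
        """The AI Act's risk tiers, from most to least heavily regulated."""
        UNACCEPTABLE = "prohibited"
        HIGH = "testing, risk assessment and certification before market entry"
        TRANSPARENCY = "disclosure obligations, e.g. labelling AI-generated content"
        MINIMAL = "little or no additional oversight"

    # Illustrative mapping of example systems to tiers (paraphrasing this article).
    EXAMPLES = {
        "social scoring system": RiskTier.UNACCEPTABLE,
        "CV-screening tool used in employment": RiskTier.HIGH,
        "generative chatbot": RiskTier.TRANSPARENCY,
        "spam filter": RiskTier.MINIMAL,
    }

    for system, tier in EXAMPLES.items():
        print(f"{system}: {tier.name} -> {tier.value}")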

Unacceptable Risk

AI systems considered to pose an unacceptable level of risk to individuals or society are prohibited. These include applications such as:

  • Social scoring systems that evaluate individuals based on social behaviour or personal characteristics.
  • Systems designed to manipulate or exploit vulnerable groups through cognitive behavioural manipulation.
  • Real-time remote biometric identification systems, including facial recognition, in public spaces.

There are limited, narrowly defined exceptions to this prohibition, primarily for law enforcement purposes.

High-Risk AI

AI systems identified as posing significant risks to health, safety, or fundamental rights are classified as high-risk and are subject to rigorous regulation. This category includes:

  • Those incorporated into products regulated by EU product safety legislation, such as toys, aviation, and medical devices.
  • Systems employed in critical infrastructure management, education, employment, law enforcement, and migration control.

These high-risk AI systems must undergo stringent testing, risk assessments, and certification processes before being placed on the market.

Transparency Requirements

While generative AI systems such as ChatGPT are not categorised as high-risk, they are subject to specific transparency obligations, including:

  • Clear disclosure to users that the content is AI-generated.
  • Measures to prevent the generation of illegal content.
  • Publication of summaries regarding the copyrighted data used in training the model.

These requirements aim to protect users and ensure the responsible development of generative AI technologies.
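As a rough illustration of the first obligation, the sketch below prepends a disclosure notice to generated text. The Act mandates disclosure but prescribes no particular wording or format, so the function name and the marker used here are hypothetical.

    def label_ai_content(text: str, model_name: str) -> str:
        """Prepend a disclosure that the text is AI-generated.

        Illustrative only: the AI Act requires disclosure, not this
        specific (hypothetical) marker format.
        """
        return f"[AI-generated content | model: {model_name}]\n{text}"

    print(label_ai_content("The AI Act entered into force on 1 August 2024.", "example-model"))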

General-purpose AI (GPAI) models

GPAI models are known for their versatility and wide-ranging capabilities, and they are subject to dedicated rules under the AI Act. All GPAI model providers must meet a common set of requirements: creating detailed technical documentation, sharing information with downstream users, complying with copyright law, and maintaining data transparency. These measures aim to promote accountability and enable informed decision-making for those implementing GPAI technologies.

For GPAI models identified as posing systemic risks, additional obligations will be enforced. These include conducting adversarial testing, performing thorough risk assessments, promptly reporting significant incidents to authorities, and implementing robust cybersecurity measures. 

To facilitate compliance, the EU will encourage adherence to codes of practice developed collaboratively with industry stakeholders. These codes will address GPAI-specific challenges, focusing on risk identification and management. 

Governance and Enforcement

To ensure effective implementation and enforcement of the regulation, the AI Act establishes a robust governance structure distributed among several bodies:

  1. The European Commission's AI Office: this will serve as the primary implementation body at the EU level and the enforcer for rules on general-purpose AI models. The AI Office will:
    • Monitor the effective implementation and compliance of general-purpose AI model providers.
    • Accept complaints from downstream providers regarding upstream providers' infringements.
    • Conduct evaluations of general-purpose AI models in order to:
      • Assess compliance when the information gathered through its investigative powers is insufficient.
      • Investigate systemic risks, particularly in response to qualified reports from the scientific panel of independent experts.
  2. The European Artificial Intelligence Board: This body will ensure uniform application of the AI Act across EU Member States and act as the primary forum for cooperation between the Commission and Member States.
  3. A scientific panel of independent experts: This panel will offer technical advice and input on enforcement, including issuing alerts about risks associated with general-purpose AI models.
  4. An advisory forum: Composed of diverse stakeholders, this forum will provide guidance to the AI Office.

Penalties

The AI Act introduces several penalties for non-compliance:

  • Up to 7.5 million euros or 1.5% of global annual turnover for supplying incorrect information to authorities.
  • Up to 15 million euros or 3% of global annual turnover for non-compliance with specific obligations related to high-risk AI systems.
  • Up to 35 million euros or 7% of global annual turnover (whichever is higher) for the most serious infringements, such as the use of prohibited AI practices.
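Because each tier caps the fine at a fixed amount or a share of turnover, the ceiling for a given case reduces to simple arithmetic. The sketch below computes it for the top tier; note that the Act, as summarised above, states "whichever is higher" explicitly only for that tier, so applying the same rule to the other tiers would be an assumption.

    def max_fine(fixed_cap_eur: int, pct_of_turnover: float,
                 global_annual_turnover_eur: int) -> float:
        """Upper bound of the administrative fine for a penalty tier
        where the higher of the two amounts applies."""
        return max(fixed_cap_eur, global_annual_turnover_eur * pct_of_turnover / 100)

    # Top tier: up to 35 million euros or 7% of global annual turnover.
    print(max_fine(35_000_000, 7, 1_000_000_000))  # 70000000.0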

Transitional period

On 2 August 2026, the regulation’s third phase of implementation will begin, marking the effective date for the majority of the Act’s provisions, including those on high-risk systems.

To bridge the transitional period before full implementation, the European Commission has initiated two key measures:

  1. The AI Pact: this initiative invites AI developers to voluntarily adopt key obligations of the AI Act ahead of the legal deadlines, fostering early compliance and responsible AI development.
  2. Guidance and Co-regulatory Instruments: the Commission is actively developing guidelines and facilitating the creation of co-regulatory instruments such as standards and codes of practice. As part of this effort, they have issued a call for participation in developing the first general-purpose AI Code of Practice.

These proactive steps aim to ensure a smooth transition to the new regulatory framework, promoting responsible AI development and deployment across the European Union while the AI Act's provisions are phased in.