The European Union's AI Act entered full enforcement in March 2026, marking the end of a phased implementation period that began with the Act's passage in 2024. The regulation now applies to all AI systems deployed within the EU, regardless of where the developer is based.

What the Act requires

The AI Act establishes a risk-based framework. High-risk systems — those used in hiring, credit scoring, healthcare, law enforcement and critical infrastructure — must complete formal conformity assessments, register in a central EU database, and implement ongoing monitoring.

General-purpose AI compliance

Providers of large foundation models now face the Act's general-purpose AI (GPAI) obligations. OpenAI, Anthropic, Google and Mistral have all published their compliance documentation, covering training data transparency, copyright disclosures, and adversarial testing results.

Enforcement teeth

Violations of the AI Act can result in fines of up to €35 million or 7% of global annual turnover — whichever is higher. The newly established European AI Office, headquartered in Brussels, is responsible for enforcement and has already opened preliminary inquiries into several companies.
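The penalty cap is simply the greater of the two figures, which matters because for large firms the turnover-based ceiling dominates. A minimal sketch of that calculation (the function name is ours for illustration; the €35 million floor and 7% rate are the figures stated in the Act):

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Top-tier AI Act penalty cap: the higher of EUR 35 million
    or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor, so the turnover figure applies.
print(max_ai_act_fine(1_000_000_000))
```

For any company with global turnover above €500 million, the 7% figure exceeds the fixed floor, so the cap scales with company size rather than staying flat.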

The regulation is being watched closely by legislators in the US, UK and Asia as a potential template for domestic AI governance frameworks.