The EU AI Act, Decoded

After three years of negotiations, Europe’s AI rulebook is finally here. The question is no longer “will AI be regulated?” but “how, and when?” From risk categories to real-world impact, this is the EU AI Act.

If it feels like Europe has been talking about regulating artificial intelligence forever, that is because it almost has.

After more than three years of negotiations, rewrites, lobbying, and late-night trilogues, the EU’s Artificial Intelligence Act finally entered into force on August 1st, 2024. The legislation is the world’s first comprehensive, legally binding framework for AI.

Since then, implementation has followed a phased approach, with different obligations coming into effect in stages between February 2025 and August 2027. So if you’re a founder, operator, or investor, the real question is not what has been adopted, but what actually applies today.

And what is still very much in motion.

What is the EU AI Act?

At its core, the AI Act is a risk-based regulation. Rather than banning or “blessing” AI as a whole, it categorizes AI systems according to the level of risk they pose to people’s safety, rights, and freedoms.

There are four tiers:

  • Unacceptable risk: AI systems that are banned outright, such as social scoring, certain forms of biometric surveillance, or systems designed to manipulate behaviour without users’ awareness.
  • High-risk: AI systems used in sensitive, regulated areas such as hiring, education, credit scoring, healthcare, or law enforcement. These systems are allowed, but only under strict conditions related to data quality, transparency, human oversight, and risk management.
  • Limited-risk: AI systems such as chatbots or generative AI tools, which must meet transparency obligations, for example clearly informing users that they are interacting with AI or that content has been AI-generated.
  • Minimal-risk: The vast majority of everyday AI tools (spam filters, photo enhancement, recommendation engines), which remain largely unregulated.
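
To make the tiers concrete, here is a minimal, purely illustrative sketch in Python of how a team might start bucketing its own AI inventory. The example systems and their tier assignments are assumptions for illustration, not legal classifications.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
        HIGH = "high"                  # allowed only under strict conditions
        LIMITED = "limited"            # transparency obligations apply
        MINIMAL = "minimal"            # largely unregulated

    # Hypothetical internal inventory: system name -> assumed tier.
    # Real classification requires a case-by-case assessment of the use case.
    inventory = {
        "cv-screening-ranker": RiskTier.HIGH,          # hiring is a regulated area
        "customer-support-chatbot": RiskTier.LIMITED,  # must disclose it is AI
        "email-spam-filter": RiskTier.MINIMAL,
    }

    for system, tier in inventory.items():
        print(f"{system}: {tier.value}")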

The ambition is clear: ensure that AI used in Europe is safe, transparent, non-discriminatory, and under human oversight, while still allowing innovation to happen.

How did we get here?

The EU AI regulatory journey began in April 2021, when the European Commission published its first proposal for AI regulation. What followed was nearly three years of intense back-and-forth between the Commission, the European Parliament, and Member States.

Parliament pushed hard for strong safeguards: human oversight, traceability, environmental considerations, and a technology-neutral definition that wouldn’t be obsolete in five years. Industry groups, meanwhile, warned against overregulation, compliance overload, and Europe “regulating itself out of innovation”.

By June 2024, a compromise text was finally adopted. Several obligations had been softened. Some were delayed. General-purpose AI models received a bespoke regime. The result: a framework that is undeniably ambitious, but also complex.

Is the AI Act finished?

Legislatively, yes. Practically, not at all.

The Act entered into force on 1 August 2024, but its application is phased:

  • 2 February 2025: bans on unacceptable-risk AI apply, along with early obligations around AI literacy.
  • 2 August 2025: rules for general-purpose AI models and the Act’s governance provisions kick in.
  • 2 August 2026: the bulk of obligations take effect, particularly for high-risk AI systems.
  • 2027–2030: further provisions roll out depending on system type.

Simply put, Europe has passed the law, but we are still very much in the implementation chapter.

Who is impacted, and who isn’t?

The AI Act has an extremely broad scope. It applies to any company that provides, deploys, or imports AI systems used in the EU, regardless of where that company is based. US, UK, or Asian AI providers are firmly in scope if their systems touch the European market.

That said, not everyone is equally impacted. Startups and SMEs using off-the-shelf, low-risk AI tools will see limited obligations. Companies developing or deploying high-risk systems, particularly in regulated sectors, face the heaviest compliance burden.

The key question every organisation must answer is simple: What AI do we use, and where does it sit in the risk taxonomy?

What are the main obligations?

For companies in scope, the AI Act is less about paperwork and more about governance. Core obligations include:

  • Mapping and documenting AI use cases: for example, listing all AI systems used internally or sold externally, what they do, where they are deployed, and what data they rely on (a register sketch follows this list).
  • Classifying systems by risk: determining whether a system falls into minimal, limited, or high-risk categories, based on its use case and potential impact on individuals.
  • Defining roles: clarifying whether a company acts as an AI provider (developing or placing an AI system on the market) or a deployer (using an AI system in its operations), as obligations differ significantly between the two.
  • Ensuring transparency toward users: informing users when they are interacting with an AI system, when content is AI-generated, or when automated decision-making is involved.
  • Implementing human oversight: ensuring that a trained human can intervene, override, or stop an AI system when necessary, particularly for high-risk use cases.
  • Monitoring systems over time: tracking performance, bias, errors, and unintended outcomes once systems are deployed, rather than treating compliance as a one-off exercise.
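
None of this requires exotic tooling. As a purely illustrative sketch (the field names and example values below are assumptions, not a format prescribed by the Act), an internal AI register can start as little more than one structured record per system:

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """One entry in a hypothetical internal AI register."""
        name: str
        purpose: str                   # what the system does
        deployed_in: str               # where it is used
        data_sources: list[str]        # what data it relies on
        role: str                      # "provider" or "deployer"
        risk_tier: str                 # "minimal", "limited", or "high"
        human_overseer: str            # who can intervene, override, or stop it
        monitoring_notes: list[str] = field(default_factory=list)

    # Example entry; all values are invented for illustration.
    record = AISystemRecord(
        name="cv-screening-ranker",
        purpose="Ranks incoming job applications",
        deployed_in="HR department, EU-wide",
        data_sources=["CVs", "job descriptions"],
        role="deployer",
        risk_tier="high",
        human_overseer="Head of Talent Acquisition",
    )
    record.monitoring_notes.append("Q3 review: check ranking drift across applicant groups")
    print(record)

The format matters less than the habit: every system gets an owner, a risk tier, and a monitoring trail that outlives the launch.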

Whatever your stance on regulation, these obligations are to be taken seriously. Non-compliance can be expensive: fines can reach €35 million or 7% of global annual turnover, whichever is higher.

Pushback, pauses, and regulatory fatigue

Unsurprisingly, the AI Act has not landed quietly.

Big Tech has repeatedly criticized it as cumbersome and innovation-slowing. In 2025, executives from Meta and OpenAI publicly warned Europe against moving too slowly while the rest of the world accelerates. At the same time, French regulators such as Arcep have cautioned that Europe risks repeating past mistakes if it fails to pair regulation with serious AI infrastructure investment.

Even startups and funds started to push back.

In June 2025, nearly 50 European tech leaders, including Airbus, BNP Paribas, Partech, and Pigment, signed an open “stop the clock” letter calling for a two-year pause in implementation. Their argument: the rules are complex, guidance is still missing, and the technology is evolving faster than the law.

Brussels’ response was blunt: there is no stopping the clock!

Enter the Digital Omnibus

Against this backdrop of regulatory strife comes the Digital Omnibus, an attempt by the Commission to simplify, align, and de-conflict Europe’s growing stack of digital rules (AI Act, GDPR, DSA, DMA).

For the AI Act, the Omnibus makes several meaningful adjustments:

  • Centralizing enforcement under the EU AI Office: a new body within the European Commission responsible for overseeing general-purpose AI models and coordinating enforcement across Member States.
  • Easing registration requirements for some lower-risk systems: for example, reducing documentation obligations for non-high-risk AI tools used internally.
  • Delaying transparency obligations for generative AI until early 2027.
  • Expanding real-world testing options for high-risk AI.
  • Giving SMEs more flexibility on quality management systems: allowing lighter compliance processes where risks are limited, and resources are constrained.

These changes do not rewrite the AI Act, but they do acknowledge a simple reality: compliance is hard when guidance is incomplete.

So where are we today?

As of now, the AI Act is law, but not yet fully operational. Most companies still have time to comply, though not unlimited time. The smart move for founders is not panic compliance, but preparation: mapping AI use, understanding risk exposure, and building internal literacy.

A final thought

The EU AI Act will likely do for AI what GDPR did for data: frustrate many, shape global norms, and eventually become the baseline. The real risk for European companies is not the regulation itself; it is waiting too long to engage with it.

Decoded, the message is simple: the rules are coming. The question is whether Europe’s innovators will choose to meet them early… or scramble later.
