The EU AI Act Goes Live Today and Nobody Knows What It Means

The EU AI Act officially entered its enforcement phase today. Companies have six months to comply. I spent the last week talking to legal teams, and the consensus is that nobody has any idea what compliance actually looks like.

The law is 459 pages. The guidance documents are longer.

Here’s what’s clear: AI systems are categorized by risk level. High-risk applications, like hiring tools, credit scoring, and law enforcement, face strict requirements. Generative AI carries transparency obligations. Certain uses, like social credit scoring, are banned outright.

Here’s what’s unclear: basically everything else.

One lawyer I spoke with represents three major tech companies. She’s hired consultants to interpret the law. The consultants disagree on fundamental questions. “We might not know if we’re compliant until someone sues us,” she said.

The penalties are brutal: up to 7% of global annual revenue. For a company like Google? That’s a potential $21 billion fine. Nobody’s taking this lightly.
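For scale, here’s the back-of-envelope math behind that figure. The revenue base is my assumption, roughly Alphabet’s recent annual revenue, not a number from the Act or from Google.

```python
# Back-of-envelope check of the headline fine. The Act's top penalty tier is
# 7% of worldwide annual revenue; the ~$300 billion revenue base is an
# assumption (roughly Alphabet's recent annual revenue), used only to show
# where the "$21 billion" figure comes from.
annual_revenue_usd = 300e9   # assumed revenue base
top_tier_rate = 0.07         # 7% of global annual revenue

potential_fine = top_tier_rate * annual_revenue_usd
print(f"Potential maximum fine: ${potential_fine / 1e9:.0f} billion")  # ~$21 billion
```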

But implementation is chaos. I talked to the head of compliance at a mid-size AI company. They’ve allocated $4 million just to figure out which regulations apply to their products. They’re not even fixing anything yet—just understanding requirements.

Smaller companies are screwed. A startup building an AI recruiting tool told me they’re spending 40% of their runway on compliance. Not product development. Not customer acquisition. Lawyers and auditors.

The stated goal is protecting citizens from harmful AI. Noble. The actual effect is cementing the dominance of companies rich enough to hire armies of compliance experts.

There’s a dark irony here: the EU wrote these rules to rein in big tech. Instead, they’re creating a moat around big tech that startups can’t cross.

One provision requires “meaningful human oversight” of high-risk AI systems. What does meaningful mean? Nobody knows. How much oversight? Unclear. Documentation requirements? Open to interpretation.

Companies are defaulting to maximum compliance because the alternative is existential risk. An AI safety researcher called this “regulation by ambiguity.” When the rules are vague, companies over-comply to be safe.

The transparency requirements for generative AI are particularly messy. You must disclose when content is AI-generated. Sounds simple. But what counts as AI-generated? If I use AI to edit one sentence in a 1,000-word article, is the whole thing AI-generated?

Nobody knows.

OpenAI, Anthropic, Google—they all published transparency reports. They’re beautiful documents full of vague promises and non-committal language. Technically compliant, practically meaningless.

Some companies are just pulling out of Europe. An AI-powered medical diagnostics startup told me they’re geofencing EU users. The compliance cost exceeds potential European revenue. Easier to just exit the market.
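For what it’s worth, “geofencing EU users” is usually as blunt as it sounds. Here’s a minimal sketch, assuming the request’s country has already been resolved upstream (say, from a GeoIP lookup); the names are hypothetical, not the startup’s actual code.

```python
# Minimal sketch of an EU geofence: refuse service when a request resolves to
# an EU member state. Country resolution (e.g. a GeoIP lookup) is assumed to
# have happened upstream; this function only makes the allow/block decision.
EU_COUNTRY_CODES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
    "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SK",
    "SI", "ES", "SE",
}

def is_blocked(country_code: str) -> bool:
    """Return True if the request should be refused under the EU geofence."""
    return country_code.upper() in EU_COUNTRY_CODES
```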

That’s the opposite of what the law intended.

The EU argues it’s setting global standards. Maybe. California is already copying portions of the AI Act. Other jurisdictions are watching. If these rules become the global standard, the EU wins.

But right now? It’s expensive chaos.

I asked a Brussels-based policy expert if this was intentional. He laughed. “Lawmakers understand AI about as well as AI companies understand law. This bill is a negotiation document between 27 countries written by people who mostly don’t understand the technology. Of course it’s incoherent.”

The optimistic take: this is version 1.0. The rules will clarify over time through enforcement cases and guidance updates. Eventually, we’ll have workable standards.

The pessimistic take: by the time the rules are clear, AI will have evolved three generations and none of this will be relevant anymore.

Meanwhile, compliance teams are hiring as fast as they can, legal bills are mounting, and everyone’s just hoping they don’t accidentally violate a rule nobody understands yet.

Welcome to the regulated AI future. It’s expensive, confusing, and nobody knows if it’ll actually make AI safer.