The Only Lever Left
74% of European listed companies use American email providers. 89% of German enterprises consider themselves technologically dependent on foreign providers. The AI Act exists in this context. Reading it only as a compliance problem means missing the picture.
TL;DR: The AI Act is industrial policy. Europe is in structural technological dependence and regulation is the only lever where it still has global weight. The “Brussels Effect” (ability to export standards) is contested but likely for high-risk AI systems. In November 2025 the Digital Omnibus delayed implementation by 16 months, but the direction remains the same. Those reading the AI Act only as a regulatory checklist are looking at the tree and missing the forest.
The numbers on Europe’s technological position are known to insiders. They rarely enter the AI Act debate.
A Proton report from October 2025 analyzed the DNS records of European listed companies: 74% use American email providers. These are not startups but publicly listed companies with governance and security obligations. A Bitkom survey of German companies with more than 20 employees found that 89% consider themselves technologically dependent on foreign providers.
The EPRS report from the European Parliament completes the picture. Of the 100 largest global digital platforms by market capitalization, European firms account for only 2% of the combined value. In cloud computing, hyperscalers, and foundation AI models, Europe is a net importer.
This context changes how you read the AI Act. It’s not just about protecting European citizens from algorithms. It’s about using the only lever Europe has left to negotiate its position in a market dominated by others.
The Mechanism
The term “Brussels Effect” was coined by Anu Bradford in 2012 and developed in her 2020 book. The thesis is direct: the EU, thanks to its market size and institutional quality, manages to export its standards globally.
The mechanism works two ways. The de facto effect: companies wanting access to the European market adopt EU standards elsewhere, because maintaining two versions costs more than one. The de jure effect: other governments copy European rules because they work and reduce the cost of designing regulation from scratch.
GDPR is the canonical example. Privacy laws inspired by European regulation have been adopted in Brazil, Japan, California. Tech companies extended many GDPR protections to non-European users to simplify operations. The form of European regulation spread beyond the Union’s borders.
On the AI Act, academic literature is more nuanced.
A 2022 GovAI paper analyzed the conditions for Brussels Effect applied to artificial intelligence. The conclusion: de facto and de jure effects are likely, especially for high-risk systems from large American tech companies. Microsoft, Google, Meta operate in Europe with recruiting, credit, and content moderation systems. They’ll need to comply. And for many of these companies, it’s more economical to apply one global standard than to segment products by market.
The paper also identifies limits. The Brussels Effect works best when the EU market is unavoidable (it is for big tech), when regulation is perceived as high-quality (contested), and when credible alternatives don’t exist (China offers a different model). For low-risk AI systems or companies not operating in Europe, the effect will be smaller or absent.
An article in Policy Review proposes a complementary frame: the AI Act as "experimentalist governance". Not a model to export wholesale, but one approach among many under conditions of technological uncertainty. On this view, interaction with other regulatory models (United States, United Kingdom, China) will be more cooperative and less unidirectional than the Brussels Effect frame suggests.
The synthesis: the Brussels Effect on AI exists but is contested and uncertain. It is not guaranteed that European rules will become the global standard. Neither is it guaranteed that they will stay irrelevant. The game is open.
The Tactical Adjustment
In November 2025, the European Commission proposed the Digital Omnibus. The package includes AI Act modifications that generated headlines about “Europe backing down”.
The facts: requirements for high-risk AI systems were pushed back by about 16 months. The new deadlines are December 2027 for Annex III systems (recruiting, credit, healthcare) and August 2028 for systems embedded in regulated products. It's a significant delay.
But the AI Act’s structure remains intact. Risk categories remain the same. Obligations remain the same. What changes is the calendar, not the destination.
The Digital Omnibus is a tactical adjustment, not a strategic reversal. Europe is calibrating timing, not abandoning direction. Those reading the delay as “backing down” are confusing speed with trajectory.
The Missing Frame
The conversation about the AI Act in Italy revolves almost entirely around compliance. Which systems fall into high-risk categories. How much compliance costs. What sanctions you risk. These are legitimate questions, but incomplete.
The missing context is the set of numbers this piece opened with: 74% dependence on American email providers, 89% perceived technological dependence, 2% of the value of the top digital platforms. In this frame, the AI Act is not a regulatory conformity problem. It's a tool in a larger game about Europe's position in the global technology market.
Europe has few levers. It has no hyperscalers. It doesn’t have the dominant foundation models. It doesn’t have the venture capital base of the United States or the deployment scale of China. What it has is a 450-million-person market and institutional capacity to regulate that other blocs don’t.
Using this lever to influence global standards is industrial policy. Calling it just “consumer protection” is an incomplete description. Treating it only as “compliance” is missing the picture.
Microsoft has made alignment with European regulation an element of positioning. Meta chose the opposite path, delaying model releases in Europe and pressuring for weaker rules. They’re different strategies reflecting different readings of where the market is going. Neither treats the AI Act as a simple checklist.
Maybe we should ask why we do.
Sources
Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford University Press.
Siegmann, C. & Anderljung, M. (2022). The Brussels Effect and Artificial Intelligence: How EU regulation will impact the global AI market. GovAI, arXiv:2208.12645.
Policy Review. (2025). Brussels effect or experimentalism? The EU AI Act and global standard-setting.
European Commission. (2025). Digital Omnibus on AI Regulation Proposal.
European Parliamentary Research Service. (2025). European Software and Cyber Dependencies.
TechReport. (2025). Europe’s Digital Dependence: The Risks of the EU’s Reliance on US Tech.