In a timely Platforum9 Session, we addressed one of the most pressing global policy issues in AI today. Andrej Savin, Professor of IT Law & Internet Law at Copenhagen Business School, examined the growing regulatory divide between the US and Europe on artificial intelligence.
With his extensive expertise in EU AI policy formulation, Savin provided key insights into how America’s de-regulatory approach under the Trump administration directly conflicts with Europe’s comprehensive AI Act, creating significant compliance challenges for businesses operating across both jurisdictions.
Historical Context
The current US approach to AI regulation isn’t unprecedented. Savin traced the philosophical roots of light-touch regulation back to the Clinton administration’s 1997 decision to regulate the internet loosely, rather than as a telecommunications service. “The Americans at that point chose to regulate it loosely because we simply didn’t know what the threats were,” Savin explained. However, the critical difference today is that “we have a pretty clear idea where the threats with artificial intelligence may come from.”
This historical parallel underscores the ideological nature of the current policy divergence. Unlike the 1990s internet landscape where uncertainty justified minimal regulation, today’s AI environment presents well-documented risks including bias, discrimination, and opacity in decision-making processes.
Europe’s Risk-Based Regulatory Framework
The EU AI Act represents a fundamentally different philosophical approach. “The EU’s regulatory approach is essentially product safety law,” Savin clarified. “It’s not a rights-based regulation. It does not assign rights that you can litigate in court. It is a risk-based regulation.”
The legislation categorises AI systems into four tiers: prohibited practices, high-risk systems, large language models, and everything else. This framework requires companies to assess risks and implement corresponding mitigation measures. While Savin acknowledged the approach is “burdensome,” he defended its necessity: “We do know that AI is risky. We do know that it’s biased. We do know that it discriminates.”
The enforcement mechanism includes substantial financial penalties and potential bans, though Savin noted that implementation remains largely untested. “We have no experience. We’ve just started looking at this, and we can’t really gauge” the practical impact of enforcement.
America’s Strategic De-regulation
The Trump administration’s AI strategy extends far beyond simple de-regulation. Implemented through executive order rather than legislation, it combines industrial policy with ideological positioning. Most concerning is the use of public procurement as an enforcement mechanism.
“No government contracted AI system will be allowed unless they’re what they call free from ideological bias,” Savin revealed. This criterion is entirely political, targeting systems designed to address discrimination or promote gender equality. The administration will use “requests for information” to screen potential suppliers, asking whether they maintain diversity, equality, and inclusion policies.
This approach has immediate extraterritorial effects. US embassies in Europe have already begun sending letters requesting confirmation that suppliers “do not have diversity, equality, and inclusion policy,” a requirement that has shocked European partners accustomed to mandatory equality frameworks.
The Infrastructure Advantage
Perhaps most significantly, the US strategy removes environmental constraints on AI infrastructure development. This creates substantial competitive advantages for American data centre operators while potentially exacerbating climate impacts. “Every time you type in a query in ChatGPT, that costs you a certain number of points in terms of your impact on nature,” Savin noted, highlighting the environmental costs of AI deployment.
The infrastructure implications extend beyond environmental concerns to resource allocation. Reports from Arizona describe water shortages linked to data centre operations, a preview of potential global resource conflicts as AI deployment accelerates.
Compliance Challenges for Global Businesses
For multinational companies, the regulatory divergence creates a complex web of overlapping compliance obligations. “You essentially have to comply with every legal system where you intend to export,” Savin explained. While large technology companies possess the resources to manage multiple regulatory frameworks, small and medium enterprises face disproportionate burdens.
The situation is complicated further by US state-level legislation. Despite federal de-regulation efforts, states like California, Texas, Georgia, Florida, and Minnesota maintain their own AI regulations. The new federal strategy aims to limit state authority through the Federal Communications Commission, but the patchwork of requirements remains challenging.
European Response and Future Prospects
Despite pressure to modify the AI Act, Savin advocates maintaining the current course. “The only thing that makes sense is to stay the course with the current EU AI Act,” he argued, though he acknowledged the legislation will require revision “in three or five years.”
The fundamental challenge for Europe lies not in regulatory approach but in structural economic factors. “If you want money to do something in the States, you go to a venture capitalist. If you want to do the same in Europe, you go to a bank, and banks are conservative lenders.” These capital market differences, combined with more rigid bankruptcy laws and fragmented markets, create innovation disadvantages that regulation alone cannot address.
However, Savin identified potential opportunities arising from America’s increasingly restrictive immigration policies. “One of the reasons why the United States is so successful is that its immigration policy has been very liberal. If you had a degree in engineering, computer engineering, you could relatively easily find a job.” The current administration’s hostility to immigration could redirect talent flows toward Europe, provided the EU reforms its own immigration framework.
Implications for Legal Practice
The regulatory complexity creates substantial opportunities for lawyers. The AI Act regulates not only system producers but “everybody in the value chain,” making compliance determination challenging. “If I import AI technology from the United States and then modify that technology, both the original producer in the United States and me sitting in Europe would be under various parts of the AI Act.”
This complexity extends to use case analysis, requiring detailed assessment of whether specific AI applications fall under the Act’s scope and determining appropriate compliance packages. The work is substantial enough that, ironically, AI compliance tools may themselves become essential for managing AI compliance requirements.
Conclusion: A Fragmented World
The US-Europe AI policy divergence represents more than regulatory disagreement: it embodies fundamentally different visions of technological governance. The US de-regulatory approach prioritises innovation speed and market dominance, while Europe emphasises risk mitigation and ethical considerations.
For businesses operating globally, this creates unprecedented compliance complexity requiring careful navigation of competing regulatory philosophies. The stakes extend beyond immediate compliance costs to fundamental questions about AI’s role in society, environmental sustainability, and democratic governance.
As Savin concluded, cooperation with the current US administration appears impossible, leaving Europe to “form alliances elsewhere and hope for the best in the next three and a half years.” The outcome of this regulatory competition will likely determine not only which approach proves more effective but which vision of AI governance becomes the global standard.