In a recent Platforum9 session, Professor Andrej Savin, an expert in information technology law, provided valuable insights into the European AI Act, its global context, and what organizations need to know to navigate this complex regulatory landscape. Drawing from his experience researching how EU legislation affects business strategy and organizational structures, Savin offered a comprehensive overview of what makes the European approach distinct and how companies can adapt.
The European Approach vs Global Regulatory Models
The European Union has adopted a horizontal approach to AI regulation, meaning the AI Act covers all artificial intelligence applications through a single comprehensive framework. This contrasts sharply with approaches elsewhere:
- United Kingdom: Moving toward sector-specific laws, believing this offers more flexibility and will attract companies
- United States: Taking a “wait and see” approach, acting only when specific problems emerge
- China: Competing with the US while focusing on maintaining control over Chinese AI companies
What distinguishes the EU approach is its foundation in product safety rather than rights-based law. “When you look at the AI Act, it looks very impressive and it’s huge,” Savin explained, “but it isn’t a rights-based law, so it isn’t something you can litigate in court. It’s a product safety approach.”
A Complex Regulatory Landscape
The AI Act doesn’t exist in isolation but interacts with numerous other regulations, creating a complex compliance environment. Organizations must navigate:
- GDPR and privacy regulations
- Labor laws (particularly when using AI in hiring)
- Constitutional issues and fundamental rights
- Copyright legislation
- Cybersecurity requirements
This regulatory intersection creates significant challenges. For example, when training AI models, different data sources fall under different GDPR requirements: “If you’re purchasing a dataset, you have indemnity clauses. If you’re scraping from the web, you need a valid legal basis. If you’re using customer information, you may have consent, but does that consent cover AI training?”
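To make that triage concrete, here is a minimal sketch in Python; the category names and questions are editorial paraphrases of Savin's examples, not definitions from the GDPR or the AI Act, and certainly not legal advice:

```python
# Illustrative sketch only: the three training-data scenarios Savin mentions,
# mapped to the GDPR questions they raise. Categories and wording are
# editorial paraphrases, not legal definitions.

TRAINING_DATA_CHECKS = {
    "purchased_dataset": [
        "Do the purchase terms include indemnity clauses covering the data?",
    ],
    "web_scraped": [
        "Is there a valid legal basis for processing the scraped data?",
    ],
    "customer_data": [
        "Was consent obtained from the customers?",
        "Does that consent explicitly cover use of the data for AI training?",
    ],
}

def open_questions(source: str) -> list[str]:
    """Return the GDPR questions to resolve before training on this source."""
    return TRAINING_DATA_CHECKS.get(source, ["Unrecognized source: seek legal review."])

for question in open_questions("customer_data"):
    print(question)
```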
Surprisingly, according to Savin, many companies report that “at the moment it’s not the AI Act compliance that’s the problem. It’s the GDPR compliance.”
Global Reach of EU Regulation
The AI Act’s scope extends beyond EU borders, applying to any organization:
- Located in Europe
- Targeting European users
- Producing AI that enters the European market
“If you’re in any way connected with Europe and using, producing, or deploying AI, you simply assume that you’re covered,” Savin noted. While this extraterritorial approach follows the GDPR model, Savin doubts the AI Act will have the same “Brussels effect” of influencing global standards: “GDPR was very clear to understand and it was easy for people to see the value. With this complicated risk-based network of intertwining rules, I think it’s much more difficult for people to see the value and adopt that model.”
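Savin's rule of thumb reduces to a simple disjunction over the three criteria above. A minimal sketch, with hypothetical field names (the Act's actual applicability rules are more detailed):

```python
# Illustrative sketch of the scope test described above. Field names are
# hypothetical; the Act's real applicability provisions are more nuanced.

from dataclasses import dataclass

@dataclass
class Organization:
    located_in_eu: bool
    targets_eu_users: bool
    ai_enters_eu_market: bool

def assume_covered(org: Organization) -> bool:
    """Savin's heuristic: any connection to Europe means assume the Act applies."""
    return org.located_in_eu or org.targets_eu_users or org.ai_enters_eu_market

# A non-EU provider whose AI output reaches the European market is still in scope.
print(assume_covered(Organization(False, False, True)))  # True
```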
Risk-Based Compliance: Understanding Your Obligations
The AI Act establishes a tiered approach to regulation based on risk categories, with different compliance requirements for each level:
- Prohibited AI – completely banned applications
- High-risk AI – subject to extensive requirements
- Large language models – specific obligations for these systems
- Lower-risk applications – fewer requirements
Your position in the AI value chain significantly impacts your compliance burden. Producers of AI face the most extensive requirements, while users and deployers have fewer obligations.
“If you are not a producer of AI, the set of obligations that apply to you is relatively narrow,” Savin explained. “As a user or deployer, you have a significantly smaller set of obligations than if you’re a producer.”
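Putting the two dimensions together, here is a minimal sketch of how tier and role combine to shape the compliance burden; the obligation summaries are editorial paraphrases of the session, not the Act's wording:

```python
# Illustrative sketch only: compliance burden as a function of risk tier and
# value-chain role, paraphrasing the session. The strings are editorial
# summaries, not the Act's legal text.

RISK_TIERS = {
    "prohibited": "completely banned application",
    "high_risk": "extensive requirements, including continuous risk assessment",
    "large_language_model": "specific obligations for these systems",
    "lower_risk": "fewer requirements",
}

ROLE_BURDEN = {
    "producer": "the most extensive set of obligations",
    "deployer": "a significantly smaller set of obligations",
    "user": "a relatively narrow set of obligations",
}

def compliance_profile(tier: str, role: str) -> str:
    """Summarize what applies, per the tier/role framing Savin describes."""
    return f"Tier: {RISK_TIERS[tier]}; role of {role}: {ROLE_BURDEN[role]}."

print(compliance_profile("high_risk", "deployer"))
```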
Organizations must conduct continuous risk assessments for high-risk AI and large language models, forcing companies to evaluate their AI systems and mitigate potential harms. This represents a shift from a litigation mindset to a compliance-oriented approach.
Management and Cross-Functional Responsibility
One of Savin’s most crucial points concerned organizational structure and leadership responsibility. AI compliance cannot be delegated solely to IT or legal departments but requires board-level engagement and cross-functional teams.
“All digital laws require in one way or another that management be involved,” Savin emphasized. “You have to have a compliance-oriented culture from the top, and it can’t just be a phrase like ‘our mission and vision’; it has to be ‘we need to understand what’s happening and who’s in charge of what.’”
Companies often mistakenly assign AI compliance solely to their IT departments, only involving legal counsel when problems arise, by which point it’s too late. Instead, Savin recommends:
- Board-level understanding of AI risks
- Cross-functional compliance teams
- Integration of risk assessment into business processes before launching new digital products
- Product teams using compliance checklists
- Involving lawyers early in the development cycle
Third-Party AI Tools and Modification Thresholds
For organizations using third-party AI tools, major providers like Microsoft generally offer compliance-ready versions. “If you go to Microsoft, they would give you the version that is GDPR compliant, that has the right liability clauses, the right indemnity built in,” Savin noted.
However, companies need to be cautious about modifying existing models. The AI Act contains thresholds beyond which customization may reclassify a company as a producer rather than merely a user: “If you modify it, and if that modification goes above a certain mathematical threshold, which I think at the moment is 10 to the power of 22 FLOPs, then you are assumed to be the producer.”
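As an illustration only, here is a minimal sketch of that threshold test, using the figure Savin tentatively cites; the actual number and methodology should be verified against current AI Act guidance before relying on it:

```python
# Illustrative sketch of the reclassification threshold Savin tentatively
# cites (10**22 FLOPs). The real figure and methodology should be verified
# against current AI Act guidance; this is not a legal test.

CITED_THRESHOLD_FLOPS = 1e22  # Savin: "I think at the moment"

def becomes_producer(modification_compute_flops: float) -> bool:
    """True if the fine-tuning compute exceeds the cited threshold."""
    return modification_compute_flops > CITED_THRESHOLD_FLOPS

# A fine-tuning run estimated at 3e21 FLOPs would stay below the cited line.
print(becomes_producer(3e21))  # False
```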
Practical Challenges: Limited Guidance and Emerging Case Patterns
Organizations face significant practical challenges implementing AI Act compliance due to limited guidance. While the legislation is in effect, supporting materials like the code of practice for large language models haven’t been finalized. “There’s very little guidance,” Savin acknowledged. “There’s been a lot of debate because it seems they’ve watered it down after the Americans pressured them.”
Despite this uncertainty, case patterns are emerging in specific domains:
- Copyright: Large language models facing lawsuits over training data
- Employment: Issues with AI in hiring processes and workplace monitoring
- Product liability: Concerns around AI in vehicles and other products
Compliance as a Business Advantage
Rather than viewing AI regulation as merely a burden, Savin suggests organizations adopt “bespoke compliance”—tailored approaches that add business value rather than checking generic boxes.
“Good compliance is also good business,” Savin argued. “It’s compliance by design: you work out what your compliance pattern is and see value in it. Then you compete on the idea that you are compliant, and people will go to you because they know you respect GDPR and other regulations.”
Looking Forward
While the EU approach is demanding, Savin believes most requirements are reasonable. “I can’t really point to anything in the AI Act where I have a good case to say, ‘this is a disastrous approach that will slow things down dreadfully.’ A lot of the things it suggests are relatively reasonable.”
The challenge lies not in any specific requirement but in navigating the complexity of the overall system, particularly for smaller organizations without extensive compliance resources. For those seeking help, Savin recommends AI compliance checkers that provide initial direction through simple questionnaires, as well as professional networks where experts share insights.
As AI regulation continues to evolve globally, organizations must develop compliance strategies that balance innovation with responsibility. Those that embrace “compliance by design” may find themselves with not just reduced legal risk, but also a competitive advantage in a marketplace increasingly concerned with AI ethics and safety.