Demystifying the EU AI Act

Andrej Savin (Professor, CBS Law), Sid Ali Boutellis (Legal Tech Expert), and Yuliia Habriiel (Founder, the eyreACT) explored what the EU AI Act is actually designed to do, who it applies to, and what “getting compliant” looks like in practice, particularly for smaller firms and teams integrating third-party AI components into products and services.

The EU AI Act in One Paragraph

Savin’s core framing of the Act was a clarifying one: the EU AI Act is primarily product safety legislation, not a catch-all “AI ethics” law. That distinction matters because several issues people assume it covers, such as liability and broader fundamental rights frameworks, largely sit elsewhere. The Act’s operating logic is risk-based: it identifies AI technologies, systems, and (in specific cases) models that are considered unsafe, then attaches obligations based on category and role.

Risk Tiers: What Falls Where (and Why it Matters)

Savin and Boutellis repeatedly returned to the idea that your first job is to determine which bucket you’re in because that dictates your obligations.

Prohibited AI
Savin noted these are relatively narrow in day-to-day corporate contexts, but include categories such as social scoring, emotion recognition, behaviour manipulation, and untargeted scraping of facial images. If you are here, you’re not “mitigating”; you’re stopping.

High-risk AI systems
This is where more organisations find themselves. Savin highlighted broad domains that can trigger high-risk classification, including medical devices, vehicles, HR, education, access to essential services (e.g., insurance and banking), and critical infrastructure. The key point: the scope is wide, and crucially, there is no meaningful de minimis threshold. Smaller firms can face serious obligations if their use case qualifies.

Typical high-risk requirements discussed included:

  • Fundamental rights impact assessments
  • Conformity assessments
  • Controls and obligations around datasets
  • Risk management and quality management systems
  • Data governance

General-purpose AI and large language models
Savin distinguished obligations here as primarily targeting providers of GPAI/LLMs, and those who significantly modify them. “Ordinary users” have little direct burden, but the session repeatedly stressed that your role can change depending on what you do with a system.

Roles, Responsibility, and the “Value Chain Trap”

A major practical warning came regarding responsibility across the AI value chain (including what was referenced as Article 25): you can be treated as a “provider” (with provider-level obligations) if you:

  • Put your name/trademark on a system
  • Substantially modify it
  • Change its intended purpose

In plain terms: teams can take something “off the shelf”, integrate it into their product, and accidentally inherit obligations they didn’t budget for. The view from the discussion was that many organisations underestimate this risk because procurement and product teams don’t instinctively see rebranding and repurposing as regulatory triggers.
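The three triggers above amount to a simple checklist any procurement or product team could run before shipping. As a purely illustrative sketch (the field names and structure below are ours, not the Act’s text), the logic looks like this:

```python
# Hypothetical sketch of the Article 25 "provider" triggers discussed above.
# All names are illustrative placeholders, not terms from the Act itself.
from dataclasses import dataclass

@dataclass
class Deployment:
    rebranded: bool                 # your name/trademark is on the system
    substantially_modified: bool    # you made substantial modifications
    purpose_changed: bool           # you changed its intended purpose

def assumes_provider_obligations(d: Deployment) -> bool:
    """Any single trigger is enough to shift provider-level duties onto you."""
    return d.rebranded or d.substantially_modified or d.purpose_changed

# An "off the shelf" integration that only rebrands still triggers:
print(assumes_provider_obligations(Deployment(True, False, False)))   # True
print(assumes_provider_obligations(Deployment(False, False, False)))  # False
```

The point of writing it out is how low the bar is: any one of the three conditions flips the role, which is exactly the trap rebranding and repurposing teams walk into.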

Compliance Reality: Cost, Tooling, and Avoiding Magical Thinking

Compliance is expensive. Boutellis’ view was pragmatic: compliance needs to be treated holistically, and tools can help, but only if teams avoid the fantasy that a “plug-in” makes the problem disappear.

Savin reinforced this with a caution: software and consultancy can be either helpful or harmful, depending on whether leadership understands the risk posture and whether there are procedures in place. If a firm hopes a third party will “turn up and solve compliance”, it’s heading in the wrong direction.

Habriiel described an alternative approach: rather than using AI to “assess AI”, her team operationalises obligations via rule-based logic, classifies risk level from structured inputs, and then runs evidence management workflows (manual or via API) to identify gaps and prepare for audit and investor scrutiny. The key theme: compliance is becoming an evidence discipline, not a one-off document exercise.
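To make the rule-based approach concrete, here is a minimal sketch of what “classify risk from structured inputs, then surface evidence gaps” could look like. Everything below is a hypothetical illustration: the category lists, field names, and evidence items are placeholders for the kind of structured logic described, not a statement of what the Act actually requires.

```python
# Illustrative rule-based triage: map structured inputs to a risk tier,
# then report which evidence items are still missing for that tier.
# Categories and evidence lists are deliberately simplified placeholders.

PROHIBITED_USE_CASES = {
    "social scoring", "behaviour manipulation", "untargeted facial scraping",
}
HIGH_RISK_DOMAINS = {
    "medical devices", "vehicles", "hr", "education",
    "essential services", "critical infrastructure",
}
HIGH_RISK_EVIDENCE = [
    "fundamental rights impact assessment",
    "conformity assessment",
    "risk management system",
    "data governance documentation",
]

def classify(use_case: str, domain: str) -> str:
    """Deterministic tiering: prohibited beats high-risk beats everything else."""
    if use_case in PROHIBITED_USE_CASES:
        return "prohibited"
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    return "limited/minimal"

def evidence_gaps(tier: str, evidence_on_file: set[str]) -> list[str]:
    """Return required evidence items not yet collected for this tier."""
    required = HIGH_RISK_EVIDENCE if tier == "high-risk" else []
    return [item for item in required if item not in evidence_on_file]

tier = classify("candidate screening", "hr")
print(tier)  # high-risk
print(evidence_gaps(tier, {"conformity assessment"}))
```

The design point matches the session’s theme: because the rules are deterministic rather than model-driven, the same inputs always yield the same tier and the same gap list, which is what makes the output usable as audit and due-diligence evidence.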

Standards as the Practical Bridge (and Why ISO 42001 Keeps Coming Up)

Boutellis suggested organisations looking for readiness signals should pay attention to recognised frameworks and standards, especially ISO/IEC 42001 (AI management systems). He framed standards as a way to create a credible posture before formal enforcement pressure lands, and cited market dynamics where customers and partners push ISO requirements, making them de facto expectations.

Savin added an important policy signal: even proposals to delay certain high-risk obligations have been linked to the need for guidelines and standards to mature, suggesting standards will become part of the “how” of compliance in practice, not just a nice-to-have.

Extraterritorial Reach and the “Brussels Effect”

Habriiel was unambiguous: non-EU firms can fall under the Act if their AI is used in the EU or affects people in the EU. Headquarters location does not save you if your market extends into Europe.

On global alignment, Savin drew a sharp distinction with GDPR. GDPR was comparatively linear; the AI Act’s risk-based architecture is more complex, and other jurisdictions (US, UK, China, Canada) are developing different regulatory approaches. The likely business reality: firms that want EU market access will comply, but exporting the EU model wholesale may be harder than it was with GDPR.

Enforcement, Timelines, and What’s Happening Already

An audience question from Ron Given pressed on enforcement mechanics. Savin explained enforcement will sit primarily with national authorities, not the Commission, mirroring familiar GDPR patterns and raising the possibility of differences between Member States.

Habriiel emphasised something more immediate: regardless of the formal enforcement ramp, investors and enterprises are already asking compliance questions in due diligence. In her view, this effectively pulls compliance forward: teams may be asked to show that they have assessed risk and budgeted for compliance now, not later.

Sean Groeger noted Ireland’s draft bill and raised the longstanding concern of how EU regulations get transposed locally, particularly given Ireland’s concentration of international HQs and the risk of perceived “gold plating”.

Closing Note

The shared message was steady and actionable: treat the EU AI Act as a risk-and-evidence operating model, not a last-minute legal fire drill. Know your category, know your role in the value chain, and build compliance into product development early, because the market (not just regulators) is moving in that direction.
