How the EU AI Act Regulates High Risk AI

Moderated by Andrej Savin, Professor of Information Technology and Internet Law, Copenhagen Business School, and Sid Ali Boutellis, AI governance consultant and legal tech veteran.

The EU AI Act is fundamentally a risk-based product safety regulation. Savin opened by emphasising that the Act does not create a rights-based framework akin to the GDPR; instead, it imposes layered obligations depending on the level of risk posed by AI systems. Certain AI systems are prohibited outright, while others fall into the category of “high risk” and are subject to extensive mandatory requirements.

This risk-based logic mirrors regulatory approaches in other domains, yet Savin noted that the EU AI Act applies the model more comprehensively than most comparable frameworks worldwide.

Defining “High Risk”: Two Main Categories

High-risk AI systems fall broadly into two categories:

  1. AI systems listed in Annex III
    These include:
    • Biometrics and identification systems
    • Critical infrastructure
    • Education
    • Employment and HR systems
    • Access to essential services
    • Law enforcement
    • Migration, asylum, and border control
    • Administration of justice and democratic processes
    Savin highlighted that employment and HR systems are among the most likely to affect businesses in practice, particularly in recruitment, promotion, task allocation, and workforce management.
  2. AI systems that are safety components of products regulated under existing EU sectoral legislation
    This includes areas such as medical devices, rail systems, aviation, and vehicles, where conformity assessments are already required under other EU laws. For mature, heavily regulated industries, AI compliance may integrate into existing compliance structures rather than introduce entirely new obligations.

Despite the breadth of Annex III, Savin stressed that the number of systems that truly qualify as high risk is smaller than many assume.

No Safe Harbour for SMEs

One of the Act’s most striking features is the absence of a de minimis exemption. There are no carve-outs based on company size, turnover, user numbers, or output volume. SMEs and start-ups are not exempt simply because they are small.

There is, however, a limited and somewhat loosely defined exception for systems that do not pose a significant risk to health, safety, or fundamental rights and that perform purely procedural or supportive tasks. Whether this applies is determined on a case-by-case basis.

Obligations: More Than Policy Documents

Once an AI system is classified as high risk, the compliance burden intensifies:

  • Strict documentation and governance requirements
  • Risk assessment and mitigation processes
  • Human oversight obligations
  • Supply chain scrutiny
  • Reporting duties

Boutellis underscored that compliance begins with correct classification. Organisations must critically assess whether their systems fall within high-risk categories and document their reasoning. A pessimistic approach is advisable: assume coverage until proven otherwise.
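To make that classification exercise concrete, the sketch below shows one hypothetical way an organisation might record its Annex III assessment and reasoning in code. It is a minimal illustration only: the category names, fields, and the "assume high risk by default" logic are assumptions drawn from the discussion, not the Act's own terms or a definitive compliance tool.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative Annex III areas mentioned in the discussion (not an exhaustive
# or authoritative list -- the Act's annex text is controlling).
ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment_and_hr",
    "essential_services",
    "law_enforcement",
    "migration_asylum_border",
    "justice_and_democratic_processes",
}

@dataclass
class ClassificationRecord:
    """Hypothetical internal record documenting a high-risk assessment."""
    system_name: str
    annex_iii_area: str | None   # None if no Annex III area applies
    role: str                    # provider / deployer / importer / distributor
    purely_procedural: bool      # candidate for the narrow exception
    reasoning: str
    assessed_on: date = field(default_factory=date.today)

    def presumed_high_risk(self) -> bool:
        # Cautious default suggested in the discussion: if the system touches
        # an Annex III area, assume it is high risk unless the narrow
        # procedural/supportive exception is documented and defensible
        # (a case-by-case legal judgement, not a mechanical test).
        if self.annex_iii_area not in ANNEX_III_AREAS:
            return False
        return not self.purely_procedural


record = ClassificationRecord(
    system_name="CV screening assistant",
    annex_iii_area="employment_and_hr",
    role="deployer",
    purely_procedural=False,
    reasoning="Ranks applicants for interview; influences recruitment decisions.",
)
print(record.presumed_high_risk())  # True -> treat as high risk until shown otherwise
```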

Importantly, compliance cannot be achieved through a standalone policy document. It must be embedded into the product lifecycle. Savin cautioned against treating AI compliance as a post-development legal check. Legal expertise must sit alongside development teams, particularly where systems are modified or repurposed.

Governance: Board-Level Responsibility

A central theme of the discussion was governance. Savin was unequivocal: AI compliance is a board-level issue.

Delegating responsibility to IT or compliance teams alone is insufficient. Organisations that treat AI governance as an outsourced or technical matter risk both regulatory breach and value destruction. Effective AI governance must be integrated into corporate strategy, with board ownership and organisation-wide awareness.

Boutellis echoed this, noting that many companies struggle not with intent but with practical starting points: identifying their role (provider, deployer, importer, or distributor), mapping obligations, and assigning ownership internally.

Supply Chains and Role Clarity

The Act clearly distinguishes between providers, deployers, and other actors. Providers bear the full spectrum of obligations, while deployers and others face narrower responsibilities.

However, Article 25 significantly broadens exposure. A distributor, importer, or deployer may be treated as a provider if they:

  • Place their name or trademark on the AI system
  • Substantially modify the system
  • Alter its intended purpose

These determinations are fact-specific and require careful legal and technical analysis.

Penalties and Practical Risks

While the fines mirror those under other major EU digital regulations, Savin argued that reputational and commercial risk may outweigh the financial penalties. A failure to comply could result in product withdrawal, value erosion, and significant reputational damage.

A particularly striking provision discussed was the 15-day reporting obligation for certain high-risk system incidents, a timeframe many organisations are not yet operationally equipped to meet.

International Comparison: EU vs US

Boutellis contrasted the EU AI Act with the US approach. In the United States, the NIST AI Risk Management Framework remains voluntary, yet widely adopted. Its influence stems from continuity with established cybersecurity and governance practices.

The ISO 42001 standard was also highlighted as an emerging benchmark for organisational AI governance. Unlike the EU AI Act, ISO certification is enterprise-focused and strategic, positioning organisations as responsible and trustworthy AI actors.

Savin added that forthcoming European harmonised standards under CEN-CENELEC will play a pivotal role. Systems complying with these standards will benefit from a presumption of conformity under the Act, reinforcing the strategic importance of standardisation.

Lawyers, Consultants, and the Expanding Ecosystem

The discussion made clear that AI compliance will not be served by lawyers alone. A broader ecosystem of consultants, governance specialists, and technical advisers is emerging to bridge the gap between legal requirements and operational implementation.

Unlike GDPR compliance, which can often be externally driven, AI compliance requires real-time collaboration among legal, technical, and strategic teams.

Law Firms and High Risk Classification

A practical question addressed whether law firms themselves might fall under high-risk obligations. Savin suggested that most law firms are unlikely to qualify as high-risk providers unless directly engaged in judicial decision-making processes captured under Annex III. Nevertheless, firms deploying AI tools must still consider broader compliance obligations.

The Strategic Opportunity Today

Both speakers emphasised that compliance should not be viewed solely as a constraint. Strong governance frameworks can become a competitive advantage. In a global AI race dominated by the US and China, Europe may define leadership through standards, trust, and regulatory excellence.

As Savin concluded, compliance, when embedded properly, can become a source of organisational value rather than a bureaucratic burden.

Future sessions are planned on this topic, so keep an eye out for the latest.
