The EU AI Act – Ethics at the Core?

Yesterday's session opened with Yuliia Habriiel, CEO of eyreACT, a compliance automation platform working closely with EU institutions, and Sid Ali Boutellis, who moderated the discussion with a focus on practical implementation. Habriiel immediately reframed “ethics” away from abstract philosophy towards something far more tangible: enforceable obligations, standards, and liability exposure under the EU AI Act.

The central premise was clear: ethics in the AI Act is not aspirational, it is operational and legally binding.

The Hidden Architecture of the AI Act

Although the Act is widely described as a risk-based framework, Habriiel highlighted that its true foundation lies in fundamental rights, democratic values, and the rule of law. These elements are not merely introductory language; they actively inform how the legislation is interpreted, enforced, and reviewed in practice.

This creates a layered structure in which the normative foundation of rights and ethical principles underpins the operational rules of risk classification and compliance obligations, all of which are interpreted through an enforcement lens grounded in ethics.

For practitioners, this has important implications. Compliance is no longer a matter of ticking regulatory boxes. It must reflect and align with the broader ethical intent embedded within the legislation itself.

Ethics as a Legal Requirement

A defining theme of the discussion was the transformation of ethics into a formal legal requirement. Organisations are now expected to demonstrate explicitly how ethical considerations are embedded in system design. They must maintain verifiable evidence of ethical assessments and integrate these considerations into documentation, workflows, and compliance processes.

This represents a clear shift from simply claiming to be “ethical by design” to being required to prove it through documentation and ongoing validation.

Core Ethical Principles in Practice

Boutellis outlined five widely recognised principles:

  • Do no harm
  • Fairness and non-discrimination
  • Human oversight
  • Explainability
  • Robustness

Habriiel confirmed alignment but stressed that the AI Act extends beyond these principles. Ethics is embedded even in technical areas such as risk management (Article 9), where organisations must anticipate “reasonably foreseeable risks” to health, safety, and fundamental rights.

This forward-looking obligation requires companies to:

  • Anticipate misuse scenarios
  • Consider downstream deployment contexts
  • Justify design decisions proactively

Bias: The Central Operational Challenge

Bias emerged as one of the most critical and complex issues discussed. The AI Act requires organisations to test data for bias during training, validation, and testing stages, to document evidence of mitigation efforts, and to demonstrate that systems do not produce discriminatory outcomes across protected characteristics.

Rather than being treated as an isolated issue, bias is embedded across multiple provisions of the Act, making it a cross-cutting concern. Habriiel emphasised that bias rarely appears alone; it is often linked to broader issues such as questionable data sources, profiling practices, and potential infringements of fundamental rights.

In practice, addressing bias frequently requires more than minor adjustments. It often necessitates rethinking system design, data sourcing, and overall product architecture.
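The testing obligation described above can be made concrete with a small sketch. Assuming favourable/unfavourable model decisions recorded per demographic group, one common screening metric is the disparate impact ratio (the "four-fifths rule"); the Act does not mandate this particular metric, and the threshold and sample data here are illustrative assumptions.

```python
# Illustrative bias screen: disparate impact ratio across two groups.
# The 0.8 threshold ("four-fifths rule") is a common rule of thumb,
# not a requirement of the AI Act itself.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of favourable decisions (1 = favourable)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Lower selection rate divided by the higher one (1.0 = parity)."""
    rates = sorted([selection_rate(group_a), selection_rate(group_b)])
    return rates[0] / rates[1] if rates[1] > 0 else 1.0

# Hypothetical decisions for two demographic groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% favourable
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% favourable

ratio = disparate_impact_ratio(group_a, group_b)
flagged = ratio < 0.8   # below the rule-of-thumb threshold: investigate
print(f"Disparate impact ratio: {ratio:.2f}, flagged: {flagged}")
```

A check like this would typically be run at each of the training, validation, and testing stages, with the resulting figures retained as part of the documented mitigation evidence.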

Compliance as Continuous Practice

A key distinction from previous regulatory regimes, such as GDPR, is that the AI Act does not operate on periodic certification. Instead, it introduces a model of continuous compliance. Organisations must maintain up-to-date documentation, track system changes, and be able to provide evidence of compliance at any given moment.

There is no single point at which compliance is achieved and completed. It is an ongoing process that evolves alongside the system itself.
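One way to support "evidence at any given moment" is an append-only change log tied to each system. The sketch below shows the idea with a JSON-lines file; the field names and file format are assumptions for illustration, not a structure prescribed by the Act.

```python
# Minimal append-only compliance log: one timestamped record per system
# change, each pointing at its supporting evidence. Field names are
# illustrative assumptions, not a mandated format.
import datetime
import json

def log_change(path: str, system: str, change: str, evidence: str) -> dict:
    """Append one timestamped record to a JSON-lines audit file."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "change": change,
        "evidence": evidence,  # e.g. a reference to a test report or assessment
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage
log_change("audit_log.jsonl", "credit-scoring-v2",
           "retrained on Q3 data", "bias-report-2025-10.pdf")
```

Because records are only ever appended, the file doubles as a chronological account of how the system evolved, which is the kind of trail a regulator can inspect at any point.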

Enforcement and Penalties

The discussion highlighted the significant financial and operational consequences of non-compliance. Penalties for prohibited practices can reach up to €35 million or 7% of global annual turnover, whichever is higher, with lower but still substantial fines for breaches of obligations or for providing incorrect or incomplete information.

However, financial penalties are only part of the picture. Enforcement actions may also include product withdrawal, market restrictions, and lasting reputational damage. Importantly, liability is not limited to developers; it extends across the entire value chain, including deployers, importers, and distributors.
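The arithmetic behind the headline fine cap is simple: for prohibited practices, the ceiling is the higher of the fixed amount and the turnover percentage. The turnover figure below is hypothetical.

```python
# Upper bound of the fine for prohibited practices under the AI Act:
# €35 million or 7% of worldwide annual turnover, whichever is higher.
def max_fine_prohibited(global_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# Hypothetical firm with €2bn global turnover: 7% (€140m) exceeds €35m
print(max_fine_prohibited(2_000_000_000))  # 140000000.0
```

For smaller firms the €35 million floor dominates, which is why the exposure is material even well below multinational scale.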

The Business Case for Responsible AI

Boutellis emphasised that, in practice, compliance arguments alone are rarely sufficient to drive internal change within organisations. Instead, the business case for responsible AI rests on broader strategic benefits. These include building trust with customers and stakeholders, improving product quality and resilience, attracting high-calibre talent, and gaining a competitive advantage in increasingly regulated markets.

Habriiel reinforced that ethics is no longer optional. It has become a prerequisite for market access, particularly within the European context.

Global Context: Europe vs the US

The session concluded with a comparison between regulatory approaches in Europe and the United States. The EU has taken a comprehensive approach by embedding ethics directly into binding legislation, whereas the US continues to rely more heavily on voluntary frameworks and sector-specific regulation.

This divergence is expected to grow, with Europe prioritising a rights-based regulatory model and the US focusing more on innovation and competitiveness.

Essential Issues

  • Ethics in AI is now enforceable, not optional
  • Compliance requires evidence, not assertions
  • Bias must be addressed systematically, not superficially
  • Documentation and continuous monitoring are critical
  • The EU is setting a distinct global standard for ethical AI governance
