Yesterday's session opened with Yuliia Habriiel, CEO of eyreACT, a compliance automation platform working closely with EU institutions, and Sid Ali Boutellis, who moderated the discussion with a focus on practical implementation. Habriiel immediately reframed “ethics” away from abstract philosophy towards something far more tangible: enforceable obligations, standards, and liability exposure under the EU AI Act.
The central premise was clear: ethics in the AI Act is not aspirational; it is operational and legally binding.
The Hidden Architecture of the AI Act
Although the Act is widely described as a risk-based framework, Habriiel highlighted that its true foundation lies in fundamental rights, democratic values, and the rule of law. These elements are not merely introductory language; they actively inform how the legislation is interpreted, enforced, and reviewed in practice.
This creates a layered structure in which the normative foundation of rights and ethical principles underpins the operational rules of risk classification and compliance obligations, all of which are interpreted through an enforcement lens grounded in ethics.
For practitioners, this has important implications. Compliance is no longer a matter of ticking regulatory boxes. It must reflect and align with the broader ethical intent embedded within the legislation itself.
Ethics as a Legal Requirement
A defining theme of the discussion was the transformation of ethics into a formal legal requirement. Organisations are now expected to demonstrate explicitly how ethical considerations are embedded in system design. They must maintain verifiable evidence of ethical assessments and integrate these considerations into documentation, workflows, and compliance processes.
This represents a clear shift from simply claiming to be “ethical by design” to being required to prove it through documentation and ongoing validation.
Core Ethical Principles in Practice
Boutellis outlined five widely recognised principles:
- Do no harm
- Fairness and non-discrimination
- Human oversight
- Explainability
- Robustness
Habriiel confirmed alignment but stressed that the AI Act extends beyond these principles. Ethics is embedded even in technical areas such as risk management (Article 9), where organisations must anticipate “reasonably foreseeable risks” to health, safety, and fundamental rights.
This forward-looking obligation requires companies to:
- Anticipate misuse scenarios
- Consider downstream deployment contexts
- Justify design decisions proactively
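The forward-looking obligations above can be pictured as a simple risk register. The sketch below is illustrative only: the scenarios, scoring scale, and field layout are our own assumptions for the example, not terminology or a method prescribed by Article 9.

```python
# Illustrative Article 9-style register of reasonably foreseeable
# risks. Scenarios, the 1-5 scoring scale, and mitigations are
# assumptions for this sketch, not language from the Act itself.
RISKS = [
    # (scenario, affected right, likelihood 1-5, severity 1-5, mitigation)
    ("misuse: scraping outputs for profiling", "privacy", 3, 4,
     "rate limiting and output watermarking"),
    ("downstream deployment in hiring context", "non-discrimination", 4, 5,
     "bias testing per release plus human review"),
]

def prioritise(risks):
    """Rank scenarios by likelihood x severity, highest first."""
    return sorted(risks, key=lambda r: r[2] * r[3], reverse=True)

for scenario, right, lik, sev, mitigation in prioritise(RISKS):
    print(f"[{lik * sev:>2}] {scenario} ({right}) -> {mitigation}")
```

Even a minimal structure like this gives each design decision a documented justification, which is the point of the obligation.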
Bias: The Central Operational Challenge
Bias emerged as one of the most critical and complex issues discussed. The AI Act requires organisations to test data for bias during training, validation, and testing stages, to document evidence of mitigation efforts, and to demonstrate that systems do not produce discriminatory outcomes across protected characteristics.
Rather than being treated as an isolated issue, bias is embedded across multiple provisions of the Act, making it a cross-cutting concern. Habriiel emphasised that bias rarely appears alone; it is often linked to broader issues such as questionable data sources, profiling practices, and potential infringements of fundamental rights.
In practice, addressing bias frequently requires more than minor adjustments. It often necessitates rethinking system design, data sourcing, and overall product architecture.
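One common starting point for the kind of bias testing described above is comparing outcome rates across groups of a protected attribute. The sketch below is a minimal demographic parity check; the attribute name, records, and the idea of flagging a gap against a threshold are assumptions for illustration, not a test mandated by the Act.

```python
# Illustrative demographic parity check: compare positive-outcome
# rates across groups of a protected attribute. Attribute and
# outcome names here are assumptions for the sketch.
from collections import defaultdict

def selection_rates(records, attribute, outcome_key="approved"):
    """Positive-outcome rate per group of a protected attribute."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        group = r[attribute]
        totals[group] += 1
        positives[group] += int(r[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in selection rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

records = [
    {"gender": "F", "approved": True},
    {"gender": "F", "approved": False},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
]
rates = selection_rates(records, "gender")
gap = parity_gap(rates)
print(rates, gap)  # rates F: 0.5, M: 1.0 -> gap 0.5; flag if above threshold
```

A check like this only surfaces a symptom; as noted above, closing the gap may require revisiting data sourcing and system design rather than post-hoc adjustment.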
Compliance as Continuous Practice
A key distinction from previous regulatory regimes, such as GDPR, is that the AI Act does not operate on periodic certification. Instead, it introduces a model of continuous compliance. Organisations must maintain up-to-date documentation, track system changes, and be able to provide evidence of compliance at any given moment.
There is no single point at which compliance is achieved and completed. It is an ongoing process that evolves alongside the system itself.
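One way teams operationalise "evidence at any given moment" is an append-only change log linking every system change to its supporting assessment. The sketch below is our own illustration; the field names and evidence identifiers are hypothetical, not a format the Act prescribes.

```python
# Illustrative append-only compliance log: each system change is
# recorded with a timestamp and a pointer to evidence. Field names
# and identifiers are assumptions for this sketch.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ComplianceEntry:
    change: str    # what changed (model, data, configuration)
    evidence: str  # reference to the assessment or test report
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

log: list[ComplianceEntry] = []
log.append(ComplianceEntry("retrained model v2.1", "bias-report-18"))
log.append(ComplianceEntry("new training data source", "dpia-review-7"))

# The log is queried, never edited in place, so evidence of the
# system's current state is producible on demand.
latest = log[-1]
print(latest.change, "->", latest.evidence)
```

The design choice that matters is append-only: compliance history evolves alongside the system instead of being rewritten at audit time.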
Enforcement and Penalties
The discussion highlighted the significant financial and operational consequences of non-compliance. Penalties can reach up to €35 million or 7% of global turnover for prohibited practices, with lower but still substantial fines for breaches of obligations or for providing incorrect or incomplete information.
However, financial penalties are only part of the picture. Enforcement actions may also include product withdrawal, market restrictions, and lasting reputational damage. Importantly, liability is not limited to developers; it extends across the entire value chain, including deployers, importers, and distributors.
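For scale, the headline penalty for prohibited practices is €35 million or 7% of worldwide annual turnover, whichever is higher. A quick arithmetic sketch (integer euros, for illustration):

```python
# Illustrative fine ceiling for prohibited practices: EUR 35 million
# or 7% of global annual turnover, whichever is higher.
def max_fine_prohibited(turnover_eur: int) -> int:
    return max(35_000_000, turnover_eur * 7 // 100)

print(max_fine_prohibited(200_000_000))    # 7% = 14M, so the 35M floor applies
print(max_fine_prohibited(1_000_000_000))  # 7% = 70M, which exceeds the floor
```

For any company with turnover above €500 million, the percentage-based ceiling is the binding one.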
The Business Case for Responsible AI
Boutellis emphasised that, in practice, compliance arguments alone are rarely sufficient to drive internal change within organisations. Instead, the business case for responsible AI rests on broader strategic benefits. These include building trust with customers and stakeholders, improving product quality and resilience, attracting high-calibre talent, and gaining a competitive advantage in increasingly regulated markets.
Habriiel reinforced that ethics is no longer optional. It has become a prerequisite for market access, particularly within the European context.
Global Context: Europe vs the US
The session concluded with a comparison between regulatory approaches in Europe and the United States. The EU has taken a comprehensive approach by embedding ethics directly into binding legislation, whereas the US continues to rely more heavily on voluntary frameworks and sector-specific regulation.
This divergence is expected to grow, with Europe prioritising a rights-based regulatory model and the US focusing more on innovation and competitiveness.
Essential Issues
- Ethics in AI is now enforceable, not optional
- Compliance requires evidence, not assertions
- Bias must be addressed systematically, not superficially
- Documentation and continuous monitoring are critical
- The EU is setting a distinct global standard for ethical AI governance