Moderated by Andrej Savin, Professor of Information Technology and Internet Law, Copenhagen Business School, and Sid Ali Boutellis, AI governance consultant and legal tech veteran.
The EU AI Act is fundamentally a risk-based product safety regulation. Savin opened by emphasising that the Act does not create a rights-based framework akin to the GDPR; instead, it imposes layered obligations depending on the level of risk posed by AI systems. Certain AI systems are prohibited outright, while others fall into the category of “high risk” and are subject to extensive mandatory requirements.
This risk-based logic mirrors regulatory approaches in other domains, yet Savin noted that the EU AI Act applies the model more comprehensively than most comparable frameworks worldwide.
Defining “High Risk”: Two Main Categories
High-risk AI systems fall broadly into two categories:
- AI systems listed in Annex III
These include biometrics and identification systems, critical infrastructure, education, employment, access to essential services, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes. Savin highlighted employment in particular.
- AI systems that are safety components of products regulated under existing EU sectoral legislation
This includes areas such as medical devices, rail systems, aviation, and vehicles, where conformity assessments are already required under other EU laws. For mature, heavily regulated industries, AI compliance may integrate into existing compliance structures rather than introduce entirely new obligations.
Despite the breadth of Annex III, Savin stressed that the number of systems that truly qualify as high risk is smaller than many assume.
No Safe Harbour for SMEs
One of the Act’s most striking features is the absence of a de minimis exemption. There are no carve-outs based on company size, turnover, user numbers, or output volume. SMEs and start-ups are not exempt simply because they are small.
There is, however, a limited and somewhat loosely defined exception for systems that do not pose a significant risk to health, safety, or fundamental rights and that perform purely procedural or supportive tasks. Whether this applies is determined on a case-by-case basis.
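Taken together, the two routes into high-risk status and the narrow exception form a rough decision procedure. The sketch below is a minimal illustration, with simplified category names and boolean inputs invented here for clarity; actual classification is a case-by-case legal judgment, not a function call.

```python
from enum import Enum, auto

class AnnexIIIArea(Enum):
    """Simplified stand-ins for the Annex III use-case areas discussed above."""
    BIOMETRICS = auto()
    CRITICAL_INFRASTRUCTURE = auto()
    EDUCATION = auto()
    EMPLOYMENT = auto()
    ESSENTIAL_SERVICES = auto()
    LAW_ENFORCEMENT = auto()
    MIGRATION_AND_BORDERS = auto()
    JUSTICE_AND_DEMOCRACY = auto()

def is_high_risk(annex_iii_area: AnnexIIIArea | None,
                 is_regulated_safety_component: bool,
                 poses_significant_risk: bool,
                 performs_only_procedural_tasks: bool) -> bool:
    """Rough shape of the two routes into the high-risk category.

    Route 1: safety component of a product under existing EU sectoral law.
    Route 2: an Annex III use case, unless the narrow exception applies
    (no significant risk AND purely procedural or supportive tasks).
    """
    if is_regulated_safety_component:
        return True
    if annex_iii_area is not None:
        exception = (not poses_significant_risk) and performs_only_procedural_tasks
        return not exception
    return False

# Example: a CV-screening tool used in hiring falls under EMPLOYMENT.
print(is_high_risk(AnnexIIIArea.EMPLOYMENT,
                   is_regulated_safety_component=False,
                   poses_significant_risk=True,
                   performs_only_procedural_tasks=False))  # True
```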
Obligations: More Than Policy Documents
Once an AI system is classified as high risk, the compliance burden intensifies:
- Strict documentation and governance requirements
- Risk assessment and mitigation processes
- Human oversight obligations
- Supply chain scrutiny
- Reporting duties
Boutellis underscored that compliance begins with correct classification. Organisations must critically assess whether their systems fall within high-risk categories and document their reasoning. A pessimistic approach is advisable: assume coverage until proven otherwise.
Importantly, compliance cannot be achieved through a standalone policy document. It must be embedded into the product lifecycle. Savin cautioned against treating AI compliance as a post-development legal check. Legal expertise must sit alongside development teams, particularly where systems are modified or repurposed.
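One way to make that embedding concrete: a minimal sketch of a classification record that travels with the system through its lifecycle. The field names here are invented for illustration and are not drawn from the Act itself.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationRecord:
    """Illustrative record of why a system was (or was not) classed high risk."""
    system_name: str
    role: str                    # provider, deployer, importer, or distributor
    high_risk: bool
    rationale: str               # the documented reasoning Boutellis calls for
    assessed_on: date = field(default_factory=date.today)
    reviewed_by: list[str] = field(default_factory=list)

record = ClassificationRecord(
    system_name="candidate-screening-v2",
    role="provider",
    high_risk=True,
    rationale="Annex III employment use case; procedural-task exception does not apply.",
    reviewed_by=["legal", "engineering"],
)
```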
Governance: Board-Level Responsibility
A central theme of the discussion was governance. Savin was unequivocal: AI compliance is a board-level issue.
Delegating responsibility to IT or compliance teams alone is insufficient. Organisations that treat AI governance as an outsourced or technical matter risk both regulatory breach and value destruction. Effective AI governance must be integrated into corporate strategy, with board ownership and organisation-wide awareness.
Boutellis echoed this, noting that many companies struggle not with intent but with practical starting points: identifying their role (provider, deployer, importer, or distributor), mapping obligations, and assigning ownership internally.
Supply Chains and Role Clarity
The Act clearly distinguishes between providers, deployers, and other actors. Providers bear the full spectrum of obligations, while deployers and others face narrower responsibilities.
However, Article 25 significantly broadens exposure. A distributor, importer, or deployer may be treated as a provider if they:
- Put their name or trademark on the AI system
- Substantially modify the system
- Alter its intended purpose
These determinations are fact-specific and require careful legal and technical analysis.
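As a rough illustration of how any single trigger is sufficient to shift obligations, the sketch below (with invented parameter names) encodes the checklist; it is a simplification, not a substitute for the fact-specific analysis just described.

```python
def treated_as_provider(puts_own_name_or_trademark_on_system: bool,
                        substantially_modifies_system: bool,
                        alters_intended_purpose: bool) -> bool:
    """Any one trigger is enough to shift provider obligations
    onto a distributor, importer, or deployer."""
    return (puts_own_name_or_trademark_on_system
            or substantially_modifies_system
            or alters_intended_purpose)

# A deployer that re-brands a vendor's system takes on provider duties.
print(treated_as_provider(True, False, False))  # True
```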
Penalties and Practical Risks
While fines mirror other major EU digital regulations, Savin argued that reputational and commercial risk may outweigh financial penalties. A failure to comply could result in product withdrawal, value erosion, and significant reputational damage.
A particularly striking provision discussed was the 15-day obligation to report certain serious incidents involving high-risk systems, a timeframe many organisations are not yet operationally equipped to meet.
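The deadline arithmetic itself is trivial, as the sketch below shows (assuming the window runs in days from awareness of the incident); the operational challenge lies in detecting and escalating incidents quickly enough to use it.

```python
from datetime import date, timedelta

def reporting_deadline(awareness_date: date, window_days: int = 15) -> date:
    """Latest date a qualifying incident report is due."""
    return awareness_date + timedelta(days=window_days)

print(reporting_deadline(date(2025, 3, 3)))  # 2025-03-18
```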
International Comparison: EU vs US
Boutellis contrasted the EU AI Act with the US approach. In the United States, the NIST AI Risk Management Framework remains voluntary, yet widely adopted. Its influence stems from continuity with established cybersecurity and governance practices.
The ISO 42001 standard was also highlighted as an emerging benchmark for organisational AI governance. Unlike the EU AI Act's product-centred obligations, ISO 42001 certification takes an enterprise-wide, strategic view, positioning organisations as responsible and trustworthy AI actors.
Savin added that forthcoming European harmonised standards under CEN-CENELEC will play a pivotal role. Systems complying with these standards will benefit from a presumption of conformity under the Act, reinforcing the strategic importance of standardisation.
Lawyers, Consultants, and the Expanding Ecosystem
The discussion made clear that AI compliance will not be served by lawyers alone. A broader ecosystem of consultants, governance specialists, and technical advisers is emerging to bridge the gap between legal requirements and operational implementation.
Unlike GDPR compliance, which can often be externally driven, AI compliance requires real-time collaboration among legal, technical, and strategic teams.
Law Firms and High-Risk Classification
A practical question addressed whether law firms themselves might fall under high-risk obligations. Savin suggested that most law firms are unlikely to qualify as high-risk providers unless directly engaged in judicial decision-making processes captured under Annex III. Nevertheless, firms deploying AI tools must still consider broader compliance obligations.
The Strategic Opportunity Today
Both speakers emphasised that compliance should not be viewed solely as a constraint. Strong governance frameworks can become a competitive advantage. In a global AI race dominated by the US and China, Europe may define leadership through standards, trust, and regulatory excellence.
As Savin concluded, compliance, when embedded properly, can become a source of organisational value rather than a bureaucratic burden.
Future sessions on this topic are planned, so keep an eye out for announcements.