As Europe continues to refine its approach to regulating artificial intelligence, this session brought together Andrej Savin, Professor of Information Technology Law at Copenhagen Business School, and Sid Ali Boutellis, a legal and AI strategist, to unpack the latest developments surrounding the EU AI Act and the newly proposed Digital Omnibus package.
The discussion centred on what the Omnibus proposal actually changes, why implementation delays have emerged, and how businesses, lawyers, and regulators are grappling with a rapidly evolving AI landscape.
A Regulatory Framework Built on Risk
Savin explained that while the EU AI Act is often described as comprehensive legislation, it is fundamentally a product safety framework built around risk classification. Rather than regulating all AI equally, the Act prohibits a narrow category of the most harmful AI practices outright while imposing layered compliance obligations on high-risk systems and general-purpose AI models.
Importantly, although the AI Act is already in force, its obligations are being phased in gradually across different timelines. The Omnibus proposal does not dismantle the AI Act itself; instead, it introduces targeted revisions intended to simplify implementation and respond to growing concerns from industry regarding compliance burdens and regulatory uncertainty.
One of the most significant proposed changes is the postponement of enforcement deadlines for high-risk AI obligations. Savin noted that the original August 2026 deadline is expected to move to December 2027, giving organisations additional time to prepare for compliance.
Why Europe Is Revisiting the AI Act
The session explored the wider political and economic context behind the Omnibus initiative. Savin challenged the increasingly common narrative that Europe’s weaker AI competitiveness stems primarily from over-regulation. Instead, he argued that structural issues such as fragmented markets, limited venture capital availability, and outdated bankruptcy laws play a far greater role in slowing European innovation.
That said, pressure from industry has clearly influenced policymakers. The Omnibus proposal reflects a broader effort to simplify overlapping compliance requirements and create more flexibility for businesses attempting to navigate multiple regulatory frameworks simultaneously. Savin described the initiative as largely “technocratic” rather than revolutionary, designed more to reduce duplication and administrative burden than to fundamentally alter Europe’s regulatory philosophy.
The conversation highlighted examples where companies could otherwise face double compliance obligations under both sector-specific regulations and the AI Act itself, particularly in areas such as medical devices and conformity assessments.
The Missing Standards Problem
A major theme throughout the session was the absence of operational standards and guidance needed to make compliance workable in practice.
Savin explained that many of the high-risk obligations under the AI Act were intended to rely heavily on harmonised European standards being developed through CEN and CENELEC processes. However, these standards have not yet been finalised, creating uncertainty for organisations attempting to prepare for compliance.
The delays, Savin argued, are not necessarily caused by technological complexity alone, but by the nature of the European standardisation process itself. Standards development involves extensive consultation, technical scrutiny, and detailed negotiations over language and implementation. While slow, this process is designed to create durable and legally robust frameworks rather than rushed temporary guidance.
This tension between the speed of technological change and the slower pace of institutional governance emerged as one of the defining challenges facing AI regulation globally.
Risk, Uncertainty, and the Limits of Regulation
A particularly compelling section of the discussion focused on the distinction between “risk” and “uncertainty”.
Savin argued that the AI Act is formally built around risk-based compliance, yet many AI developments increasingly fall into the category of genuine uncertainty: situations where harms and outcomes cannot yet be reliably predicted or modelled. Referencing emerging frontier AI concerns, including Anthropic’s handling of advanced models, the conversation explored whether current legal frameworks are equipped to govern systems whose future capabilities remain fundamentally unknown.
Boutellis added a US perspective, observing that much of the most aggressive AI development is currently occurring within defence and military applications, where regulatory constraints remain comparatively limited. He noted that, in practice, businesses often approach legal counsel only after problems arise rather than embedding governance into system design from the outset.
This prompted a broader discussion around proactive governance and the evolving role of lawyers in AI-enabled organisations.
The Changing Role of Lawyers
The session concluded with a wider reflection on how AI is transforming legal practice itself.
Savin stressed that involving lawyers early in product design and business strategy creates measurable value by identifying risks before they become legal liabilities. He referenced research showing that early legal integration preserves business value and improves governance outcomes.
Both speakers argued that AI may create an opportunity for lawyers to evolve beyond their traditional role as blockers or risk managers and instead become trusted strategic partners embedded within organisations. However, they also acknowledged the challenges this transformation presents for legal education and professional development.
As junior legal work becomes increasingly automated, concerns are emerging around how future lawyers will develop judgment, commercial awareness, and leadership capabilities. The discussion highlighted the growing importance of curiosity, business understanding, and interdisciplinary thinking as essential skills for the next generation of legal professionals.
Ultimately, the session underscored that Europe’s AI regulatory journey remains very much a work in progress. The Omnibus proposal may provide businesses with breathing space, but the deeper questions surrounding governance, uncertainty and institutional readiness are only beginning.