Andrej Savin (Professor, CBS Law), Sid Ali Boutellis (Legal Tech Expert), and Yuliia Habriiel (Founder, the eyreACT) explored what the EU AI Act is actually designed to do, who it applies to, and what “getting compliant” looks like in practice, particularly for smaller firms and teams integrating third-party AI components into products and services.
The EU AI Act in One Paragraph
Savin’s core framing of the Act was clarifying: the EU AI Act is primarily product safety legislation, not a catch-all “AI ethics” law. That distinction matters because several issues people assume it covers, such as liability and broader fundamental rights frameworks, largely sit elsewhere. The Act’s operating logic is risk-based: it focuses on AI technologies, systems, and (in specific cases) models that are considered unsafe, then attaches obligations based on category and role.
Risk Tiers: What Falls Where (and Why it Matters)
Savin and Boutellis repeatedly returned to the same idea: your first job is to determine which bucket you’re in, because that dictates your obligations.
Prohibited AI
Savin noted these are relatively narrow in day-to-day corporate contexts, but they include categories such as social scoring, emotion recognition, behaviour manipulation, and untargeted scraping of facial images. If you are here, you’re not “mitigating”; you’re stopping.
High-risk AI systems
This is where more organisations find themselves. Savin highlighted broad domains that can trigger high-risk classification, including medical devices, vehicles, HR, education, access to essential services (e.g., insurance and banking), and critical infrastructure. The key point: the scope is wide, and crucially, there is no meaningful de minimis threshold. Smaller firms can face serious obligations if their use case qualifies.
Typical high-risk requirements discussed included:
- Fundamental rights impact assessments
- Conformity assessments
- Controls and obligations around datasets
- Risk management and quality management systems
- Data governance
General-purpose AI and large language models
Savin noted that the obligations here primarily target providers of GPAI/LLMs and those who significantly modify them. “Ordinary users” carry little direct burden, but the session repeatedly stressed that your role can change depending on what you do with a system.
Roles, Responsibility, and the “Value Chain Trap”
A major practical warning concerned responsibility across the AI value chain (including what was referenced as Article 25): you can be treated as a “provider” (with provider-level obligations) if you:
- Put your name/trademark on a system
- Substantially modify it
- Change its intended purpose
In plain terms: teams can take something “off the shelf”, integrate it into their product, and accidentally inherit obligations they didn’t budget for. The view from the discussion was that many organisations underestimate this risk because procurement and product teams don’t instinctively see rebranding and repurposing as regulatory triggers.
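To make these triggers concrete, here is a minimal sketch of how a procurement or product checklist might encode them. The class, field names, and example are illustrative assumptions, not a legal test; in particular, whether a modification is “substantial” is a judgement the code cannot make for you.

```python
from dataclasses import dataclass

# Hypothetical encoding of the Article 25 "value chain trap": a team
# integrating a third-party AI system may inherit provider-level
# obligations if any of these triggers apply. Illustrative only.

@dataclass
class Integration:
    rebranded: bool               # your name/trademark appears on the system
    substantially_modified: bool  # e.g., changes to the system's core behaviour
    repurposed: bool              # used for a purpose the upstream provider did not intend

def inherits_provider_obligations(i: Integration) -> bool:
    """Return True if any provider-reclassification trigger applies."""
    return i.rebranded or i.substantially_modified or i.repurposed

# Example: rebranding an off-the-shelf system inside your own product
deal = Integration(rebranded=True, substantially_modified=False, repurposed=False)
if inherits_provider_obligations(deal):
    print("Provider-level obligations may apply; budget for them before launch.")
```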
Compliance Reality: Cost, Tooling, and Avoiding Magical Thinking
Compliance is expensive. Boutellis’ view was pragmatic: compliance needs to be treated holistically, and tools can help, but only if teams avoid the fantasy that a “plug-in” makes the problem disappear.
Savin reinforced this with a caution: software and consultancy can be either helpful or harmful, depending on whether leadership understands the risk posture and whether there are procedures in place. If a firm hopes a third party will “turn up and solve compliance”, it’s heading in the wrong direction.
Habriiel described an alternative approach: rather than using AI to “assess AI”, her team operationalises obligations via rule-based logic, classifies risk level from structured inputs, and then runs evidence management workflows (manual or via API) to identify gaps and prepare for audit and investor scrutiny. The key theme: compliance is becoming an evidence discipline, not a one-off document exercise.
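As a rough illustration of that rule-based approach (a sketch, not Habriiel’s actual product logic), a tier classifier over structured inputs plus an evidence-gap check might look like the following. The domain and practice sets, field names, and evidence items are simplified assumptions drawn from the tiers discussed above, not the Act’s annexes.

```python
# Simplified stand-ins for the Act's categories; illustrative, not a legal taxonomy.
PROHIBITED_PRACTICES = {"social_scoring", "behaviour_manipulation", "untargeted_face_scraping"}
HIGH_RISK_DOMAINS = {"medical_devices", "vehicles", "hr", "education",
                     "essential_services", "critical_infrastructure"}
HIGH_RISK_EVIDENCE = ["fundamental_rights_impact_assessment", "conformity_assessment",
                      "risk_management_system", "data_governance_policy"]

def classify(use_case: dict) -> str:
    """Map a structured use-case description to a coarse risk tier."""
    if use_case.get("practice") in PROHIBITED_PRACTICES:
        return "prohibited"  # stop, don't mitigate
    if use_case.get("domain") in HIGH_RISK_DOMAINS:
        return "high_risk"
    return "limited_or_minimal"

def evidence_gaps(use_case: dict, on_file: set) -> list:
    """List required evidence items not yet on file for a high-risk system."""
    if classify(use_case) != "high_risk":
        return []
    return [item for item in HIGH_RISK_EVIDENCE if item not in on_file]

# Example: an HR screening tool with only a conformity assessment on file
print(evidence_gaps({"domain": "hr"}, {"conformity_assessment"}))
# -> ['fundamental_rights_impact_assessment', 'risk_management_system', 'data_governance_policy']
```

The design point, as described in the session, is that the classification and the evidence checklist live in auditable rules rather than in a model’s judgement, which makes gaps reproducible and explainable to auditors and investors.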
Standards as the Practical Bridge (and Why ISO 42001 Keeps Coming Up)
Boutellis suggested organisations looking for readiness signals should pay attention to recognised frameworks and standards, especially ISO/IEC 42001 (AI management systems). He framed standards as a way to create a credible posture before formal enforcement pressure lands, and cited market dynamics where customers and partners push ISO requirements, making them de facto expectations.
Savin added an important policy signal: even proposals to delay certain high-risk obligations have been linked to the need for guidelines and standards to mature, suggesting standards will become part of the “how” of compliance in practice, not just a nice-to-have.
Extraterritorial Reach and the “Brussels Effect”
Habriiel was unambiguous: non-EU firms can fall under the Act if their AI is used in the EU or affects people in the EU. Where your headquarters sits will not save you if your market extends into Europe.
On global alignment, Savin drew a sharp distinction with GDPR. GDPR was comparatively linear; the AI Act’s risk-based architecture is more complex, and other jurisdictions (US, UK, China, Canada) are developing different regulatory approaches. The likely business reality: firms that want EU market access will comply, but exporting the EU model wholesale may be harder than it was with GDPR.
Enforcement, Timelines, and What’s Happening Already
An audience question from Ron Given pressed on enforcement mechanics. Savin explained enforcement will sit primarily with national authorities, not the Commission, mirroring familiar GDPR patterns and raising the possibility of differences between Member States.
Habriiel emphasised something more immediate: regardless of the formal enforcement ramp, investors and enterprises are already asking compliance questions in due diligence. In her view, this effectively pulls compliance forward: teams may be asked to show that they have assessed risk and budgeted for compliance now, not later.
Sean Groeger noted Ireland’s draft bill and raised the longstanding concern of how EU regulations get transposed locally, particularly given Ireland’s concentration of international HQs and the risk of perceived “gold plating”.
Closing Note
The shared message was steady and actionable: treat the EU AI Act as a risk-and-evidence operating model, not a last-minute legal fire drill. Know your category, know your role in the value chain, and build compliance into product development early, because the market (not just regulators) is moving in that direction.