Our latest session hosted a deep-dive with Professor Andrej Savin of Copenhagen Business School on the organisational realities of “risk-based” digital compliance in the EU. Savin unpacked how the AI Act, the GDPR, cybersecurity legislation, and platform rules shift responsibility onto businesses themselves – and why in-house counsel and boards now sit at the centre of AI governance.
The discussion also touched on the squeeze facing small and medium-sized enterprises (SMEs), the limits of external advisers and “AI compliance tools”, and the growing role of international standards in making this complexity manageable.
Risk-based regulation goes global
Savin explained that almost all major pillars of digital regulation – privacy/data protection, cybersecurity, platform regulation, and AI – are now explicitly risk-based. Rather than prescribing a single, uniform approach, legislators define broad categories (such as “high risk” or “prohibited”), then expect companies to determine what those risks mean in their own context.
This is not confined to the EU. Similar models are emerging in the United States, China, and through Council of Europe initiatives, while risk-based approaches have long been familiar in sectors such as food, pharmaceuticals, finance, and anti-money laundering. For any business selling into the EU, risk-based compliance is becoming a practical reality, regardless of where it is headquartered.
Compliance is a leadership and governance issue
While compliance functions can be placed in different parts of the business, Savin stressed that AI and digital compliance are, above all, top management accountability issues. Laws such as the GDPR and NIS2 explicitly link key roles (like the data protection officer) to the board, and increasingly spell out that boards are responsible for understanding and overseeing digital risks.
Gannon, the session host, pressed on what this means in practice for companies without large in-house teams. Savin’s view is that the board is the only body with a genuine bird’s-eye view of the organisation: it must set the tone, allocate ownership, and ensure that governance is coherent across business units and jurisdictions, rather than allowing fragmented, siloed efforts.
The pivotal role of in-house counsel
Savin argued that in-house counsel is the natural coordination point between the board, chief information and compliance officers, external advisers, and technical teams. Boards cannot design policies for every subsidiary or product line, nor can a CIO alone take informed decisions on privacy, cybersecurity, and AI risk classification.
In-house teams are best placed to translate legal obligations into business reality, allocate responsibilities, and ensure that policies are not just drafted but embedded “all day, every day” in operations. Where companies rely heavily on external counsel, someone internally still needs to own the project – otherwise advice remains a stack of documents that never finds its way into processes, training, and tooling.
Why external counsel and software are not silver bullets
Gannon raised a concern many smaller businesses share: if risk is inherently business-specific, how much can external advisers really help? Savin acknowledged that law firms did well in the early GDPR era, offering fairly standardised compliance packages for what was a relatively discrete, well-bounded problem.
AI compliance is different. Risk depends on the specific data sets, models, use cases, and value chain of each company. External counsel can provide frameworks and benchmarks, but they do not hold the granular operational knowledge that determines what is truly “high risk” in a given organisation.
Similarly, Savin cautioned against over-reliance on off-the-shelf “AI compliance tools”. Software can help process documentation and monitor obligations, but it cannot decide what matters strategically, or what level of residual risk the business is prepared to accept.
SMEs in the firing line – but also with an opportunity
The resource challenge is particularly acute for SMEs and start-ups, where margins are thin and access to capital is patchy. Hiring specialist compliance staff can feel like a luxury, yet the cost of getting things wrong – from reputational damage to regulatory fines or litigation – can be existential.
Savin noted, however, that size does not always equal maturity. In some research on cybersecurity in supply chains, more heavily regulated sectors such as finance and insurance proved relatively comfortable with layered compliance, while larger organisations in less regulated sectors struggled with fragmented, uncoordinated approaches.
Smaller companies can sometimes move faster: they have fewer silos, simpler systems, and can build AI governance into their structure from the outset, rather than retrofitting controls onto sprawling legacy environments.
High-risk AI: fewer firms than feared, more obligations than expected
Savin clarified that not every AI use case falls under the “high-risk” regime of the AI Act. High-risk systems are mainly those tied to the product legislation listed in Annex I – for example, certain regulated products such as medical devices or aircraft components – or the specific use cases listed in Annex III, such as HR and recruitment, education, and some judicial and law enforcement contexts.
Pulling a general-purpose tool such as a large language model “off the shelf” and using it for internal productivity will not automatically trigger the full high-risk obligations, unless the system is significantly modified or deployed in a high-risk context.
The complexity arises when companies integrate third-party models into their own products, combine them with proprietary data sets, and then place them on the market. They must then determine whether they are acting as model developers, system providers, or simply users – often while simultaneously shouldering GDPR and cybersecurity duties.
Enforcement, fines and the rise of standards
Headline fines in this space are typically calculated as a percentage of global turnover, echoing the GDPR and other digital laws, meaning that the impact is very different for a global tech giant than for a niche HR software provider. Savin suggested that, statistically, many smaller companies may not face immediate enforcement.
However, the more serious and widespread risks may be operational chaos and loss of trust if governance is weak. Poorly managed AI can erode customer confidence, disrupt internal processes, and reduce the value companies hoped to gain from automation.
To reduce uncertainty – especially across borders – Savin expects international and European standards (ISO, NIST, CEN/CENELEC, and others) to play an increasing role. While no standard will guarantee full compliance, adherence can take organisations a long way towards a defensible, auditable posture and may ultimately function as a reputational “badge of quality” in the eyes of customers and partners.
Practical takeaways for legal and business leaders
Savin and Gannon closed with a set of pragmatic messages for organisations navigating risk-based digital compliance:
- Treat AI governance as a core strategy, not a side project. The board needs regular, informed input on digital risk, not just occasional updates when something breaks.
- Centre in-house counsel in the governance structure. Make legal the hub for coordinating between technical teams, compliance, external advisers, and the board.
- Map your exposure before you buy solutions. Understand where AI is actually used in your value chain, which systems might be high risk, and what is realistically manageable in-house.
- Invest in maturity, not just headcount. Clear delineation of roles, good communication between leadership and operational teams, and an iterative approach to risk assessment matter more than sheer size.
- Use compliance as a differentiator. Being able to explain – credibly – how your organisation manages AI and data risk can become a competitive advantage with clients, regulators, and investors.
Risk-based compliance is not a one-off project or a simple box-ticking exercise; it is an ongoing discipline that will define how digital businesses operate in and beyond the EU for years to come.