In a comprehensive Platforum9 session, Donna O’Leary shared insights from her transition from banking lawyer at top-tier Irish firm Mason Hayes & Curran to AI literacy consultant specialising exclusively in legal practice. With the European AI Act now in effect, her expertise addresses critical compliance requirements facing law firms across the EU.
The Regulatory Reality: EU AI Act Compliance
The European AI Act represents the world’s first comprehensive AI regulation framework, establishing mandatory literacy requirements of which many law firms remain unaware. O’Leary explained its risk-based approach, which classifies AI systems as prohibited, high-risk, limited-risk, or minimal-risk.
For most law firms currently using tools like ChatGPT, Claude, or Microsoft Copilot, the primary concern centres on limited-risk systems requiring mandatory AI literacy training. “It’s mandatory to ensure that your staff, as of February this year, have a sufficient level of AI literacy,” O’Leary emphasised.
The enforcement timeline extends to August 2026, with member states appointing competent authorities by August 2025. Ireland has designated eight sector-specific bodies, though the overarching authority remains undetermined.
Beyond Prohibition: The Supervision Dilemma
Many firms maintain blanket AI prohibitions, but O’Leary argued this approach creates a false sense of security. “If you can’t monitor it, you really can never know what your employees are using. You have to assume that your employees have access to it and train them.”
The supervision duty presents particular risks. O’Leary illustrated with a scenario where an untrained legal assistant uses ChatGPT for case research, generating fictional precedents that appear in court documents. “You actually have a duty to supervise, and you should have verified what you gave to the court. You can’t say ChatGPT was prohibited—you have to assume the worst and train people.”
This connects to fundamental professional standards including duty of confidentiality, supervision obligations, and the duty not to mislead the court.
Practical Implementation: The 30-Minute Solution
Effective AI literacy training need not be overwhelming. O’Leary’s approach covers three essential pillars:
General Understanding: Staff must comprehend AI capabilities, limitations, and different system types, enabling informed decision-making about benefits, risks, and potential harm.
Sector-Specific Standards: Legal professionals require training on professional obligations including confidentiality, supervision, court honesty, professional independence, and integrity.
Role-Specific Applications: Training must address how different roles within firms interact with AI tools, from partners to legal executives to administrative staff.
“It really takes only half an hour to an hour of training to get people up to speed,” O’Leary noted, with workshops ranging from one to three hours depending on depth required.
Low-Risk, High-Value Use Cases
For firms beginning their AI journey, O’Leary recommends starting with non-client information applications where risk remains minimal. Successful examples include:
Case Summary Generation: Judges read cases in full, then use AI to draft summaries they verify, saving 1-1.5 hours of drafting time while maintaining accuracy through expert oversight.
Plain English Translation: Converting complex regulations into client-friendly language under 200 words, leveraging lawyer expertise to verify and refine outputs.
Administrative Efficiency: Time sheet review for prohibited words or client-specific requirements, reducing manual review burden while maintaining quality control.
Non-Legal Functions: HR job descriptions, marketing content, and LinkedIn posts for departments where legal confidentiality concerns don’t apply.
The key question: “Would you give this task to a trainee or legal assistant? If this goes massively wrong, what’s the risk to reputation and clients?”
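The timesheet review use case above is simple enough to sketch in a few lines. The following is an illustrative Python sketch only, not a tool O’Leary describes: the prohibited-word list, function name, and sample entries are all hypothetical, and a real firm would substitute its own billing guidelines.

```python
# Illustrative sketch: flag timesheet narratives containing
# firm-defined prohibited words before bills reach clients.
# The word list and sample entries below are hypothetical.

PROHIBITED = {"research", "review file", "various"}  # vague terms some firms ban


def flag_entries(entries):
    """Return (index, narrative, matched terms) for entries needing rework."""
    flagged = []
    for i, narrative in enumerate(entries):
        lowered = narrative.lower()
        hits = sorted(term for term in PROHIBITED if term in lowered)
        if hits:
            flagged.append((i, narrative, hits))
    return flagged


entries = [
    "Draft and revise share purchase agreement clause 4.2",
    "Research various regulatory points",
]
for idx, text, hits in flag_entries(entries):
    print(f"Entry {idx}: contains prohibited term(s): {', '.join(hits)}")
```

A check like this reduces the manual review burden while keeping a human in the loop: flagged entries go back to the fee earner rather than being auto-corrected, matching the expert-oversight pattern O’Leary recommends.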
The Ethics and Governance Evolution
AI governance encompasses literacy, ethics, and tool selection decisions. The American Bar Association recently advised examining AI provider ethics and responsible development practices, adding complexity to tool selection.
However, most firms lack resources for comprehensive provider evaluation. O’Leary recommends focusing on EU AI Act compliance as a baseline, then examining founding teams: “If it’s for a law firm, is one of the founders a lawyer? Are they mostly venture capital backed? What’s their goal—revolutionise legal tech or sell quickly?”
Overcoming Resistance Through Understanding
The legal profession’s resistance to AI adoption stems partly from fundamental training differences. “When a client comes to me, they want an answer in the shortest period of time. We are geared to delivering answers rather than helping find our own way.” This answer-first mindset, she noted, creates tension with AI’s more iterative, collaborative approach.
Fear often dissipates through education. “When you do an AI literacy session and understand what you’re dealing with, a lot of the fear disappears. Then there’s curiosity and willingness.”
O’Leary advocates starting with baby steps rather than comprehensive tool implementation, emphasising proper problem identification before solution selection.
Strategic Recommendations
For Individual Lawyers: Invest in understanding AI capabilities and limitations through proper training, focusing on low-risk applications that complement existing expertise.
For Law Firms: Implement mandatory literacy training regardless of current AI policies, develop clear use case guidelines, and establish governance frameworks addressing both compliance and ethical considerations.
For the Profession: Recognise AI literacy as essential professional competency, similar to technology skills that became standard over previous decades.
As O’Leary concluded, “This is brand new technology that we’re faced with. People aren’t educated yet, but the obligation for literacy training is already here.”