In a recent Platforum9 session, Ciara O’Buachalla, an AI governance manager with extensive experience spanning law firm practice, in-house legal work, and AI startup development, shared insights on how in-house legal teams can effectively integrate artificial intelligence into their operations.
Drawing from her unique journey through corporate law, litigation, an MBA in Barcelona, and roles at Amazon and the legal AI startup Donna AI, O’Buachalla provided practical guidance on navigating AI adoption whilst managing risk and compliance.
Why Lawyers Will Lead AI Adoption
O’Buachalla emphasised a fundamental reality: in-house lawyers are natural early adopters of AI technology due to their unique working constraints. Unlike law firm practitioners who can delve deep into legal research, in-house counsel must provide rapid risk assessments and business impact analyses, typically within two to three days.
“If anyone’s going to use AI, it’s definitely going to be in-house lawyers,” O’Buachalla explained. “You’re basically asked a question and you have to give an answer on the risk and the effects on the business.” This pressure for speed, combined with a lawyer’s ability to spot when AI produces incorrect results, makes AI an invaluable assistant rather than a replacement.
The implications extend beyond individual efficiency gains. As in-house teams experience time savings of approximately 30%, they will increasingly expect external law firms to demonstrate similar technological capabilities and efficiency improvements.
The Proof of Concept Approach
O’Buachalla outlined a structured methodology for AI implementation, emphasising that successful adoption requires patience and systematic testing rather than rushing into enterprise agreements.
The POC Framework: The foundation of successful AI integration lies in conducting proper proof of concepts lasting three to six months—significantly longer than the typical two-week or one-month trials offered by vendors. “The POC should be tested between three to six months,” O’Buachalla stressed, noting that in-house legal work is highly varied rather than repetitive, requiring extended evaluation periods to assess genuine utility.
Comparative Testing Strategy: Organisations should test AI tools alongside established platforms like ChatGPT Enterprise to determine whether specialised legal tools provide genuine value beyond enhanced user interfaces. This approach helps distinguish tools that offer substantial improvements from those that are merely “ChatGPT wrappers” with legal-specific prompts.
Build vs. Buy Considerations: While some organisations consider developing in-house AI solutions, O’Buachalla cautioned about the ongoing maintenance requirements. Success depends on having dedicated ownership for updates and improvements, making the cost-benefit analysis against purchasing established tools particularly important.
Workflow Mapping and Use Case Selection
Before implementing any AI solution, O’Buachalla emphasised the critical importance of documenting existing workflows and identifying specific use cases. For commercial contracts, this means understanding the types of agreements regularly processed, their complexity, and the intake procedures.
The workflow analysis should identify which components can benefit from AI assistance, recognising that complete automation is rarely appropriate. As O’Buachalla noted, “Most of the time the whole workflow cannot be automated or handed over to AI.” The goal is identifying specific questions and processes within the workflow that AI can expedite whilst maintaining legal oversight.
Building a Culture of AI Integration
Successful AI implementation extends far beyond technology deployment to encompass cultural change management. O’Buachalla outlined several proven strategies for driving adoption across legal teams.
Peer-to-Peer Networks: Establishing AI champion networks with representatives from each team creates natural adoption pathways. Having two people per team enables idea sharing and mutual support in this emerging field.
Office Hours and Knowledge Sharing: Regular office hours provide informal support for team members experimenting with AI tools. Additionally, “show and share” sessions where lawyers demonstrate their AI use cases help spread practical knowledge and inspire new applications.
Centralised Knowledge Base: Creating a single source of truth for AI guidance, including documented use cases, proven prompts, and workflow integration examples, supports consistent adoption across the department.
Privacy, Security, and Risk Management
O’Buachalla addressed the complex privacy and security considerations that remain at the forefront of legal teams’ AI concerns. She outlined a tiered approach to data protection:
Data Sanitisation: The most secure approach involves stripping confidential and personal data from any content used with AI systems, essentially using AI as a drafting assistant without exposing sensitive information.
Enterprise-Grade Solutions: For broader implementation, enterprise agreements with established providers like Microsoft’s Copilot or Google’s Gemini Workspace offer contractual protections, including commitments not to train on client data and to delete prompts when a session closes.
Compliance Requirements: Organisations increasingly demand SOC 2 compliance and single sign-on authentication before considering AI tools, automatically excluding many startup solutions from consideration. Contract negotiations should include explicit commitments regarding data protection and training prohibitions.
Future Outlook: The Agent-to-Agent Ecosystem
Looking ahead, O’Buachalla anticipated significant developments in AI agent technology that will transform legal operations. She envisioned systems capable of connecting multiple platforms, enabling lawyers to query across different systems seamlessly—from checking annual leave balances to accessing policy information stored in separate systems.
For legal applications, this could mean AI agents capable of handling different transaction stages or automatically retrieving relevant information from multiple sources. However, O’Buachalla emphasised that authorisation and access controls will become increasingly critical as these systems develop.
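The access-control point can be made concrete with a small sketch. Everything here is hypothetical and illustrative (the `AgentRouter` class, system names, and roles are invented for the example): the idea is simply that an agent layer checks who is asking before it dispatches a query to any connected system.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRouter:
    """Hypothetical router that dispatches queries to connected systems,
    enforcing per-system role permissions before anything is retrieved."""
    permissions: dict = field(default_factory=dict)  # system -> allowed roles
    handlers: dict = field(default_factory=dict)     # system -> query handler

    def register(self, system: str, allowed_roles: set, handler) -> None:
        self.permissions[system] = allowed_roles
        self.handlers[system] = handler

    def query(self, system: str, user_role: str, question: str) -> str:
        # Authorisation check happens before the query reaches the system.
        if user_role not in self.permissions.get(system, set()):
            return f"DENIED: role '{user_role}' may not access {system}"
        return self.handlers[system](question)

router = AgentRouter()
router.register("hr", {"employee", "legal"},
                lambda q: "Annual leave: 12 days remaining")
router.register("policy", {"legal"},
                lambda q: "Policy repository: AI usage guidelines")

print(router.query("hr", "employee", "What is my leave balance?"))
print(router.query("policy", "employee", "Show the AI policy"))  # refused
```

As agents gain the ability to reach across HR, policy, and transaction systems, this kind of gatekeeping layer is where the authorisation concerns O’Buachalla raised would have to live.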
The Evolving Role of Legal Professionals
The integration of AI technology represents an opportunity for legal professionals to return to more strategic, advisory functions whilst AI handles routine drafting and research tasks. O’Buachalla suggested this evolution aligns with many lawyers’ original career motivations beyond pure transactional work.
The changing landscape also creates opportunities for younger lawyers who approach new technology with greater openness and willingness to experiment. Their natural adoption of AI tools may help drive broader cultural change within traditionally hierarchical legal organisations.
Conclusion
O’Buachalla’s insights demonstrate that successful AI integration for in-house legal teams requires strategic planning, systematic testing, and cultural change management. Rather than viewing AI as a complete replacement for legal expertise, successful adoption treats these tools as powerful assistants that enable lawyers to work more efficiently whilst maintaining essential oversight and judgement.
The message for legal leaders is clear: AI adoption is not a question of “if” but “how quickly and effectively” teams can integrate these tools whilst maintaining appropriate risk management and professional standards. Those who embrace systematic implementation strategies will likely find themselves with competitive advantages in both efficiency and client service capabilities.