This session, recorded live at the TLTF summit in Austin, Texas, brought together legal tech advisor Cheryl Wilson Griffin and our own Patricia Gannon to explore why AI is not just an innovation opportunity, but a live risk issue for law firms and legal organisations.
From early legal tech to AI risk detective
Wilson Griffin has spent her entire career in legal technology. Since 2002, she has helped to build innovation teams at firms such as Kirkland & Ellis, Mayer Brown, and King & Spalding, before moving into product and startup roles at companies like Lupl and Opus 2. She now advises law firms, in-house legal teams, private equity, and legal tech vendors on how to evaluate, buy, and scale technology.
Her deep dive into AI risk began when a law firm asked her to advise a client on whether to adopt ChatGPT Enterprise. What started as a straightforward technical review quickly exposed uncomfortable gaps: data and logs that were not retained, information that could not be produced later if needed, and safeguards that existed only as contractual promises rather than technical controls. That exercise led Wilson Griffin to map AI risk “from nose to tail”, looking at everything from foundation models to vendor contracts and user behaviour.
Why AI risk is different – and messy
The conversation stressed that AI risk is not just about hallucinations. It is a mix of:
- Opaque data flows: Firms often do not fully understand what their AI tools collect, store, or discard, and which logs are available if something goes wrong.
- Contractual vs technical controls: Critical protections may depend on side letters and settings that are not obvious to end-users.
- Regulatory uncertainty: With the EU AI Act and evolving professional rules, there is still little precedent on how courts and regulators will view specific uses of AI.
Wilson Griffin drew a parallel with GDPR: initially, firms “had a heart attack” trying to interpret the rules, but over time they converged on workable patterns. She expects something similar for AI, but only if firms start doing the hard work now of understanding their tools and documenting decisions.
Client transparency and Bar rules
Gannon pressed on the professional conduct angle: Are Bar Associations keeping up?
Wilson Griffin noted that US Bar bodies are engaged, but often “painting with too broad a brush”. For example, the ABA guidance that lawyers should inform clients whenever generative AI is used sounds simple, but quickly becomes unworkable in practice:
- What exactly counts as generative AI as opposed to machine learning that firms have quietly used in e-discovery for years?
- If a tool adds “a teensy tiny bit” of generative AI on top of an existing workflow, does that trigger a new disclosure?
- In insurance defence work, where there is a policyholder and an insurer, who is the client you must notify?
The risk and compliance universe is therefore “a bit of a mess”. Firms cannot wait for perfect clarity; they need pragmatic frameworks now for when, how, and to whom they explain AI use.
Governance for firms of all sizes
For larger firms with innovation teams and cleaner data, AI programmes may be complicated, but they are at least resourced. For small and mid-sized practices, the challenge can be paralysing. Wilson Griffin suggested a staged approach:
- Start with client risk appetite – A firm doing sensitive regulatory or reputational work will need tighter controls than a high-volume consumer practice.
- Set basic internal rules – Decide which tools are permitted (ChatGPT, Gemini, Microsoft Copilot, etc.), whether personal licences are allowed, and what must never go into public models (see the sketch after this list).
- Think in “bite-size” decisions – Rather than “rolling out AI”, break the task into small choices: data retention, redaction standards, acceptable use, sign-off points.
- Focus on real use cases by practice area – Pilots work best where there is real pain and a team actively pulling for change, not where people are “voluntold” to join yet another experiment.
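To make the “basic internal rules” step concrete, here is a minimal sketch of how a firm’s permitted-tools and forbidden-input rules could be captured in machine-readable form and checked before a prompt is sent. The tool names, data categories, and helper function are illustrative assumptions, not a description of any particular product or of Wilson Griffin’s own framework.

```python
# Minimal, hypothetical sketch of firm-level AI usage rules.
# Tool names, data categories, and settings are illustrative only.

PERMITTED_TOOLS = {"ChatGPT Enterprise", "Microsoft Copilot", "Gemini for Workspace"}
PERSONAL_LICENCES_ALLOWED = False  # consumer accounts blocked for client work in this example

# Categories of information that must never go into public models.
FORBIDDEN_CATEGORIES = {"client-identifying", "privileged", "deal-confidential"}

def check_request(tool: str, data_categories: set[str], personal_licence: bool) -> list[str]:
    """Return a list of policy violations for a proposed AI use (empty list means allowed)."""
    violations = []
    if tool not in PERMITTED_TOOLS:
        violations.append(f"Tool not on the approved list: {tool}")
    if personal_licence and not PERSONAL_LICENCES_ALLOWED:
        violations.append("Personal licences may not be used for firm or client work")
    blocked = data_categories & FORBIDDEN_CATEGORIES
    if blocked:
        violations.append(f"Forbidden input categories: {', '.join(sorted(blocked))}")
    return violations

# Example: a lawyer wants to summarise a privileged memo with a personal ChatGPT account.
print(check_request("ChatGPT", {"privileged"}, personal_licence=True))
```

Even a toy check like this forces the bite-size decisions into the open: which tools, which accounts, and which categories of information are off limits.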
Bite-sized innovation has always been how legal change sticks; AI is no different.
Monitoring, labelling, and the billable hour
To manage human risk at scale, Wilson Griffin sees growing demand for third-party platforms that span a firm’s environment, monitoring the use of tools such as ChatGPT, Gemini, and Harvey. These systems track prompts, usage patterns, and data flows, giving risk teams the visibility they currently lack.
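As a rough illustration of the visibility such platforms aim to provide, the sketch below shows a hypothetical logging wrapper that records who sent which prompt to which tool, and when, before the request goes out. The log format and field names are assumptions for illustration, not the design of any named product.

```python
import json
import datetime

AUDIT_LOG = "ai_usage_log.jsonl"  # hypothetical append-only audit trail

def log_prompt(user: str, tool: str, prompt: str, matter_id: str | None = None) -> None:
    """Append a single prompt event to the firm's audit log before it is sent to the model."""
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "matter_id": matter_id,
        "prompt_chars": len(prompt),    # store size rather than content if prompts are sensitive
        "prompt_preview": prompt[:80],  # short preview for triage by the risk team
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_prompt("a.smith", "ChatGPT Enterprise",
           "Summarise the attached lease for key break clauses", matter_id="M-1042")
```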
Another emerging need is labelling AI-generated content. Without clear markers, senior lawyers may sign off on documents without realising that key sections were drafted by a model, skipping the detailed supervision that junior work historically received. Watermarking or flagging AI-assisted content could also underpin new billing approaches – whether that is a formal “AI billable hour” or simply more transparent pricing around technology-enabled work.
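To make the labelling idea concrete, here is a minimal sketch of how AI-assisted sections might be flagged for the reviewing partner. The marker format and fields are purely illustrative; real drafting and document-management tools would implement this differently.

```python
from dataclasses import dataclass

@dataclass
class Section:
    heading: str
    text: str
    ai_assisted: bool = False
    ai_tool: str | None = None  # e.g. the model or product that produced the first draft

def render_for_review(sections: list[Section]) -> str:
    """Render a draft with an explicit banner on every AI-assisted section."""
    parts = []
    for s in sections:
        if s.ai_assisted:
            parts.append(f"[AI-ASSISTED DRAFT ({s.ai_tool or 'unknown tool'}) - requires full review]")
        parts.append(f"{s.heading}\n{s.text}")
    return "\n\n".join(parts)

draft = [
    Section("Background", "The parties entered into the agreement on ..."),
    Section("Limitation of liability", "Neither party shall be liable for ...",
            ai_assisted=True, ai_tool="ChatGPT Enterprise"),
]
print(render_for_review(draft))
```

Even a crude banner like this means the supervising lawyer knows which clauses started life in a model and still need the scrutiny that junior drafting would traditionally have received.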
Wilson Griffin was cautious about predictions, but expects AI to reshape law firm pricing and even equity structures over time, even if the classic billable hour never entirely disappears.
Talent, training, and the “play” mindset
There is also a talent risk in underusing AI. Increasingly, both lawyers and business professionals say they will leave firms that block modern tools altogether. Firms that refuse to engage may find themselves at a disadvantage in recruitment and retention.
The speakers argued for normalising play – a word not often associated with legal practice. Wilson Griffin encourages lawyers to buy a low-cost personal licence for tools like ChatGPT Pro or Gemini and use them in their everyday lives: anywhere they might previously have typed a question into a search engine. By experimenting on non-client matters, they learn how models behave, where they hallucinate, and how to push back (including using the thumbs-down feedback).
Crucially, everyone in the firm should receive at least basic AI and data-risk training, not just the pilot group. Even those not formally “using AI” can accidentally feed privileged or confidential information into risky tools.
And training cannot be a one-off. As platforms, policies, and regulations evolve month by month, firms will need ongoing updates and regular refreshers.
Practical takeaways for law firms
Wilson Griffin and Gannon left listeners with a clear message: the biggest risk is to sit this out. Practical next steps for firms include:
- Map what AI tools are already in use (formally and informally) across the firm.
- Understand, and document, what data each tool collects, retains, and shares.
- Define a simple, firm-wide AI policy: permitted tools, forbidden inputs, escalation routes.
- Offer basic AI and data-risk training to everyone, not just innovators.
- Pilot concrete use cases in practice areas where lawyers are asking for help, rather than imposing technology from above.
- Explore monitoring and labelling solutions so partners know when they are reviewing AI-assisted work.
- Encourage safe experimentation so lawyers can use AI as a “muse” to test ideas, not a black box that silently drafts in the background.
In short: AI is a risky business, but with governance, transparency, and a culture of learning, that risk can be managed – and turned into a genuine advantage for firms and their clients.