AI’s a Risky Business!

This session, recorded live at the TLTF summit in Austin, Texas, brought together legal tech advisor Cheryl Wilson Griffin and our own Patricia Gannon to explore why AI is not just an innovation opportunity, but a live risk issue for law firms and legal organisations.

From early legal tech to AI risk detective

Wilson Griffin has spent her entire career in legal technology. Since 2002, she has helped to build innovation teams at firms such as Kirkland & Ellis, Mayer Brown, and King & Spalding, before moving into product and startup roles at companies like Lupl and Opus 2. She now advises law firms, in-house legal teams, private equity, and legal tech vendors on how to evaluate, buy, and scale technology.

Her deep dive into AI risk began when a law firm asked her to advise a client on whether to adopt ChatGPT Enterprise. What started as a straightforward technical review quickly exposed uncomfortable gaps: data and logs that were not retained, information that could not be produced later if needed, and safeguards that existed only as contractual promises rather than technical controls. That exercise led Wilson Griffin to map AI risk “from nose to tail”, looking at everything from foundational models to vendor contracts and user behaviour.

Why AI risk is different – and messy

The conversation stressed that AI risk is not just about hallucinations. It is a mix of:

  • Opaque data flows: Firms often do not fully understand what their AI tools collect, store or discard, and which logs are available if something goes wrong.
  • Contractual vs technical controls: Critical protections may depend on side letters and settings that are not obvious to end-users.
  • Regulatory uncertainty: With the EU AI Act and evolving professional rules, there is still little precedent on how courts and regulators will view specific uses of AI.

Wilson Griffin drew a parallel with GDPR: initially, firms “had a heart attack” trying to interpret the rules, but over time they converged on workable patterns. She expects something similar for AI, but only if firms start doing the hard work now of understanding their tools and documenting decisions.

Client transparency and Bar rules

Gannon pressed on the professional conduct angle: Are Bar Associations keeping up?

Wilson Griffin noted that US Bar bodies are engaged, but often “painting with too broad a brush”. For example, the ABA guidance that lawyers should inform clients whenever generative AI is used sounds simple, but quickly becomes unworkable in practice:

  • What exactly counts as generative AI as opposed to machine learning that firms have quietly used in e-discovery for years?
  • If a tool adds “a teensy tiny bit” of generative AI on top of an existing workflow, does that trigger a new disclosure?
  • In insurance defence work, where there is a policyholder and an insurer, who is the client you must notify?

The risk and compliance universe is therefore “a bit of a mess”. Firms cannot wait for perfect clarity; they need pragmatic frameworks now for when, how, and to whom they explain AI use.

Governance for firms of all sizes

For larger firms with innovation teams and cleaner data, AI programmes may feel complicated, but they are at least resourced. For small and mid-sized practices, the challenge can be paralysing. Wilson Griffin suggested a staged approach:

  1. Start with client risk appetite – A firm doing sensitive regulatory or reputational work will need tighter controls than a high-volume consumer practice.
  2. Set basic internal rules – Decide which tools are permitted (ChatGPT, Gemini, Microsoft Copilot, etc.), whether personal licences are allowed, and what must never go into public models.
  3. Think in “bite-size” decisions – Rather than “rolling out AI”, break the task into small choices: data retention, redaction standards, acceptable use, sign-off points.
  4. Focus on real use cases by practice area – Pilots work best where there is real pain and a team actively pulling for change, not where people are “voluntold” to join yet another experiment.

Bite-sized innovation has always been how legal change sticks; AI is no different.

Monitoring, labelling, and the billable hour

To manage human risk at scale, Wilson Griffin sees growing demand for third-party platforms that span a firm’s environment, monitoring the use of tools such as ChatGPT, Gemini, and Harvey. These systems track prompts, usage patterns, and data flows, giving risk teams the visibility they currently lack.

Another emerging need is labelling AI-generated content. Without clear markers, senior lawyers may sign off on documents without realising that key sections were drafted by a model, skipping the detailed supervision that junior work historically received. Watermarking or flagging AI-assisted content could also underpin new billing approaches – whether that is a formal “AI billable hour” or simply more transparent pricing around technology-enabled work.

Wilson Griffin was cautious about predictions, but expects AI to reshape law firm pricing and even equity structures over time, even if the classic billable hour never entirely disappears.

Talent, training, and the “play” mindset

There is also a talent risk in underusing AI. Increasingly, both lawyers and business professionals say they will leave firms that block modern tools altogether. Firms that refuse to engage may find themselves at a disadvantage in recruitment and retention.

The speakers argued for normalising play – a word not often associated with legal practice. Wilson Griffin encourages lawyers to buy a low-cost personal licence for tools like ChatGPT Pro or Gemini and use them in their everyday lives: anywhere they might previously have typed a question into a search engine. By experimenting on non-client matters, they learn how models behave, where they hallucinate, and how to push back (including using the thumbs-down feedback).

Crucially, everyone in the firm should receive at least basic AI and data-risk training, not just the pilot group. Even those not formally “using AI” can accidentally feed privileged or confidential information into risky tools.

And training cannot be a one-off. As platforms, policies, and regulations evolve month by month, firms will need ongoing updates and regular refreshers.

Practical takeaways for law firms

Wilson Griffin and Gannon left listeners with a clear message: the biggest risk is to sit this out. Practical next steps for firms include:

  • Map what AI tools are already in use (formally and informally) across the firm.
  • Understand, and document, what data each tool collects, retains, and shares.
  • Define a simple, firm-wide AI policy: permitted tools, forbidden inputs, escalation routes.
  • Offer basic AI and data-risk training to everyone, not just innovators.
  • Pilot concrete use cases in practice areas where lawyers are asking for help, rather than imposing technology from above.
  • Explore monitoring and labelling solutions so partners know when they are reviewing AI-assisted work.
  • Encourage safe experimentation so lawyers can use AI as a “muse” to test ideas, not a black box that silently drafts in the background.

In short: AI is a risky business, but with governance, transparency, and a culture of learning, that risk can be managed – and turned into a genuine advantage for firms and their clients.
