The Board and AI

Dominique Shelton Leipzig, CEO of Global Data Innovation, and Isabelle Sharon, the company’s data-centric legal analyst, delivered crucial insights into how boards can navigate AI governance to unlock genuine business value whilst mitigating risk. Drawing on their experience training over 300 CEOs and board members across the country in July, Leipzig and Sharon revealed the fundamental disconnect between AI investment and returns, and provided a practical framework to bridge this gap.

The AI Investment Paradox

Companies are collectively spending trillions to capture AI’s promised $30 trillion contribution to the global economy over the next five years. Yet McKinsey’s recent findings reveal a stark reality: 80% of generative AI projects have delivered zero impact on earnings before interest and taxes.

Leipzig identified the root cause of this disconnect: “When CEOs and board members see headlines of AI going rogue—it brings a paralysis reaction where they want to get into the market but don’t want AI anywhere near their customers, revenue, operations, or strategy.”

This fear-driven approach relegates AI projects to peripheral activities far from core business functions, ensuring they cannot deliver meaningful ROI. The solution lies in governance frameworks that enable confident deployment of AI in mission-critical areas.

The Trust Framework: A Universal Approach to AI Governance

Global Data Innovation has developed the TRUST framework—a comprehensive approach synthesising regulatory requirements from 100 countries across six continents. The framework comprises five essential elements:

T – Triaging: Risk-ranking AI use cases, weighing business considerations such as strategic alignment and measurable ROI alongside regulatory requirements. High-risk applications require enhanced oversight, whilst low-risk applications need minimal intervention.

R – Right Data: Ensuring training data is accurate, properly formatted, and legally compliant. This includes verifying intellectual property rights, privacy rights, and business rights to use the data. As Leipzig noted, even well-intentioned organisations can fail here—Tennessee spent $400 million on an algorithm that ultimately denied legitimate life-sustaining care claims 92% of the time due to incorrectly merged patient files.

U – Uninterrupted Monitoring: Continuous testing and auditing of AI output against company standards. This addresses the fundamental challenge that AI models drift and degrade over time. Leading AI tool makers acknowledge their most advanced reasoning models produce incorrect answers 48-79% of the time, making constant monitoring essential.

S – Supervising Humans: Training personnel at all levels and creating a culture that encourages early detection and reporting of AI issues. This extends beyond technical teams to include junior employees who might first notice problems.

T – Technical Documentation: Maintaining comprehensive logging and metadata to enable rapid model correction when issues arise.

Vendor Risk and Brand Liability

A critical insight from the session concerned vendor relationships. Whilst many organisations use third-party AI solutions, brand liability remains with the primary company regardless of vendor indemnification clauses.

Leipzig cited Rite Aid as a cautionary example: their vendor’s AI misidentified paying customers—including loyalty card members—as criminals, leading to customers being escorted from stores. The resulting Federal Trade Commission investigation focused entirely on Rite Aid, which became the only company banned from using AI in physical stores for five years. The vendor’s name rarely appeared in headlines.

“Your company is going to be the company on the line no matter what,” Leipzig emphasised. “Brand is important, and no amount of indemnification can correct that.”

The TRUST framework applies equally to vendor relationships, enabling organisations to ask critical questions about training data, implement monitoring systems, and maintain oversight of vendor AI applications.

Breaking Down Organisational Silos

Leipzig identified a fundamental challenge: “There are too many lawyers in silos, too many technologists in silos, and too many CEOs talking to consultants rather than their own people.”

The framework creates a shared language bridging technical teams, legal departments, and executive leadership. For technology teams, “uninterrupted testing, monitoring, and auditing” provides clear implementation guidance. For legal teams, the framework addresses regulatory requirements across multiple jurisdictions. For executives, it offers strategic questions that enable confident AI deployment decisions.

Board Oversight and Fiduciary Duty

Under the Caremark decision in the US, boards have a fiduciary duty of oversight that includes staying informed about AI risks and governance. Leipzig outlined six essential questions boards should ask:

  1. How are we triaging our AI use cases?
  2. Do we have the right data to train our systems?
  3. Is uninterrupted testing, monitoring, and auditing in place?
  4. Who is supervising our AI systems and have they been properly trained?
  5. If AI drifts from our standards, do we have the technical documentation to fix the model?
  6. Are we treating this as continuous governance rather than a one-time exercise?

These questions enable boards to exercise meaningful oversight without requiring deep technical expertise in AI systems.

Cultural Transformation and Employee Engagement

The framework emphasises cultural change alongside technical implementation. Global Data Innovation’s trust coalition includes companies like Zappa, which provides virtual coaching for junior employees—traditionally excluded from executive coaching programmes.

This approach recognises that front-line employees often detect AI problems first and need both the skills and confidence to report issues. Creating psychological safety for AI-related concerns prevents small problems from becoming major incidents.

Implementation Strategy

Organisations seeking to implement trustworthy AI should begin with the TRUST framework’s universal principles whilst adapting specific requirements to their regulatory environment and business context. The framework accommodates different legal regimes—from the EU AI Act’s 17 prohibited use cases to emerging US state legislation—whilst maintaining core governance principles.

Success requires cross-functional teams, board-level commitment, and cultural change that encourages experimentation within defined guardrails. As Leipzig concluded: “When companies know they have a governance framework where they can lead with confidence with AI, then they can go into the AI use cases that are really going to matter to the bottom line.”
