Dominique Shelton Leipzig, CEO of Global Data Innovation, and Isabelle Sharon, the company’s data-centric legal analyst, delivered crucial insights into how boards can navigate AI governance to unlock genuine business value whilst mitigating risk. Drawing on their experience training over 300 CEOs and board members across the country in July, Leipzig and Sharon revealed the fundamental disconnect between AI investment and returns—and provided a practical framework to bridge this gap.
The AI Investment Paradox
Companies are collectively spending trillions to capture AI’s promised $30 trillion contribution to the global economy over the next five years. Yet McKinsey’s recent findings reveal a stark reality: 80% of generative AI projects have delivered zero impact on earnings before interest and taxes.
Leipzig identified the root cause of this disconnect: “When CEOs and board members see headlines of AI going rogue—it brings a paralysis reaction where they want to get into the market but don’t want AI anywhere near their customers, revenue, operations, or strategy.”
This fear-driven approach relegates AI projects to peripheral activities far from core business functions, ensuring they cannot deliver meaningful ROI. The solution lies in governance frameworks that enable confident deployment of AI in mission-critical areas.
The Trust Framework: A Universal Approach to AI Governance
Global Data Innovation has developed the TRUST framework—a comprehensive approach synthesising regulatory requirements from 100 countries across six continents. The framework comprises five essential elements; brief illustrative code sketches for four of them follow the list:
T – Triaging: Risk-ranking AI use cases, including business considerations like strategic alignment and measurable ROI alongside regulatory requirements. High-risk applications require enhanced oversight, whilst low-risk applications need minimal intervention.
R – Right Data: Ensuring training data is accurate, properly formatted, and legally compliant. This includes verifying intellectual property rights, privacy rights, and business rights to use the data. As Leipzig noted, even well-intentioned organisations can fail here—Tennessee spent $400 million on an algorithm that ultimately denied legitimate life-sustaining care claims 92% of the time due to incorrectly merged patient files.
U – Uninterrupted Monitoring: Continuous testing and auditing of AI output against company standards. This addresses the fundamental challenge that AI models drift and degrade over time. Leading AI tool makers acknowledge their most advanced reasoning models produce incorrect answers 48-79% of the time, making constant monitoring essential.
S – Supervising Humans: Training personnel at all levels and creating a culture that encourages early detection and reporting of AI issues. This extends beyond technical teams to include junior employees who might first notice problems.
T – Technical Documentation: Maintaining comprehensive logging and metadata to enable rapid model correction when issues arise.
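To make the triaging element concrete, here is a minimal sketch of how a team might risk-rank candidate use cases. The scoring factors, weights, and thresholds are illustrative assumptions, not part of the TRUST framework itself.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    strategic_alignment: int  # 1 (peripheral) to 5 (core to strategy)
    expected_roi: int         # 1 (negligible) to 5 (material to earnings)
    regulatory_exposure: int  # 1 (unregulated) to 5 (health, credit, biometrics)
    customer_impact: int      # 1 (internal only) to 5 (decisions about customers)

def triage(uc: UseCase) -> str:
    """Risk-rank a use case: higher-risk cases receive enhanced oversight."""
    risk = uc.regulatory_exposure + uc.customer_impact
    value = uc.strategic_alignment + uc.expected_roi
    if risk >= 8:
        tier = "high risk: enhanced oversight"
    elif risk >= 5:
        tier = "medium risk: standard review"
    else:
        tier = "low risk: minimal intervention"
    return f"{tier} (business value {value}/10)"

print(triage(UseCase("claims triage assistant", 5, 4, 5, 5)))
```

Recording business value alongside risk reflects the point above: triaging should capture strategic alignment and measurable ROI, not regulatory exposure alone.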
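The right-data element can be partially automated as admission checks run before any source enters a training set. The fields below (IP clearance, privacy basis, merge validation) are assumed for illustration, with the merge check prompted by the Tennessee example.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    ip_cleared: bool       # licence or ownership of the data confirmed
    privacy_basis: str     # e.g. "consent" or "contract"; empty if unknown
    merge_validated: bool  # record-merge logic tested (cf. the Tennessee case)

def blocking_issues(src: DataSource) -> list:
    """Return reasons a source may not be used for training; empty means cleared."""
    issues = []
    if not src.ip_cleared:
        issues.append("intellectual property rights unverified")
    if not src.privacy_basis:
        issues.append("no documented privacy basis")
    if not src.merge_validated:
        issues.append("record-merge logic unvalidated")
    return issues

src = DataSource("patient_files_v2", ip_cleared=True, privacy_basis="consent",
                 merge_validated=False)
print(src.name, "->", blocking_issues(src) or "cleared for training")
```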
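Uninterrupted monitoring typically means scheduled evaluation of the live model against a fixed reference set, with an alert when the pass rate drifts below a threshold. The evaluation check and the 90% threshold here are placeholder assumptions; real checks would be task-specific.

```python
import random

def meets_standard(output: str, expected: str) -> bool:
    """Placeholder check; in practice, a task-specific audit of the output."""
    return output == expected

def audit(model, reference_set, alert_threshold=0.9) -> float:
    """Score the model on a fixed reference set and flag drift."""
    passed = sum(meets_standard(model(q), a) for q, a in reference_set)
    rate = passed / len(reference_set)
    if rate < alert_threshold:
        print(f"ALERT: pass rate {rate:.0%} below {alert_threshold:.0%} threshold")
    return rate

# A toy model that answers incorrectly 20% of the time, standing in for drift.
reference = [(f"q{i}", f"a{i}") for i in range(100)]
drifting_model = lambda q: ("a" + q[1:]) if random.random() > 0.2 else "wrong"
audit(drifting_model, reference)
```

Run on a schedule, a check of this shape converts the monitoring obligation into a concrete, auditable signal rather than an aspiration.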
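Finally, technical documentation amounts to logging enough metadata with every inference that a drifting model can be traced to a specific version and corrected quickly. The record fields and file format below are a plausible minimum, not a mandated schema.

```python
import datetime
import json

def log_inference(model_version: str, training_data_hash: str,
                  prompt: str, output: str, path: str = "ai_audit_log.jsonl"):
    """Append one audit record per inference so issues trace back to a version."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "training_data_hash": training_data_hash,
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_inference("claims-triage-1.4", "sha256:90ab12", "Assess claim 1001", "approve")
```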
Vendor Risk and Brand Liability
A critical insight from the session concerned vendor relationships. Whilst many organisations use third-party AI solutions, brand liability remains with the primary company regardless of vendor indemnification clauses.
Leipzig cited Rite Aid as a cautionary example: their vendor’s AI misidentified paying customers—including loyalty card members—as criminals, leading to customers being escorted from stores. The resulting Federal Trade Commission investigation focused entirely on Rite Aid, which became the only company banned from using the technology in its physical stores for five years. The vendor’s name rarely appeared in headlines.
“Your company is going to be the company on the line no matter what,” Leipzig emphasised. “Brand is important, and no amount of indemnification can correct that.”
The TRUST framework applies equally to vendor relationships, enabling organisations to ask critical questions about training data, implement monitoring systems, and maintain oversight of vendor AI applications.
Breaking Down Organisational Silos
Leipzig identified a fundamental challenge: “There are too many lawyers in silos, too many technologists in silos, and too many CEOs talking to consultants rather than their own people.”
The framework creates a shared language bridging technical teams, legal departments, and executive leadership. For technology teams, “uninterrupted testing, monitoring, and auditing” provides clear implementation guidance. For legal teams, the framework addresses regulatory requirements across multiple jurisdictions. For executives, it offers strategic questions that enable confident AI deployment decisions.
Board Oversight and Fiduciary Duty
Under the Caremark decision in the US, boards have a fiduciary duty of oversight that includes staying informed about AI risks and governance. Leipzig outlined six essential questions boards should ask:
- How are we triaging our AI use cases?
- Do we have the right data to train our systems?
- Is uninterrupted testing, monitoring, and auditing in place?
- Who is supervising our AI systems and have they been properly trained?
- If AI drifts from our standards, do we have the technical documentation to fix the model?
- Are we treating this as continuous governance rather than a one-time exercise?
These questions enable boards to exercise meaningful oversight without requiring deep technical expertise in AI systems.
Cultural Transformation and Employee Engagement
The framework emphasises cultural change alongside technical implementation. Global Data Innovation’s trust coalition includes companies like Zappa, which provides virtual coaching for junior employees—traditionally excluded from executive coaching programmes.
This approach recognises that front-line employees often detect AI problems first and need both the skills and confidence to report issues. Creating psychological safety for AI-related concerns prevents small problems from becoming major incidents.
Implementation Strategy
Organisations seeking to implement trustworthy AI should begin with the TRUST framework’s universal principles whilst adapting specific requirements to their regulatory environment and business context. The framework accommodates different legal regimes—from the EU AI Act’s 17 prohibited use cases to emerging US state legislation—whilst maintaining core governance principles.
Success requires cross-functional teams, board-level commitment, and cultural change that encourages experimentation within defined guardrails. As Leipzig concluded: “When companies know they have a governance framework where they can lead with confidence with AI, then they can go into the AI use cases that are really going to matter to the bottom line.”