Session on the 3rd of March 2025
“It’s up to us as attorneys to raise the flag, share with our leaders, and our business leaders what the best practices are to ensure success with AI and avoid harms,” says Dominique Shelton Leipzig, CEO of the LA-based Global Data Innovation, during a recent Platforum9 session alongside Ron Given, former Mayer Brown and Wolf Theiss Partner, and Coach to Legal. Their insights reveal how lawyers can move from being latecomers to strategic partners in the AI and cybersecurity conversation.
Entering the Opera at the Intermission
The legal profession often finds itself playing catch-up with technological innovation. “I love your analogy of coming in at the opera at the intermission and trying to catch up,” Shelton Leipzig notes, responding to moderator Ron Given’s concerns. “This is a position that we as lawyers have found ourselves in at other times in the advancement of technology… with the advent of the Internet 1.0, there was a sense that the law could be potentially hampering innovation.”
This disconnect between legal and business teams has been costly. “What that ultimately cost us is over $30 trillion in losses to our global economy in the areas of cyber privacy,” Shelton Leipzig warns. The challenge now is to avoid repeating this pattern with AI.
The TRUST Framework for AI Governance
Shelton Leipzig offers a practical framework, built around the acronym TRUST, to help business leaders navigate AI implementation:
- T – Triage: Differentiate between high-risk and low-risk AI projects with help from outside counsel to understand what might be prohibited in certain jurisdictions
- R – Right data: Ensure correct data with proper IP rights, privacy rights, and business rights
- U – Uninterrupted testing: Continuously monitor and audit AI outputs against company guardrails
- S – Supervision: Maintain human oversight to correct course when models drift
- T – Technical documentation: Maintain proper logging data, metadata, and transparency documentation
This framework requires cross-disciplinary collaboration between lawyers, data scientists, and other experts, a team approach that positions lawyers as valuable counsellors rather than mere compliance officers.
The Cost-Benefit Reality
When questioned about the cost of implementing such governance measures, Shelton Leipzig is emphatic: “It’s not as expensive as one would think.” She points to the cautionary tale of the State of Tennessee, which spent $400 million on an AI system to automate Medicare and Medicaid benefits determinations, only for a federal court to find that the AI had “drifted and had denied benefits under the program in error over 90 percent of the time.”
Had proper governance been in place, the problems could have been detected immediately. “The incremental distance is negligible when we’re talking about the exposure at hand,” she explains, comparing it to tech companies that lost billions in market cap by failing to address privacy concerns early.
Beyond the Private Sector
The TRUST principles apply equally to government and public sector organisations. “For the government to really be able to exercise oversight and ensure that what they’re getting is what they need, awareness of these trust principles is critical,” Shelton Leipzig notes. Both private companies and government entities need effective governance frameworks to achieve excellence with AI while avoiding harms.
As she concludes, with proper AI governance, both legal and business teams can feel like “they’ve got this.” The key is providing executives with “the five questions they can ask in the next five days in their companies to really make the difference in terms of achieving excellence with their AI.”