In last week's Platforum9 session, Alexander Irschenberger, external lecturer at Queen Mary University of London and a legal tech expert with ten years of product development experience, addressed the evolving landscape of legal education in the AI era. He identified significant gaps between academic preparation and industry needs, and advocated for fundamental changes in how law schools approach technology education.
From Scepticism to Advocacy
Irschenberger’s perspective on lawyers and technology has undergone a dramatic transformation. For a decade, he argued against pushing lawyers to adopt technology: “We don’t have the mindset. We should probably wait around.”
However, large language models fundamentally changed his outlook: “The last year changed for me, because language models… produce language, read language, they are actually a really good fit for lawyers.” He now recognises lawyers’ unique advantages: “Lawyers are potentially the best users of these tools because we know how to write an instruction… but we’re also fairly sceptical and we like to challenge things.”
The University Problem and Student Evolution
Irschenberger delivered a sharp critique of legal academia’s response to AI: “Universities in general are falling behind. They are… out of touch with reality.” The primary issue stems from fear: “We fear what we don’t understand. And it’s always easier to just say no.”
Most universities initially banned ChatGPT rather than teaching proper usage. He advocates instead for proactive education: “[We should have] taught the students how to deal with bias, hallucinations, data infrastructures, compliance around these tools. That would be better than just saying, ‘this new thing. That’s cheating’ because it’s not cheating. It’s just a tool.”
By his count, only six or seven universities offer credit-bearing courses on legal tech, creating a fundamental mismatch between academic preparation and professional expectations. Student attitudes, however, have transformed dramatically. Initially, “out of a classroom of 100… maybe a handful were prone to technology.” Today, “90%, if not 95% use tools like ChatGPT on a weekly basis.”
Skills Evolution
Despite student engagement, workplace implementation reveals a concerning pattern: “Director and junior partner level uses AI the most. And a lot of the young associates are not using it at all.” This creates a “schism” in which those best placed to learn the tools lack the confidence to use them, whilst those with authority lack the expertise to teach them.
Rather than focusing on plagiarism prevention, Irschenberger advocates for constructive AI integration across curricula: “There’s probably not a course that shouldn’t include AI in some way.” He highlighted negotiation training as exemplary: “Will you become a better negotiator if you can train up against an AI tool… That’s probably better than reading a book.”
When addressing concerns about AI replacing traditional junior lawyer tasks, he offered historical perspective: “In the 1890s, everyone was able to ride a horse… but I don’t think a lot of people can actually ride a horse today because it’s not a skill that’s necessary.”
He advocates for elevated expectations: “A first year associate currently should be able to draft as a fifth year associate drafted documents a couple of years ago. Then you should teach your young associates the skills that are very human in nature, like empathy, and social skills.”
Vendor Claims
Irschenberger highlighted concerning misinformation from established vendors: “Companies like LexisNexis, Thomson Reuters… claim, ‘we build an AI tool that is hallucination free.’ That’s BS because that’s not possible.” When Stanford and Yale studies demonstrated that these tools hallucinate at rates similar to ChatGPT’s, the findings exposed the danger such marketing poses to uninformed users.
Effective AI policies must move beyond prohibition: “A lot of people are focusing on… restrictive approaches… The points of the policy should be to teach people how to ride the bike… focus on quality of input and output.”
Career Positioning
For students, AI competency is a career differentiator: “In your skills section, in your accomplishment section, I would list things that you’ve done using technology, because… you become better. So not just faster, but also better at your job.”
Rather than seeking comprehensive platforms, he recommends focused adoption: “The market is still quite early… adopt small… you don’t know who’s gonna win.” He suggests evaluating tools based on immediate needs: “Does it solve my use case? If it solves my use case? Okay, go ahead. We can replace it in a year if we want to.”
Conclusion
Irschenberger’s insights reveal a critical disconnect between academic preparation and professional expectations in the AI era. Universities must move beyond fear-based prohibition toward constructive integration that prepares graduates for technology-enabled practice.