In our live session last Friday, George Hannah, a second-year solicitor apprentice at Lewis Silkin and founder of Best Practice, joined us as moderator to explore AI adoption from the "younger lawyer" perspective: how it's shaping day-to-day legal work, learning and training, client interactions, and early-career development. Hannah also shared insights from his LinkedIn writing on legal AI and his weekly Best Practice newsletter, which tracks and summarises key developments across legal tech.
Choosing law via apprenticeship
Hannah described the decision to pursue an apprenticeship route after noticing the limited contact hours on traditional law degrees (and the cost trade-off). He applied widely (29 apprenticeships), acknowledging early applications were weaker, but the repetition built skill, especially for modern hiring formats like video interviews.
Training today: SQE, structure, and digital-by-default practice
Hannah explained the apprenticeship structure: a study day each week and a competency framework mapping “knowledge, skills and behaviours” towards SQE endpoint assessments. In his current seat (business immigration), practice is already paperless, so “lawyering basics” include digital hygiene: filing cleanly, sharing PDFs rather than editable documents, and handling work product securely and consistently.
How interest in AI actually starts inside firms
Hannah didn’t enter law as a “tech person”. His interest accelerated when his firm rolled out a legal AI tool (e.g., a Harvey-style deployment) and he saw how general models were being adapted for legal workflows. That quickly widened into tracking the explosion of specialised tools across practice areas (IP, real estate, and beyond), and spotting an opportunity for early-career lawyers to build credibility by being fluent, curious, and practical.
The upside: access, speed, and more democratic learning
A core theme was accessibility. AI can act like an on-demand tutor: when a concept doesn't click during study, juniors can get immediate explanations, reducing reliance on expensive tutoring. Hannah noted "guided learning" style features that test understanding before explaining: a more active learning loop than simply requesting an answer.
The risk: cognitive offloading and fragile judgment
Hannah dug into a serious concern: juniors may accept AI outputs as "correct" before they've built the internal radar for what looks right or wrong. His worry wasn't just about factual errors; it's that over-reliance can slow the development of professional judgement, which is what clients ultimately pay for.
Why the office still matters
Hannah emphasised that apprenticeship-like learning still happens, but it's materially easier in person. Peer-to-peer help and quick checks are more immediate when you can turn to someone beside you, rather than waiting on messages and email replies. The implication: remote-first patterns may slow the rate at which juniors absorb craft knowledge.
Making experimentation real: time, incentives, and permission
Hannah highlighted emerging approaches to drive adoption:
- Allocating time explicitly to experiment (he cited an example of counting experimentation towards targets).
- Creating incentives (he cited a firm tying an AI-usage threshold to a shared bonus pool).
The shared point: adoption doesn’t happen by posters and policies alone; it needs permission, time, and clear guardrails.
Clients are already using AI – and asking lawyers to validate it
Hannah reported hearing (from more senior colleagues) that clients now arrive with “ChatGPT said…” and want confirmation. Many in-house teams also have their own tools, shifting expectations and changing what “value” looks like.
The bottleneck: human review, liability, and sign-off
A sharp insight: AI may accelerate junior drafting, but delivery to clients can remain slow because qualified lawyers must review and sign off. That “human-in-the-loop” is both the safety net and the throughput constraint, and it’s where accountability, insurance, and risk management currently concentrate.
Takeaways for early-career lawyers
- Use AI to learn, not to outsource thinking. Ask it to quiz you, explain gaps, and critique your drafts, but keep ownership of accuracy and reasoning.
- Build a “wrongness detector”. Don’t just check facts; interrogate structure, assumptions, missing steps, and whether the output fits the context.
- Treat prompts like professional instructions. Be specific about jurisdiction, audience, risk tolerance, and what counts as a reliable source.
- Prioritise proximity to mentors. If you can, spend meaningful time in the office where rapid feedback loops build competence faster.
- Push for practical enablement. Training, guardrails, and allocated time beat generic "be innovative" messaging every time.