Uwais Iqbal, an AI engineer and founder of Simplexico, unpacked Anthropic’s latest move into legal workflows and why it prompted such an outsized reaction across legal tech and adjacent markets. He explored what “agentic” tooling looks like in practice, why the market response matters less than the underlying capability shift, and how legal teams should adapt their processes, governance, and skills to capture value safely.
Key Themes
1) From “model access” to agentic workflows
Iqbal outlined how Anthropic’s tooling evolved from developer-first agent workflows (Claude Code) to a more general-purpose “co-worker” model (Claude Cowork), then into domain-specific extensions, including legal-focused capabilities aimed primarily at in-house work such as NDA triage and contract review.
The pivotal change is not simply “better answers”, but the packaging of multi-step work into repeatable, agent-led processes that can operate across a directory of files and return a usable work product.
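To make that shift concrete, below is a minimal sketch of what "multi-step work across a directory of files" might look like when scripted directly. The `triage_nda` function is a hypothetical stand-in for whatever model or agent endpoint a team has approved, not a specific Anthropic API, and the file layout and output format are illustrative only.

```python
from pathlib import Path
import json

def triage_nda(text: str) -> dict:
    """Hypothetical stand-in for a model/agent call that reviews one NDA.
    A real workflow would replace this with the team's approved LLM endpoint."""
    return {"risk": "low", "flags": [], "summary": text[:200]}

def triage_directory(folder: str) -> list[dict]:
    """Repeatable, agent-led pattern: read every contract in a folder,
    collect a structured verdict per file, and emit one reviewable work product."""
    results = []
    for path in sorted(Path(folder).glob("*.txt")):
        verdict = triage_nda(path.read_text(encoding="utf-8"))
        verdict["file"] = path.name
        results.append(verdict)
    # The output is a single artifact a reviewer can sign off on, not a chat transcript.
    Path(folder, "triage_report.json").write_text(json.dumps(results, indent=2))
    return results

if __name__ == "__main__":
    for row in triage_directory("./ndas"):
        print(row["file"], row["risk"], row["flags"])
```

The point is less the code than the packaging: the steps, the file scope, and the output format are fixed in advance, so the same process can be re-run, audited, and reviewed.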
2) Why the market reaction was so dramatic
Iqbal argued the shock was largely vendor- and investor-facing: if a frontier model provider offers “good enough” legal workflows at a lower cost, it compresses differentiation for tools that primarily wrap the same underlying models with a different interface.
He also noted the announcement coincided with a broader market repricing of exposure to disruption risk in legal tech and adjacent information/publishing segments.
3) The “80% problem” and the oversight bottleneck
A key real-world risk is that “80% accurate” can still be materially unsafe in legal contexts, and scaling AI use often shifts the bottleneck to review and sign-off.
Iqbal’s view: AI will not be perfect (neither are humans), but the lawyer’s role shifts from line-by-line production to higher-level supervision, setting review frameworks, interrogating outputs, and validating against requirements. This mirrors what is already happening in software development, where engineers increasingly specify, test, and verify rather than write every line.
4) Skills: prompting, verification, and AI governance
The group highlighted that competence now includes knowing how to challenge outputs, spot hallucinations, and steer models through structured prompting; these are skills that develop through practice and training.
Anna Jankov pointed to growing demand for AI governance, particularly around LLM use, viewed through a compliance and risk lens (including data protection).
5) Process change beats tool rollout
Iqbal described a practical adoption pattern: start with education, identify bottlenecks, select a few high-ROI use cases, then decide whether to build or buy. He warned that “blank slate” enterprise rollouts often underperform because teams still need to translate workflows into repeatable patterns themselves.
He advocated targeting specific workflows with user champions, iterating quickly to an MVP, and baking adoption into the design; deployments often run inside the client's environment for security, leveraging models already available through common enterprise stacks.
6) What changes in 3-5 years
Iqbal’s projection: non-complex, transactional legal work is increasingly absorbed by models where a clear task description gets you most of the way. More complex workflows will still require design work, document grounding, orchestration, and integration into how teams actually operate.
What Practitioners Need to Know
- Treat agentic tools as process change initiatives, not software deployments: map steps, define red flags, and codify review frameworks before scaling.
- Plan for the oversight bottleneck: design review tiers, sampling strategies, and escalation rules so senior capacity does not become the choke point (a simple routing sketch follows this list).
- Build skills deliberately: structured prompting, verification, and “challenge the model” techniques should be trained like any other professional competency.
- Differentiate on workflow and UX, not model access: vendors and teams should focus on orchestration and embedded workflows, not wrappers.
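As an illustration of the oversight-bottleneck bullet above, here is a minimal sketch of a review-routing policy under assumed risk tiers; the categories, sample rates, and field names are hypothetical placeholders a team would calibrate to its own matters and risk appetite.

```python
import random

# Hypothetical policy: tiers and sample rates are placeholders, not recommendations.
SAMPLE_RATES = {"low": 0.10, "medium": 0.50}  # share of outputs pulled for spot-checks

def route_for_review(item: dict) -> str:
    """Decide how much human attention an AI-produced output receives."""
    risk = item.get("risk", "high")
    if risk == "high" or item.get("flags"):
        return "full_review"          # escalate anything flagged or high risk
    if random.random() < SAMPLE_RATES.get(risk, 1.0):
        return "sampled_review"       # spot-check a fixed share of lower-risk output
    return "accept_with_audit_trail"  # logged and traceable, not individually reviewed

if __name__ == "__main__":
    outputs = [
        {"file": "nda_001.txt", "risk": "low", "flags": []},
        {"file": "nda_002.txt", "risk": "medium", "flags": []},
        {"file": "nda_003.txt", "risk": "low", "flags": ["unusual indemnity"]},
    ]
    for o in outputs:
        print(o["file"], "->", route_for_review(o))
```

Making the routing explicit is what keeps senior capacity from becoming the choke point: reviewers see a predictable, sampled stream rather than everything the system produces.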