Introduction by Kate Ludlow, CEO and Consultant at Saxton Bampfylde
In part one of our interview with Sir Robert Buckland, we heard about leadership during crisis. Now, in conversation with Saxton Bampfylde CEO Kate Ludlow and Senior Advisor Philip Rodney, Sir Robert turns to one of his key concerns as former Lord Chancellor: how artificial intelligence should – and shouldn’t – transform our justice system.
You have written extensively about the future of AI in the justice system – both the positive aspects and your concerns. In particular, you’ve emphasised that “human intuition, emotional intelligence, sheer common sense” are essential parts of a good judge. How can we preserve this human element while leveraging AI’s capabilities?
I’m not a Luddite—AI’s potential is immense. But the legal profession is understandably cautious. Lawyers are bound by confidentiality and worried about hallucinations and misinformation, which are rightly treated as disciplinary matters; lawyers are increasingly getting into trouble for this.
What lawyers and judges need is a reliable data set for research and case building, subject to continuous human oversight to ensure the data is clean, free of bias, and evolving to reflect the law. There’s a danger that if we devolve responsibility and leave it to the machine, we’ll repeat past mistakes. The Horizon scandal is a good example—old technology that didn’t work, with no way of challenging it, resulting in hundreds of injustices.
“I don’t think we’re ready for an AI jury yet. The whole idea of a jury is your equals looking at your actions based on their human experiences.”
We need certain principles: algorithmic humility, continuous checking, contextual sensitivity. For example, if an automated system is issuing loads of parking tickets in one area, this should be flagged up so we can interrogate why this is the case. Is a particular demographic being disproportionately affected? We need technologists and lawyers working together in oversight committees to ensure ethical use.
In the most important cases—freedom of the individual, criminal cases, family arrangements—you need human input. Assessment of witness credibility is the province of the judge and jury. I don’t think we’re ready for an AI jury yet. The whole idea of a jury is your equals looking at your actions based on their human experiences.
But we can bring in agentic AI immediately to help with administration, to speed up the process. So many court orders aren’t complied with or properly monitored. Agentic AI can help clerks and judges administer the system more efficiently. Assistive technology can help courts marshal facts and present things to jurors clearly.
Having been in a driverless car in San Francisco, I can tell you that after the first few minutes of terror, it’s fine. Familiarity will breed confidence, not contempt. But all it will take is one or two bad cases and there’ll be public inquiries that set back progress. That’s why, from the get-go, you need this ethical dimension, led by lawyers and technicians.
The good news is the MoJ appointed a Head of AI who’s a capable specialist, and their AI Action Plan accords with what I’ve been saying. I hope we’ll be seeing movement there in a positive way.
“The algorithm’s wish to connect all the dots runs counter to the reality of court, where you’re thinking, ‘I’ve heard the evidence, but there are gaps I can’t tie together neatly.’ Justice is imperfect. The algorithm wants to please you with a perfect solution—it’s got an answer for everything, and that’s dangerous.”
What practical steps should courts take to address both the opacity and potential for embedding prejudices in AI systems?
We’ve got to tread carefully. It’s tempting to demand a fully explicable system, but clever lawyers will work out ways to lean into how the algorithm works so as to influence it. This can make biases worse.
My view is you need calibrated transparency—explain things in layman’s terms to the public, but fuller explanation might be available for the participants. When you’re challenging a government decision in a judicial review, the duty of candour can be frustrated if a black box is making the calls. The technology to provide explanation exists and must be part of the chain when public authorities make decisions.
There’s clear evidence that feeding historic data into a system leads to embedded bias. The COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) system in the US, used for bail decisions, has been shown to be prejudicial to black men because of racial biases in historic data. You’ve got to clean that up before using the system.
If you just input historic information, how does the law develop? The law is about judges saying, “This is 2025, and this is how the law is developing now.” The ability of judges to reinterpret and help evolve the law is very important to the common law tradition.
There’s a joke that perhaps best explains the problem. A large language model walks into a bar. The barman asks what it would like to drink. The algorithm looks round and says, “I don’t know, what’s everybody else drinking?” It’s a derivative system not capable of independent thought. It produces the most plausible outcome based on mathematics, but that’s not logical or rational deduction.
The algorithm’s wish to connect all the dots runs counter to the reality of court, where you’re thinking, “I’ve heard the evidence, but there are gaps I can’t tie together neatly.” Justice is imperfect. The algorithm wants to please you with a perfect solution—it’s got an answer for everything, and that’s dangerous.
How can we prevent a “race to the bottom” where jurisdictions with weaker standards use AI to create competitive advantage?
I’m confident that if you use AI without thought, you’ll end up with a less reliable, bargain basement justice system that won’t command confidence. Users will be wary. An AI system could improve corrupt jurisdictions, but it’s only as good as the data fed into it by human controllers—no guarantee against corruption, particularly in authoritarian regimes.
In fact, it could disguise oppressive aims. The power AI gives would have been dreamt of by Stalin, Mao, and Hitler, especially in today’s surveillance society. Rule of law jurisdictions must lead to avoid this.
I don’t think we’ll get international regulation anytime soon. The current world order is moving away from that—strong men want to do deals in closed rooms, the UN is sclerotic. Europe’s attempts to legislate on AI have resulted in reduced inward investment.
The UK is in an unusual position as we haven’t yet regulated comprehensively on AI. It’s going to be up to the legal sector itself to lead. We can set an example through judicial and legal leadership, setting guardrails now, showing we’re not afraid of AI but want to use it properly and responsibly, with automation as our partner, not the sole author of decisions.
It’s not just the legal sector—financial services, insurance, all these sectors are stepping up. It’s a mistake to look at AI and justice as an issue in isolation. More and more evidence coming into courts is AI-generated. Are judges trained to recognise deepfakes? The “liar’s dividend”—where there’s so much disinformation that you can’t rely on anything—could slow down the process.
The idea that AI will make everything easier is wishful, naive thinking. Certain aspects can make things more complicated. It’s going to be up to the legal sector to deal with this, rather than waiting for governments or international bodies to regulate.