More than a decade of underfunding by successive governments has left the UK's justice system in crisis. There is now a significant backlog in cases and court dates are being canceled due to logistical problems.
The justice system in the UK is not unified and instead consists of three separate legal systems across the jurisdictions of England and Wales, Scotland and Northern Ireland.
Powerful voices in UK politics, including the Tony Blair Institute and Policy Exchange think tanks, have thrown their weight behind artificial intelligence (AI) as a potential solution to problems being experienced across the public sector. Some of those voices argue AI could liberate staff from bureaucratic workloads and give them more time to concentrate on the human aspects of justice, such as face-to-face engagement with clients.
In January, the Labour government announced a plan to roll out AI across the UK in a bid to "turbocharge" growth, boost living standards and revolutionize public services.
So how might AI affect the UK's justice system?
The current focus on AI has been largely driven by developments in large language models (LLMs). This is the technology behind AI chatbots such as ChatGPT. But automation, machine learning, and other AI tools are not novel features of the justice system.
Older tools used a form of AI to help lawyers predict the probable relevance of documents to a particular case or matter. More controversially, risk-scoring algorithms have been used in probation and immigration cases.
Critics of the last example argue that these systems entrench inequalities and affect people in life-altering ways without their knowledge.
However, these automated risk-scoring systems are substantially different in nature to the productivity tools based on LLMs that are aimed at streamlining administrative processes. The latter can draft statements as well as schedule and transcribe meetings.
They can also retrieve and summarize sources for document reviews and case law. Apparent success stories include the Old Bailey saving £50,000 by using AI in the preparation of court cases.
How and why these tools are implemented—the institutional context—matters enormously. When digital tools are used not to provide more space for the human aspects of justice, but instead to cut costs, the harms fall especially heavily on the most vulnerable.
This is because even these seemingly routine administrative uses of AI require human reviewers to catch plausible, but wrong, information produced by these tools and to exercise expert judgment.
Evidence from a small-scale pilot shows why this is important. The pilot scheme used LLMs to summarize asylum case documents and transcripts to support asylum decisions.
Some 9% of the results were found to be inaccurate, including missing interview references. And 23% of users testing the scheme did not feel fully confident in the summaries, despite significant time savings.
Justice and digitization
In July 2025, the Ministry of Justice published its AI action plan for the justice system. While Microsoft's Copilot Chat is already available for judicial office holders, the strategy document promised to roll out AI tools to 95,000 justice staff by December.
The plan acknowledges the many limitations of AI. It also establishes a chief AI officer role and emphasizes that AI should "support, not substitute" human judgment.
It sets out a cautious approach to roll-out, including an effort to gather feedback from trade unions and the public. It also stresses transparency through an ethics framework.
The plan continues to promote more controversial uses of the technology, including assessing a person's risk of violence in custody. Nevertheless, it focuses more heavily on LLMs for time-saving administrative tasks.
However, could the new strategy lead to the adoption of LLM tools by the justice system before there is a mature understanding of how they are best applied? Decisions based in part on AI-generated evidence are likely to invite appeals and legal challenges. This could add to, rather than reduce, the backlog in cases.
In June 2025, a senior judge warned lawyers against the use of LLM tools because of the potential for those tools to "hallucinate" (generate fictitious information). There have been a number of cases elsewhere in the world where fictitious AI-generated material has apparently been filed in court cases.
Given their limitations, any benefits of these tools will generally be seen in those parts of the system where resources and time for human oversight are at their highest. The risks will hit hardest where human time and resources are low and where clients have less money and time to challenge decisions.
This unequal access to justice is not solely an AI issue. Previous waves of digitization used to reduce the bureaucratic load included allowing some guilty pleas to be lodged online and automatic online convictions for some crimes, which would otherwise have required a court hearing.
As Gemma Birkett, lecturer in criminal justice at City St George's, University of London, has argued, these automated systems particularly affect marginalized women, who are far more likely to plead guilty to crimes they did not commit.
Papering over the cracks
There are arguments to be made in favor of using bespoke, carefully developed technology to remove the administrative burden on justice system staff, so that they can concentrate on the aspects of their work best delivered by people.
But when the current system is struggling, adopting LLMs (or other forms of rapid digitization) will not fix the deep underlying problems caused by years of austerity. Rather than reducing bureaucracy, they risk papering over the cracks in a dysfunctional system.
Provided by The Conversation
This article is republished from The Conversation under a Creative Commons license.