England's High Court Issues Warning Over Use of Fake AI-Generated Legal Material

ICARO Media Group
News
07/06/2025 10h41


The High Court of England and Wales has issued a stern warning to legal professionals, highlighting the risks and potential legal repercussions of submitting false material generated by artificial intelligence. The warning follows recent instances in which AI-generated content led to fabricated quotes and fictitious rulings being presented in court.

In a significant move, Justice Victoria Sharp, president of the King's Bench Division of the High Court, sitting with Justice Jeremy Johnson, said that previous guidance for lawyers had not adequately addressed the misuse of AI in legal practice, and emphasized the urgent need for stronger measures to combat the growing problem.

The ruling highlighted two recent cases in which AI was found to have generated inaccurate material. In one, a claimant and his lawyer admitted that artificial intelligence tools had produced "inaccurate and fictitious" material in a lawsuit against two banks, which was subsequently dismissed. In another case, resolved in April, a lawyer representing a man suing his local council was unable to explain the origins of several non-existent cases cited in her arguments.

Justice Sharp invoked rarely exercised judicial powers to take up the two cases together, underscoring the court's duty to regulate its own procedures and enforce the obligations that legal professionals owe to it. She warned of severe consequences for those who misuse AI-generated material, including potential criminal prosecution and disbarment.

The ruling also pointed to the shortcomings of AI tools such as ChatGPT for conducting reliable legal research. While these tools can generate seemingly coherent and plausible responses, those responses are often incorrect or entirely fictitious, citing sources that do not exist.

The warning reflects broader concern among technology experts about AI chatbots' tendency to produce fabricated information, a phenomenon known as hallucination. Since late 2023, Silicon Valley-based Vectara has tracked chatbot accuracy by asking models to summarize specific news articles, finding hallucination rates of 0.7 percent to 2.2 percent. OpenAI, the developer of ChatGPT, has acknowledged even higher rates: its latest models produced false information 51 percent to 79 percent of the time when asked general questions.

This landmark warning from the High Court underscores the critical issue of AI reliability in the legal domain, stressing the importance of rigorous verification and responsible use of artificial intelligence tools by legal professionals.

The views expressed in this article do not reflect the opinion of ICARO or any of its affiliates.
