LONDON (AP) – England's 1,000-year-old legal system, still steeped in traditions such as wearing wigs and robes, has taken a cautious step toward the future by giving judges permission to use artificial intelligence to help write rulings.
The Courts and Tribunals Judiciary said last month that AI could help write opinions, but stressed that it should not be used for research or legal analysis because the technology can fabricate information and produce misleading, inaccurate and biased results.
“Judges do not need to shy away from the careful use of AI,” said Geoffrey Vos, the Master of the Rolls and second-highest-ranking judge in England and Wales. “But they must maintain trust and take full personal responsibility for everything they produce.”
As academics and legal experts ponder a future in which AI could replace lawyers, help select jurors or even decide cases, the approach laid out by the judiciary on Dec. 11 is restrained. But for a profession that has been slow to embrace technological change, it is a proactive step as governments, industry and society at large respond to rapidly advancing technology that is alternately portrayed as a panacea and a menace.
“There is currently a lively public debate about whether and how artificial intelligence should be regulated,” said Ryan Abbott, a law professor at the University of Surrey and author of “The Reasonable Robot: Artificial Intelligence and the Law.”
“AI and the judiciary is something that people are uniquely concerned about, and it's an area where we are particularly cautious about keeping humans in the loop,” he said. “So I think AI may be slower to disrupt judicial activity than other areas, and we will proceed more cautiously there.”
Abbott and other legal experts praised the judiciary for addressing the latest iterations of AI, and said the guidance would be widely studied by courts and jurists around the world who are eager to use AI or anxious about what it might bring.
By taking what was described as an initial step, England and Wales moved toward the forefront of courts addressing AI, though it is not the first such guidance.
Five years ago, the Council of Europe’s European Commission for the Efficiency of Justice issued an ethical charter on the use of AI in court systems. While that document is not up to date with the latest technology, it did address core principles such as accountability and risk mitigation that judges should abide by, said Giulia Gentile, a lecturer at Essex Law School who studies the use of AI in the legal and justice systems.
Although U.S. Supreme Court Chief Justice John Roberts addressed the pros and cons of artificial intelligence in his annual report, the U.S. federal court system has yet to establish guidance on AI, and state and county courts are too fragmented for a universal approach. But individual courts and judges at the federal and local levels have set their own rules, said Cary Coglianese, a law professor at the University of Pennsylvania.
“It's certainly, from what I'm aware of, the first published set of broadly applicable AI-related guidelines in English directed at judges and their staffs,” Coglianese said of the guidance for England and Wales. “I suspect that many judges have internally cautioned their staffs about how existing policies on confidentiality and internet use apply to the public-facing portals that offer ChatGPT and other similar services.”
The guidance shows the courts' acceptance of the technology, but not a full embrace, Gentile said. She criticized a section stating that judges do not have to disclose their use of the technology, and questioned why there is no accountability mechanism.
“I think this is certainly a useful document, but it will be very interesting to see how it can be enforced,” Gentile said. “There is no specific indication of how this document would work in practice. Who will oversee compliance with it? What are the sanctions? Or maybe there are no sanctions. If there are no sanctions, then what can we do about this?”
In its effort to maintain the courts' integrity while moving forward, the guidance is rife with warnings about the limitations of the technology and the problems that can arise if a user is unaware of how it works.
At the top of the list is a warning about chatbots such as ChatGPT, the conversational tool that exploded into public view last year and has generated the most buzz around the technology because of its ability to swiftly compose everything from term papers to songs to marketing materials.
The technology's pitfalls in the courtroom are already infamous: two New York lawyers relied on ChatGPT to write a legal brief that cited fictitious cases. The pair were fined by an angry judge who called the work they had signed off on “legal gibberish.”
Because chatbots can remember the questions they are asked and retain other information they are given, judges in England and Wales were told not to disclose any private or confidential information to them.
“Do not enter any information into a public AI chatbot that is not already in the public domain,” the guidance states. “Any information that you input into a public AI chatbot should be seen as being published to all the world.”
Other caveats include being aware that much of the legal material AI systems are trained on comes from the internet and is often based largely on U.S. law.
But jurists with large caseloads who routinely write decisions running tens or even hundreds of pages can use AI as a secondary tool, particularly when writing background material or summarizing information they already know, the courts said.
In addition to using the technology for emails and presentations, judges were told they can use it to quickly locate material they are familiar with but don't have at hand. But it should not be used to find new information that cannot be independently verified, and it is not yet capable of providing convincing analysis or reasoning, the courts said.
Court of Appeal Judge Colin Birss recently praised how ChatGPT helped him write a ruling in an area of law he knew well.
“I asked ChatGPT if it could give me a summary of this area of law, and it gave me a paragraph,” he told The Law Society. “I knew the answer, because it was what I was about to write. But it did it for me and I put it in my judgment. It's there and it's jolly useful.”