SELECT LANGUAGE BELOW

Attorney Expresses Regret for A.I.-Produced Mistakes in Homicide Case

Attorney Expresses Regret for A.I.-Produced Mistakes in Homicide Case

AI Error in Australian Court Case

Melbourne, Australia — An Australian lawyer has issued an apology to a judge for submitting a court document in a murder case that contained fabricated citations and a non-existent case created by AI.

This incident at the Victorian Supreme Court adds to a growing list of AI-related mishaps within justice systems globally.

According to court filings from Friday, Defense Attorney Rishi Naswani, who is known as a King’s attorney, accepted full responsibility for the submission of false information related to a murder charge against a teenager.

“We are truly sorry, and we’re quite embarrassed about what occurred,” Naswani expressed to Judge James Elliot on Wednesday, speaking for the defense team.

The AI-generated error arrived a day late for a resolution that Elliot had aimed to achieve on Wednesday. Ultimately, on Thursday, Elliot determined that Naswani’s client — protected by anonymity due to being a minor — did not commit murder because of mental illness.

“What transpired here is quite insufficient, given the risks involved,” Elliot remarked to the attorney on Thursday.

“The court’s ability to trust the accuracy of a lawyer’s submissions is essential for upholding justice,” he added.

The false information included a citation that was fictitiously created from a speech to the state legislature and a reference to an imaginary case from the Supreme Court.

The inaccuracies were identified by Elliot’s colleagues, who were unable to locate the referenced case and asked the defense attorney for a copy.

The defense attorney acknowledged that the citation was “nonexistent,” and admitted the submission contained “fictional citations,” according to court records.

He stated that he had verified the initial citation was correct and mistakenly assumed the others were valid as well.

The erroneous submission was also sent to Prosecutor Daniel Poldedou, who did not verify its accuracy.

The judge pointed out that the Supreme Court had issued guidelines the previous year regarding the use of AI by lawyers.

“It is unacceptable to use artificial intelligence unless its output is independently and rigorously verified,” Elliott stated.

Court documents do not specify which generative AI systems were utilized by the lawyers.

In a similar case in the U.S. earlier this year, a federal judge penalized two lawyers and their law firm with a $5,000 fine after ChatGPT was implicated in providing fictitious legal research in an aviation injury case.

Judge P. Kevin Castell noted the lawyers acted with malice but believed their apologies and corrective measures indicated that harsh penalties weren’t necessary to prevent AI from generating false legal precedents in their cases.

Later that year, another instance of fictitious legal decisions produced by AI was mentioned in court filings by lawyers representing Michael Cohen, who was a former personal attorney to U.S. President Donald Trump. Cohen admitted that the Google tools he utilized for legal research weren’t equipped to handle what are termed “AI hallucinations.”

British High Court Judge Victoria Sharp cautioned in June that presenting false information as though it were genuine could undermine the courts and disrupt the path to justice, especially in severe cases involving lengthy prison sentences.

Facebook
Twitter
LinkedIn
Reddit
Telegram
WhatsApp

Related News