Two federal judges recently admitted that their staff used artificial intelligence (AI) over the summer to draft court orders that contained factual inaccuracies.
“Honesty is always the best policy,” Sen. Chuck Grassley (R-Iowa), chairman of the Senate Judiciary Committee, said in a statement released Thursday. He thanked Judges Wingate and Neals for owning up to their mistakes, saying it was reassuring that they were taking measures to prevent a recurrence.
The statement elaborated:
It is imperative that every federal judge, and the judiciary as a whole, ensures that the use of generative AI does not infringe on litigants’ rights or hinder their fair treatment in legal proceedings. Concrete, meaningful, and lasting policies and guidelines on AI use must be developed. We simply cannot allow complacency, negligence, or excessive dependence on artificial tools to erode our legal system’s commitment to truth and integrity. As always, I remain vigilant.
The inaccuracies attributed to AI included the mistaken release of prior draft opinions that were never intended for final publication.
The reliance on AI for court orders marks a departure from the thoroughness judges nationwide typically exercise in scrutinizing information used in litigation. Notably, some judges have imposed fines and penalties on parties for improper use of AI in certain cases.
As detailed in letters to the Judiciary Committee, the judges acknowledged that decisions bypassed their usual review process before becoming public.
Judge Julien Xavier Neals noted in his correspondence that a draft ruling in a securities case, issued on June 30, was mistakenly published and was promptly retracted once his office was alerted.
He explained that a law school intern had used OpenAI’s ChatGPT for legal research without authorization, in violation of established policies.
“My office policy strictly prohibits the use of generative AI for legal research or drafting documents,” Neals stated. “I had previously communicated this verbally to my office staff, including interns, but moving forward, I’ve established a clear written policy that applies to all legal staff.”
Judge Henry T. Wingate detailed in his letter that a law clerk had used an AI writing tool called Perplexity as a preliminary drafting assistant to compile public information. He acknowledged that the publication of a draft decision on July 20 was the result of human oversight.
“This was indeed a mistake,” he acknowledged. “I have implemented measures in my office to prevent a recurrence of this issue.”
