Federal Judges Acknowledge AI Errors in Court Orders
During the summer, two federal judges disclosed that their staff had used artificial intelligence to draft court orders, producing errors in published rulings.
The revelation came from U.S. District Judge Julian Xavier Neals of New Jersey and District Judge Henry Wingate of Mississippi, responding to inquiries from Senator Chuck Grassley, who chairs the Senate Judiciary Committee. Grassley noted that recent court orders were, in his words, “riddled with errors.”
In a letter released by Grassley’s office, the judges explained that judgments in unrelated cases hadn’t undergone the usual review processes by both judges before being made public.
The judges said they have put measures in place to strengthen the review of judgments before publication. Judge Neals cited a draft ruling in a securities case that was “published in error” due to human oversight and was promptly withdrawn once it was brought to his attention. He added that a law school intern had used OpenAI’s ChatGPT for legal research without authorization, in violation of both court policy and the law school’s own guidelines.
“My chambers policy prohibits the use of GenAI for legal research or drafting opinions or orders,” Neals wrote. He acknowledged that he had previously communicated the policy to staff, including interns, only verbally, and said he has since put in place a written policy that applies to all law clerks and interns.
Judge Wingate wrote that a law clerk had used Perplexity as a “rudimentary drafting assistant,” leading to the July 20 release of a draft decision that Wingate described as a “human oversight error.” He noted that the original order in a civil rights case contained a “clerical error” but did not elaborate on its specifics at the time.
Grassley had pressed the judges regarding AI’s role in their decisions after lawyers highlighted factual inaccuracies and noticeable errors in different court cases.
In a statement, Grassley praised Judges Wingate and Neals for their honesty in admitting the mistakes, expressing appreciation for their commitment to prevent such issues in the future. He added that each federal judge and the judiciary as a whole bear a responsibility to ensure that AI-assisted processes do not infringe on litigants’ rights or compromise their fair treatment under the law.
Grassley stressed that the judiciary must establish clear, enduring policies on AI usage, emphasizing the need for integrity and factual accuracy. He also pointed to ongoing scrutiny of lawyers across the country over allegations of misusing AI in court filings, which has recently led to fines and other penalties in multiple cases.