In two separate cases, U.S. federal judges have retracted their initial rulings after lawyers raised concerns about inaccurate filings and questionable citations.
In New Jersey, Judge Julien Neals withdrew his earlier denial of motions to dismiss securities fraud claims after it emerged that the decision may have relied on filings that were “materially inaccurate.” The flagged filing contained numerous questionable quotations attributed to lawyers and described the details of cited cases incorrectly, prompting Neals to reconsider his ruling.
Meanwhile, in Mississippi, Judge Henry Wingate replaced a temporary restraining order he had issued on July 20 blocking enforcement of a state law in public schools. The change followed notifications from attorneys about significant errors in the legal documents submitted in the case. Those familiar with the matter said some of the discrepancies appeared to be AI-generated.
Wingate issued his new decision after being informed that certain testimony cited in the case had not actually been entered into the record. The state’s attorney later asked that the original order be restored to the docket, arguing that all parties should have access to a complete and accurate record for appeal.
Both judges reportedly acted swiftly once lawyers flagged the inaccuracies. AI use in legal work is becoming increasingly common, particularly among younger professionals, raising essential questions about the reliability of information such tools produce.
The concerns extend beyond these two cases, as other recent incidents have revealed similar problems. A federal judge in California sanctioned a law firm for improperly relying on AI in court submissions, and a judge in Alabama dealt with false filings attributed to AI-generated text. These examples underscore a growing accountability problem for lawyers, who bear the responsibility of ensuring the accuracy of their submissions.
According to recent findings from the Pew Research Center, use of AI tools such as ChatGPT has risen markedly among U.S. adults. A June survey found that about 34% of U.S. adults had used ChatGPT, double the share from the previous year. Adoption is highest among users under 30, pointing toward a possible transformation of traditional practices, including legal work.