Lawyers Face Consequences for AI-Generated Legal Documents
Lawyers across the nation are facing serious repercussions for relying on AI to draft legal documents, and their justifications are proving as questionable as the filings themselves. Creative excuses, from blaming hackers to insisting that switching between application windows is too cumbersome, have become last-ditch efforts to evade penalties for the surge of AI-generated inaccuracies clogging court dockets.
Judges are understandably frustrated, and a group of "legal vigilantes" is determined to expose these failures. They have built a database tracking instances of AI misuse in court filings, with more than 500 cases logged so far.
This database is curated by Damien Charlotin, a lawyer and researcher based in France. It reveals fake case citations, fraudulent references, and identifies the lawyers involved. Charlotin aims to cleanse the field by highlighting the shame some practitioners bring upon the profession. He noted that the frequency of cases keeps climbing.
"As soon as I began cataloging these incidents, I noticed a sharp increase—from maybe a few cases in a month to two or three every single day," he said in an email. Charlotin expects the trend to persist for some time. While some incidents are honest errors, he hopes that greater awareness will reduce them, though he concedes that is not guaranteed.
In more severe cases, however, Charlotin says AI is being misused by "careless and negligent lawyers," and he believes there is little that can be done to curb such misconduct.
Amir Mostafavi, a lawyer in the Los Angeles area, was recently fined $10,000 after filing an appeal in which 21 of the 23 cited cases were fabricated by ChatGPT. His defense? He said he drafted the appeal himself and asked ChatGPT to "enhance it," unaware that it would insert fake citations.
“There will be some fallout, some impacts, some wreckage,” Mostafavi acknowledged. He expressed hope that his experience serves as a warning to others.
Meanwhile, Innocent Chinweze, an attorney based in New York City, faced sanctions for submitting a brief filled with false assertions. He initially blamed a hack for the chaos, but after a recess he abruptly shifted his defense, claiming he did not realize AI could fabricate information.
Judge Kimon C. Thermos found his excuse to be “unbelievable and unfounded.” Chinweze was subsequently fined $1,000 and referred to the Grievance Committee due to actions that raised concerns about his honesty as a lawyer.
Another incident involved Alabama attorney James A. Johnson, who attributed his "embarrassing mistake" to struggling with a laptop while under immense personal pressure; he was caring for a sick family member at the time. He said he opted for a Microsoft Word plug-in rather than the legal research tools available to him because toggling between applications was too cumbersome. The judge was unpersuaded, noting that Johnson had explicitly used ChatGPT. Johnson was fined $5,000 and promptly dismissed by his clients.
Such scenarios tarnish the profession's reputation, according to Stephen Gillers, an ethics professor at New York University School of Law, who expressed disappointment at what some lawyers are doing to their own field.
Excuses for these AI mishaps are plentiful. One lawyer blamed a client for helping with a questionable submission; another cited "login issues" with a legal database. In Georgia, lawyers claimed they "accidentally filed the wrong draft." The repercussions, meanwhile, are growing more severe: an attorney in Florida was fined $85,000 for "repeated abusive and malicious conduct." When he protested that the fine was excessive, the court responded that leniency would only enable further misconduct.
In Illinois, attorney William T. Panich has faced disciplinary measures at least three times. After his first sanction, and before the two that followed, he assured the court, "I'll never do it again." Judges, however, are losing patience.
“Honestly, if any lawyer believes using generative AI for legal research is safe, they’re living in a cloud,” remarked U.S. Bankruptcy Judge Michael B. Slade.
Judge Nancy Miller criticized another lawyer's claim that verifying a citation takes only "7.6 seconds," noting that lawyers rarely spare even those few seconds to double-check their own work. "A busy court system already lacks sufficient resources to manage AI-generated miscitations," one Texas judge observed.
The Post contacted Mostafavi, Chinweze, Johnson, Paul, and Panich for comment.