In a recent New York court case, an expert witness was criticized by the presiding judge for relying on Microsoft's Copilot chatbot to estimate damages in a real estate dispute. This is just the latest example of legal professionals being embarrassed by their reliance on AI tools.
As Ars Technica reports, New York Judge Jonathan Schopf called attention to the potential dangers of expert witnesses using AI tools such as Microsoft's Copilot chatbot when testifying in court. The issue came to light during a real estate dispute involving a $485,000 rental property in the Bahamas that had been included in a trust for a deceased man's son.
The case revolved around an executor and trustee, the deceased man's sister, who was accused of breaching her fiduciary duties by delaying the sale of a property she had used for personal vacations. To prove damages, the surviving son's lawyers relied on expert witness Charles Ranson to calculate the difference between the potential sale price in 2008 and the actual sale price in 2022.
However, Ranson, who specializes in trust and estate litigation, lacked relevant real estate expertise. To compensate, he turned to Copilot to help with the calculations. During his testimony, Ranson could not recall the specific prompts he used to estimate damages or cite the sources of the information the chatbot gave him. He also acknowledged that he had only a limited understanding of how Copilot works and produces its output. Nevertheless, Ranson staunchly defended the use of Copilot and other AI tools to produce expert reports, arguing that it is a generally accepted practice in the field of fiduciary services.
In response to Ranson's testimony, Judge Schopf took the time to experiment with Copilot himself, attempting to reproduce the estimates Ranson had provided. He found that even when given the same query, the chatbot generated slightly different answers each time. This discrepancy raised concerns about the reliability and accuracy of Copilot-generated evidence in court proceedings.
Copilot itself, when asked about its accuracy and reliability, replied that its output should always be verified by experts and accompanied by a professional evaluation before being used in court. Judge Schopf agreed, noting that Copilot's own developers recognized the need for human oversight to verify the accuracy of both the input information and the output produced.
In response to this case, Judge Schopf called for lawyers to disclose their use of AI in litigation to prevent inadmissible chatbot-generated testimony from confusing the legal system. He emphasized that while AI is becoming increasingly prevalent in many industries, the mere presence of AI does not automatically mean its results will be admissible in court.
Ultimately, Judge Schopf found that there was no breach of fiduciary duty in the case, making Ranson's Copilot-assisted damages testimony unnecessary. The judge noted that Ranson's testimony showed a lack of thorough analysis, use of an incorrect damages period, and a failure to consider obvious factors in the calculation, and he denied all of the son's objections and future claims.
Read the full report at Ars Technica here.
Lucas Nolan is a reporter for Breitbart News, where he covers free speech and online censorship issues.
