Lawyers caution that conversations with AI chatbots could be used against individuals in court following a federal decision.

April 15 – With a growing reliance on artificial intelligence for advice, some U.S. lawyers are cautioning their clients against viewing AI chatbots as reliable confidants, especially when freedom or legal liabilities are on the line.

This advice has become more pressing following a federal judge’s ruling in New York, in a case where the former CEO of a bankrupt financial services firm was unable to shield his AI conversations from prosecutors in a securities investigation.

In light of this ruling, attorneys have warned that discussions with chatbots like Claude from Anthropic and ChatGPT from OpenAI could be subpoenaed by prosecutors in criminal cases or by litigants in civil scenarios.

“We tell our clients, ‘We should proceed with caution here,’” said Alexandria Gutierrez Sweat, an attorney at Kobre & Kim in New York.

While conversations between clients and their attorneys are typically considered confidential under U.S. law, AI chatbots themselves do not have the same protections. Legal professionals are advising steps to maintain privacy when interacting with these tools.

More than a dozen major U.S. law firms have sent emails and posted advisories outlining measures individuals and businesses can take to minimize the risk of their AI communications being used against them in court.

Some law firms’ client agreements contain similar warnings. For instance, the firm Shah Tremonte recently stated that discussing an attorney’s advice with a chatbot could waive the attorney-client privilege that usually safeguards such communications.

The case that raised these concerns involved Bradley Heppner, former chairman of GWG Holdings, who was indicted by federal prosecutors last November on multiple charges including securities fraud. He has maintained his innocence.

Heppner had utilized Claude to produce reports about his case for his legal team, but later his lawyers argued that the AI-generated content should be kept from prosecutors as it contained sensitive information relevant to his defense.

Prosecutors insisted they had the right to access the documents Heppner created using Claude, arguing that because his lawyers were not directly involved in those AI interactions, attorney-client privilege did not extend to the chatbot exchanges.

Disclosing information shared with your attorney to a third party like an AI tool could undermine standard legal protections surrounding attorney-client communications.

U.S. District Judge Jed Rakoff ruled in February that Heppner must surrender 31 documents produced by the chatbot Claude, stating, “There is no attorney-client relationship that can exist between AI users and platforms like Claude.”

Heppner’s attorney did not respond to inquiries, and the U.S. Attorney’s Office in Manhattan declined to comment.

Courts are already beginning to grapple with the implications of growing AI use in legal settings, and the early rulings have been nuanced and sometimes inconsistent.

Rakoff’s decision serves as a significant early evaluation of AI’s impact on legal protections concerning attorney-client communications.

On the same day as Rakoff’s ruling, U.S. Magistrate Judge Anthony Patti in Michigan stated that a woman in a lawsuit against her former employer wouldn’t have to disclose her discussions with ChatGPT regarding employment claims, categorizing those chats as her personal “work product.”

Patti pointed out that programs like ChatGPT are “tools, not people.”

The terms of service for both OpenAI and Anthropic indicate they may share user data with third parties, and both companies recommend consulting a qualified professional rather than relying on their platforms for legal advice.

During a hearing in Heppner’s case, Rakoff mentioned that Claude’s terms clearly indicated no expectation of privacy in user input.

Representatives from OpenAI and Anthropic did not respond to inquiries.

Lawyers race to install guardrails

Legal advice varies, with some lawyers recommending clients carefully select AI platforms and suggesting specific phrasing for chatbot inquiries.

Some law firms believe that AI-assisted research could remain under attorney-client privilege when conducted at a lawyer’s direction. Debevoise & Plimpton has advised that attorneys should explicitly state their legal role in any chatbot prompts.

Lawyers are also increasingly incorporating AI-related warnings into their client contracts.

“Sharing privileged communications with a third-party AI platform may result in a waiver of attorney-client privilege,” cautioned attorney Char Tremonte in a recent client agreement.

Lawyers, including Justin Ellis from Molorumken, anticipate that more court decisions will clarify when AI communications can be considered evidence in legal proceedings.

Until such clarifications arise, the traditional understanding remains that discussions about cases should be reserved for conversations with one’s attorney—and not shared with AI platforms.
