OpenAI claims the New York Times intends to breach the privacy of ChatGPT users.

OpenAI Challenges New York Times Over Privacy Concerns

OpenAI has made a strong statement accusing the New York Times of attempting to infringe on user privacy as part of its ongoing legal battle against the tech company.

“Trust, security, and privacy guide all of our products and decisions,” stated Dane Stuckey, the Chief Information Security Officer at OpenAI. He highlighted that “every week, 800 million people use ChatGPT for personal matters.” Stuckey emphasized the importance of safeguarding this sensitive information and noted the company’s commitment to privacy and security protections.

He remarked, “Currently, that responsibility is being tested.” The New York Times is demanding that OpenAI hand over 20 million private conversations from ChatGPT users, claiming they might reveal attempts to bypass its paywall. This demand, according to Stuckey, contradicts established privacy protections and sound security practices, putting millions of unrelated conversations at risk.

Stuckey indicated that OpenAI has asked the court to reject the Times’ request, asserting that the company will explore all possible avenues to protect user privacy. He also mentioned that OpenAI had proposed various privacy-friendly options to the Times, including limited search capabilities, which were, unfortunately, turned down. Initially, the Times sought access to 1.4 billion conversations, but OpenAI managed to restrict that to a random sample of 20 million chat records spanning late 2022 to late 2024.

OpenAI has assured users that their personal details will be removed through an “anonymization procedure” before any data is produced.

A spokesperson for the New York Times responded, saying, “The lawsuit holds OpenAI and Microsoft accountable for improperly using millions of copyrighted works.” They criticized OpenAI’s statements as misleading and maintained that user privacy is not compromised, pointing out that a legal order permits the production of anonymized chat samples, which could potentially support the Times’ case.

Additionally, they stated, “It’s disingenuous to create fear over privacy risks, especially since OpenAI’s terms permit them to utilize user chats in their model training and in legal contexts.”

The New York Times had launched its lawsuit in late 2023, accusing OpenAI and Microsoft of utilizing numerous articles without authorization for training the large language models that drive ChatGPT.

The complaint asserts that these AI tools lean heavily on the Times’ extensive copyrighted content, spanning many forms of journalism and analysis. It alleges that OpenAI gave particular weight to Times content when training its language models, suggesting a deliberate preference for exploiting such valuable intellectual property.

Furthermore, the lawsuit claims that OpenAI’s use of others’ intellectual property without compensation has proven highly profitable for the defendants. According to the complaint, the introduction of models trained on Times content significantly boosted Microsoft’s market valuation, and the launch of ChatGPT dramatically raised OpenAI’s valuation as well.
