Hacker accessed OpenAI’s internal AI details in 2023 breach: report

OpenAI was reportedly breached last year by a hacker who accessed details of the company’s internal discussions about its artificial intelligence (AI) technology.

The New York Times reported Thursday, citing two people familiar with the incident, that the hacker accessed an online forum where OpenAI employees discuss the company’s latest technology.

However, the intrusion did not give the hacker access to the systems where OpenAI houses and builds its AI, including its chatbot ChatGPT.

OpenAI executives reportedly informed employees and the board of directors about the breach during an all-hands meeting in April 2023, but because no information about customers or partners was stolen, executives chose not to make the news public.

OpenAI said the systems housing its AI tools were not affected by the breach. (Idrees Abbas/SOPA Images/LightRocket via Getty Images)

The report said OpenAI executives did not consider the incident a national security threat because they believed the hackers were private citizens with no known ties to foreign governments.

Based on that assessment, OpenAI did not notify federal law enforcement about the breach.

“As we communicated to our board and employees last year, we identified and remediated underlying security issues and continue to invest in strengthening our security,” an OpenAI spokesperson told FOX Business.

According to the report, OpenAI suffered a breach last year when a hacker accessed an internal employee forum where AI technology was discussed. (Reuters/Dado Ruvic/Illustration)

The hack has raised new concerns among OpenAI employees that a hostile foreign government such as China could steal the company’s AI technology and ultimately threaten U.S. national security, The New York Times reported. It has also raised questions about how seriously the company handles security.

In May, OpenAI said it had disrupted five covert influence operations that sought to use its AI models to carry out “deceptive activities” online, a revelation that was the latest to raise concerns about the potential misuse of AI technology.

OpenAI was one of 16 AI technology development companies that pledged at an international conference in May to develop AI technology safely and address concerns raised by regulators around the world.

OpenAI CEO Sam Altman (Fabrice Cofrini/AFP via Getty Images)

In a separate announcement in May, OpenAI unveiled a new Safety and Security Committee to advise the board on how to handle safety and security issues in the company’s projects and operations.

The company said the committee will evaluate and further develop OpenAI’s processes and safeguards and will present its recommendations to the board at the end of a 90-day evaluation period, after which OpenAI will publicly share any adopted recommendations “in a manner that is consistent with safety and security.”

OpenAI CEO Sam Altman, board chair Bret Taylor, and board members Adam D’Angelo and Nicole Seligman will serve on the safety committee, along with four technical and policy experts.

FOX Business’ Steven Sorellese and Reuters contributed to this report.
