South African-born billionaire Elon Musk has dropped his lawsuit against OpenAI, the artificial intelligence organization that released the powerful multimodal large language model GPT-4 last year. But Musk hasn’t given up the fight: he is now threatening to ban the devices of OpenAI’s new partner Apple from his companies, citing security threats.
Litigation
In February, Musk sued OpenAI and its co-founders Sam Altman and Greg Brockman for breach of contract, breach of fiduciary duty, and unfair business practices.
Musk’s complaint centered on allegations that OpenAI, which he co-founded, had abandoned its founding agreement.
According to the lawsuit, the agreement provided that OpenAI “(a) would be a non-profit developing [artificial general intelligence] for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons.”
The complaint adds that the company was to “compete with Google/DeepMind” on AGI and serve as a significant counterweight, “but for the benefit of humanity.”
“OpenAI has been transformed into a closed-source, de facto subsidiary of Microsoft, the world’s largest technology company,” the lawsuit states. “Under its new board of directors, the company is no longer simply developing AGI, but is actually refining it to maximize Microsoft’s profits.”
The lawsuit, filed months after Musk founded his own AI company, xAI, further alleged that GPT-4 “has effectively become a Microsoft-proprietary algorithm,” even though the technology falls outside the scope of the exclusive license Microsoft entered into with OpenAI in September 2020.
OpenAI, which weathered a failed boardroom coup last year, rejected Musk’s framing in a March blog post: “In early 2017, we came to the realization that building AGI would require vast amounts of compute. We began calculating how much compute an AGI might require. We all understood that the mission would need far more capital to succeed, billions of dollars per year, which was far more than any of us, especially Elon, thought we’d be able to raise as a nonprofit.”
According to the post, Musk decided in 2017 that the next step for the mission was to create a for-profit entity, and he sought majority equity, initial control of the board, and the CEO position. Musk then reportedly proposed merging OpenAI into Tesla.
Lawyers for OpenAI argued that the lawsuit was an attempt by Musk to hobble a competitor and advance his own interests in the AI field, Reuters reported.
“Having seen the incredible technological advances OpenAI has made, Mr. Musk now wants that success for himself,” OpenAI’s lawyers said.
After months of criticizing OpenAI, Musk filed a motion on Tuesday to dismiss the lawsuit without prejudice, giving no reasons.
A San Francisco Superior Court judge had reportedly been prepared to hear OpenAI’s motion to dismiss the suit at a hearing scheduled for the following day.
Threats
The day before Musk dropped the lawsuit, OpenAI announced a partnership with Apple to “integrate ChatGPT into experiences within iOS, iPadOS, and macOS so users can access ChatGPT’s capabilities, including image and document understanding, without having to switch between tools.”
Through this partnership, Siri and Apple’s Writing Tools will be able to draw on ChatGPT’s intelligence.
OpenAI says that requests made through Apple’s ChatGPT integration are not stored by OpenAI and that users’ IP addresses are obscured.
Musk responded Monday on X: “If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.”
“Visitors will be required to check in their Apple devices at the entrance, which will be stored inside a Faraday cage,” Musk wrote.
Musk added, “Apple has no clue what’s actually going on once they hand your data over to OpenAI. They’re selling you down the river.”
Reactions to Musk’s threat were mixed. Some critics suggested that the integration is not actually happening at the operating-system level.
But others praised Musk’s move.
For example, Sen. Mike Lee (R-Utah) wrote, “The world needs open source AI. OpenAI started with that goal in mind, but has strayed so far from it that it would be better described as ‘Closed AI.'”
“I applaud @elonmusk’s advocacy in this space,” Lee continued. “Unless Elon is successful, we will see the emergence of a cartelized AI industry — one that benefits a few large, entrenched market incumbents to the detriment of all others.”
Whistleblowers
Musk isn’t the only one associated with OpenAI who has concerns about the direction the company is heading in. Earlier this month, a group of OpenAI insiders spoke out about worrying trends at the company.
The insiders echoed several themes from Musk’s lawsuit, telling The New York Times that profits are being put first while workers’ concerns are suppressed.
“OpenAI is really excited about building AGI, and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former governance researcher at OpenAI.
Kokotajlo does not believe that race is being run carefully; he puts the probability that AI will destroy or catastrophically harm humanity at 70%.
Kokotajlo resigned shortly after allegedly advising Altman that OpenAI should “pivot to safety,” citing a lack of meaningful change that led him to lose “confidence that OpenAI will behave responsibly,” The New York Times reported.
Kokotajlo is one of 13 current and former OpenAI employees who signed an open letter emphasizing:
AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this. AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.
The problem is exacerbated by companies’ interference with employees who raise concerns, the signatories said.
OpenAI spokesperson Lindsey Held said of the letter, “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. Given the significance of this technology, we agree that rigorous debate is crucial, and we’ll continue to engage with governments, civil society, and other communities around the world.”
Like Blaze News? Bypass the censorship and sign up for our newsletter to get stories like this one directly to your inbox. Register here!
