Two independent nonprofits, the Midas Project and the Tech Oversight Project, recently published a significant report after a year of research. It documents a range of alarming behaviors connected to OpenAI as a company, not just to its CEO, Sam Altman.
If your knowledge of the company is limited to passing mentions or the buzz around ChatGPT, it might be time to look deeper.
Sam Altman and/or OpenAI currently face more than eight serious lawsuits.
Most recently, IYO Audio filed a complaint against OpenAI alleging that the company misappropriated its designs and infringed on its trademarks. A scan of recent headlines reveals a troubling pattern:
- Altman reportedly failed to prioritize fairness at OpenAI, particularly given the backdoor investments made via Y Combinator.
- He holds a 7.5% stake in Reddit; OpenAI's expanding partnership with Reddit reportedly boosted his net worth by about $50 million.
- OpenAI is, again, reshuffling its corporate structure. Under the proposed setup, which would reportedly give Altman a 7% equity stake, his wealth could rise by some $20 billion.
- Former OpenAI executives, including Mira Murati and Ilya Sutskever, have reportedly described a troubling pattern of abusive and deceptive conduct by Altman.
This is merely a snapshot of the conduct documented in the OpenAI Files. Serious allegations against Sam Altman and/or OpenAI now range from sexual abuse to assault and copyright infringement.
Surprisingly, these severe accusations, including those of a sexual nature, have not dented OpenAI's reputation or its rising fortunes.
The Power Dynamics in Tech
The journey of OpenAI seems emblematic of a Silicon Valley power struggle. Founded in late 2015 by Altman, Elon Musk, and others, OpenAI has aimed to be a leading force in AI. Despite groundbreaking advancements, the company has also generated considerable confusion and controversy, addressing rumors and crises as quickly as it pushes research forward.
At its founding, big-name backers, including Amazon and Peter Thiel, pledged $1 billion upfront, though the funds were slow to materialize. Tensions rose between Altman and Musk, and Musk left the board in early 2018, a move that unsettled investors and prompted a new capital influx.
New backers, among them Reid Hoffman, stepped in, and OpenAI carried on. Under Altman's guidance, the company released products like OpenAI Gym and Universe.
Many, including Musk, believed OpenAI lagged behind Google in the AI race. This was concerning for those initially attracted to the organization as a counter to the potential risks of centralized development.
What started as a nonprofit with an open, human-focused mission shifted in 2019, when the company adopted a "capped-profit" structure, a transformation driven largely by major investments like Microsoft's $1 billion deal. The debut of GPT-2 that same year was met with much anticipation.
What Comes After Elon?
By 2020, the newly established for-profit entity launched its API, opening the doors for developers to use GPT-3. DALL-E was introduced in 2021, a pivotal moment that created buzz around OpenAI's direction. While the idea of cooperation lingered, many questioned its authenticity.
ChatGPT exploded in popularity in late 2022, propelling OpenAI's valuation into the tens of billions of dollars.
After his controversial exit, Musk's public discontent grew: he criticized the company's shift toward a "closed" model and alleged fraud under Altman's direction.
Alongside its existing products, OpenAI has since unveiled new efforts aimed at diverse fields, from music generation to 3D object creation. Its branding push includes bringing on ex-Apple design chief Jony Ive.
Yet it's worth noting the distinction of the company's "o-series" models, a notable evolution in its lineup. While clever and sophisticated, these reasoning models depart from the traditional industrial focus on sheer efficiency, trading raw speed for deliberate, step-by-step computation.
A Moment of Reflection
The latest controversies surrounding OpenAI serve as stark reminders. Can we really trust the company to responsibly steward the "true" AI vision Altman espouses?
Consider this: given human fallibility and the uncertainties that come with it, do we have enough moral clarity to feel at ease placing our futures in the hands of Altman and his team? Are the individuals in this sphere really worth millions a year?
Or are we teetering on the edge of a dangerously misplaced faith in these leaders, our society, and our civilization? What assurance do we have that the trust placed in Altman will yield a transformative world, especially as we look ahead to whatever he plans next?