California Man Withdraws AI Ballot Measure Amid Controversy
A California man who proposed an AI ballot measure that appeared to take aim at OpenAI has withdrawn his proposal. The decision came after Sam Altman’s company asked a state agency to investigate him.
According to exclusive reports, OpenAI’s legal team filed a complaint with the California Fair Political Practices Commission, raising concerns about Alexander Oldham, the East Bay native behind the measure, and questioning his motives.
Oldham, who calls himself a “nobody” in the AI policy scene, had proposed a measure that, if passed, would have allowed state regulators to oversee major AI companies.
The complaint surfaced after it was revealed that Oldham is related to Zoe Blumenfeld, who works at Anthropic, OpenAI’s main competitor. He also has ties to Guy Lavigne, a tech entrepreneur embroiled in a legal dispute with OpenAI over its original ideas.
On Tuesday, Oldham stated he filed to withdraw the measure largely due to “threats and intimidation from OpenAI,” likely referring to the FPPC complaint.
In an interview, he admitted to naivety, saying, “I don’t want any more negative results because I was foolish to think that all I could do was put the idea out there for people to see in today’s world.”
Oldham admitted he had overlooked that his sister-in-law was employed at Anthropic, strongly denying any involvement from Blumenfeld or Lavigne in his proposal’s development.
“I didn’t even think about her,” he said. “It’s a complete coincidence that she works at Anthropic. To be honest, I didn’t even figure it out.”
He previously said he used an AI chatbot to draft the measure and had not consulted lawyers or outside advisors before submitting it. Oldham maintained that his effort was not specifically targeted at OpenAI.
However, OpenAI’s attorneys argued in their complaint that Oldham’s proposal appeared “designed to impose complex and unnecessary regulatory burdens on OpenAI.” They noted that the measure’s wording seemed tailored to OpenAI’s unique status as a public benefit corporation, potentially allowing regulators to target specific firms rather than setting broad industry rules.
Furthermore, OpenAI has called for investigations into any affiliations Oldham may have. A nonprofit group, Coalition for AI Nonprofit Integrity (CANI), has supported a separate ballot measure introduced by a former OpenAI employee who has brought attention to the company’s restructuring.
OpenAI has expressed concern that the various measures share similarities suggesting they were written by the same person.
Oldham denied any links to CANI. He reflected on his original intentions, saying he thought his initiative would simply be seen and either appreciated or dismissed. “What I want to say most is that the big world of AI is a big world with zero responsibility,” he noted.
Oldham did not respond to requests for further comment. When asked about the situation, OpenAI’s attorney raised concerns about the motivations behind the measures’ backers, emphasizing the need for transparency so voters can make informed decisions.
In his statement to the press, Oldham declared he hadn’t spoken to Lavigne in nearly a decade, nor to Blumenfeld in over two years. “I submitted, created, and funded this idea,” he stated.
Anthropic also distanced itself from Oldham’s actions, stating they have no involvement in or knowledge of his ballot proposals.
Meanwhile, Lavigne firmly denied any collaboration with Oldham, asserting he hasn’t been in contact with him for around ten years and describing their connection as very tenuous.