Concerns Raised Over Elon Musk’s Use of AI in Government
Elon Musk’s Doge team is reportedly expanding its use of the Grok AI chatbot within the US federal government. However, this move raises potential concerns about conflict of interest laws and the safeguarding of sensitive information that pertains to millions of Americans, according to three insiders.
The deployment of Grok might intensify worries among privacy advocates, especially as government efficiency initiatives seem to overlook established protections regarding the handling of sensitive data.
One insider revealed that Doge is using a tailored version of the Grok chatbot to streamline data analysis, enabling staff to ask questions, generate reports, and conduct more thorough data evaluations.
Further, Doge staff reportedly urged the Department of Homeland Security (DHS) to adopt Grok, even though the tool has not received official approval within the department.
Details remain murky about what data is being made available to AI tools like Grok or how this customized system is configured. Grok was created by xAI, a tech venture Musk announced in 2023.
Experts in technology and government ethics warn that if Grok accesses confidential government data, it could infringe upon security and privacy regulations.
Musk might also leverage Grok to train personnel at Tesla and SpaceX in data analysis, which could give him an edge over other AI providers competing for federal business.
Neither Musk, the White House, nor xAI responded to requests for comment. A DHS spokesperson denied claims that Doge encouraged employees to adopt Grok, stating, “Doge does not push employees to use specific tools or products,” and declined to answer further questions.
Compared to key players like OpenAI and Anthropic, Musk’s xAI is relatively new to the field. Its website emphasizes its ability to monitor Grok users for specific business purposes, asserting that “AI knowledge should be as comprehensive and extensive as possible.”
As part of broader initiatives to eradicate waste and inefficiency from government practices, Musk’s Doge team has access to a highly secured federal database populated with personal information on millions of citizens. Experts caution that this data access is generally restricted to a select few due to risks of unauthorized sale, loss, leaks, or breaches that could compromise American privacy or national security.
Generally, sharing data in the federal sector requires governmental approval and the involvement of experts to maintain compliance with privacy laws.
The use of Grok to analyze sensitive federal data would mark a significant shift in Doge’s operational methods, especially as Musk’s team has been reshuffling personnel and asserting control over sensitive data systems.
Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, expressed serious concerns about the implications of transferring such data to Grok, viewing it as a considerable privacy risk.
He raised alarms over the prospect of government data leaking back to xAI, a private entity, highlighting the lack of clarity around who can access this personalized version of Grok.
Cary Coglianese, a specialist in federal regulation and ethics, noted that Doge’s actions might afford Grok and xAI an advantage over other firms seeking government contracts for AI services.
Possible Ethical Implications
Beyond merely employing Grok itself, Doge reportedly instructed DHS officials to use the tool even though it had not been approved for use at the agency. The DHS is responsible for a range of sensitive national security duties, including border security and immigration enforcement.
Insiders indicated that Doge staff were “advocating” for Grok’s use throughout the department, and that if federal employees were granted access, federal funds would need to pay for it. It remains unclear whether the federal government has officially adopted Grok.
The DHS had implemented policies under the Biden administration allowing employees to use certain AI platforms, including OpenAI’s ChatGPT and Anthropic’s Claude chatbot. While this plan aimed to position the DHS as a pioneer in federal AI deployment, concerns about misuse led to a sudden halt of access to commercial AI tools, including ChatGPT, just last month.
As a result, employees had to rely on internally developed AI systems at DHS. It’s unclear if this abrupt change affected Doge’s ability to promote Grok during this period.
The DHS declined to comment on these developments.
Musk recently told investors that he plans to significantly reduce his time at Doge in the coming month. As a special government employee, he faces limits on the number of days he can serve, and it remains uncertain when his term will conclude. Musk suggested his team would continue operating while he shifts to advising the White House.
If Musk is directly involved in Grok’s deployment, this could violate a criminal conflict-of-interest statute that prohibits officials from participating in matters that might yield them a financial benefit, according to Richard Painter, an ethics expert.
“This indicates that Doge is pressuring the agency to incorporate software that primarily benefits Musk and xAI, rather than the American public,” Painter noted. Although prosecutions under this statute are rare, they can lead to substantial penalties.
If Doge staff promoted Grok’s use without Musk’s direct influence, it would raise ethical questions but wouldn’t necessarily violate legal statutes, according to Painter, who emphasized that it’s the White House’s responsibility to counteract such appearances of self-dealing.
The initiative to employ Grok aligns with broader efforts by Doge employees, notably Kyle Schutt and Edward Coristine, to bring AI into government processes. Coristine, known online as “Big Balls,” is reportedly a standout member of the team.
Doge staff have attempted to access the emails of DHS personnel, allegedly to train AI to identify communications from employees perceived as disloyal to Trump’s political agenda, sources shared. It is unclear whether Grok was involved in these attempts.
In recent weeks, about a dozen employees at the Department of Defense were informed by a supervisor that an algorithmic tool was tracking some of their computer activities, according to additional sources involved in those discussions, who requested anonymity due to potential retaliation.
Using AI to monitor employees’ political beliefs may breach civil service laws designed to shield workers from political interference, as emphasized by Coglianese.
The Department of Defense stated that Doge was not engaged in network surveillance and had not received directives to use AI tools such as Grok. “It’s crucial to recognize that all government computers generally are subject to monitoring as part of standard user agreements,” clarified Pentagon spokesperson Kingsley Wilson.
Follow-up questions regarding the implementation of new surveillance systems went unanswered.