Nowadays, it seems like every CEO is keen to highlight the benefits of artificial intelligence. Whether in press releases, conference calls, or presentations to investors, simply mentioning AI can give company leaders an edge, suggesting they’re embracing cutting-edge technology.
AI has woven itself into almost every sector—customer engagement, operational efficiency, data analysis, product development, and even niche applications across financial services, manufacturing, retail, and education. It’s everywhere.
Experts estimate that about 78% of companies globally currently utilize AI in at least one aspect of their operations.
As AI becomes synonymous with innovation, companies feel pressured to make grand statements about its transformative effects on their business.
While many of these claims are grounded in real capabilities, others are misleading, adding to the noise around AI. Not every assertion about AI holds up under scrutiny, and as the rhetoric escalates, misrepresentation is becoming more common.
Often, these claims reflect ambitious hopes rather than complete truths, leaving room for interpretation about what businesses are genuinely achieving.
This situation isn’t solely a corporate issue; it carries significant policy ramifications as well.
Both the Department of Justice and the Securities and Exchange Commission have intensified their focus on AI recently, particularly concerning claims, fraud, and disclosures. This scrutiny is a response to the speed at which AI is advancing and the concerns about misleading marketing practices.
SEC officials identified AI as potentially “the most transformative technology of our time,” while also urging companies to be candid about AI’s actual role in their operations—warning against what they term “AI washing.”
The SEC has actively addressed misleading AI-related disclosures in the financial markets, leading to enforcement actions against firms making exaggerated claims.
In March 2024, the SEC charged two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., with making misleading statements about their use of AI. Delphia claimed it used AI and machine learning to analyze client data, but the SEC found it had not actually employed those technologies. Global Predictions, for its part, falsely touted itself as “the first regulated AI financial advisor” while making other unsubstantiated assertions. Both firms agreed to pay civil penalties.
“AI washing” refers to companies overstating their AI capabilities in order to mislead investors or the market.
These issues highlight emerging challenges. In a highly competitive AI landscape, there doesn’t seem to be a uniform standard for how companies should communicate about AI.
As AI becomes more ingrained in our lives, the stakes for businesses, investors, and consumers grow higher. Trust and market integrity are jeopardized when firms don’t provide clear, honest information.
This isn’t a new phenomenon. Similar to previous discussions around “greenwashing,” which prompted calls for reform in environmental, social, and governance (ESG) disclosures, AI-related claims are now facing scrutiny. This discourse has accelerated alongside the rapid evolution of AI technology.
As we consider regulatory measures, the challenge lies in finding the right balance—how to prevent fraudulent claims without stifling innovation.
Recently, the White House unveiled “America’s AI Action Plan,” outlining more than 90 federal policy actions aimed at promoting American AI technology at home and exporting it to allies abroad.
Key elements of this plan include collaboration with industry to develop secure AI export packages, promoting the construction of data centers, and seeking input from the private sector to streamline regulations that obstruct AI advancement.
The focus is also on updating government procurement guidelines to ensure contracts are awarded to top AI developers, minimizing biases in the process.
Three executive orders were signed alongside the AI Action Plan, accelerating federal permitting for data center infrastructure and promoting the export of the American AI technology stack.
Ultimately, it’s not just about regulation; it’s about maintaining a balance that fosters innovation while ensuring accountability.
Transparency in AI-related claims should be standard practice, akin to requirements in cybersecurity or ESG disclosures. So far, progress in Congress has been slow.
Establishing federal standards for AI claims—overseen by agencies like the SEC or FTC—can help prevent a fragmented approach to compliance across states.
By introducing accountability akin to that brought about by Sarbanes-Oxley for financial disclosures, these standards can reinforce market trust and safeguard consumers.
Without thoughtful regulation, the U.S. risks ceding leadership in AI governance, much as it ceded leadership on data privacy when Europe’s rules became the de facto global standard.
In an AI-driven economy, transparency isn’t just desirable; it’s essential for maintaining a fair competitive landscape.