Kevin Hassett, Director of the National Economic Council, recently said the White House is considering an executive order that would regulate and evaluate AI models, much as the FDA assesses new foods and drugs.
There are compelling reasons to take this seriously. Here’s a brief overview:
AI cybersecurity raises major concerns that have not yet been fully addressed. AI is both a weapon and a target: attackers are using it to craft convincing phishing schemes and increasingly prevalent deepfakes, while AI systems themselves can be attacked and subverted. These developments pose significant risks to American businesses and the federal government, threatening financial data, privacy, personal information, trade secrets, and national security.
The CEO of CrowdStrike recently raised the alarm about these issues, warning that new threat actors can harness generative AI to carry out attacks at unprecedented speed and scale, and that the range of adversaries is expanding rapidly, likely exponentially.
A report from the National Counterintelligence and Security Center cited research indicating that advanced AI models can automate intricate, multi-stage cyberattacks at “machine speed.” Some AI systems now match human experts at a fraction of the cost, while others outpace humans entirely. The threat is escalating as both human expertise and AI capabilities grow.
Another report discussed “DeepLocker,” a proof-of-concept strain of malware that employs AI to evade traditional security measures in corporate environments. These reports, while useful, can be overwhelming for the average person trying to keep pace with daily threats. We need accessible databases that provide machine-readable information about these risks, similar to established computer-virus databases.
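To make that concrete, here is a minimal sketch of what one machine-readable entry in such a database could look like. The schema, field names, and identifier format below are hypothetical illustrations, loosely modeled on how CVE entries are organized; they do not reflect any actual standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIThreatRecord:
    """One hypothetical entry in a machine-readable AI threat database."""
    threat_id: str          # hypothetical "AIT-YYYY-NNNN" ID, analogous to a CVE ID
    name: str               # human-readable name of the threat
    category: str           # e.g. "evasion", "prompt-injection", "deepfake"
    description: str        # plain-language summary for non-specialists
    affected_systems: list  # classes of systems known to be at risk
    mitigations: list       # recommended countermeasures
    first_observed: str     # ISO 8601 date

record = AIThreatRecord(
    threat_id="AIT-2025-0001",
    name="AI-assisted polymorphic loader",
    category="evasion",
    description="Malware that uses a learned model to mutate its payload "
                "and slip past signature-based defenses.",
    affected_systems=["endpoint security", "email gateways"],
    mitigations=["behavioral detection", "model-output monitoring"],
    first_observed="2025-01-15",
)

# Machine-readable output that vendors, researchers, and tools could all consume.
print(json.dumps(asdict(record), indent=2))
```

A shared format like this is what lets defenses keep pace automatically: security tools can ingest new entries the moment they are published, rather than waiting for a human to read a report.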
The wide array of emerging threats is alarming. Although the Open Worldwide Application Security Project (OWASP) maintains a useful AI Top 10 list, it covers only a fraction of what modern systems must confront. The federal government should make it a priority to establish a solid framework for addressing these challenges.
The tech industry does maintain some databases of cyber threats, but more collaboration on how to mitigate these risks is needed. That effort may require expertise beyond AI itself, in complex fields such as audio processing, for instance to detect voice deepfakes.
The National Institute of Standards and Technology (NIST), a non-regulatory federal agency, has been a pioneer in offering guidelines for responsible AI use. But guidelines alone are not enforcement, and more robust enforcement powers are clearly needed.
Governments are slow to update their protocols, and legislatures slower still. Congress should not try to write detailed cybersecurity metrics into statute; instead, it should empower regulators to monitor and enforce AI safety standards. A fitting analogy is the FDA, which safeguards public health by ensuring the safety of foods and medicines through research review and testing. We should address AI cybersecurity with the same systematic approach.
Congress should mandate that NIST create and maintain a centralized AI cybersecurity threat database to which all software vendors can submit new threats. NIST is well positioned to facilitate this exchange, since most of the critical threat information resides in the private sector.
In fact, NIST is already obligated under federal cybersecurity policies to provide similar resources, through its Secure Software Development Framework (SSDF) and the National Vulnerability Database (NVD).
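For a sense of what such machine-readable resources look like in practice, the sketch below queries the NVD's public CVE API (version 2.0), which already serves vulnerability data as JSON. The keyword and result limit are arbitrary choices for illustration, and error handling is kept minimal; an AI threat database could expose a similar interface.

```python
import requests

# NVD's public CVE API (v2.0) returns vulnerability records as JSON.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

resp = requests.get(
    NVD_API,
    params={"keywordSearch": "machine learning", "resultsPerPage": 5},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item["cve"]
    # Each entry carries an ID plus language-tagged descriptions.
    summary = next(
        (d["value"] for d in cve["descriptions"] if d["lang"] == "en"),
        "(no English description)",
    )
    print(f'{cve["id"]}: {summary[:120]}')
```

Extending this model from conventional software vulnerabilities to AI-specific threats is largely a matter of taxonomy and mandate, not new infrastructure.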
What we truly need is a framework that not only defends against current attacks but also anticipates future adversaries in the realm of AI, regardless of who they are. A NIST-led national framework could be pivotal in safeguarding Americans, businesses, and the federal government from rapidly changing cybersecurity threats.