Experts worry that terrorists will find new and problematic ways to use artificial intelligence, such as to deliver explosives or improve online recruiting efforts.
“The reality is that AI can be extremely dangerous if used maliciously,” Antonia Marie de Meo, director of the United Nations Interregional Crime and Justice Institute, said in a report examining how terrorists might use AI.
“This technology, which has a proven track record in the cybercrime world, is a powerful tool that could be used to further terrorism and the violent extremism that leads to terrorism,” she added, citing examples such as using self-driving cars to launch bombings, enhancing cyber attacks, or finding easier ways to spread hate speech and incite violence online.
The report, “Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes,” concludes that law enforcement agencies need to stay on the cutting edge of AI.
“Our wish is that [this report] opens the door to a discussion about the misuse of AI for terrorist purposes,” de Meo wrote.

A view of the United Nations Headquarters in New York City on July 16, 2024. (Jakub Porzycki/NurPhoto via Getty Images)
The report states that staying ahead of terrorists and predicting how they might use AI will be a daunting task, as it requires not only thinking of ways to use AI that no one has thought of before, but also coming up with ways to stop someone from using it in those very ways.
A separate report, “Emerging Technologies and Terrorism: A US Perspective,” produced jointly by NATO COE-DAT and the US Army War College's Strategic Studies Institute, argues that “terrorist groups are using these tools for recruitment and attacks.”
“In an era of rapid technological evolution where the line between reality and fiction is blurred, governments, industry and academia need to come together to develop ethical frameworks and regulations,” the authors write in their introduction.
“Amid shifting geopolitical tides, NATO is emphasizing national responsibility in the fight against terrorism and asserting collective power against emerging technology-driven threats,” the authors added.
Hamas terrorists took part in the military parade. (REUTERS/Ibrahim Abu Mustafa/File Photo)
The study points to common malicious uses of OpenAI's ChatGPT, including “improving phishing emails, injecting malware into open coding libraries, spreading disinformation, and creating online propaganda.”
“Cybercriminals and terrorists have rapidly become adept at creating deepfakes and chatbots hosted on the dark web, and at using such platforms, and large language models more generally, to obtain sensitive personal and financial information, plan terrorist attacks, and recruit followers,” the authors write.
“Such malicious uses are likely to increase in the future as models become more sophisticated,” the researchers added, “and will require greater transparency and control over how sensitive conversations and internet searches are stored and distributed through AI platforms and large language models.”
Earlier this year, West Point's Combating Terrorism Center published research on the subject, focusing on AI's potential to improve terrorist attack planning capabilities rather than simply strengthen existing efforts.

Hezbollah fighters attend the funeral of commander Wissam al-Tawil in the village of Khirbet Selm in southern Lebanon, January 9, 2024. (AP Photo/Hussein Marra, File)
“Specifically, the authors explore the potential impact of commands that could be entered into these systems that would effectively 'jailbreak' the models, allowing them to remove many of the standards and policies that prevent the underlying models from serving extremist, illegal, or unethical content,” the authors explain in their summary.
“Using multiple accounts, the authors explored the various ways in which extremists could potentially use five different large language models to aid in training, implementing operational plans, and developing propaganda.”
Their tests revealed that Bard was the most resistant to jailbreaking, followed by the ChatGPT models. They also found that indirect prompts were sufficient to jailbreak the models in more than half of the cases.
The study concluded that guardrails against jailbreaking require ongoing review, along with “increased public-private collaboration” involving academia, tech companies and the security community.