
AI security infrastructure needed after ‘malicious’ foreign actors use OpenAI to train operatives, expert says

How openly will the U.S. allow public access to artificial intelligence (AI) after Microsoft revealed that state-affiliated actors from rival nations were using it to train operatives? Decisions now need to be made that could shape the country’s broader data protection policy.

“We’re going to have to decide whether we want to open these things up and make them easily accessible to everyone, bad actors and good actors, or take a different tack,” Phil Siegel, founder of the AI nonprofit Center for Advanced Preparedness and Threat Response Simulation, told Fox News Digital.

In a blog post on Wednesday, OpenAI said it had identified five state-affiliated “malicious” actors: Charcoal Typhoon and Salmon Typhoon from China, Crimson Sandstorm from Iran, Emerald Sleet from North Korea, and Forest Blizzard from Russia.

The post says the groups used OpenAI services to “query open source information, translate it, find coding errors, and perform basic coding tasks.” The two China-affiliated groups, for example, allegedly translated technical documentation, debugged code, generated scripts, and researched ways to hide processes on various electronic systems.


In response, OpenAI outlined a multi-pronged approach to combating such malicious use of its tools, which includes “monitoring and disrupting” malicious actors with new technology to identify and block their activity, increased collaboration with other AI platforms to catch malicious activity, greater public transparency, and more.

“As with many other ecosystems, there is a small number of malicious actors that require continued vigilance to ensure that everyone else continues to benefit,” OpenAI wrote. “While we strive to minimize the potential for abuse by such actors, we cannot stop every instance.”

Sam Altman, CEO of OpenAI, speaks during a panel session on the third day of the World Economic Forum in Davos, Switzerland, Thursday, January 18, 2024. (Stefan Vermes/Bloomberg via Getty Images)

“By continuing to innovate, investigate, collaborate and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else,” the company said.

Siegel argued that while these gestures are well-intentioned, they will ultimately prove ineffective because the infrastructure needed to give them real impact does not yet exist.


“We’re going to have to decide whether this is a completely open system, or whether it’s more like a banking system with a lot of gates in the system to prevent these things from happening,” Siegel said.

“I’m skeptical because banks have a set of infrastructure and regulations behind them to make these things happen … but we don’t have that yet,” he explained. “We’re thinking about it and working on it, but until we have that in place, this isn’t Microsoft’s fault, it’s not OpenAI’s fault, it’s not Google’s fault.”


Bill Gates (left) and OpenAI CEO Sam Altman recently discussed the potential need for a global governing body to regulate AI technology. (Ian Jopson/Fox News)

“We just need to move quickly and make sure that something like this is put in place so they know how to implement this kind of thing,” he added.

In a separate blog post, Microsoft said it has adopted several additional measures, including “notification” of other AI service providers, flagging the relevant activity and data so they can take immediate action against the same actors or processes.


Microsoft and OpenAI also committed to protecting valuable AI systems and, as a “complementary defense,” to working with MITRE to develop countermeasures for the “evolving landscape of AI-enabled cyber operations.”

“The threat ecosystem over the past few years has revealed a consistent theme of attackers following technology trends in parallel with defenders,” Microsoft acknowledged.


A computer hacker. (Cyberguy.com)

Siegel suggested the process described accounts for only part of the activity malicious actors pursue, noting that current systems cannot capture all of it, since hackers can turn to espionage and even “other forms of technology to achieve their goals.”


“There is work to be done, but I am skeptical that Microsoft and OpenAI will be able to do it on their own without support from the government or other agencies already working on such technology,” Siegel said.

The Department of Homeland Security did not respond to Fox News Digital’s request for comment by press time.
