Researcher: AI Chatbots Gave Specific Guidance for Biological Weapons and Attacks on Public Transport

AI Chatbots Raise Concerns Over Biological Threats

A Stanford University microbiologist was shocked last summer when, during a safety assessment, an AI chatbot laid out a detailed plan for a biological attack. According to documents shared by researchers involved in testing these systems, chatbots have reportedly given unsettlingly explicit advice on creating and deploying biological weapons.

Dr. David Relman, a specialist in microbiology and biosecurity who has advised the federal government on biological risks, was evaluating the AI model when it suggested ways to manipulate a well-known pathogen so that it could evade existing treatments. The chatbot also pointed out vulnerabilities in major public transit systems and described methods for releasing deadly bacteria that would maximize harm while minimizing detection. Relman found the exchange so distressing that he had to step outside for fresh air.

“The level of malice and cleverness, responding to queries I never imagined I’d consider, really unsettled me,” Dr. Relman remarked. While he refrained from naming the specific chatbot due to confidentiality agreements, he mentioned that the company had implemented some safety measures, which he viewed as inadequate.

Records shared by various experts indicate that readily accessible chatbots provide clear and detailed instructions on sourcing raw genetic materials, converting them into dangerous weapons, and deploying these in public areas. Some interactions also detailed strategies for evading detection.

Kevin Esvelt, a genetic engineer at MIT, recounted an exchange in which OpenAI’s ChatGPT suggested using weather balloons to disseminate biological agents over U.S. cities. Google’s Gemini ranked pathogens by their potential threat to the livestock industry, while Anthropic’s Claude produced a formula for a new toxin derived from cancer medications. In one notable instance, an anonymous scientist in the Midwest asked for a guide to recreating a virus linked to a previous pandemic, and the bot responded with roughly 8,000 words of instructions.

“Biology poses the greatest concern for me because the destructive potential is immense, and defense against it is incredibly challenging,” Dario Amodei, CEO of Anthropic, stated.

Anthropic safety lead Alexandra Sanderford, on the other hand, disputed the concerns about Claude’s toxin formula, asserting that “there’s a substantial difference between a model generating plausible text and actually equipping someone to take action.” She emphasized that Anthropic maintains a stringent refusal policy for biological inquiries, accepting a higher rate of over-refusal out of caution.

In related developments, Wynton Hall, the social media director at Breitbart News, announced a book titled Code Red: Left, Right, China, and the Race to Control AI. This book aims to guide the MAGA movement in formulating an AI stance that benefits humanity without ceding authority to Silicon Valley or allowing Chinese dominance.

In Code Red, Hall warns, “The democratization of lethal AI technologies means that tools once limited to superpowers may soon be accessible to various actors, both state and non-state.” He argues it is crucial to keep the capacity for orchestrating AI-assisted, mass-casualty terrorist attacks out of the hands of both domestic and international terrorists.
