
Scientists found 1,140 AI bots on X creating fake profiles

X, the company formerly known as Twitter, has a serious bot problem, a study revealed last month: nearly 1,140 artificial intelligence-powered accounts that “posted machine-generated content and stole selfies to create fake personas.”

The research, conducted by a team of students and faculty at Indiana University’s Observatory on Social Media, uncovered a network of fake accounts on X dubbed the “Fox8” botnet. The botnet reportedly uses ChatGPT to generate promotional content that pushes suspicious websites and spreads harmful material.

According to researchers Kai-Cheng Yang and Filippo Menczer, the bot accounts are believed to lure people into investing in fake cryptocurrencies, and may even steal from existing cryptocurrency wallets.

Their posts often include hashtags such as #bitcoin, #crypto, and #web3, and the research found that they frequently interact with human-operated accounts such as Forbes’ crypto-centric X account (@ForbesCrypto) and the blockchain-centric news site Watcher Guru (@WatcherGuru).

Beyond the cryptocurrency scams, Yang and Menczer said Fox8-style accounts have been found to “distort online conversations and spread misinformation” in situations ranging from elections to public health crises.


A team of students and faculty from Indiana University’s Observatory on Social Media has discovered a network of 1,140 fake accounts on X that are allegedly using ChatGPT to generate “suspicious and harmful content.”
Getty Images/iStockphoto

The purpose of the botnet is to spam X users with a high volume of AI-generated posts. Tweeting frequently makes these posts more likely to be seen by legitimate users, and thus more likely to draw human clicks on malicious URLs.

To look more human, the botnet, a network of hundreds of malicious spam accounts, not only steals photos from real users but also “frequently interacts through retweets and replies.” Its profiles boast an average of 74 followers, 140 friends, and 149.6 tweets each.

These factors make the Fox8 bots appear to be active participants on Twitter [now known as X], making them seem more trustworthy to human users.

According to the Indiana University researchers, most of Fox8’s profiles were “created over seven years ago, with some created in 2023,” and they “frequently mention cryptocurrencies and blockchain.”

The research notes that botnets like Fox8 have historically been easy to spot, since they traditionally posted tweets with unconvincing content and unnatural language.

However, advances in language models, especially ChatGPT, have made bots “much more capable in every aspect,” rendering the accounts within Fox8 increasingly difficult to detect.


These accounts spam human users with AI-generated posts in an attempt to persuade people to invest in fake cryptocurrencies. They are also believed to steal from existing cryptocurrency wallets.
SOPA Images/LightRocket via Getty Images

“With the advent and availability of free AI APIs [application programming interfaces] like ChatGPT, we wanted to see if these tools were already being abused to trick people. And sadly, not surprisingly, it turns out they were,” Menczer told The Post.

These fake accounts are now so convincing that when Yang and Menczer applied large language model (LLM) content detectors to their posts, even “state-of-the-art” tools were unable to effectively distinguish between human accounts and LLM-powered bots “in the wild.”
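The study’s detection tooling is not reproduced here, but as a rough illustration of what running an LLM content detector over tweet text looks like, below is a minimal Python sketch using an older open-source GPT-2 output detector from Hugging Face. The model choice, sample tweets, and label handling are assumptions for demonstration only; this is a stand-in, not the “state-of-the-art” systems the researchers tested.

```python
from transformers import pipeline

# Off-the-shelf GPT-2 output detector (a RoBERTa classifier released by OpenAI).
# Illustrative stand-in only; not the detectors used in the Fox8 study.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# Hypothetical sample tweets, one bot-like and one human-like.
samples = [
    "Just bought more #bitcoin! The future of #web3 finance is unstoppable.",
    "ugh, train delayed again, third time this week",
]

# Per the model card, "Fake" means machine-generated, "Real" means human-written.
for text, result in zip(samples, detector(samples)):
    print(f"{result['label']:>4} ({result['score']:.2f})  {text}")
```

Detectors of this vintage were trained on GPT-2 output, which is part of why ChatGPT-era text slips past them, consistent with the study’s finding.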

The researchers did not reveal the handles associated with these accounts.

However, they said they found “self-revealing tweets accidentally posted by these accounts,” which allowed them to identify which accounts belonged to the botnet.

Searching Twitter [X] for this clue, the researchers explained, turned up 12,226 tweets containing the phrase “as a language model,” posted by 9,112 unique accounts between October 1, 2022, and April 23, 2023. However, they cautioned that “there is no guarantee” that every one of these accounts is an LLM-powered bot.

The researchers concluded that 76% of these tweets likely came from “humans posting or retweeting ChatGPT output, while the remaining accounts are likely bots using LLMs for content generation.”
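For illustration only, a minimal Python sketch of this kind of self-revealing-phrase search against the documented Twitter/X v2 full-archive search endpoint might look like the following. The query operators and request fields are the public v2 API; the bearer-token handling and counting logic are assumptions, and this is not the researchers’ actual code.

```python
import os
import requests

# Full-archive search on the Twitter/X v2 API (requires elevated/academic access).
SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"

# The self-revealing phrase the bot accounts accidentally posted.
# Quoting matches the exact phrase; -is:retweet drops plain retweets.
QUERY = '"as a language model" -is:retweet'

params = {
    "query": QUERY,
    "start_time": "2022-10-01T00:00:00Z",
    "end_time": "2023-04-23T00:00:00Z",
    "tweet.fields": "author_id,created_at",
    "max_results": 100,  # page size; a real run would paginate via next_token
}

headers = {"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"}

resp = requests.get(SEARCH_URL, params=params, headers=headers, timeout=30)
resp.raise_for_status()

# Count matching tweets and the unique accounts behind them, as the study did.
tweets = resp.json().get("data", [])
authors = {t["author_id"] for t in tweets}
print(f"{len(tweets)} tweets from {len(authors)} unique accounts on this page")
```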

Menczer told The Post the main conclusion of the study is that “this is just the tip of the iceberg”: malicious bots built by more careful bad actors will go undetected, and significant resources need to be devoted to developing appropriate countermeasures and regulations.

“At the moment, there is no effective way to detect AI-generated content,” he added.


Advances in ChatGPT have made it increasingly difficult to distinguish between accounts within a botnet and legitimate human-operated accounts, the study said.
AP

The Post has reached out to OpenAI, the maker of ChatGPT, for comment.

X reportedly removed the 1,140 bots after Menczer and Yang published the study in July, according to Wired.

Menczer told the magazine that he normally shares the university’s findings with X, but did not do so for this study because the platform had been slow to respond.

When The Post reached out to X for comment, its press line responded with an automated message stating, “We’ll get back to you shortly.”
