What is the ‘AI homeless man prank’? Authorities warn it poses risks

Big Rapids, Michigan – A TikTok trend driven by artificial intelligence has sparked a flurry of 911 calls from people who mistakenly believe a man has broken into their homes.

The prank involves using AI to create realistic videos and images of a "homeless man" entering various homes, rummaging through fridges, and even lounging on beds. The pranksters then send these fake videos to friends or family members, and the clips are often convincing enough that recipients believe they are real.

Authorities in at least four states have reported incidents where individuals thought they were dealing with home invasions, only to find out that the so-called “intruder” was actually an AI-generated figure.

In West Bloomfield, Michigan, near Detroit, police reported multiple cases of people being misled by these videos. They issued a warning that the "AI Homeless Man Prank" is not just a joke but a misuse of emergency response resources.

“The concern is that officers rush to the scene, lights flashing and sirens blaring, believing they are responding to a real threat, only to find out it’s a prank,” said a police officer from Yonkers, New York. “This not only wastes valuable resources but also endangers both officers and families when they arrive and have to deal with a situation that isn’t real.”

Greg Gogolin, a professor and director of cybersecurity and data science at Ferris State University, explained how easily these pranks can be executed. He mentioned that he created a program in just a few hours to demonstrate how AI can alter images.

According to Gogolin, the program he developed, called Face Swap, can realistically blend facial features from one image into another, making it hard to differentiate between what’s real and what’s generated.
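
At its core, this kind of blending composites pixels from one image into a region of another. Below is a minimal, hypothetical sketch of that alpha-compositing step in plain Python; it is not Gogolin's program, and real face-swap tools additionally align facial landmarks and use techniques such as seamless (Poisson) cloning. Images here are just small grayscale grids.

```python
# Minimal illustration of the compositing step behind a face swap:
# pixels from a source patch are alpha-blended into a target image.
# Images are represented as lists of lists of grayscale values (0-255).

def blend_region(target, source, top, left, alpha=0.5):
    """Return a copy of `target` with `source` blended in at (top, left).

    `alpha` controls how strongly the source patch shows through:
    1.0 replaces the target pixels outright, 0.0 leaves them unchanged.
    """
    out = [row[:] for row in target]  # copy so the original is untouched
    for i, src_row in enumerate(source):
        for j, src_px in enumerate(src_row):
            mixed = alpha * src_px + (1 - alpha) * out[top + i][left + j]
            out[top + i][left + j] = round(mixed)
    return out

face = [[200, 210], [220, 230]]        # 2x2 "face" patch
scene = [[50] * 4 for _ in range(4)]   # 4x4 uniform "photo" background
swapped = blend_region(scene, face, top=1, left=1, alpha=0.5)
```

Real tools apply this idea per color channel across millions of pixels, feathering the patch edges so no seam is visible, which is why the results can be so hard to distinguish from genuine footage.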

Such technologies can easily be misused once they are out in the world. “They sell it, they share it… It becomes decentralized, which introduces real risks because now anyone, regardless of their technical expertise, can manipulate it,” he added.

There are, however, some indicators that can help spot AI-generated images. Gogolin explained that common flaws include misaligned limbs or unnatural body proportions. Earlier AI images often produced bizarre anomalies, such as people with three arms or oddly shaped limbs, but improvements in the technology have made such flaws harder to spot.

He also stressed the importance of enhanced training for investigators and law enforcement regarding these technologies, noting that many local and state-level investigators lack formal training in cybersecurity or computer science.
