Deepfake Technology Leads Consumers to Purchase Questionable Items

A shirtless man, holding a large carrot, promotes supplements that he claims will enlarge male genitalia. The emergence of generative AI has enabled the mass production of such videos with minimal oversight, giving their creators a cheap path to profit.

In several TikTok videos, carrots serve as euphemistic representations of male anatomy, cleverly sidestepping content moderation related to explicit language. “You’ll notice that the carrot has grown,” the muscular individual states in a robotic tone, directing viewers to an online purchase link.

“This product will change your life,” he adds, making unverified claims that the herbs in the supplement can boost testosterone and significantly increase energy levels.

A Bay Area deepfake detection service recently told AFP that these videos appear to be AI-generated. “As illustrated in this example, misleading AI-generated material can endanger consumer health by promoting supplements with exaggerated or unverified claims,” said Zohaib Ahmed, the company’s CEO and co-founder.

“We are witnessing AI being misused to disseminate false information.”

“A cheap way”

This trend highlights how fast-paced advancements in artificial intelligence have ushered in what some researchers term “AI dystopia,” a deceitful online realm designed to manipulate users into buying dubious products. These products range from untested dietary supplements—which can sometimes be harmful—to weight-loss solutions and sexual enhancement aids.

“AI acts as a valuable tool for those looking to churn out large volumes of content cheaply,” disinformation researcher Abby Richards stated. “It’s a cost-effective method to produce advertisements,” she added.

Alexios Mantzarlis, Director of Security, Trust, and Safety Initiatives at Cornell Tech, noted the rise of TikTok’s “AI Doctor” avatars and audio clips that promote questionable sexual health solutions. Some of these videos showcase purported testosterone-enhancing formulas made from ingredients like lemon, ginger, and garlic, racking up millions of views.

What’s even more concerning is the fast progression of AI tools, enabling the production of deepfakes featuring celebrities such as Amanda Seyfried and Robert De Niro. In one TikTok video promoting prostate supplements, a clip seemingly shows Anthony Fauci asking, “Can’t your husband do that?” but it’s actually a manipulated version using his likeness.

“Harmful”

Many of these altered videos are created from existing footage, enhanced by AI-generated voices that lip-sync to the modified dialogue. “Impersonation videos pose significant risks, as they undermine the ability to identify genuine accounts online,” Mantzarlis remarked.

Last year, he uncovered numerous ads on YouTube involving deepfake celebrities promoting branded supplements for erectile dysfunction, including figures like Arnold Schwarzenegger, Sylvester Stallone, and Mike Tyson.

The rapid production of these short-form AI videos means that even if platforms remove suspicious content, similar versions quickly resurface, creating a cat-and-mouse game.

Researchers say this situation poses unique challenges for regulating AI-generated content, requiring new solutions and more advanced detection methods. AFP’s fact-checkers have repeatedly encountered fraudulent ads on Facebook promoting various treatments, including those for erectile dysfunction.

Despite these warnings, many users still find these claims credible, highlighting the allure of deepfakes. “Scam marketing schemes and dubious sexual enhancement products existed before the internet,” Mantzarlis pointed out. “Generative AI has simply amplified this exploitation, making it cheaper and faster.”
