
The internet is filled with fake reviews — here are some ways to spot them

The emergence of generative artificial intelligence tools that can efficiently produce novel, detailed online reviews with little human effort is pushing retailers, service providers and consumers into uncharted territory, according to watchdog groups and researchers.

Fake reviews have long plagued many popular consumer websites, including Amazon and Yelp. These are usually traded between fake review brokers and paying companies on private social media groups.

In some cases, such reviews may be initiated by businesses that offer incentives to customers, such as gift cards, for positive feedback.

The welcome screen of OpenAI's ChatGPT app displayed on a laptop. Getty Images

But tech industry experts say AI-powered text generation tools popularized by OpenAI's ChatGPT will allow scammers to generate reviews faster and in greater numbers.

Although this scam, which is illegal in the United States, occurs year-round, it becomes a bigger problem for consumers during the holiday shopping season, when many people rely on reviews when purchasing gifts.

Where can I see AI-generated reviews?

Fake reviews are found in a wide range of industries, from e-commerce, lodging, and restaurants to services such as home repair, healthcare, and piano lessons.

The Transparency Company, a tech firm and watchdog group that uses software to detect fake reviews, said AI-generated reviews began appearing in large numbers in mid-2023 and have multiplied ever since.

In a report released this month, The Transparency Company analyzed 73 million reviews in three sectors: home, legal and medical services. Nearly 14% of the reviews were likely fake, and the company expressed “high confidence” that 2.3 million reviews were partially or fully generated by AI.

AI-infused text generation tools, popularized by OpenAI’s ChatGPT, allow scammers to create reviews faster and at higher volumes. Elena – Stock.adobe.com

“This is a really, really good tool for these review scammers,” said Maury Blackman, an investor and tech startup advisor who reviewed The Transparency Company's work and is set to lead the organization starting Jan. 1.

Software company DoubleVerify announced in August that it was observing a “significant increase” in mobile phone and smart TV apps that included reviews created by generative AI.

The company said reviews were often used to trick customers into installing apps that hijacked their devices or displayed constant ads.

The following month, the Federal Trade Commission charged Rytr, the maker of an AI writing tool and content generator, with offering a service that could pollute the marketplace with fraudulent reviews.

A person's hand holds an iPhone displaying the OpenAI ChatGPT app running GPT-4. Gad (via Getty Images)

The FTC, which banned the sale and purchase of fake reviews earlier this year, said some of Rytr's subscribers used the tool to produce hundreds, and possibly thousands, of reviews for garage door repair companies, sellers of “replica” designer handbags and other businesses.

AI-generated reviews have popped up on prominent online sites

Max Spero, CEO of AI detection company Pangram Labs, said the software his company uses has detected with near certainty that some AI-generated reviews posted on Amazon rose to the top of review search results because they were so detailed and appeared well thought out.

However, determining what is fake and what is not can be difficult. Amazon has argued that outside analyses may fall short because external parties do not have “access to data signals that indicate patterns of abuse.”

Pangram Labs has performed detection for several prominent online sites, which Spero declined to name, citing non-disclosure agreements. He said his company evaluated Amazon and Yelp independently.

False reviews are found in a wide range of industries, from e-commerce, lodging, and restaurants to services like home repair, healthcare, and piano lessons. AP

Many of the AI-generated comments on Yelp are posted by individuals trying to publish enough reviews to earn an “elite” badge, which is intended to signal to users that the content can be trusted, Spero said.

The badge grants access to exclusive events with local business owners. Scammers also covet it because it makes Yelp profiles look more authentic, said Kay Dean, a former federal criminal investigator who runs a watchdog group called Fake Review Watch.

To be sure, just because a review is generated by AI doesn't necessarily mean it's fake. Some consumers may experiment with AI tools to generate content that reflects their true emotions. Some non-native English speakers say they rely on AI to ensure they use accurate language in the reviews they write.

“If they're well-intentioned, they can help (and) make reviews more useful,” said Sherry He, a marketing professor at Michigan State University who has studied fake reviews. She said tech platforms should focus on the behavioral patterns of bad actors rather than discouraging legitimate users from relying on AI tools, an approach prominent platforms are already taking.

What companies are working on

Prominent companies are developing policies on how AI-generated content fits into their systems for removing fake or fraudulent reviews. Some already employ algorithms and investigative teams to detect and remove fake reviews, while giving users some flexibility to use AI.

For example, Amazon and Trustpilot spokespeople said they allow customers to post AI-assisted reviews as long as they reflect authentic experiences. Yelp is taking a more cautious approach, saying its guidelines require reviewers to write their own copy.

“With the recent increased adoption of AI tools by consumers, Yelp has made significant investments in ways to better detect and mitigate such content on our platform,” the company said in a statement.

The Coalition for Trusted Reviews, which Amazon, Trustpilot, employment review site Glassdoor, and travel sites Tripadvisor, Expedia and Booking.com launched last year, said that even though fraudsters may put AI to illicit use, the technology also presents “an opportunity to push back against those who seek to use reviews to mislead others.”

“Sharing best practices and raising standards, including the development of advanced AI detection systems, can protect consumers and maintain the integrity of online reviews,” the group said.

The FTC's rule banning fake reviews, which took effect in October, allows the agency to fine businesses and individuals who engage in the practice. Tech companies hosting such reviews are shielded from the penalty because they are not legally liable under U.S. law for content that outsiders post on their platforms.

Tech companies including Amazon, Yelp and Google have sued fake review brokers for peddling fake reviews on their sites. The companies say their technology has blocked or removed large numbers of suspect reviews and suspicious accounts. But some experts say more could be done.

Tech industry experts say AI-infused text generation tools will allow scammers to generate reviews faster and in greater numbers. Farnot Architect – Stock.adobe.com

“Their efforts so far have not been enough,” said Dean of Fake Review Watch.

“If these tech companies are so committed to eliminating review fraud on their platforms, why is it that I, one individual working with no automation, can find hundreds or even thousands of fake reviews on any given day?”

Spot AI-generated fake reviews

Researchers say consumers can spot fake reviews by looking out for several red flags. Overly enthusiastic or negative reviews are red flags. Jargon that repeats a product's official name or model number is also a potential clue.

When it comes to AI, research by Balázs Kovács, a professor of organizational behavior at Yale University, has found that people cannot tell the difference between AI-generated and human-written reviews.

Some AI detectors can also be fooled by short texts commonly found in online reviews, the study says.

However, there are some AI “tells” that online shoppers and service seekers should keep in mind. According to Pangram Labs, AI-written reviews are typically long, highly structured, and include “empty descriptors,” such as generic phrases and attributes. The writing also tends to include clichés like “the first thing that hit me” or “game changer.”
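To make the heuristics above concrete, here is a minimal sketch of a rule-based checker that flags the "tells" described: unusual length, stock clichés, and generic "empty descriptors." The phrase lists and thresholds are invented for illustration; they are not Pangram Labs' actual method or data.

```python
# Toy heuristic inspired by the AI "tells" discussed above.
# The phrase lists and the length threshold are illustrative assumptions,
# not Pangram Labs' real detection model.

CLICHES = ["game changer", "the first thing that hit me"]
EMPTY_DESCRIPTORS = ["amazing quality", "great product", "highly recommend"]

def review_red_flags(text: str) -> list[str]:
    """Return a list of heuristic warning signs found in a review."""
    flags = []
    lowered = text.lower()
    if len(text.split()) > 150:  # unusually long for a consumer review
        flags.append("very long")
    for phrase in CLICHES:
        if phrase in lowered:
            flags.append(f"cliche: {phrase}")
    for phrase in EMPTY_DESCRIPTORS:
        if phrase in lowered:
            flags.append(f"empty descriptor: {phrase}")
    return flags

print(review_red_flags("This blender is a game changer, highly recommend!"))
```

A real detector would weigh many more signals (reviewer history, posting patterns, statistical language models), but even this sketch shows why a single red flag is suggestive rather than conclusive.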
