
AI-Generated Reviews Deceive Humans and Detectors, Eroding Trust in Online Platforms

A recent study by Professor Balazs Kovacs of the Yale School of Management found that AI-generated restaurant reviews fooled both human readers and AI detectors, effectively passing a "Turing test" and raising concerns that the trustworthiness of online reviews could be seriously damaged.

Online reviews have become an integral part of consumer decision-making, with the majority of shoppers relying on them to make informed choices. However, the advent of sophisticated AI language models threatens to undermine the credibility of these reviews. Professor Kovacs conducted two experiments with a diverse group of 301 participants to investigate whether AI-generated reviews could fool human readers and AI detectors.

In Study 1, participants were shown a mix of real Yelp reviews and AI-generated counterparts created by OpenAI's GPT-4. Remarkably, they correctly identified the source only 50 percent of the time, no better than chance. Study 2, in which GPT-4 created completely fictitious reviews, produced even more surprising results: participants classified the AI-generated reviews as human-written 64 percent of the time.
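The study's raw counts aren't reproduced here, but the claim that 50 percent accuracy is "no better than chance" can be checked with an exact binomial test. The counts below are hypothetical stand-ins, not figures from the paper:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def two_sided_binom_test(k, n, p=0.5):
    """Exact two-sided binomial test: sum the probabilities of every
    outcome no more likely than the one observed."""
    observed = binom_pmf(k, n, p)
    return sum(binom_pmf(i, n, p) for i in range(n + 1)
               if binom_pmf(i, n, p) <= observed + 1e-12)

# Hypothetical: 151 correct identifications out of 301 judgments (~50%).
p_value = two_sided_binom_test(151, 301)
# A p-value near 1 means the result is statistically
# indistinguishable from participants flipping a coin.
```

A p-value this close to 1 is what "as good as chance" means formally: there is no evidence the participants could tell the reviews apart at all.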

Kovacs also tested a leading AI detector designed to distinguish human-written text from AI-generated text. He entered 102 of the reviews into Copyleaks, a publicly available AI text recognition tool, which labeled every one of them as human-authored, failing to flag any of the AI-generated content. When asked to rate the likelihood that each review was AI-generated on a scale of 0 to 100, even GPT-4 itself could not distinguish human-written reviews from those generated by its own model.

The findings have far-reaching implications for review platforms, businesses, and consumers. Malicious actors could exploit AI tools to generate fake reviews at scale, undermining trust in online platforms and unfairly harming small and medium-sized businesses that rely heavily on genuine reviews. The study serves as a wake-up call for review platforms to reconsider their authentication mechanisms and for policymakers to consider regulatory measures to enhance transparency.

Read more here.

Lucas Nolan is a reporter for Breitbart News, where he covers free speech and online censorship issues.

