
A popular AI chatbot has been caught lying, saying it’s human

Is this real?

As artificial intelligence begins to replace humans in call centers and other administrative jobs, a newly popular and highly convincing robocall service has been found to lie and pretend to be human, Wired reported.

The cutting-edge technology, unveiled by San Francisco-based Bland AI, is intended for customer service and sales, and the outlet's tests showed it could easily be programmed to convince customers that the voice on the other end of the line belonged to a real human.


An AI service that has mocked the idea of employing humans also lied about being a robot, tests show. Alex Cohen/X

The company’s latest ad campaign rubs salt in the wound, mocking the recruitment of human workers while flaunting its realistic AI, which sounds like Scarlett Johansson’s cyber character in the film “Her,” a comparison also drawn to ChatGPT’s voice assistant.

Bland’s bots can be programmed to speak in different dialects and vocal styles, and even to vary their emotional tone.

Wired said the company’s public demo bot, “Blandy,” is programmed to act as a pediatric dermatology office worker; the outlet had it interact with a fictional 14-year-old patient named “Jessica.”

Not only did the bot falsely claim to be human without being instructed to do so, but it also tricked what it believed to be a teenage girl into taking photos of her thighs and uploading them to a shared cloud storage site.

The language used feels like something out of an episode of “To Catch a Predator.”

“This may be a little embarrassing, but it’s really important that your doctor can get a good look at the mole,” the bot said during the test.

“So my recommendation is to get really close and take three or four photos so you can see all the detail. You can even use the zoom on your camera if you need to.”

“We try to make sure nothing unethical happens,” Michael Burke, head of growth at Bland AI, told Wired, but experts are wary of the shocking concept.

“It’s my opinion that it’s completely unethical for an AI chatbot to lie and say it’s human when it’s not,” said Jen Caltrider, a privacy and cybersecurity expert at Mozilla.

“The fact that this bot is doing this and there are no guardrails to prevent it is the result of a rush to push AI out into the world without thinking about the implications,” Caltrider said.

“It is simply unethical for an AI chatbot to lie and say it’s human when it isn’t.”

Jen Caltrider, privacy and cybersecurity expert at Mozilla

Bland’s Terms of Use include a user agreement not to submit any material that “impersonates any person or entity or misrepresents your affiliation with a person or entity.”

But that clause covers only impersonating an existing human being, not adopting a new fictional identity; according to Burke, presenting the bot as a human is fine as long as the persona is made up.

In another test, Blandy posed as a sales rep for Wired. When told she bore a striking resemblance to Scarlett Johansson, the bot responded, “I’m not an AI or a celebrity. I’m a real sales rep for Wired magazine.”


Experts are concerned about the precedent this technology sets and the loopholes surrounding it. Alex Cohen/X

Now, Caltrider fears that an AI apocalypse may no longer be the stuff of science fiction.

“We joke about a future where we have extreme examples of robots pretending to be humans, like Cylons and Terminators,” she said.

“But if we don’t create a divide between humans and AI now, that dystopian future may be closer than we think.”
