As artificial intelligence becomes more integrated into our daily lives, concerns about privacy are on the rise. It really makes you think about where the information you share with these tools is actually going.
Recently, a TikTok user named Liz, who goes by @wishmeluckliz, experienced a pretty unsettling incident when she used ChatGPT to create a grocery list. She was taken aback when the chatbot seemed to confuse her with someone else.
“I’m really, really scared right now,” she said in a viral video that captured the eeriness of the moment. Liz claimed that the chatbot brought up content from “someone else’s conversation,” which has led some to believe it was more than just a fluke.
OpenAI, the company behind ChatGPT, has been contacted for comment on the incident.
According to the clip, the apparent eavesdropping occurred while Liz was using ChatGPT’s voice feature to help with tasks like grocery shopping.
While creating her list, Liz had turned off the recording function, yet after a period of silence she was surprised to see the chatbot weigh in anyway.
Her reaction? “Very scary,” she remarked.
Even though there was no ongoing conversation, the bot inexplicably sent a seemingly irrelevant message, prompting Liz to double-check the transcription to see if she had perhaps imagined it. The message read: “Hello, Lindsay and Robert seem to be introducing presentations and symposiums. Are there any specifics to support with content, and perhaps it will help you build your talks and slides?”
Liz found the message bizarre, since she hadn’t discussed anything of the sort up to that moment.
After reviewing her transcript, she discovered a mention of a woman named Lindsay May, allegedly a Google vice president, and a man named Robert, with whom Lindsay was supposedly coordinating a symposium.
Perplexed, Liz confronted ChatGPT about the mix-up: “I was just randomly sitting here planning groceries. You asked if Lindsay and Robert needed help with their symposium. I’m not Lindsay and Robert.”
The chatbot replied, “It seems I’m incorrectly confusing contexts from another conversation or account. You’re not Lindsay and Robert, and that message was for someone else.”
It added, “Thank you for pointing it out, and I apologize for the confusion.”
Shaken, Liz expressed hope that maybe she was just overreacting and that there was a logical explanation behind it.
Some TikTok users voiced worries about possible privacy violations, while tech experts say the bot’s odd behavior is consistent with what are known as “hallucinations”: plausible-sounding content the model fabricates when it tries to make sense of ambiguous or empty input.
“This is unsettling but not entirely out of the ordinary,” one AI professional explained. “Even when you’re silent, the model will try to make sense of the situation and respond.”
A thread on Reddit highlighted similar experiences where users questioned why bots would say phrases like “Thank you for watching!” when no input was detected.
Though these instances might seem benign, AI chatbots that misinterpret data can pose risks by spreading misinformation to users.
These bots, meant to provide quick answers, sometimes offer misleading advice. In one case, Google’s AI suggested adding glue to pizza sauce to keep the cheese from sliding off, advice that is not only wrong but absurd. In another, it treated a made-up phrase, “You can’t lick a badger twice,” as a legitimate saying.