Concerns Grow Over AI Chatbots Sharing Personal Information
AI chatbots are becoming accidental sources of private information, sharing real phone numbers with strangers.
Privacy experts are raising red flags about a troubling phenomenon known as “AI doxxing.” This occurs when bots like Google Gemini and OpenAI’s ChatGPT disclose personal contact details without permission.
One Reddit user recounted their unsettling experience, where Google’s AI allegedly began using personal numbers as placeholders for businesses and various services.
“I keep getting calls from people looking for lawyers, product designers, locksmiths — you name it,” the user shared. They noted that each caller stated, “I got your number from Google’s AI.”
The user described the situation as a “massive privacy invasion and data breach,” saying their phone now rings constantly with calls from confused strangers, significantly disrupting their daily life.
A representative from privacy firm ClearNym commented that the Gemini issue is not merely a glitch but stems from years of unchecked data brokering practices that fuel generative AI.
They elaborated that decades of accumulated personal data are now colliding with AI models trained on vast internet datasets.
“What results is a mix of exact copies, fabrications, and lately, phone numbers acting as placeholders for unfamiliar individuals,” the spokesperson warned.
Moreover, this isn’t just about random errors causing confusion.
Recent reports suggested that scammers have started posting fake customer service numbers online, which AI chatbots then mistakenly share with users, creating additional pitfalls.
“Fraudsters are aware that people often seek assistance urgently, and AI tools are giving them new chances to create misleadingly realistic phone numbers that can mislead individuals into contacting criminals instead of reliable service providers,” said Murray McKenzie, a fraud prevention director.
According to researchers at Aurascape, scammers are managing this by “seeding poisoned content” across the web.
“These attackers are subtly altering the internet that AI systems utilize,” remarked Qi Deng, the lead security researcher.
“If a user asks how to reach their airline, the AI might provide accurate info, but the customer support numbers could route them to scammers instead of the actual company.”
In some cases, the invasion is even more direct. In one instance, Gemini incorrectly listed the personal number of an Israeli software engineer as the customer support line for a payment app.
Researchers at the University of Washington also discovered that Gemini could reveal personal contact information unexpectedly.
Mayra Gilbert, a doctoral student, recounted her experience: “I was playing with Gemini, searching for my friend Yael Iger, and it surprisingly gave out my own mobile number.”
Her colleague, Yael Iger, pointed out that while this information might have been online for some time, it was previously buried deeply enough to be nearly untraceable.
“It feels vastly different for your information to be accessible to a limited audience than for it to be broadly available through Gemini,” Iger noted.
Rob Shavel, CEO of DellyTom, mentioned an uptick in complaints regarding AI leaking personal data, with customers reporting instances of chatbots revealing “exact home addresses, phone numbers, family member names, or employer details.”
A Google spokesperson stated that the company has safeguards to prevent personal details from appearing in its AI applications and said it is addressing deletion requests.
Yet, some users find it challenging to receive assistance.
“Filing standard support forms feels utterly futile,” the aforementioned Reddit user lamented. “We’ve received no response, and the harassment simply continues.”
These AI privacy issues arise as fraudsters increasingly leverage technology in alarming ways.
As reported earlier, authorities in Long Island cautioned that scammers are using AI voice cloning to impersonate victims’ grandchildren in urgent phone calls aimed at older individuals.
Allegedly, scammers are mining TikTok and other social platforms for videos of younger people speaking, allowing them to create realistic fake voices that demand bail or emergency cash.
Suffolk County Police Chief Kevin Catalina had previously stated, “They’re always attempting to remain one step ahead.” He warned that as AI technology advances, techniques are becoming “increasingly sophisticated,” resulting in elderly victims losing significant sums to convincing synthetic voices and fraudulent phone numbers.

