
AI girlfriend apps expose millions of private messages in major data breach


Millions of private messages thought to be secure are now public due to a major data breach involving two AI companion apps, Chattee Chat and GiMe Chat. This breach exposed over 43 million intimate messages as well as more than 600,000 images and videos. The discovery was made by Cybernews, a prominent cybersecurity research team known for tracking data breaches and privacy concerns globally. This incident highlights the risks we take when relying on AI companions for personal interactions.

Major Data Breach Affects AI Chat Users

On August 28, 2025, Cybernews found that Imagime Interactive Limited, a developer based in Hong Kong, had left its Kafka broker server completely unprotected. This security lapse allowed real-time chats between users and the AI to be streamed freely. The exposed data included links to personal photos and videos as well as AI-generated images, compromising around 400,000 users on both iOS and Android. Researchers characterized the content as “virtually unsafe for work,” revealing a disturbing disconnect between users’ trust and developers’ responsibilities.
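For context, Apache Kafka brokers accept unauthenticated plaintext connections by default; locking one down requires explicit configuration. The sketch below is illustrative only, not the app's actual setup: the property names are standard Kafka broker settings, while the port, file paths, and password are placeholder values.

```properties
# server.properties — illustrative hardening of a Kafka broker.
# By default Kafka listens for unauthenticated PLAINTEXT connections;
# exposing such a listener to the internet streams topic data to anyone.

# Accept only authenticated, encrypted connections
listeners=SASL_SSL://0.0.0.0:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512

# TLS key material (paths and password are placeholders)
ssl.keystore.location=/etc/kafka/keystore.jks
ssl.keystore.password=changeit

# Deny access to topics unless an ACL explicitly allows it
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```

A broker left on the default plaintext listener, reachable from the public internet, behaves exactly as described in this incident: anyone who finds the address can subscribe to its topics.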

Who Was Impacted?

Most of those affected were in the United States, with about two-thirds of the accounts linked to iOS and the rest to Android devices. While full names and emails weren’t exposed, IP addresses and unique device identifiers were, raising concerns about potential tracking and identification through other databases. On average, users sent about 107 messages to their AI companions, creating a significant digital footprint that could be misused for identity theft or harassment.

Financial Details and Developer Accountability

Some users reportedly spent as much as $18,000 on their AI companions, generating more than $1 million in revenue for the developer before the breach came to light. The company’s privacy policy suggested a commitment to user safety, yet Cybernews found no basic authentication or access controls on the server: anyone with the right link could access private chats, photos, and videos. It is a stark illustration of how exposed digital intimacy becomes when security is neglected.

How Was the Breach Discovered?

Cybernews promptly reported the exposure to Imagime Interactive Limited, and the servers were taken offline in mid-September. By then, however, they had already been indexed by public IoT search engines, making them easy for malicious actors to find. Experts still do not know whether cybercriminals accessed the data before its removal, leaving ongoing concerns about sextortion and other scams built on the leaked information.

Protecting Yourself from Future Breaches

This incident serves as a crucial reminder for anyone concerned about their online privacy, even those who haven’t used such AI apps.

1) Think Before Sharing

Be cautious about sending personal information to AI chat apps. Once you share, you lose control over that data.

2) Choose Reputable Tools

Select applications with clear privacy policies and a track record of security.

3) Use Data Deletion Services

Consider employing a service that helps erase your personal information from public databases. While not foolproof, such services can actively monitor and remove your data from various sites.

4) Install Strong Antivirus Software

Utilize robust antivirus solutions to protect against malware and phishing attempts, safeguarding your personal information across devices.

5) Employ a Password Manager and MFA

Use a password manager with multi-factor authentication to secure your accounts and check for any past breaches involving your email.
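Most password managers generate strong passwords for you; if you ever need to do it yourself, use a cryptographically secure random source rather than an ordinary one. A minimal Python sketch (the length and character set here are arbitrary choices, not a standard):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation.

    Uses the `secrets` module, which draws from the OS's secure random
    source, rather than `random`, which is predictable and unsafe for
    anything security-related.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_password()
```

A 20-character password drawn from roughly 94 symbols is far beyond practical brute-force range, which is exactly the kind of credential a manager can store so you never have to remember it.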

What This Means for You

AI chat apps may seem safe, but they handle sensitive data that can lead to various privacy risks if compromised. It’s important to use services that ensure secure encryption and clear privacy terms. If a company can’t adequately protect your data, it likely isn’t worth the risk.

Wrapping Up

This breach underscores a significant flaw in how many developers safeguard user data in AI chat applications. The burgeoning industry needs stricter security measures and accountability to avert such breaches. By understanding data flow and ownership, users can better protect themselves against future incidents.

