Chat & Ask AI app exposed 300 million messages due to a configuration error

Privacy Breach in Chat & Ask AI App

A mobile application named Chat & Ask AI, which boasts over 50 million users across the Google Play Store and Apple App Store, is facing serious scrutiny. Independent security researchers have reported that the app has unintentionally exposed hundreds of millions of private conversations conducted with its chatbot feature.

The leaked messages include highly personal and alarming inquiries. Users reportedly sought advice on topics like how to commit suicide painlessly, how to write a suicide note, as well as methods to manufacture meth and hack into other applications.

These were not casual questions; they represented complete chat histories linked to real users.

Discovery of Misconfiguration

The issue was uncovered by a researcher named Harry, who found that the app’s backend, built on Google Firebase, had a significant misconfiguration. This flaw allowed unauthorized access to the database, granting Harry access to around 300 million messages from over 25 million users. To gauge the extent of the breach, he analyzed a small sample of about 60,000 users and more than 1 million messages.
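The report does not publish the exact flaw, but a misconfiguration of this kind on Firebase typically means database security rules that allow any client to read data without authentication. As a hypothetical illustration, a Firebase Realtime Database left open looks like the first ruleset below, while the second restricts each user to their own records:

```json
// Insecure (hypothetical example): any client can read or write everything.
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

```json
// Safer pattern: each authenticated user can access only their own data.
// ("users" and "$uid" are illustrative names, not the app's actual schema.)
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

With rules like the first example, anyone who discovers the database URL can download its full contents, which is consistent with how a single researcher could retrieve hundreds of millions of messages.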

The compromised data reportedly included:

  • Full chat history with the AI
  • Timestamps for each conversation
  • User-assigned names for the chatbot
  • Custom settings users applied to the AI
  • Which AI model each user selected

This aspect is particularly concerning because many individuals treat AI chat interactions like private reflections, therapy sessions, or brainstorming discussions.

Storage of Sensitive Information by Chat & Ask AI

Chat & Ask AI does not operate as a standalone AI model. Instead, it provides a platform for users to engage with large language models offered by bigger organizations. The choices include well-known models from OpenAI, Anthropic, and Google, such as ChatGPT, Claude, and Gemini. While these companies maintain the underlying models, Chat & Ask AI is responsible for data storage, which is where the current issue arose. Cybersecurity experts note that this kind of misconfiguration is a well-known vulnerability.

Attempts to reach Codeway, the company behind Chat & Ask AI, for comment were unsuccessful before publication.

Implications for Everyday Users

Many assume that chatting with an AI is confidential. They share thoughts that they would hesitate to reveal publicly. However, if the application lacks secure data storage practices, conversations can become targets for malicious actors. Even in the absence of identifiable user names, chat histories can expose personal struggles, illegal activities, job-related secrets, and private relationships. Once made public, this data risks being copied or disseminated endlessly.

Staying Safe when Using AI Apps

There’s no need to abandon AI tools entirely. By making informed choices, users can minimize risks while still enjoying these applications responsibly.

1) Be Cautious with Sensitive Topics

It’s easy to feel secure while chatting with an AI, especially when seeking answers to distressing questions. Yet, not all applications can protect your conversations. Before discussing deeply personal issues, medical questions, or legal inquiries, understand the app’s data protection measures. If those are vague, consider seeking help from trusted professionals instead.

2) Review the App Before Installation

Look beyond downloads and ratings. Investigate the history of the app, the operating company, and whether the privacy policy comprehensively outlines user data protection.

3) Assume Conversations May Be Saved

Even when apps profess privacy, many still retain chat records for troubleshooting or improvement purposes. Act as if your messages could be stored indefinitely, rather than treating them as fleeting interactions.

4) Limit Linking of Accounts

Some apps allow login through Google, Apple, or email accounts, which can connect chat histories to real identities. If possible, keep your AI interactions separate from primary accounts used for work or personal matters.

5) Examine App Permissions

AI applications may request more access than necessary. Scrutinize permissions and disable those that are superfluous. If an app provides options for deleting chat histories or limiting data retention, take advantage of those features.

6) Utilize Data Deletion Services

Your online presence extends beyond AI apps. A basic internet search can uncover personal details such as phone numbers, addresses, and social security numbers, often sought after by marketers or scammers. Data deletion services can help mitigate your digital footprint, although no platform guarantees total erasure. They monitor and remove information from numerous websites, providing a layer of security and peace of mind.

Ultimately, while AI chat apps are evolving, security remains a major concern. This incident highlights how a single misconfiguration can expose millions of sensitive conversations. Until better protections are implemented, exercising caution with these tools is essential.

Your Thoughts

Has this incident shifted your perception of privacy in AI chat applications? We’d love to hear your views.
