Meta recently updated its AI chatbot, and many users may not be fully aware of the implications. The new “Discover” feed lets conversations be shared publicly, covering everything from legal issues to health topics, sometimes with names and photos attached. That makes privacy concerns far more pronounced, and if you’ve shared anything sensitive, it’s worth checking your settings now.
What is Meta AI? And what’s the “Discover” tab?
Meta launched its AI app in April 2025, intending it to function as both a chatbot and a social media platform. Users can engage in discussions on various personal matters, from relationships to finances.
The unique feature here is the “Discover” tab, which shows publicly shared conversations. It was designed to promote creativity, yet many users might not have realized they could publish their chats with just one tap. The interface often doesn’t make it clear whether a conversation is public or private.
This creates a kind of hybrid social network blending discussions, searches, and status updates. While that sounds innovative, it has led to potential privacy lapses.
Why the Discover Tab is a Privacy Concern
Privacy experts have raised alarms about the Discover tab, calling it a significant breach of user trust. Shared conversations can include sensitive discussions—like therapy-style confessions or legal matters—often linked to real profiles. While Meta says you’ll only see chats that users chose to share, publishing one can be as easy as pressing a button without realizing the consequences; many users assume those buttons save chats privately. And if you log in through a public Instagram account, your shared AI activity is automatically tied to that account, increasing the odds you can be identified.
Some posts can reveal private health or legal issues, or even relationship conflicts. There are cases where users plead for privacy, unaware their messages are being broadcast publicly. This isn’t just a rare occurrence; it may worsen as AI use for personal advice becomes more common.
How to Adjust Privacy Settings in the Meta AI App
If you’re using Meta AI, it’s crucial to regularly review your privacy settings to avoid sharing sensitive information unintentionally. Here’s how you can maintain your privacy:
On Mobile (iPhone or Android)
- Open the Meta AI App.
- Tap on your profile photo.
- Select Data and Privacy from the menu.
- Find options to manage your data or similar settings.
- Enable the option that ensures only you see your past public prompts.
On Desktop
- Visit Meta.ai and log in.
- Click your profile photo in the upper right corner.
- Select Settings, then Data and Privacy.
- Look for options to manage your data.
- To manage visibility, choose whether all prompts are shown publicly, or adjust individual entries from your history.
Checking or Updating the Privacy of Posted Prompts
You can adjust the visibility of prompts you’ve already posted, or even remove them entirely. Here’s how:
On Mobile
- Open the Meta AI App.
- Tap the history icon (often looks like a clock or message icon).
- Select the prompt you want to update.
- Tap on the three dots in the corner to choose privacy options or delete.
On Desktop
- Go to Meta.ai.
- Click Your Prompts on the left sidebar.
- Click the three dots in the corner to adjust visibility or erase the prompt.
If someone else responds to a prompt before it goes private, those replies will stay linked but won’t show unless you decide to share the prompt again.
Ensuring Privacy on AI Chat Platforms
Meta AI isn’t the only platform worth scrutinizing. Many AI chat tools, such as ChatGPT and Google Gemini, save conversations to improve their models and features, and users may not realize that human reviewers can read those interactions. When a service says your chat is “private,” it usually means it isn’t publicly accessible, not that it’s shielded from internal access.
If your account is registered with personal information, linking your activity to your identity can be easier than you expect. Combine that with sensitive conversations, and you’ve built a detailed digital profile that isn’t well protected.
Protecting Your Privacy with AI Chatbots
While AI tools can be beneficial, users should take precautions to protect their privacy:
1) Use aliases to avoid personal identifiers. Avoid sharing details that could link back to you.
2) Avoid sharing sensitive information. Keep medical, legal, and financial details private.
3) Regularly clear your chat history. Deleting past chats limits how much sensitive information is retained.
4) Frequently adjust your privacy settings. Regular updates may change default options, so check settings routinely.
5) Consider identity theft protection services. These services monitor your information and alert you to potential risks.
6) Use a VPN for added security. VPNs can help obscure your online activity and protect against hackers.
7) Avoid linking AI apps to real social accounts. If possible, use a separate email for AI interactions to keep your main profile private.
Final Thoughts
Meta’s choice to turn chatbot discussions into public content has indeed blurred the lines of privacy, catching many users unaware. It’s essential to pause and think before sharing anything sensitive. Regularly reviewing your privacy settings and chat history can prevent larger issues down the road.
With growing concerns about sensitive data exposure, do you think Meta is doing enough to safeguard your privacy, or should tighter regulations be introduced for AI platforms? Feel free to get in touch about your thoughts.