Chatbot users may soon have their selfies scanned for signs of suspicious activity and checked for resemblance to public figures.
Video game enthusiasts recently pushed back hard on Discord’s pilot program in the UK, which shared personal data with the government through a firm named Persona that has ties to OpenAI. Frustration has been building as new age verification laws in many English-speaking countries make it difficult to find compliant verification partners. Now the situation may be escalating further.
Researchers have discovered publicly accessible code at OpenAI describing a system that uses facial data analysis to detect whether someone has assumed the identity of a deceased person.
Researchers from Vmfunc recently reported finding 53 megabytes of “unsecured source maps” intended for governmental use. According to their findings, users’ selfies are run through facial recognition, with any flagged activity reported to federal authorities.
This data is reportedly gathered via Persona’s Know Your Customer program, essentially serving as an identity verification service.
OpenAI has stated that it collaborates with Persona as a “trusted third-party company” to facilitate age verification, although Persona has noted its capability to “provide services to federal agencies where data loss could have limited adverse impacts.”
The Vmfunc report wryly dissected Persona’s elaborate verification pipeline for analyzing users’ selfies.
“You uploaded a selfie to use the chatbot. Well done!” the report quipped. “It’s currently being compared against a catch-all database of every politician, head of state, and their family trees globally. Each resemblance is graded: low, medium, or high. The software assesses, ‘Does this person resemble the deputy finance minister of Moldova?’ and records its conclusion.”
The analysis also includes 269 verification steps that involve matching users’ selfies with their IDs and other existing accounts.
Additionally, a ‘Celebrity Detection’ feature checks whether a user resembles a well-known figure, while a separate ‘Suspicious Person Detection’ check flags users whose appearance is deemed suspicious.
In total, there are 43 checks focused on government IDs and 27 checks that cross-reference Social Security numbers, phone providers, and death registries.
“269 checks. We anticipate using chatbots in 2026,” the researchers remarked.
Neither OpenAI nor Persona responded to requests for comment during this investigation. However, Persona founder Rick Song said the company is in contact with the researchers to address their concerns.
Attributing the “online crash” to misleading information, Song confirmed that conversations with Vmfunc are ongoing. He said OpenAI does not use Persona for “biometrics for watchlisting” or to identify politically exposed persons.
He further clarified that Persona retains data for up to three years, compared with OpenAI’s one-year retention policy.
For further information on how OpenAI manages user data, you can check out their privacy policy online.