YouTube’s Deepfake Detection Tool Raises Concerns
Experts are raising concerns over YouTube’s new deepfake detection tool after a recent report highlighted policy language suggesting the feature could let Google use creators’ faces to train its AI models.
The tool enables YouTube users to submit videos of their faces, which helps the platform identify and flag any unauthorized deepfakes that misuse their likenesses.
Additionally, creators can request the removal of these AI-generated impersonations.
However, there’s a catch: the policy language for this safety feature would permit Google, YouTube’s parent company, to collect creators’ biometric data for training its AI systems, as CNBC reported earlier this week.
“The data that creators provide to enroll in our similarity detection tools is not, and has never been, used to train Google’s generative AI models,” clarified YouTube spokesperson Jacques Maron.
Maron emphasized that this information is solely for identity verification and specific functionality.
YouTube informed CNBC that it is revising the wording of its policy to eliminate any confusion but stressed that the policy itself will remain unchanged.
Tech companies are finding it challenging to implement new AI models while maintaining user trust.
YouTube introduced the deepfake detection tool in October to help creators protect their likenesses.
Amjad Hanif, who leads YouTube’s creator products team, mentioned to CNBC that the goal is to extend this feature to over 3 million creators in the YouTube Partner Program by the end of January.
To enroll, creators must upload a government-issued ID along with a video of themselves; the platform then uses that footage to scan the vast volume of new content uploaded to YouTube every minute for unauthorized uses of their likeness.
According to CNBC, Google’s terms allow public content to be used “to train Google’s AI models and build products and features,” including Google Translate and various cloud AI services.
When potential deepfakes are flagged, they are directed to the creator, who then has the option to request removal of the footage.
Hanif noted, somewhat surprisingly, that actual removal requests have been low: many creators seem satisfied simply to be aware of the deepfakes and don’t consider removal worth the hassle.
“The most prevalent response is, ‘I saw it, but that’s OK,’” he shared.
Experts in online safety suggest that the low takedown numbers may stem more from confusion regarding the new features than from a comfortable familiarity with deepfakes.
Third-party companies like Vermillio and Loti are increasing their efforts to assist celebrities in safeguarding their publicity rights amid the growing use of AI.
“As Google competes in the AI landscape, creators should really think about whether they want their likeness controlled by a platform rather than owning it themselves,” Vermillio CEO Dan Neely advised.
He remarked, “Your likeness will be one of your most valuable assets in the age of AI. Once you relinquish control, regaining it may never happen.”
Loti CEO Luke Arrigoni described the risks associated with YouTube’s current policies on biometric data as “huge.”
Both executives advised against using YouTube’s deepfake detection tools.
YouTube creators such as Mikhail Varshavski, known as “Dr. Mike,” are noticing an uptick in deepfake videos, particularly since the release of new AI video tools like OpenAI’s Sora and Google’s Veo 3.
Varshavski, who has more than 14 million YouTube subscribers, often debunks health myths and addresses inaccuracies in medical dramas.
He recounted first encountering a deepfake of himself on TikTok, in which he appeared to endorse a “miracle” supplement.
“I was taken aback because I’ve dedicated a decade to earning my audience’s trust, focusing on truth and sound health decisions,” he reflected.
He expressed concern over someone exploiting his likeness to deceive others into purchasing unnecessary or even harmful products.
Currently, creators lack avenues to monetize unauthorized use of their likenesses in fraudulent deepfake videos. Earlier this year, YouTube allowed creators to permit third-party companies to use their videos for AI training, but without any compensation.