To combat the rise of deepfake videos and AI-generated fake content, YouTube has announced the launch of a new tool called “Likeness Detection.” This tool allows content creators to detect videos that feature their likeness or voice without permission and request their removal.
According to the platform, the tool allows creators—after verifying their identity—to access a dedicated tab within YouTube Studio. There, they can review videos that the system has identified as potentially containing artificial versions of their face or voice. If the video is found to be the result of AI manipulation or generation without authorization, they can submit a removal request directly through the tool.
Gradual Rollout and Global Expansion
YouTube stated that the first batch of eligible users has received email notifications, and the feature will be rolled out gradually to more creators worldwide in the coming months.
YouTube explained that the tool is currently available in a beta version within the YouTube Partner Program, as part of its efforts to protect creators’ digital identities and curb the misuse of artificial intelligence in creating fake videos.
How it Works and Verification Steps
Before activating the tool, creators must complete a biometric verification process. This involves uploading a photo of their ID card or passport, as well as recording a short video (selfie) in which they are asked to perform simple movements, such as turning their head or looking up, to confirm their identity.
After verification is complete, content creators can access the new dashboard to review suspected videos. The system displays details such as the title, the channel name, the number of views, and a snippet of the dialogue or audio used. They can then request the video’s removal, file a legal objection, or save the case for later review.
Limitations and Warnings
Despite the significance of this step, YouTube warned that the system is still in an early stage of development and may sometimes produce inaccurate results, such as flagging original videos uploaded by the creators themselves. The company also noted that some videos used in critical or satirical contexts may not be removed, owing to free-speech or fair-use considerations.
The tool works similarly to the Content ID system the platform has used for years to detect copyright infringements in audio and video, but this time it focuses on identifying individuals rather than the copyrighted content itself.
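YouTube has not published how its likeness matching works internally. Purely as an illustration of the general technique behind such systems, the sketch below compares face embeddings (numeric vectors produced by a recognition model) against a creator's reference embedding using cosine similarity; the function names, the toy vectors, and the threshold value are all hypothetical, not YouTube's actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_likeness_matches(reference: np.ndarray,
                          candidates: dict[str, np.ndarray],
                          threshold: float = 0.85) -> list[str]:
    """Return IDs of candidate videos whose face embedding is close
    enough to the reference to warrant human review.

    Hypothetical sketch: a real system would use embeddings from a
    trained face- or voice-recognition model, not toy vectors."""
    return [video_id for video_id, emb in candidates.items()
            if cosine_similarity(reference, emb) >= threshold]

# Toy 3-dimensional embeddings for demonstration only.
reference = np.array([0.9, 0.1, 0.4])
candidates = {
    "video_a": np.array([0.88, 0.12, 0.41]),  # very close to the reference
    "video_b": np.array([-0.5, 0.8, 0.1]),    # unrelated
}
print(flag_likeness_matches(reference, candidates))  # ['video_a']
```

In this framing, a match only queues the video for the creator's review dashboard; the removal decision stays with the creator and YouTube's review process, consistent with the workflow the article describes.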
YouTube first announced the idea for this tool late last year and began testing it in December through a pilot program in collaboration with the American talent agency Creative Artists Agency (CAA), whose clients include some of the world's most prominent public figures. The experiment aims to develop tools that enable influencers to identify videos that use their faces or voices in a deceptive manner.
This move comes amid increasing pressure from creators and regulators worldwide, particularly in the United States, where YouTube supports the NO FAKES Act, which would grant individuals the right to demand the removal of unauthorized digital copies of their faces or voices from the internet. The tool is expected to bring greater transparency and legal accountability to content published on digital platforms.
A Vision for the Future
This initiative is part of a broader strategy by Google and YouTube to address AI-generated content. Last March, the platform began requiring creators to clearly disclose any AI-generated or AI-modified videos, and it also banned synthetic music that imitates the voices of real artists.
While the debate surrounding the boundaries of creativity and imitation in the age of artificial intelligence intensifies, YouTube emphasizes that its goal is not to restrict the use of the technology, but rather to ensure it is not used to mislead the public or infringe on the identities of others.