Consequences for failing to properly label content could vary, including video removal and demonetization
YouTube has introduced new guidelines for managing AI-generated content, particularly deepfakes. The rules fall into two main categories: a stricter set aimed at protecting the platform’s music industry partners, and a more lenient set that applies to everyone else. The company has made clear that creators will need to label realistic AI-generated content when they upload it, disclosing that it was made with artificial intelligence, especially in sensitive contexts such as elections or ongoing conflicts. These labels will appear in video descriptions and, for sensitive material, directly on the videos themselves. YouTube has not yet defined what counts as “realistic.” However, spokesperson Jack Malon indicated that more detailed guidance, complete with examples, will be provided when the policy takes effect next year.
Consequences for failing to properly label AI content could vary, including video removal and demonetization. It remains unclear, however, how YouTube will reliably identify unlabeled AI-generated videos, given that current detection tools are still unreliable. The situation becomes even more complicated for videos that use deepfakes to simulate real people, such as their voice or face. YouTube will allow takedown requests via an existing form, but will weigh several factors, including whether the content is parody or satire and whether the individual is a public figure. For AI-generated music that imitates an artist’s singing or speaking voice, however, no parody or satire exceptions will apply: channels producing AI covers of living or deceased artists may have their content removed.