YouTube now requires labels for some — but not all — AI-generated videos
Some AI-generated videos will now require a label on YouTube
The policy was first announced by the company last November, but according to an update YouTube posted on Monday, the policy and its required compliance tools are now launching and will continue to roll out over the coming weeks.
“We’re introducing a new tool in Creator Studio requiring creators to disclose to viewers when realistic content – content a viewer could easily mistake for a real person, place, or event – is made with altered or synthetic media, including generative AI,” says YouTube.
YouTube is requiring that creators mark certain AI video content so the platform can affix an “altered or synthetic content” label on it. However, not all AI video content will need to be labeled.
According to YouTube, the policy covers only AI-made digital alterations or renderings of a realistic person, altered footage of real events or places, and fully generated realistic-looking scenes.
YouTube also explains what type of AI-generated content is exempt. For the most part, these exemptions are minor alterations that were possible well before the generative AI boom of recent years. These include videos that use beauty filters, special effects like blur or a vintage overlay, or color correction.
Potential pitfalls from YouTube’s AI labeling policy
There is one interesting and glaring exemption from YouTube’s new AI-labeling policy: animated AI content.
According to YouTube, animated content is “clearly unrealistic,” so it does not need to be labeled. The policy is meant to curb misinformation and the potential legal issues that could arise from generated versions of real people. It is not meant to be quality control, warning users when low-effort, AI-generated junk starts playing on their screen.
However, as Wired points out, YouTube is arguably dropping the ball here, because the policy leaves out the bulk of kids’ content: animated video.
Disturbing kids’ videos on YouTube have made headlines over the years, and the company has taken steps to address the problem. Often these videos appear to be pumped out as quickly as possible, without any educational intent or even basic checks for age-appropriateness.
Kid-oriented content on YouTube will be affected by this policy if creators push misinformation, because that would fall under the “realistic” portion of the new rules. However, bulk-generated animated AI junk, usually aimed at the youngest demographics, will not. YouTube seems to be missing an opportunity to have this type of content labeled so parents can easily filter it out.
All in all, YouTube’s new AI policy is a step in the right direction. Generative AI that could be misconstrued as real will be labeled, while filmmakers and creators who use AI to enhance high-quality content won’t be affected.
Still, it doesn’t appear that YouTube is yet dealing with the potential for low-quality, AI-generated content to flood the site and fundamentally change the platform. It may be forced to confront that reality, though, if and when it arises.