Starting in mid-August 2018, advertisers will no longer be able to use self-selected sensitive content categories to restrict where their ads run on YouTube. Instead, they'll need to select one of the three exclusion categories (called "Inventory Types") listed below (a rough sketch of the tiers follows the list):
- Expanded – Covers all videos on YouTube and Google video partners that meet Google’s monetization standards
- Standard (Default Setting) – Covers ads across a range of content that’s appropriate for most brands, such as popular music videos, documentaries, movie trailers, etc.
- Limited – Covers ads on a range of content appropriate for brands with particularly strict guidelines – especially for inappropriate language and sexual suggestiveness
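To make the buckets concrete, here's a rough, purely illustrative sketch of the three tiers as described above, expressed as data. The names and structure are hypothetical and don't reflect any actual Google API; they simply restate the hierarchy, with "standard" as the default.

```python
from enum import Enum

class InventoryType(Enum):
    """Hypothetical model of YouTube's three "Inventory Type" buckets.
    Names mirror the descriptions above, not an actual Google API."""
    EXPANDED = "expanded"  # all monetizable YouTube / Google video partner content
    STANDARD = "standard"  # the default; content "appropriate for most brands"
    LIMITED = "limited"    # strictest; for brands with particularly strict guidelines

# Default applied when an advertiser makes no explicit choice.
DEFAULT_INVENTORY_TYPE = InventoryType.STANDARD

# Illustrative only: whether each tier still permits the "standard"-level
# content Google describes (moderate profanity, sexually suggestive themes).
ALLOWS_MODERATE_PROFANITY_OR_SUGGESTIVENESS = {
    InventoryType.EXPANDED: True,
    InventoryType.STANDARD: True,   # the default setting still permits this content
    InventoryType.LIMITED: False,
}
```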
These three options, while a bit more concise than the current setup, leave a great deal of room for ambiguity when it comes to content. The "standard" setting, for example, which serves as the default option, allows for the inclusion of the following not-so-safe content, per Google:
- Limited clothing in sexual settings, sensual dancing, moderate sexually suggestive behavior, or a music video containing sexual content
- Moderate profanity used in a non-hateful, comedic, or artistic manner, or a music video with frequent profanity
For brands looking to be as safe as possible, that's likely not content they'd want their ads running alongside. The terms "limited" and "moderate" in those descriptions are also quite vague. The previous format allowed you to exclude your brand from all content related to a specific subject, but the new default "standard" setting appears to permit a fair amount of content that might not be brand safe. This will push most advertisers to focus solely on the "limited" inventory setting, with whitelists, block lists and third-party brand safety tagging as backup.
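For illustration, a layered setup along those lines might look like the sketch below. Everything here is hypothetical (the list contents, the third-party flag, the function name); it's only meant to show the idea of stacking a strict inventory setting with placement-level checks, not a real Google or vendor API.

```python
# Hypothetical, illustrative sketch of layering brand safety controls on top
# of the "limited" inventory setting. None of these names map to a real API.

BLOCK_LIST = {"channel-with-past-issues"}                # placements to always exclude
WHITELIST = {"trusted-news-channel", "vetted-creator"}   # placements known to be safe

def placement_is_acceptable(channel_id: str, third_party_flagged: bool) -> bool:
    """Return True only if every safety layer clears the placement."""
    if channel_id in BLOCK_LIST:
        return False
    if third_party_flagged:   # third-party brand safety tagging as a backup signal
        return False
    # Strictest posture: only run on placements that were explicitly whitelisted.
    return channel_id in WHITELIST

print(placement_is_acceptable("vetted-creator", third_party_flagged=False))   # True
print(placement_is_acceptable("unknown-channel", third_party_flagged=False))  # False
```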
Additionally, Google has pledged to automatically apply exclusions for deplorable content. This should prevent ads from running alongside the most controversial topics, such as terrorism, nudity and recent tragedies. But keep in mind, this is a problem they've tried to stamp out in the past, with little success.
Past Brand Safety Concerns
YouTube took down more than eight million videos in late 2017 for violating its content guidelines. From October through December, roughly 6.7 million videos were flagged for review by automated processes alone, and roughly three out of four of those flagged videos were subsequently removed. Google has also put as many as 10,000 employees on manual, human review of YouTube content in an effort to curtail further issues.
But in reality, there's no surefire way to police the entirety of YouTube. The sheer amount of content added to the platform every minute and every day (400 hours and 576k hours, respectively) is almost unfathomable. Without error-proof natural language processing (NLP) and machine-based image and video recognition that can not only review but also comprehend, digest and categorize every single second of user-generated content, there's no true way to ensure 100 percent safety on a platform like YouTube.
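For a sense of scale, the per-day figure follows directly from the per-minute rate; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the upload figures cited above.
hours_uploaded_per_minute = 400
hours_uploaded_per_day = hours_uploaded_per_minute * 60 * 24  # 60 min/hour * 24 hours/day
print(hours_uploaded_per_day)  # 576000 -> the ~576k hours per day cited above
```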
Artificial Intelligence and Brand Safety
Google has noted that its algorithms take down 70 percent of violent extremist content within a few hours of upload. That's the highest-priority negative content the new automatic exclusions are designed to catch, but as noted, they're only catching and removing 70 percent of it.
Google knows there’s no way to truly put out this fire unless they invest even more heavily in preventative artificial intelligence and machine learning – with a focus on automation designed to protect as opposed to automation designed for profit.
But do these most recent changes signal a move in that direction? A direction that demonstrates they’re ready to truly put protecting their advertisers first? Not yet.
Keeping Advertisers Safe
Lumping all of YouTube's monetized content into three buckets doesn't seem like the answer advertisers are seeking. More specificity would have been the better course: more options to ensure brands know where their ads will be served and where they won't. Instead, this new format adds layers of ambiguity, which will lead to confusion. We'll continue to see advertisers denouncing YouTube after ending up alongside content they don't want associated with their brand, and we'll continue to see those same advertisers questioning how it could have happened.
With that said, these new "Inventory Type" categories seem designed more to deflect blame from Google when issues arise than to protect advertisers. To keep advertisers safe, Google needs to invest more heavily in the automation piece. But in all honesty, it's in Google's best interest to wait.
Google likely won’t fully invest in automation designed to protect until enough large brands decide to fully divest from their platform due to brand safety concerns. Once that happens, it will signal a sea of change. The moment at which preventative automation becomes a profit-driving mechanism for Google – one they can use to lure back the advertisers they’d driven away, while enticing others to spend even more.
Despite all of this, YouTube is still a platform of incredible value. It's still an environment with unparalleled reach. It's still an environment that allows access to, and engagement with, unique and sometimes hard-to-find audiences. And it's still an environment most advertisers should invest in.
It’s just not an environment most advertisers can fully trust – yet.