Risky Business: Why It’s Time to #JustSayNoToKeywordTargeting

Zefr
Apr 13, 2020 · 5 min read

Over the last few weeks, the debate over brand safety blocking has intensified with the onset of COVID-19. As more quality publishers and journalists write articles with the coronavirus as a backdrop for the foreseeable future, it has become increasingly apparent that existing brand safety tools are blocking good content and cutting into quality journalism's revenue.

This conversation has laid the groundwork for two larger trends in brand safety: the lack of nuance in keyword technology on the open web, and the incredible progress of YouTube's brand safety policies.

Open Web’s Keyword Dilemma

In mid-March, the COVID-19 pandemic started to appear in nearly every piece of content on the open web, driving a huge increase in consumption of news content. The public needed the latest information to stay informed, educated, and ahead of what was happening in society. With more eyeballs on this content, a pattern began to emerge: high-quality journalism was being "blocked" for violating advertisers' brand safety terms.

The Wall Street Journal reported as much in its first exposé of how advertisers were avoiding coronavirus-related content. What surprised some was not that advertisers were avoiding content about a new pandemic, but the antiquated tools by which they did it. "Coronavirus," it was revealed, had overtaken "Trump" as a keyword blocked by brands.

Imagine every article that so much as mentions "COVID-19" or "Trump" losing all of its revenue, and it becomes apparent that this is not a sustainable long-term approach. The problem is the bluntness of keywords as a mechanism for blocking content, and the fear-based environment that requires them.

David Cohen, the newly appointed President of the IAB, called on brands to actively remove COVID-19 from their blacklists in an effort to stop the bleeding. Unfortunately, when the majority of pre-loaded keyword blacklists also include terms like "virus," "sickness," or "illness," removing "COVID-19" does little to mitigate the problem.
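To see why, here is a minimal sketch of how a pre-loaded keyword blacklist typically behaves. The terms and headline below are hypothetical and purely illustrative, not any vendor's actual list:

```python
# Illustrative only: a naive keyword blacklist of the kind many
# pre-bid blocking tools apply to page text or metadata.
blacklist = {"covid-19", "virus", "sickness", "illness"}

def is_blocked(text, terms):
    """Block the page if any blacklisted term appears anywhere in its text."""
    lowered = text.lower()
    return any(term in lowered for term in terms)

headline = "How local hospitals are coping with the coronavirus outbreak"

print(is_blocked(headline, blacklist))                 # True
# Removing "COVID-19" changes nothing: "virus" still matches inside "coronavirus".
print(is_blocked(headline, blacklist - {"covid-19"}))  # True
```

As long as broader terms remain on the list, any coronavirus coverage keeps getting demonetized, no matter which single keyword a brand deletes.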

Safety Beyond Keywords: YouTube’s Incredible Brand Safety Progress

As the open web struggles with safety challenges and definitions, it's time for the industry to acknowledge and applaud how far YouTube has come in terms of brand safety. Three years ago, when the first brand safety crisis hit, that progress would have seemed impossible given the platform's open nature and diverse creator community. But YouTube has achieved it, through a combination of human review and machine learning to remove unsafe content, with third-party measurement solutions providing additional comfort via post-campaign reporting. As a result, brands that have problems on the open web can now turn to YouTube as a safe haven.

Clients have become so comfortable with safety on the platform that the current third-party measurement taxonomy, which ranks placements and channels by risk level, is not considered a barrier to investment.

For the uninitiated: brands that opt into third-party brand safety verification receive a report at the end of every campaign listing the YouTube channels and videos their ads ran against. Each placement carries a label for the level of "risk" the advertiser was exposed to: low risk, moderate risk, or high risk. A "no risk" category does not exist with these third parties.

Obviously, no advertiser wants to advertise against risky content if given the choice. But thanks to the efforts of the platforms, most advertisers understand that these keyword-level reports aren't an accurate gauge of brand safety risk on YouTube.

Consider the following two examples of content labeled "moderate risk" by this legacy third-party keyword technology.

Example 1: DudePerfect

One of the most popular creators on the platform, DudePerfect has built an incredible community and following on YouTube, to the point where they were selling out arenas last year with their unique sports content. Brands not only don't want to blacklist them; they go out of their way to advertise with them. Yet in a third-party brand safety measurement report, their content is often labeled "moderately risky."

One recent third-party "risk assessment" labeled a DudePerfect video "moderate risk" simply because the word "Battle" appears in its title. But because advertisers understand the inherent safety and popularity of top creators like DudePerfect, they are able to disregard this keyword-driven verdict for the video.

Example 2: Bethany Mota

Similarly, a video from top creator Bethany Mota was labeled "moderately" risky. The video offers healthy back-to-school lunch ideas and advice from one of the platform's biggest stars. So what was deemed "risky"?

Perhaps the word "hash," as in "hashtag," buried deep in the metadata, which would mistakenly flag this video as a drug/alcohol risk. Clearly, the video is a perfect fit for a brand looking to align with some of YouTube's most engaging content, and advertisers understand that this risk assessment is not an accurate portrayal of their campaigns.
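Both examples come down to the same mechanics: blunt substring matching from a keyword list to a risk category. Below is a rough, hypothetical sketch of how that kind of labeling misfires; the category mappings and video metadata are invented for illustration, not any vendor's actual taxonomy:

```python
# Illustrative only: naive substring matching from keywords to risk categories.
RISK_KEYWORDS = {
    "violence": ["battle", "fight", "war"],
    "drugs_alcohol": ["hash", "weed", "beer"],
}

def assess_risk(title, description):
    """Return every risk category whose keywords appear anywhere in the metadata."""
    text = f"{title} {description}".lower()
    return [
        category
        for category, words in RISK_KEYWORDS.items()
        if any(word in text for word in words)
    ]

# A trick-shot "battle" video and a back-to-school video that mentions a
# "hashtag" both get flagged, despite being perfectly brand-friendly.
print(assess_risk("Epic Trick Shot Battle", "Family-friendly fun"))            # ['violence']
print(assess_risk("Healthy Back to School Lunches", "Use the hashtag below"))  # ['drugs_alcohol']
```

The keyword match carries no sense of context, so a title about trick shots and a description about hashtags read the same to the system as genuinely violent or drug-related content.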

Looking Ahead

As the issues on the open web have made clear, keyword technologies are ineffective at capturing nuance, which is increasingly problematic as quality journalism suffers under legacy technology. In response, legacy companies are trying to conflate brand safety with brand suitability. They are not the same: brand safety is about content that all brands should block; brand suitability is defined by what suits your particular brand. No brand wants risky content, but every brand wants to align with content suited to its needs.

By sunsetting legacy keyword strategies in favor of more nuanced technology, the ecosystem can adapt to the next generation of context without putting the future of free, ad-supported content on the line.

Rich Raddon is Zefr’s Co-Founder and Co-CEO.
