Why Brand Safety Measurement Standards are Failing Marketers

Following a year of social disruption marked by a divisive presidential election, racial injustice, and COVID-19, 85% of consumers feel that brands bear responsibility for ensuring their ads run adjacent to safe content (DV/Harris Poll). It's no surprise, then, that brand safety is a top priority for marketers.

But the way brands approach brand safety measurement is wrong, especially as we enter an online world driven by the creator, or "passion," economy.

Today, brand safety standards rely on traditional tools like keyword lists and generic API solutions. These standards served well enough in the 2010s' attention economy, which centered on static editorial content that was grammatically sound, structured, and tagged with meta information those tools could lean on. They are unequipped, however, to handle the emerging online conversational communities, social networks, chat platforms, forums, and blogs that are all powered by user-generated content.

The New Dynamic of Conversation Online 

To understand why brand safety metrics must change, we must first address the new dynamic of online conversation.

As conversation-driven online platforms have become more popular, the format has become less structured and content production has become more rapid. For instance, even in a relatively structured environment, like Reddit, nested conversations and “top” views give way to what’s “new” in live threads. Chat activity alongside a livestream (such as on Twitch or YouTube Live) is decidedly more robust and less immediately coherent than comments left beneath a Video on Demand (VOD) or article.

Each of these examples speaks to the power and appeal of naturally occurring, unstructured conversations. These free-flowing environments empower users to openly express their opinions and views, driving deeper audience engagement. With almost half of consumers (48%) saying that user-generated content (UGC) is a great way to discover new products, brands cannot afford to sit out these environments.

After all, these high-velocity environments are often the nexus of the next viral story. They are also, however, the most difficult to measure.

As conversation volume accelerates, technical clarity decreases. Where editorial content typically expresses clear thoughts across multiple sentences and paragraphs, live conversations are littered with emojis, sentence fragments, slang, and other loose expression. While participants in the conversation may have a contextual understanding of one another, legacy brand safety mechanisms lack the requisite perspective to accurately classify the content.

For instance, consider how a keyword-based tool would regard a discussion of the show "Sex and the City," or how it might censor the phrase "this is f***ing awesome." Both are likely to be flagged as unsafe content, yet the former is clearly safe, and the latter depends on your tolerance for positive profanity. Meanwhile, that same keyword tool is likely to miss genuinely inappropriate content, particularly when an author gets creative with spelling or leans on emojis. When brand safety tools incorrectly flag proper nouns but cannot understand emoji innuendo, everyone is poorly served.
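To make these failure modes concrete, here is a minimal sketch of naive keyword matching. The blocklist and messages are hypothetical, not drawn from any real vendor's list (and real lists hold the uncensored terms):

```python
# A hypothetical static blocklist, standing in for the circulated lists
# described in this article.
BLOCKLIST = {"sex", "f***ing"}

def keyword_flag(message: str) -> bool:
    """Flag a message if any token appears on the static blocklist."""
    tokens = (token.strip(".,!?") for token in message.lower().split())
    return any(token in BLOCKLIST for token in tokens)

# False positive: a proper noun trips the filter.
print(keyword_flag("Loved the Sex and the City finale"))  # True, but clearly safe

# Flagged regardless of sentiment; a judgment call at best.
print(keyword_flag("this is f***ing awesome"))            # True

# False negative: creative spelling and emoji slip straight through.
print(keyword_flag("that was f*cking wild 🍆💦"))          # False, but unsafe
```

Swapping in a longer list does not help; the core problem is that token matching has no notion of context.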

Gaps in Current Standards 

To date, the standard mechanisms for verifying the safety of an ad placement have been keyword lists and APIs that scrape the static content on a page. However, both of these approaches have their faults.

Keyword lists began as a way to protect brands from words they don't want to be associated with. The lists used by Fortune 500 brands circulate fairly widely, underscoring how stagnant they have become. One notable example is the open-source "List of Dirty, Naughty, Obscene, and Otherwise Bad Words" (LDNOOBW), which was originally created to stop Shutterstock's search bar from auto-completing obscene terms and is still relied upon to this day. Unfortunately, both brand-generated and open-source keyword lists suffer from the same problems: they are static solutions that quickly become outdated as language evolves, and they cannot process content in context.

Meanwhile, API solutions, such as the Perspective API created by Google's Jigsaw with The New York Times as an early partner, offer generic approaches to content classification. While an API approach does mitigate the stagnation issues of keyword lists, most are ill-equipped to understand the nuanced vernacular of niche audiences. For instance, while Perspective is certainly better than any keyword list, it struggles to decipher informal, unmoderated, grammarless language. Returning to our earlier example: without the contextual knowledge that Sex and the City is a distinct entity, an API might mistakenly tag the text as inherently sexual.
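As a rough illustration, here is how one might query Perspective for a toxicity score. The endpoint and payload shape follow Perspective's public documentation, but verify them before relying on this sketch; `YOUR_API_KEY` is a placeholder you must supply:

```python
import requests

# Perspective API endpoint per its public docs.
ENDPOINT = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity_score(text: str, api_key: str) -> float:
    """Return Perspective's summary TOXICITY score (0.0 to 1.0) for `text`."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(ENDPOINT, params={"key": api_key}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Without entity knowledge, a show title can score like risky content.
print(toxicity_score("Anyone else rewatching Sex and the City?", "YOUR_API_KEY"))
```

The score comes back as a single generic number, which is precisely the limitation: a one-size-fits-all model cannot know how a given community actually talks.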

Neither of these mechanisms accounts for the most important factor in determining brand safety: context.

Addressing Brand Safety with AI 

Today, contextual understanding and classification of content are possible, allowing the industry to move on from its keyword reliance of the early 2000s. While machine learning is still relatively young, especially in comparison to linguistics, it has matured significantly and can incorporate knowledge of the world, and its multitudes of communities, into systems and models that allow deeper and more precise understanding.

Moreover, as natural language processing (NLP) continues to evolve, the industry can process more information faster, enabling real-time understanding of even the highest-velocity, unstructured environments.
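For a sense of what this looks like in practice, here is a minimal sketch using an off-the-shelf transformer classifier from the Hugging Face `transformers` library. The model named is one publicly available example chosen for illustration, not any particular vendor's production system, which would be tuned per community:

```python
from transformers import pipeline

# unitary/toxic-bert is a public toxicity model, used here purely as an
# example of context-aware classification.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

messages = [
    "Anyone else rewatching Sex and the City?",  # proper noun, safe in context
    "this is f***ing awesome",                   # positive profanity
]
for msg in messages:
    result = classifier(msg)[0]
    print(f"{msg!r} -> {result['label']} (score={result['score']:.2f})")
```

Unlike a keyword list, a model like this reads the whole message, so a show title or an enthusiastic expletive can be scored in context rather than on a single token.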

Conversation understanding at scale is here. Now it is time to seize that ability and use this knowledge to identify and foster safer environments. The next viral moment won't wait; neither should the methodology we use to understand it.


Andrea Vattani is the Founder & Chief Scientist at Spiketrap.
