In a new era of CTV targeting, Ken Weiner, CTO at global technology and media company GumGum, explains why brands need to start thinking beyond the signposts of standard metadata.
Imagine you are a brand rolling out a billboard campaign in a major city. The billboards you buy in a particular neighborhood are going to be based on the people and characteristics of that specific locale – rather than the city as a whole. You could just approach it as a city-wide project, but you’d be missing the geographical nuance that would really make your messaging hit home.
It’s this distinction that is causing fracture lines in the burgeoning OTT market right now. In the rich new world of contextual CTV targeting, most advertisers are calling for just basic video-level content metadata – giving them a broad-brush overview of what any given piece of video content is about. What is lacking, however, is a more detailed understanding of different content segments that appear before or after each ad marker.
The result is ham-fisted: the equivalent of positioning your billboards anywhere in a city, rather than specific demographic areas. Like a placard in the wrong place, your ad may well end up next to content that is irrelevant or unsuited to its brand message – when just a few blocks (or ad markers) away, there could be the perfect match.
It’s understandable that brands, agencies and publishers are lining up for their share of connected TV spoils. With 82% of U.S. TV households owning at least one connected TV device, CTV ranks as the fastest-growing video advertising platform. Like all lucrative growth trends, however, it comes with teething problems, and players across the digital landscape should be mindful of these kinks.
Chief among them is the lack of adequate content metadata for CTV. Currently, CTV metadata is used to identify and categorize videos as a whole. Instead of taking this one-size-fits-all approach, I believe the industry should be going further and aiming for a mechanism I describe as “intra-video metadata”. This involves the creation of tags that provide rich contextual insight on the content directly before and after each ad marker within CTV video – where the brand message is actually going to appear.
Let me give you an example of how this could work. Say you have a two-hour Hollywood film going out on a premium CTV channel. A contextual vendor that uses AI to scan the video may find a scene involving gun-related violence at minute 15 and tag the video as a whole as ‘violent content’. This video-level metadata could mean that many brands would avoid advertising against the video entirely: but really it’s just one component that is potentially unsafe, with much of the video providing perfectly suitable inventory for advertisers.
With intra-video metadata, we could bring far more precision to this scenario. The metadata would flag exactly where certain scenes occur, including any acts of violence or explicit content that appear around ad markers and make them inappropriate for certain ads. Brands could avoid those points as required and advertise against other, more suitable markers within the video, without writing off the entire movie. In this way, intra-video metadata removes the sticking points that currently exist, creating a much more transparent, accurate and brand-safe approach to CTV targeting.
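To make the idea concrete, here is a minimal sketch of how such filtering could work. The tag schema, field names and time values below are purely illustrative assumptions for this example, not an existing industry standard:

```python
# Hypothetical sketch: choosing brand-safe ad markers using
# scene-level ("intra-video") metadata instead of one video-level tag.

ad_markers = [900, 2700, 4500, 6300]  # ad break positions, in seconds

# Scene-level tags a contextual scanner might produce: each tag
# covers a time range and carries a content category.
scene_tags = [
    {"start": 870, "end": 940, "category": "violence"},
    {"start": 2650, "end": 2710, "category": "travel"},
    {"start": 4480, "end": 4530, "category": "comedy"},
    {"start": 6280, "end": 6320, "category": "violence"},
]

def safe_markers(markers, tags, blocklist, window=60):
    """Return ad markers with no blocked category within `window`
    seconds before or after the marker."""
    safe = []
    for m in markers:
        nearby = [t for t in tags
                  if t["start"] - window <= m <= t["end"] + window]
        if not any(t["category"] in blocklist for t in nearby):
            safe.append(m)
    return safe

print(safe_markers(ad_markers, scene_tags, {"violence"}))
# → [2700, 4500]
```

With video-level metadata alone, the single violent scene would block the whole film; here only the two markers adjacent to flagged scenes are excluded, and the rest of the movie remains sellable inventory.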
Next steps, new standards
It’s still very early days for the emerging CTV ad ecosystem, and many agencies and brands are not even aware of the need for more precise metadata tagging – let alone the difference it could make in maximizing CTV alignment. Intra-video metadata is my take on how we could move things forward. As an industry, we need to start a debate on how we can define this new approach and develop standardized practices to ensure it is widely adopted and understood. Bodies like the IAB will obviously be pivotal in supporting this.
At the same time, players in the CTV media space will need to up their technology game and ensure they are working with the right partners to make intra-video metadata tagging possible. AI-based contextual intelligence has reached a point of advancement where CTV video content can be analyzed on a forensic, scene-by-scene basis, including the audio. This allows accurate metadata tagging to be performed at any point within a video.
The biggest drive for change, however, should come from the ground up. Advertisers should educate themselves on the limitations of standard CTV metadata and lend their voices to the drive for improved methods. If players on the demand side of the digital ad model recognize the need for precise metadata within CTV video, ad tech and publishers will quickly evolve to make it a central part of their offerings.
The need to invest in improved content metadata should be at the top of the agenda for our industry. If we want to target video segments with real precision, integrating intra-video metadata as a go-to standard will be critical. Only then will CTV reach its full potential, for publishers and advertisers alike.