How AI in Broadcasting Has a Hand in Improving Marketing

The role of video in marketing is on an upward trajectory, thanks to a wide range of new platforms offering short, engaging and often free content. According to a 2021 Statista survey of US consumers, more than 7 in 10 respondents indicated they did not subscribe to cable or satellite TV because they could access the content they wanted via other online options. Punctuating this finding, Deloitte reports that 4 in 10 US respondents in its recent survey say they spend more time “watching user-generated video content than they do TV shows and movies on video streaming services—a sentiment that increases to around 60% for Gen Zs and Millennials.”

There’s plenty of evidence echoing these findings, so it’s no surprise marketers are taking heed. But with online video viewership growing rapidly, producing video content and optimizing how it functions becomes a more interesting challenge for video producers. Modern consumers of video content demand more flexibility in how they use it. They want a steady stream of high-quality content for various screens and form factors. The content must instantly capture their interest and deliver its message (or even persuade them) within a few minutes.

Manually analyzing live streaming content to identify key highlights for promotional material and user sharing is time-consuming and requires expensive manpower. Smart content processing using artificial intelligence (AI), machine learning (ML), deep learning and natural language processing (NLP) is expected to introduce major disruptions in video content production and streaming. These technologies will impact streaming at every stage, from content creation and processing to post-production and consumption.


AI can perform mundane, repetitive tasks in a fraction of the time and with far fewer human resources than would otherwise be required. Technology giants have made headway in this space with Google Cloud Video Intelligence, Conviva’s Video AI Architecture, Nvidia’s DLA and IBM’s Watson technology. These offerings deploy AI in varying degrees, especially in the cloud.

Applications of AI in sports streaming, in particular, have jumped. Just consider that in the not-so-distant past, sports fans were glued to the radio for commentaries of matches. Then came television and live telecasts of sporting events from around the world. But for today’s sports enthusiasts, live telecasts of matches are not enough. They want to engage with the game and look for means that bring them closer to real-time action.

Modern AI and ML technologies provide truly immersive experiences to meet this growing demand. Key-moments packages with real-time highlights are the fastest-growing video segment, whether in sports, movies or television, with the video industry estimated to reach nearly $20 billion by 2023. AI can interpret streaming content and extract metadata by automatically generating descriptive tags, categories and summaries, enabling smarter analytics, content insights and better content management. Advanced solutions powered by AI and ML can identify specific game objects, constructs, players, events and actions, aiding near real-time content discovery and helping sports producers and marketers create highlight packages automatically while the game is in progress.
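To make the tagging idea concrete, here is a minimal sketch of automatic metadata tagging. The keyword map, `Segment` structure and tag names are illustrative assumptions; a production system would use trained vision, audio and language models rather than transcript keyword matching.

```python
from dataclasses import dataclass, field

# Hypothetical keyword-to-tag map; illustrative only.
TAG_KEYWORDS = {
    "ace": ["ace", "unreturned serve"],
    "break_point": ["break point", "breaks serve"],
    "interview": ["interview", "speaks to"],
}

@dataclass
class Segment:
    start_s: float        # segment start time in seconds
    end_s: float          # segment end time in seconds
    transcript: str       # speech-to-text output for the segment
    tags: list = field(default_factory=list)

def tag_segment(segment: Segment) -> Segment:
    """Attach descriptive tags by matching transcript text against keywords."""
    text = segment.transcript.lower()
    for tag, keywords in TAG_KEYWORDS.items():
        if any(k in text for k in keywords):
            segment.tags.append(tag)
    return segment

seg = tag_segment(Segment(120.0, 135.0, "An ace down the middle!"))
print(seg.tags)  # ['ace']
```

Once every segment carries tags like these, downstream tools can filter, group and assemble highlight packages without a human scrubbing through the footage.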

The application of AI to capture key moments in live streaming sports can be illustrated with a real-life example. During its broadcast of a major Grand Slam tennis event, a sports media production platform used AI to build interaction with viewers on its app, engage them extensively and deliver relevant real-time content instantaneously. An AI-driven platform identified the context of the content and mapped it across the broadcast. The AI identified nine key context data points, including tennis modalities through ball and player tracking, court mapping, audio interpretation, player reactions, crowd reactions and more. For instance, the AI engine identified when the receiver did not return a ball by tracking the ball’s movement and its interaction with the racquet. It also identified and tagged all match interviews in the live broadcast stream.
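The unreturned-ball logic above can be sketched as a simple event classifier. The event types and structure here are hypothetical; in a real system they would be derived by computer-vision models tracking the ball and racquets in the broadcast feed.

```python
def classify_point(events: list) -> str:
    """Label a point from a time-ordered list of tracking events.

    Each event is a dict like {"t": seconds, "type": ...} where type is
    one of "serve", "receiver_contact" or "ball_out_of_play" (assumed
    event vocabulary). If the ball goes out of play with no receiver
    contact after the serve, the point is an unreturned serve, i.e. a
    candidate key moment.
    """
    served = False
    for ev in events:
        if ev["type"] == "serve":
            served = True
        elif ev["type"] == "receiver_contact" and served:
            return "rally"
        elif ev["type"] == "ball_out_of_play" and served:
            return "unreturned_serve"
    return "unknown"

print(classify_point([
    {"t": 0.0, "type": "serve"},
    {"t": 1.2, "type": "ball_out_of_play"},
]))  # unreturned_serve
```

Points labeled `unreturned_serve` can then be clipped and pushed into a highlights feed while the match is still in progress.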


This seamless contextualizing and automatic metadata tagging of live broadcast streams yielded rich, data-led video that could be accessed as easily consumable bite-size pieces and turned into content assets: teasers, highlights, player-focus reels and key-moments packages. In this case, 490 hours of live broadcast stream were analyzed, and 365 hours of key moments and highlights were identified and repurposed as smaller bite-size content. This AI-engine-based video processing saved 70% of manual editing time and achieved 80% cost savings. It also improved efficiency by reducing the chance of human error. The largest impact was a 120% increase in audience engagement.

Unstructured content accounts for 90 percent of all digital information, locked away in various formats, locations and applications across separate repositories. When connected and used properly, such information can increase revenue, reduce costs, address customer needs more efficiently or help bring products to market faster. Many media companies are exploring ways to unify search and metadata across their digital information assets. Manual metadata tagging is laborious and inefficient given the sheer volume of content. ML-aided metadata extraction and analytics can unlock the power of these vast information assets and put them to efficient use.
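A unified metadata search can be sketched as an inverted index from tags to assets. The asset IDs and tag names below are illustrative, not from any real catalog; at scale this role is played by a search engine rather than in-memory dicts.

```python
from collections import defaultdict

def build_index(assets: dict) -> dict:
    """Map each tag to the set of asset IDs carrying it (inverted index)."""
    index = defaultdict(set)
    for asset_id, tags in assets.items():
        for tag in tags:
            index[tag].add(asset_id)
    return index

def search(index: dict, *tags: str) -> set:
    """Return assets tagged with ALL of the given tags."""
    results = [index.get(t, set()) for t in tags]
    return set.intersection(*results) if results else set()

# Hypothetical clip catalog with ML-extracted tags.
assets = {
    "clip_001": ["ace", "set_point"],
    "clip_002": ["interview"],
    "clip_003": ["ace", "crowd_reaction"],
}
index = build_index(assets)
print(sorted(search(index, "ace")))  # ['clip_001', 'clip_003']
```

The same index serves editors assembling packages and viewers searching the app, which is the "unified search" the paragraph above describes.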

The adoption of bite-size videos is increasing across sports broadcasting. Cutting-edge technologies such as AI, ML, NLP and predictive analytics amplify audience engagement through breakthroughs such as automated real-time creation of highlights and key-moments packages and advanced video analytics. Cloud-agnostic automated meta-tagging keeps videos meticulously organized, helping audiences find relevant content quickly and efficiently. Companies that use video content to engage customers would be smart to include AI-powered solutions in their content strategy. Doing so can deliver high audience engagement at a much lower labor cost while significantly cutting production time and increasing profits.


Meghna Krishna

Meghna Krishna is the Chief Revenue Officer at Videoverse, where she is responsible for revenue maximization along with facilitating strategic partnerships and innovative marketing solutions for the company’s overall growth. Her leadership shines through her ability to build passionate teams focused on delivering the best-quality services to clients. Meghna has a proven track record of more than two decades running successful businesses while building and developing teams from scratch. Before joining Videoverse, she worked across three continents in industries including retail, e-commerce, travel and SaaS, giving her an extensive understanding of global and national market trends. This experience enables her to effectively manage investor and stakeholder relations to supercharge business growth. Meghna holds an MBA from INSEAD and lives in Delhi with her family. She enjoys traveling and learning about other cultures.
