AI Blunders and How Marketers Can Avoid Them

As marketers find more practical AI use cases for interacting directly with consumers, they’ll also find more risk that the tech will make embarrassing missteps in their brand’s name.

Inaccurate or biased ad targeting, off-brand content generation, gaps in data privacy — these are all blunders that brands have been caught making, courtesy of their AI tools. For any brand with a broad presence and countless localized consumer touchpoints, that looks like a lot to manage. But marketers can sidestep these risks, and without heavy lifting, by implementing AI incrementally and transparently.

High-profile AI blunders — and cases of AI done right

When generative AI hit the mainstream in the form of tools like ChatGPT, DALL-E, and Midjourney, major businesses were eager to position themselves as tech trailblazers. But high-profile AI blunders soon followed, and they now serve as cautionary tales. To wit: MSN ran a clearly AI-generated obituary for a professional basketball player, complete with mangled phrasing and an inappropriate tone, and drew considerable bad press for it. Famously, a New York Times reporter manipulated Bing’s AI chatbot into telling him it loved him — not a good look for the AI’s trustworthiness. Some sites — particularly those that rely heavily on SEO for traffic — have been called out for running variations on the same headline. And while Google has said generative AI content is acceptable as long as it isn’t written just to rank, the company has visibly struggled to tell quality content from the rest.

Meanwhile, other leading businesses have taken steps to respect the user experience, and to soften any reputational effects of their AI tools’ current blind spots. Google tells the consumer its implementation of AI is experimental, and the consumer needs to opt into using it. Meta launched an experience where users can interact with AI versions of celebrities — including one where Snoop Dogg portrays a Dungeons and Dragons Dungeon Master, answering only questions about Dungeons and Dragons. This way, Snoop’s own brand is protected by narrow guardrails from having inappropriate words put in his mouth.

Realistically, brand voice can be managed easily in AI applications, as long as the brand can feed the AI a large volume of language in that voice. But understanding company policy is another story. What if a consumer raises a complaint to an AI chatbot, and the chatbot offers a $10 discount, but that’s not company policy? Such missteps could become a real PR issue when the consumer decides to take their experience to social media.
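One practical defense against off-policy promises like that hypothetical $10 discount is a post-generation policy check that screens the chatbot’s draft reply before the consumer sees it. Below is a minimal sketch in Python; the blocked patterns, function name, and escalation message are illustrative assumptions, not any specific vendor’s API, and a real deployment would pair rules like these with model-based moderation and human review.

```python
import re

# Hypothetical policy rules: offers the chatbot must never make on its own.
BLOCKED_PATTERNS = [
    r"\$\d+(\.\d{2})?\s*(off|discount|credit)",  # unauthorized monetary offers
    r"\bfull refund\b",                          # refunds require a human agent
]

ESCALATION_REPLY = (
    "Thanks for your patience! I'm connecting you with a team member "
    "who can resolve this directly."
)

def policy_check(draft_reply: str) -> str:
    """Return the draft if it passes policy; otherwise escalate to a human."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, draft_reply, flags=re.IGNORECASE):
            return ESCALATION_REPLY
    return draft_reply

# The off-policy discount is caught and replaced with a handoff to a person.
print(policy_check("Sorry about that! Here's a $10 discount on your next order."))
print(policy_check("Sorry about that! Let me look into your order status."))
```

The design choice here is to fail toward a human handoff rather than toward silence: the consumer still gets a timely response, and the brand keeps discount authority where company policy says it belongs.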


Brands can fill in consumers’ AI knowledge gaps

SOCi’s 2024 Consumer Behavior Index (CBI) found that the share of consumers who say they feel well-informed about AI (61%) is nearly double the share who say they’ve actually used AI tools (31%). In other words, consumers aren’t getting their information from direct experience alone, but from news coverage and word of mouth. Some of that news is accurate and well-contextualized; some of it is sensationalistic or over-simplified. And however reliable the information, the typical consumer has no obligation to follow AI trends closely.

Savvy brands can turn AI into an educational opportunity for their consumers and correct consumers’ misunderstandings about the technology. At the same time, brands can learn from their own consumer relationships to understand which kinds of AI experiences enhance which touchpoints. Consumer-facing AI tools vary in sophistication, but they always need to provide a useful experience and do what they’re intended to do. If a consumer simply wants a question answered, for example, but the customer service AI tries to sell or upsell them a product or service, that’s poor UX, and the brand needs to amend how the AI functions at that touchpoint.

Brands need a transparency checklist

Consumers want brands to be transparent about how they use AI in local marketing. That’s complicated for brands, because the disclosures they provide can vary even on a site-by-site or app-by-app basis. Yet disclosures are a must: transparency means the consumer is informed of when and how they are interacting with AI. And the UX of those disclosures must be carefully considered and trialed.

Let’s consider how tech giants have historically approached disclosures. Apple offers a long list of terms and conditions upfront, which consumers tend to trust and accept. Then users can control location-sharing in apps on a case-by-case basis. Facebook permits users to either allow or deny permissions to countless advertisers – but those advertisers must be selected one at a time. These are two very different forms of disclosures and permissions, but they’re both valid and compliant.

Don’t expect government regulators to step in and issue guidelines for AI transparency anytime soon. As recent congressional hearings with tech executives have shown, legislators are generally not AI experts. But your brand needs a consistent compliance policy regardless. One crucial task is to understand the kinds of questions consumers will be asking AI tools — and to locate and accurately respond to consumer comments, reviews, and questions wherever on the web they call for a response from the business.

Next steps for providing AI clarity and guidance to consumers

Only the largest brands have the internal resources to become AI experts. Any business’s top responsibility is to be the best at its core product or service, not to optimize the UX and performance of AI tools. So most brands will need a partner that deeply understands AI. Remember that whenever your business starts using new AI tools, the output won’t be perfect right away. Trial them with a small sample, and get a feedback loop running on data from opted-in early adopters. Make consumers aware of the value exchange these tools offer: that deepens consumer trust and draws the input that continuously trains the AI. These efforts will help minimize inaccurate, biased, and off-brand AI output.
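The trial-and-feedback approach described above can be sketched as an opt-in gate over a small, deterministic sample of users, plus a log of their feedback for tuning the tool. This is a hypothetical illustration; the class, field names, and 5% sample figure are assumptions, not a specific product’s feature-flag system.

```python
import hashlib

class StagedRollout:
    """Gate a new AI feature to a small opted-in sample and collect feedback."""

    def __init__(self, feature: str, sample_pct: float):
        self.feature = feature
        self.sample_pct = sample_pct      # e.g. 0.05 = 5% of opted-in users
        self.feedback: list[dict] = []

    def is_enabled(self, user_id: str, opted_in: bool) -> bool:
        if not opted_in:                  # transparency first: no consent, no AI
            return False
        # Deterministic bucketing: the same user always gets the same answer.
        digest = hashlib.sha256(f"{self.feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF
        return bucket < self.sample_pct

    def record_feedback(self, user_id: str, rating: int, comment: str = "") -> None:
        # Stored feedback becomes the training signal for improving the tool.
        self.feedback.append({"user": user_id, "rating": rating, "comment": comment})

rollout = StagedRollout("ai_review_responder", sample_pct=0.05)
print(rollout.is_enabled("user-123", opted_in=False))  # False: no consent, no AI
```

Hash-based bucketing keeps the sample stable across sessions, so early adopters get a consistent experience while the other 95% of users are untouched until the output earns a wider release.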

It’s worth pointing out that consumers will appreciate AI’s efficiency and capacity for remembering preferences — so long as they don’t perceive the AI as “creepy,” wrong, or annoying. And while generative AI tech is improving, AI doesn’t “think,” has no intentions of its own, and needs human oversight. To keep consumers loyal to and engaged with your brand, marketers need to set up those all-important guardrails and provide the transparency that fosters trust.


Damian Rollison is Director of Market Insights at SOCi.
