The Ethical Faultline of AI in Advertising

Artificial Intelligence is transforming the way publishers create, distribute and monetise content across the advertising industry. Since its explosion into the mainstream in 2023, AI has taken the business world by storm. Its appeal is obvious: efficiency through automation, accessibility at little to no cost, and the ability to tackle even complex problems with minimal human effort.

Arguments for and against the use of AI are extensive and ongoing, spanning job losses and environmental costs on one side, productivity and healthcare gains on the other. But when it comes down to it, there’s one universal truth: whether we embrace or reject it, AI isn’t going anywhere.

That means our chief responsibility is not to debate whether or not it should exist, but to confront the risks that accompany it, and examine how we can use it fairly.

Arguably the primary hazard of careless AI use is the bias baked into the agents and LLMs that make AI such a flexible tool: prejudice embedded deep in the algorithms, inherited from the data and the people that build them.

Where bias creeps into AI & advertising

In advertising, AI systems typically process vast quantities of consumer data to decide who to target, when, and with what message. The issue is that these systems learn from the historical data on which they’re trained, which means they inevitably reflect and amplify the human biases embedded in that data.

The consequences are real. Unintended prejudice can show up anywhere in the advertising process: under-representation of certain demographics in creative; audience segmentation that omits entire groups; language and tone in generative AI content that perpetuate stereotypes; and targeting or pricing algorithms that reinforce harmful assumptions.

Evidence of all of this is mounting. A 2023 Bloomberg analysis of the Stable Diffusion AI model found that racial and gender disparities across 5,000 generated images were even more extreme than in the real world. Researchers at USC, meanwhile, found bias in nearly 40% of the crowdsourced data on which AI was being trained. If the data is biased, fair outputs that treat all people equally are an impossibility.

How does systematic AI bias affect the advertising industry?

Advertising is not neutral. Our industry is built to shape perceptions and influence behaviour in order to sell ideas. With that comes a cultural responsibility as we impact the public narrative. The way people see or don’t see themselves in advertising impacts their sense of acceptance and belonging. The opportunities people are exposed to through ads can influence their choices, aspirations, and socio-economic mobility.

When ads are being created, served and targeted through AI systems in which bias is present, the stakes rise. We’re not only perpetuating inequality; we’re missing valuable audiences, leaving revenue on the table, and risking erosion of trust in a market where loyalty is already a fragile commodity.

Trust is advertising’s most valuable currency, and when viewers recognise patterns of misrepresentation, they begin to question a publisher’s credibility and impartiality.


What can advertisers do to remain ethical in the AI era?

Mitigating the risks of unintended AI prejudice in advertising requires more than good intentions; it demands a clear strategy and human oversight. Four core approaches should guide advertisers and publishers alike, proving to our audiences that we care enough to take fairness and inclusivity seriously:

1. Focus on Transparency

Every part of the process should be transparent, from how data is collected to how it’s used and how people are targeted. Not only does this allow for better decision-making and bias reduction, it also increases consumer confidence and, in turn, trust.

2. Audit and Diversify Data

AI training data needs to be audited regularly to root out and remove bias, and datasets should be diversified and updated on an ongoing basis to reflect current consumer demographics and behaviours. Advertisers should choose to work with data providers who collect information fairly from the get-go to lower the chances of issues arising, and look for AI tools built around ethical outputs, such as Brandtech Group’s Pencil.

3. Keep it Human

At the end of the day, algorithms cannot replace human judgement. Campaigns need to be overseen with a critical human eye, by people who understand, from diverse perspectives, the social impact and implications of their decisions.

4. Think Contextual

As cookies become less and less relevant to targeting, we have a unique opportunity to embrace contextual targeting (more on that from GumGum), which minimises the risk of perpetuating biases present in data. By aligning ads with relevant content rather than personal profiles, campaigns can reach wider and more diverse audiences, regardless of their socioeconomic or cultural backgrounds.

AI bias in advertising is not a side issue

It’s a defining ethical challenge for our industry in the years to come. As publishers and advertisers, we hold the power to shape how people see themselves in the world, and love it or hate it, AI sits at the heart of that responsibility. If we don’t confront bias head-on, we risk alienating audiences and destroying the trust that underpins our industry.

By leading with human oversight and making fairness and transparency prerequisites for AI use, we can set a new standard and prove that AI can stand for progress rather than prejudice. The choice is ours, and it’s one the industry can’t afford to get wrong.


Ren Bowman

Nonbinary creator Ren Bowman is a freelance journalist, marketer, and award-winning audio producer. They've worked in a range of industries including entertainment, adtech, education and finance, balancing work with various successful creative pursuits. Sitting at the intersection between queer and neurodiverse, Ren has a fierce drive to work hard, succeed humbly, and fight for what’s right. Outside of work they have a strong passion for human rights, mental health, and sustainability, and they've spent the past 7 years building online communities for LGBTQ+ and neurodivergent people to connect, socialise and seek advice in a safe environment.