From Cookies to Code: why AI regulation needs a Privacy Sandbox approach

Artificial intelligence (AI) is no longer an experimental layer sitting on top of the digital economy. In a relatively short space of time, AI has become a key interface through which people make decisions about which products or services to buy. As mainstream adoption continues to accelerate and the market edges toward the trillion-dollar scale, the question is no longer whether regulation is needed, but who should do it and how it should be implemented.

Those distinctions will become increasingly important. Done well, regulation can protect users, foster competition and sustain innovation. Done poorly, it risks entrenching the dominance of the largest technology platforms. In many ways, those same platforms are already best positioned to shape and absorb regulatory change, potentially leaving everyone else at a disadvantage.

The sheer momentum of AI to date makes it easy to feel helpless in the face of such a technological revolution. How can any of us hope to help shape and guide the ways in which AI is to unfold?

Fortunately, the digital media industry has faced a similar inflection point before in its recent history. The journey towards cookie deprecation offers a valuable lesson, and perhaps a blueprint, for what comes next.

The Privacy Sandbox experience

When browsers began phasing out third-party cookies, it triggered a wave of uncertainty and, in some cases, outright panic across the digital ecosystem. Advertisers, publishers and ad tech vendors all faced the challenge of maintaining addressability and monetisation while ensuring user privacy. Google’s Privacy Sandbox initiative was the most notable attempt to strike that balance.

The Privacy Sandbox was not perfect, but its intent is instructive. Rather than abruptly removing a foundational technology and leaving the ecosystem to adapt overnight, it introduced a standards-based framework designed to evolve over time. It sought input from across the industry (including publishers, advertisers, developers and regulators) and aimed to create privacy-preserving alternatives that could support the economic model of the open web.

One could argue that Google could have deprecated cookies outright, as Apple did, and introduced its own proprietary way of targeting users in Chrome. Instead, it opened a discussion with the ecosystem built around collaboration and iteration. This created a space, however imperfect, for broader participation and conversation, demonstrating that large-scale ecosystem change can be coordinated by consensus rather than imposed.

The cookie deprecation process made it clear that simply “switching off” a core capability at scale is not viable. Sudden changes risk destabilising publishers who rely on advertising revenue, limiting the ability of smaller tech providers to compete, and forcing advertisers into narrower, less transparent buying environments. Meaningful progress required frameworks that could be refined in real time, informed by data and shaped by those operating across the ecosystem, not only those at the top of it.


Regulating agentic AI

Today, the digital media industry faces a parallel moment with the rise of agentic AI. These systems are increasingly acting as intermediaries between users and the digital world, shaping what content is discovered, which products are surfaced, and how decisions are made. In effect, they are becoming gatekeepers to information, commerce, and attention.

As control over these systems concentrates in the hands of a few large players, questions around transparency, fairness and access become more urgent. Regulation is clearly necessary, but it must be approached with care.

A “sandbox approach” to AI regulation, at its core, means developing standards collaboratively across the industry, rather than imposing rigid rules from the top down. It also necessitates creating environments where new approaches can be tested, evaluated and iterated before being scaled. Finally, it requires that any regulation evolves alongside the technology it seeks to govern.

Large technology platforms have a critical role to play in this process. As with the Privacy Sandbox, companies like Google have the scale, data and infrastructure to help develop and test new approaches. But with that role comes responsibility. Their contribution should be to support industry-wide solutions, not to define the rules in isolation.

Collaboration, transparency and iteration

There are already signs that the stakes are rising. As AI systems become more embedded in advertising, commerce, and content discovery, brands need to collaborate effectively with chat interfaces, which act as intermediaries, and with end users. Without clear and collaborative frameworks, the risk is that regulation, however well-intentioned, ends up reinforcing the very dynamics it seeks to address.

The transition from cookies to privacy-first alternatives showed that the industry is capable of navigating complex change. It also showed that the process matters as much as the outcome. As AI becomes the primary interface for digital decision-making, those same principles must guide the next phase of regulation. Collaboration, transparency, and iteration are not just desirable; they are essential.

A sandbox approach offers a way to balance innovation with accountability, and competition with control. The window to get this right is narrow; fortunately, the blueprint already exists.

About PrimeAudience

PrimeAudience (an RTB House company) is an AI-driven, privacy-focused adtech platform designed to boost client acquisition and enhance targeting. It uses generative AI to create custom audiences, reducing ad costs by up to 80% and providing 40-60% identity resolution of website visitors without relying solely on third-party cookies.

Mateusz Rumiński is VP of Product at PrimeAudience.
