The Twists & Turns of Data Excellence

For the past few decades, the enterprise’s view on data has been clear: pursue data excellence by collecting, storing, analyzing and activating as much of it as possible. Marketers have long recognized that within that data lie the insights they need to develop the products their customers will want, and to understand the journeys those customers take when arriving at decisions. In other words, data excellence translates directly into long-term financial security.

Though the need for data excellence is straightforward and obvious, the road to achieving it has been circuitous. At first, enterprises stored their data on internal servers on their own premises, a strategy that went a long way toward mitigating the risks of data breaches and privacy issues. Then in 1999, Marc Benioff upended things when he launched Salesforce, the world’s first entirely SaaS-delivered company. The model offered quick implementation, frictionless upgrades and, most important of all, easy delivery of continuous solution innovation. In terms of data, the SaaS model gave rise to data co-ops, such as data management platforms (DMPs) and data brokers, that sold third-party data that marketers appended to their first-party data in order to glean better insights into their customers.

Big Data and SaaS combined to deliver an even better benefit to marketers: the network effect. This phenomenon held that the more people use a solution, such as a DMP, the better it becomes. It’s how Google perfected its search engine, and it’s how AI companies trained their algorithms to arrive at the right answers.

For marketers, the network effect was wildly attractive, enabling them to gain clear insight into the customer journey by looking beyond their own first-party datasets. They flocked to DMPs and third-party data exchanges, eager to capitalize on the network effect and to see how their consumers behaved across the online and offline worlds.

Today, data excellence is at a crossroads once again. GDPR, the California Consumer Privacy Act of 2018 and many other regulations currently under consideration have made corporate lawyers uneasy. Sharing or using data without consent now has big consequences, including multimillion-dollar fines and reputational damage. Marketers now find many of their routine initiatives hopelessly delayed as jittery legal departments pore over the potential risks.

And GDPR has added significant expense. I regularly speak with data scientists who tell me that their organizations spend nine-figure sums licensing third-party data from brokers who assure them the data is fully GDPR compliant. That data is then hosted in the enterprise’s own data lakes and analyzed behind its firewalls, ensuring that first-party data never leaves its walls. But this is creating the next problem: scale and ease of use.

Although enterprises have massive data lakes and teams of Ph.D.-holding data scientists charged with mining them for insights, results have been mixed. As it turns out, getting insights out of data lakes is difficult and time-consuming. Leveraging data for marketing purposes, quickly and at scale, has been elusive. Ironically, first-party data, long considered an enterprise’s most valuable asset, is now seen as its biggest headache.

The Way Forward: Containerization

It’s obvious to me that a new approach to storing and using data is urgently needed. Businesses today need the flexibility of the cloud but the security of on-premise software. I think containerization is a good option, and I wouldn’t be surprised if over the next ten years it became the de facto method of software delivery.

If containerization is a new concept to you, let me explain the basic idea. The containerization delivery model is a sort of hybrid of on-premise and SaaS. It’s like on-premise in that the actual technology is shipped to the enterprise and is managed wholly within the enterprise’s walls. As with the on-premise model, it offers significant security and privacy benefits.

Conversely, containerization is similar to SaaS in that it’s cloud-based, meaning that no physical servers are sent to the enterprise. But rather than the enterprise loading its data onto a third-party vendor’s platform, the vendor packages its technology into a container and sends it to the enterprise to run within its own cloud instance.
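To make the model concrete, here is a minimal sketch of what a vendor-shipped deployment might look like as a Docker Compose file. All names here (the image registry, the data paths, the service name) are purely illustrative assumptions, not any real vendor’s product; the point is that the vendor’s technology runs inside the enterprise’s own environment, next to its data.

```yaml
# docker-compose.yml — illustrative sketch only; image and paths are hypothetical.
services:
  insights-engine:
    # The vendor's packaged technology, pulled into the enterprise's
    # own cloud account rather than the data going out to the vendor.
    image: registry.vendor.example/insights-engine:2.4
    volumes:
      # First-party data is mounted read-only and never leaves this boundary.
      - /data/warehouse:/mnt/data:ro
    networks:
      - internal-only

networks:
  internal-only:
    internal: true   # no route to the public internet
```

The key design choice is the direction of movement: the software travels to the data, instead of the data traveling to the software, which is what gives the model its on-premise-like privacy posture.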

These are obvious benefits, but containerization is not without its drawbacks. Namely, if every enterprise keeps its data to itself, we lose the network effect. Imagine looking for answers on the web if the likes of Google and Facebook had been required to keep every user’s search or activity private, and couldn’t aggregate their interactions to sharpen their recommendation engines.

Containerization will force enterprises to choose between privacy and security and the network effect. That said, I think there may be a silver lining to this cloud: given the history of global innovation, I suspect that new machine-learning and AI technologies will arise to allow enterprises to see their first-party data in new and imaginative ways, and activate thousands of smaller and more pointed marketing initiatives.

Whether or not enterprises adopt containerization or some other software delivery model, one thing is certain: things can’t go on as they are. Enterprises may be faced with choosing between privacy and data security on one hand, and all the marketing simplicity and efficiency that stems from SaaS and traditional data co-ops on the other. I think, long term, privacy simply must win out.
