Chief Scientist, Lotame
We are witnessing a phenomenal change in the way CMOs work with data. Almost every contemporary marketing leader now prefers to consult with the world’s top data scientists. The need to build powerful, intelligent marketing tools has inspired martech companies to deploy talented data scientists who can help explore the finer nuances of data orchestration and people-based marketing. To understand the growing disruptions in data science, AI/ML, and the maturing capabilities of Data Management Platforms (DMPs), we sat down with Lotame’s Chief Data Scientist, Omar Abdala.
Tell us about your role at Lotame and the team you handle.
I am the Chief Data Scientist at Lotame. Before I explain my role, I think some background on me is helpful. I joined Lotame through the acquisition of AdMobius, the first mobile Audience Management Platform, which I co-founded. Prior to that, I was a Data Scientist at Apple, leading the creation of iAd’s targeting database. I am also the author of more than 15 technical papers and patents on statistical modeling, inference, ad network optimization, targeting, and inventory management. I hold eleven patents, several of which are for predictive ad technologies that hinge on user intent and interests. I am proud to say that my work has shaped the trajectory of ad tech.
As Lotame’s Chief Data Scientist, I steer the development of our platform’s data science, AI, and machine learning capabilities. These technologies ultimately power our platform and enable marketers to better target potential customers by giving them access to, and the means to analyze, billions of behavioral and demographic data points connected to millions of computers, tablets, and mobile phones across the world.
How should Data Management Platforms leverage artificially intelligent assistants to improve audience segmentation and engagement rates?
Ultimately, AI and machine learning need to support DMPs in bridging and understanding the relationships between audiences and devices. Frankly, too much digital advertising today is executed in silos. Audiences are built across smartphones, tablets, and desktops, and campaigns are served across these devices with no clear understanding of audience overlap, and no way to effectively deliver sequential messaging. It’s one of the biggest challenges facing digital advertising today.
We have used AI and machine learning to tackle this problem with our own cross-device technology. It determines the relationships that exist between billions of PII-free signals flowing from desktops, smartphones, and tablets, essentially creating a device graph so that different devices can be mapped together. Every DMP will need these capabilities to be successful in the future, because they create significant opportunities for advertisers and publishers alike who need greater insight into how audiences behave no matter what device they might be on.
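The core idea of a device graph can be sketched with a simple union-find over linked device pairs: whenever a model decides two devices likely belong to the same person, their clusters are merged. This is only an illustration of the data structure, not Lotame’s actual implementation; the device IDs and the `linked_pairs` input are hypothetical stand-ins for a model’s PII-free link predictions.

```python
def build_device_graph(linked_pairs):
    """Group device IDs into clusters (one cluster ~ one likely user)
    given pairs of devices a model has linked together."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving for speed
            x = parent[x]
        return x

    def union(a, b):
        root_a, root_b = find(a), find(b)
        if root_a != root_b:
            parent[root_a] = root_b

    for a, b in linked_pairs:
        union(a, b)

    # Collect each device under its cluster's root.
    clusters = {}
    for device in parent:
        clusters.setdefault(find(device), set()).add(device)
    return list(clusters.values())

pairs = [("desktop-1", "phone-7"), ("phone-7", "tablet-3"),
         ("desktop-2", "phone-9")]
# build_device_graph(pairs)
# → [{"desktop-1", "phone-7", "tablet-3"}, {"desktop-2", "phone-9"}]
```

In practice the linking decisions themselves come from probabilistic models over behavioral signals; the graph structure is what lets a campaign recognize that three devices are one audience member rather than three.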
Also, the ubiquitous availability of data in a DMP, together with advancements in machine learning on big data platforms, makes this the right time to transform data selling. Today it rests on unvalidated claims from unknown providers about a profile’s demographics, interests, intent, viewership, and so on; it should instead offer highly specific probabilistic claims that can be independently verified using a validation service (like Nielsen’s DAR) or another agreed-upon surveying methodology. A DMP is the best-positioned entity in the current AdTech landscape to accomplish this goal, and it’s critical to both engagement rates on specific campaigns and the health of the entire data and targeting industry.
On what factors is the performance of data streams within a DMP measured? What data do Data Scientists measure that tells users how customer interaction is progressing?
In terms of performance measurements, there are many and they span a few different types of analysis:
An Audience Property is measured against a particular group of profiles and generally has to do with a perceived characteristic that audience should have:
- Accuracy: if I target 1000 “home buyers”, how many will buy homes within 90 days?
- Reach: using some audience definition, how many of the home buyers in America could I theoretically reach?
- Price relative to media: how much more expensive is buying the “home buyers” segment than blasting an untargeted media campaign?
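The three audience properties above reduce to simple ratios. The sketch below is a hypothetical illustration; the function names and inputs are ours for clarity, not part of any DMP’s API.

```python
def accuracy(targeted_profiles, converted_profiles):
    """Of the profiles we targeted (e.g. 1000 "home buyers"),
    what share actually converted (bought a home within 90 days)?"""
    if not targeted_profiles:
        return 0.0
    hits = set(targeted_profiles) & set(converted_profiles)
    return len(hits) / len(targeted_profiles)

def reach(segment_size, addressable_population):
    """Share of the addressable population the segment could touch."""
    return segment_size / addressable_population

def price_premium(segment_cpm, untargeted_cpm):
    """How much more expensive the targeted segment is, as a fraction,
    than blasting an untargeted media campaign."""
    return segment_cpm / untargeted_cpm - 1.0

# Example: 4 targeted profiles, 2 of which later converted.
accuracy(["p1", "p2", "p3", "p4"], ["p2", "p4", "p9"])  # → 0.5
```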
For campaigns, we have primary goals:
- CTR: how many profiles served an impression actually click on the ad?
- Engagement: what is the user doing once they click an ad? Immediately navigating away (possibly an erroneous click), or staying to view, scroll, or listen to content?
- Conversion: After engaging with an ad, is this profile more likely to purchase the product relative to a control set or not?
And we have secondary campaign goals:
- Yield: how many conversions can we generate using a campaign?
- Efficiency: how much media spend does it take to generate conversions?
- Lifetime Value: does our media spend impact the long-term conversion of profiles exposed to our marketing?
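As a rough illustration, several of the primary and secondary goals above are simple ratios, with conversion measured as lift against a control set as described. The helper names and figures here are hypothetical:

```python
def ctr(clicks, impressions):
    """Click-through rate: clicks per impression served."""
    return clicks / impressions if impressions else 0.0

def conversion_lift(exposed_conversions, exposed_size,
                    control_conversions, control_size):
    """Relative lift in conversion rate of profiles who engaged with
    the ad versus a held-out control set (0.5 means +50%)."""
    exposed_rate = exposed_conversions / exposed_size
    control_rate = control_conversions / control_size
    return exposed_rate / control_rate - 1.0

def cost_per_conversion(media_spend, conversions):
    """Efficiency: media spend required to generate one conversion."""
    return media_spend / conversions

# 60 conversions from 10,000 exposed vs 40 from 10,000 controls:
conversion_lift(60, 10_000, 40, 10_000)  # → 0.5, i.e. +50% lift
```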
In terms of metrics that Data Scientists would look at and are deeply predictive of the “performance” metrics, some are:
- Frequency: how often are we seeing the profile overall? Is this a “fly by night” profile?
- Recency: how recent is the data of interest that indicates that we want to impress this user with our campaign?
- Coherence and Data Quality: Are there inconsistencies in the data profile? Is this profile both Male and Female? High and Low income? Of the declared data, does this agree with our modeled data? If not, how divergent is it?
- Bot Detection and Viewability: is this profile a bot? Does its signature represent that of a human, a bot, or something in between? If we serve this user an ad, what’s the likelihood that the ad will even be viewable (on screen, with more than some fraction of the ad visible, for at least a minimum amount of time)?
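A minimal coherence check of the kind described might flag profiles carrying contradictory values for mutually exclusive attributes. The attribute names and profile shape below are assumptions made for illustration only:

```python
# Attributes where a single profile should hold at most one value.
EXCLUSIVE_ATTRIBUTES = ["gender", "income_band"]

def coherence_flags(profile):
    """Return the names of attributes whose recorded values contradict
    each other, e.g. a profile tagged both 'male' and 'female'."""
    flags = []
    for attr in EXCLUSIVE_ATTRIBUTES:
        values = profile.get(attr, set())
        if len(values) > 1:
            flags.append(attr)
    return flags

profile = {"gender": {"male", "female"}, "income_band": {"high"}}
coherence_flags(profile)  # → ["gender"]
```

A fuller version would also compare declared data against modeled data and quantify how divergent the two are, as noted above.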
What are the major pain points for data scientists in building DMPs for micro-targeting and micro-segmentation?
Micro-segmentation creates several pain points:
- Infrastructure Load: a dramatic increase in the number of behaviors or data attributes drives up data retention, transfer, caching, and processing costs, and makes many downstream processes infeasible.
- Tradeoff between effort and potential revenue: it makes sense to spend Data Scientist time working on Age and Gender, since almost every campaign’s targeting or analytics will use them. That is not the case for a micro-segment like “intent to purchase an air intake for a Chevy engine in the next 30 days,” or for the potentially hundreds of thousands of such segments that could be created.
- Communication with customers and practical use: While the marketplace demands more and more micro segmentation, doing so outside the context of a system where customers can easily sort, select, and combine fine segments places too great a burden on people who may not have the tools to manage all the choices.
What would be the next frontier for AI-driven DMPs?
Although AI and machine learning are delivering next-level results in terms of segmentation, it is important to remember that this is still a relatively nascent technology and that we have only begun to scratch the surface in terms of how it can be applied to data management. In the year ahead, we should see more significant strides in how AI can contextualize data, personalize experiences, and automate which data is and isn’t collected based on audience trends. This will drive even deeper disruption in the data and DMP spaces and will begin to reveal opportunities that were completely unforeseen by DMP insiders even a couple of years ago. It will also save marketers enormous amounts of time by virtually eliminating much of the manual labor of data collection and audience targeting.
Thanks for chatting with us, Omar.
Stay tuned for more insights on marketing technologies. To participate in our Tech Bytes program, email us at email@example.com