Digital ads appear far more effective than they actually are because they are sold on click-throughs, and many of the people who click would have bought the product anyway, even if they had never seen an ad. That means it is possible to cut your advertising budget and make more money.
Digital ads are wildly overrated. An extensive study of ads on eBay found that the effectiveness of brand search ads was overestimated by up to 4,100%. A similar analysis of Facebook ads put the figure at 4,000%. It seems companies still don't know the answer to the question posed by John Wanamaker, the 19th-century retailer: which half of my advertising budget is wasted?
The question can be answered. The problem Wanamaker ran into was not a lack of information but a fundamental confusion between correlation and causation.
The Conversion Fallacy
Marketing reps sell ad space to clients on the claim that ads create or cause a change in behavior, what marketers call lift. But the evidence they typically offer for that claim is the conversion rate.
I ask my students to imagine me standing at the classroom door on the first day, handing out flyers advertising the class. Then I ask them, "What is the conversion rate of my ads?" They correctly answer 100%, since everyone who saw the flyer "bought" the product by enrolling in the class. Then I ask, "How much did those ads change your behavior?" "Not at all," they answer, because they were all enrolled before they ever saw the ad. So although the conversion rate is 100%, the lift the ads cause, the amount of behavior change they induce, is zero.
My example is simplistic, but it illustrates how confusing lift with conversion distorts measurements of marketing ROI. Big companies pay consultants big money to target their ads at the most likely buyers of their products. But if that targeting points at customers who were already primed to purchase, the click-to-cash conversions are not incremental profit. The point of advertising is to get people to buy your product (or donate to a campaign, or get a vaccine) when they otherwise wouldn't have.
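The distinction between conversion and lift is simple arithmetic once you have a randomly withheld control group. A minimal sketch, with all figures invented for illustration:

```python
# Hypothetical numbers illustrating conversion vs. lift (all figures invented).
exposed_buyers, exposed_total = 500, 10_000   # people who saw the ad
holdout_buyers, holdout_total = 480, 10_000   # randomly withheld from the ad

conversion_rate = exposed_buyers / exposed_total   # what marketing reps report
baseline_rate   = holdout_buyers / holdout_total   # would-have-bought-anyway rate
lift            = conversion_rate - baseline_rate  # behavior the ad actually caused

print(f"conversion: {conversion_rate:.1%}")  # 5.0%
print(f"lift:       {lift:.2%}")             # 0.20%
```

A 5% conversion rate sounds impressive, but almost all of those buyers convert in the holdout group too; the ad's causal contribution is two-tenths of a percentage point.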
Let's suppose we want to find out whether joining the military (A) lowers a person's lifetime earnings (B). We can't simply compare the salaries of those who enter the military with those who don't, because many other factors (C) could create differences we don't see in the raw numbers.
For example, people with better-paying jobs are less likely than others to join the military in the first place (B affecting A, or reverse causation). And people with more education and skills are both less likely to enlist and likely to earn more (C causing both A and B). What appears to be a causal relationship between military service and lower average wages may actually be an artifact of these other factors. The trick is to control for them while still measuring the relationship we care about.
The way to do that is to create a control group. If we could randomly assign people to the military, the treatment and control groups would, on average, have the same education, skills, age, gender, and temperament. With enough data, the distributions of every characteristic, observable and unobservable alike, would be the same in the two groups, so any difference in outcomes could only be explained by the treatment itself. We could then say with confidence that military service, and nothing else, caused the difference in wages.
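The balancing property of randomization is easy to see in simulation. In this sketch (with invented traits and numbers), a coin flip assigns people to treatment or control, and both an observable trait (education) and an unobservable one (here labeled "grit") come out nearly identical across groups:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical population traits (values invented for illustration).
education = rng.normal(12, 3, n)   # an observable characteristic
grit      = rng.normal(0, 1, n)    # an unobservable characteristic

# Pure random assignment to treatment vs. control:
treated = rng.binomial(1, 0.5, n).astype(bool)

# Randomization balances every trait across the two groups on average,
# even the ones we never measured:
print(f"education: {education[treated].mean():.2f} vs {education[~treated].mean():.2f}")
print(f"grit:      {grit[treated].mean():.3f} vs {grit[~treated].mean():.3f}")
```

Because assignment is independent of every trait, the group means differ only by sampling noise, which shrinks as the sample grows.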
Here's the problem: scientists would have a hard time justifying a study that randomly enlisted people in the military. So researchers look instead for "natural experiments": naturally occurring sources of random variation that replicate a randomized experiment.
Josh Angrist found a good natural experiment for measuring how military service affects wages: the draft lottery imposed on U.S. citizens during the Vietnam War. Each draft-eligible man was assigned a lottery number, and those numbers were drawn at random to determine who was drafted. The lottery created random variation in people's chances of joining the military, and Angrist used that variation to estimate the causal effect of military service on wages.
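The logic behind using a lottery this way is the instrumental-variables idea. The sketch below is a toy simulation, not Angrist's data: all coefficients are invented, and the "Wald estimator" (the ratio of the lottery's effect on wages to its effect on enlistment) is one standard way to use an instrument. The naive comparison of enlisted vs. non-enlisted wages is biased by unobserved ability, while the lottery-based estimate recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy simulation of the draft-lottery logic (all numbers invented).
ability = rng.normal(size=n)           # unobserved confounder
lottery = rng.binomial(1, 0.3, n)      # random instrument: a low draft number

# Enlistment depends on the lottery AND on ability (selection bias):
enlist = ((0.6 * lottery - 0.4 * ability + rng.normal(size=n)) > 0.5).astype(float)

# True causal effect of service on wages is -2; ability also raises wages.
wages = 50 - 2 * enlist + 5 * ability + rng.normal(size=n)

# Naive comparison is contaminated by ability:
naive = wages[enlist == 1].mean() - wages[enlist == 0].mean()

# Wald / IV estimate uses only the lottery-induced variation in enlistment:
iv = (wages[lottery == 1].mean() - wages[lottery == 0].mean()) / (
     enlist[lottery == 1].mean() - enlist[lottery == 0].mean())

print(f"naive: {naive:.2f}, IV: {iv:.2f}, truth: -2.00")
```

Because the lottery number is random, it is uncorrelated with ability, so dividing its effect on wages by its effect on enlistment isolates the causal effect of service itself.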
Christos Nicholaides and I used the weather in a similar way, as a natural experiment for understanding the impact of social media messaging on exercise behavior. People who run a lot tend to have friends who run a lot, so the correlation alone tells us little. But random variation in the weather let us determine the extent to which social messages from friends actually caused people to run more.
Analyze ads this way and you will quickly discover that online ads work differently than you might think. In a well-known experiment with display ads on Yahoo!, for example, the ads did lift a retailer's sales, but 78% of that lift came from consumers who never clicked on an ad, and 93% of the resulting purchases took place in brick-and-mortar stores, not online. The standard model of online advertising causality, in which a view leads to a click, which leads to a purchase, doesn't accurately reflect how ads affect what consumers do.
The Benefits of Causal Marketing
These findings may help explain why Procter & Gamble, the granddaddy of brand marketing, managed to improve its digital marketing performance while cutting its digital advertising budget. In 2017, P&G's chief marketing officer, Marc Pritchard, cut the company's digital ad spending by $200 million, or 6%. Unilever went even further, cutting its digital advertising budget by almost 30% in 2018. The result? A 7.5% increase in organic sales growth for P&G in 2019, and a 3.8% increase for Unilever.
Both companies also shifted their media spending from a narrow focus on frequency, measured in clicks and views, to a focus on reach, the number of customers they touch. Their data had shown that some consumers were being bombarded with their social media ads ten to twenty times a month, a barrage that produced diminishing returns and probably annoyed loyal customers. They cut frequency by 10 percent and moved those advertising dollars to new customers who weren't seeing the ads at all.
They also studied first-time buyers closely to understand what motivates a purchase, which helped them pinpoint promising new customers. On its fourth-quarter earnings call for 2019, P&G explained that it had moved from "generic demographic targets," such as "women 18-49," to "smart audiences," such as first-time mothers and washing machine owners.
The tsunami of granular, personal data generated online makes John Wanamaker's question answerable at last. Marketers can use that data to determine which messages work and which don't. Just be sure, as P&G and Unilever were, to distinguish correlation from causation, and to avoid targeting customers who are already loyal.