Sonal Patel
Dec 8, 2021

Finding digital measurement success, part 2: Attribution and incrementality

Too often marketers conflate these two terms. Quantcast's SEA MD disentangles them.

(Shutterstock)

In our first installment in this series ("Finding digital measurement success, part 1: Cohorts vs clicks"), we established that it’s important to use a cohort of metrics to measure success, but savvy marketers employ two additional methods to truly quantify it: attribution and incrementality. While these terms are widely used to address the measurement challenge, they are often conflated, causing confusion.

Let’s start by defining what they mean

Attribution and incrementality quantify different things:

Attribution looks at the touch points along the journey that have influenced a purchase. It’s correlative rather than causal: while it tries to assign credit, it cannot definitively credit any single touch point with the sale. It answers the question: “What touch points were associated with a consumer conversion?”
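To see what ‘assigning credit’ means in practice, here is a minimal Python sketch contrasting two common attribution heuristics (last-touch and linear) over a hypothetical journey; both are illustrative conventions for splitting credit, not measurements of true cause:

```python
# A hypothetical journey: the touch points a consumer saw before converting.
journey = ["display ad", "social ad", "search ad", "email"]

# Last-touch attribution: all credit goes to the final touch point.
last_touch = {tp: (1.0 if i == len(journey) - 1 else 0.0)
              for i, tp in enumerate(journey)}

# Linear attribution: credit is split evenly across every touch point.
linear = {tp: 1.0 / len(journey) for tp in journey}

print(last_touch)  # 'email' gets 1.0, everything else 0.0
print(linear)      # every touch point gets 0.25
```

The two models give the same journey very different stories, which is exactly why attribution alone cannot settle what caused the sale.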

Incrementality measures the impact of a single variable on an individual user’s behaviour. For digital display marketing, it is most commonly used to measure the impact of a branded digital ad (exposed group) against a Public Service Announcement (PSA) ad (control group). The lift is measured as the percent difference between the two. Incrementality demonstrates the value of advertising, helping to answer the question: “Did my ad result in a purchase?”
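To make that lift calculation concrete, here is a minimal sketch in Python using hypothetical conversion numbers (illustrative figures only, not real campaign data):

```python
# Illustrative lift calculation with hypothetical numbers.
exposed_users = 100_000      # saw the branded ad
exposed_conversions = 1_200
control_users = 100_000      # saw the PSA ad
control_conversions = 1_000

cr_exposed = exposed_conversions / exposed_users   # 1.2% conversion rate
cr_control = control_conversions / control_users   # 1.0% conversion rate

# Lift: percent difference between the exposed and control conversion rates.
lift = (cr_exposed - cr_control) / cr_control
print(f"Incremental lift: {lift:.1%}")             # Incremental lift: 20.0%
```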

A deep dive into attribution

Attribution is nuanced in its own way, and it’s important to understand both its challenges and their solutions. A consumer’s buying journey today involves so many touch points that it becomes difficult to work out which advertising partner actually helped to drive the final conversion, as the example below shows.

This overview of a journey that ‘Sarah’ might take reveals the challenges of conversion and performance metrics.

To mitigate this, the first thing marketers need to do is apply common sense: what do you expect to happen, and do your campaign results align with your expectations?

The next step is to think about measurement in ‘shapes’ rather than individual numbers (e.g. a single cost-per-acquisition, or CPA), as these singular figures often hide the reality and complexity of campaign results. You may find it far easier to evaluate the success of tactics when you don’t consolidate results into one number: think of an ad campaign as a portfolio of ad impressions rather than a set of figures in isolation.
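As an illustration of ‘shapes’ versus single numbers, here is a small Python sketch with made-up spend and conversion figures, showing how a blended CPA can hide very different per-tactic results:

```python
# Hypothetical per-tactic results: a blended CPA hides the spread beneath it.
tactics = {
    # tactic: (spend, conversions) -- illustrative figures only
    "prospecting": (10_000, 100),
    "retargeting": (5_000, 250),
    "contextual":  (5_000, 50),
}

total_spend = sum(spend for spend, _ in tactics.values())
total_conversions = sum(conv for _, conv in tactics.values())
print(f"Blended CPA: ${total_spend / total_conversions:.2f}")  # $50.00

# The 'shape' of the portfolio: per-tactic CPAs range from $20 to $100.
for name, (spend, conv) in tactics.items():
    print(f"{name:>12}: ${spend / conv:.2f}")
```

Here the blended $50 CPA looks healthy, yet it conceals a $20 tactic alongside two $100 tactics; the shape of the portfolio tells you where to rebalance.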

Looking at incrementality

Incrementality testing compares the marketing results between a test group and a control group, which can help advertisers better understand if the KPIs are a direct result of their own campaigns or extraneous effects.
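One common way to check whether an observed difference between the groups reflects the campaign rather than extraneous noise is a two-proportion z-test. The sketch below is a generic statistical illustration with hypothetical counts, not a description of any vendor’s methodology:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical exposed (branded ad) vs control (PSA) groups.
z, p = two_proportion_z_test(1_200, 100_000, 1_000, 100_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the lift is not noise
```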

At Quantcast, we define incrementality testing as measuring how a specific marketing event was causally influenced by a media channel or tactic (in this case display) over a set time period and budget.

The challenges here are inventory bias, cookie churn and gamed benchmarks.

  • Publisher inventory bias is caused when ad exchanges and publishers are selective about the inventory they will serve on their sites, which affects the performance of creatives differently.
  • Cookie churn problems stem from cookies moving from control to treatment (and vice-versa), potentially driving lift down to zero because it scrambles the causal signal.
  • Poor or gamed benchmarks arise because the choice of control (or baseline) drastically affects your results. Some people use non-viewable impressions as a control, but this introduces a new behaviour that can skew results.

To help solve this, we recommend:

  • deploying adaptive ‘block’ or ‘allow’ lists to address publisher inventory bias;
  • experimenting on traffic that is trackable to address cookie churn (one such approach is sketched below);
  • running one consistent study across vendors to set a level playing field with consistent benchmarks; and
  • aligning your measurement and attribution criteria.
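On the cookie-churn point, one widely used mitigation (sketched here with a hypothetical ID scheme; this is not a description of Quantcast’s method) is to assign each trackable ID to an arm deterministically, so a persisting ID always lands in the same group:

```python
import hashlib

def assign_group(cookie_id: str, treatment_pct: int = 50) -> str:
    """Deterministically bucket an ID so repeat visits stay in one arm."""
    digest = hashlib.sha256(cookie_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100           # stable bucket in [0, 100)
    return "exposed" if bucket < treatment_pct else "control"

# The same ID always maps to the same group across sessions.
print(assign_group("cookie-abc-123"))
print(assign_group("cookie-abc-123"))  # identical result
```

This keeps assignment stable for IDs that persist; it cannot, of course, recover a cookie that is deleted and reissued, which is why the recommendation is to experiment only on trackable traffic.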

Finding digital measurement success with cohorts

Reaching and influencing audiences, cutting through the noise, and coming up with a value proposition that can steer behaviour is incredibly challenging. Reducing this to a single metric would be ideal, but it is likely impossible: measurement keeps changing as the approach to digital advertising becomes increasingly multifarious.

As mentioned in "Finding digital measurement success, part 1: Cohorts vs clicks", every metric you look at, every audience you try to reach and every methodology you use must be evaluated as part of a cohort, ensuring you weigh up the pros and cons of different approaches. These principles will help you learn consistently from the continual feedback loop and evolve your own measurement strategy, ultimately improving the performance of your brand.


Sonal Patel is managing director for Southeast Asia at Quantcast.

Source: Campaign Asia
