Using Big Tools to Measure Small Impacts

A retailer client was considering pulling back on a certain marketing vehicle because “it wasn’t working.”  The supporting evidence was the same-store sales percentage growth numbers (“comps” in retail parlance) for the markets in question.  Of those eight markets, five were down, two were flat, and one was up.  Even comparing those results against the rest of the footprint didn’t really change the message: these markets were down.

While the question of whether the vehicle was working sounds very straightforward, there’s a lot to unpack there, so we walked through a broad set of themes:

Did the marketing push them negative?  Even at its very worst, advertising doesn’t reduce sales except in some very rare cases (going overtly political, announcing a very large sale that starts well into the future, etc.).  So when we see those markets with negative growth, it points us instead to the idea that comps aren’t a great instrument for measuring small impacts like the vehicle in question.

But it must not have had a material impact, right?  This hinges on defining “material.”  If we rephrase this a bit to “it wasn’t able to offset the other headwinds in the market” then I think it’s a little clearer.  But regardless, it’s a tall order for a moderate spend to offset all the (often much bigger) factors that drive growth rates in a market.
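A hypothetical back-of-the-envelope makes the point concrete.  Every number below (spend, payback multiple, weekly market sales) is an illustrative assumption, not a figure from the engagement:

```python
# Hypothetical back-of-envelope: how much can a moderate media spend move comps?
# All figures are illustrative assumptions, not client data.
spend = 250_000           # media spend in one market (assumed)
payback = 4.0             # assumed revenue returned per dollar of spend
weeks = 6                 # payback spread over ~6 weeks (see the adstock point below)
weekly_sales = 5_000_000  # assumed baseline weekly sales in the market

incremental = spend * payback                # total incremental revenue from the spend
lift = incremental / (weekly_sales * weeks)  # lift as a share of baseline sales

print(f"Expected comp lift: {lift:.1%}")
```

Under these assumptions the spend is worth a lift of only a few points of comps, which is real money but easily swamped by a market that is down 8–10% for unrelated reasons.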

Why aren’t comps a good instrument for TV?

  • Magnitude: Given the small level of spend, and the fact that media’s payback is usually spread across 3-6 weeks, the impact we’d be looking for is diffuse and small: low single-digit comps, not the double-digit moves these markets showed.  It’s pretty clear that there are much bigger things going on in these markets.
  • Last year as comparison period: In order for this to be a clean comparison we would need to ask, “what was happening in these markets at this time last year?” Was TV or other media executed in these weeks last year?  For any weeks that had TV last year, 0% comps could be considered a win, with positive comps being a rather high bar.
  • Media lags: The “adstock” concept shows that the week of execution generally carries only about 45% of the total impact; the rest often happens in the following weeks as the effect decays.
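The decay in that last bullet can be sketched with a simple geometric adstock.  The ~45% week-of-execution share comes from the discussion above; the decay rate below is just the value that reproduces it and is otherwise an assumption:

```python
# Geometric adstock sketch: share of total media impact landing in each week.
# decay = 0.55 is chosen so week 0 carries ~45% of the impact (per the text);
# it is an illustrative assumption, not a fitted parameter.
decay = 0.55
weights = [(1 - decay) * decay ** t for t in range(6)]

for week, w in enumerate(weights):
    print(f"week {week}: {w:.1%} of total impact")

print(f"captured within 6 weeks: {sum(weights):.0%}")
```

Under this sketch, more than half of what a given week’s spend eventually drives shows up in later weeks, so reading that week’s comps in isolation misses most of the effect.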

Here’s an old parable we sometimes use to illustrate the point:

A police officer sees a man intently searching the ground near a lamppost 
for his house key.  After helping search for a few minutes he asks whether
the man is certain that he dropped the keys near the lamppost.  
"No" he replies, "I dropped them in the park."
"Then why look here?" asks the surprised and irritated officer.
"The light is much better here" the man responds.

The point is that “the light is much better” in comps because everyone knows them and they’re straightforward to measure, but it’s not where the keys are.

Our summary was that there is mild evidence that the marketing vehicle wasn’t a huge win.  But with such an imprecise instrument, we advised staying the course until their next quarterly model gives them a much better look.