Posted by willcritchlow
This is a post of two halves. The first half runs through my thoughts on what makes for good metrics, while the second half focuses on a specific process for building appropriate reporting metrics for your individual situation.
Now, I’m a very numbers-driven person. I studied math(s) and once thought I was going to be an inventor until I realised that inventing = engineering, and I needed to be good at partial differential equations.
I find, however, that I’m often on the side of less measurement and quantitative decision-making. I like using editorial discretion in our conference programming and often “disagree” with the audience feedback(*). I like running split tests, but spend a lot of time aiming for the big wins (that are easy to spot with the most rudimentary measurement) over small percentage gains gleaned from detailed analysis.
(*) it’s interesting for me to think about what I mean by “disagreeing” with quantitative data from large groups of people. I think this may be called arrogance but at least I’m in good company.
My typical way of working is to spend a lot of energy thinking about hard problems - often in the abstract and often diving deep into whatever data I have to hand - before making the best decision I can in the messy real world. I am not as good as I should be at looping back around on my decisions afterwards and sense-checking them against the resulting outcomes.
I once asked a management consultant friend of mine if his company ever went back and checked their revenue/cost forecasts for companies that they did due diligence on. He looked at me as if I’d reordered his PowerPoint slides. When he got over the shock, he asked me what the point of that would be?
Taking that charitably, I think he was saying “the value is in the planning, not the plans.”
This all leads me into the first half of my post (note that, throughout, I'm going to use Distilled examples because I can be more transparent with our numbers than with those of any of our clients):
Of course, at a business level, cash rules everything. Run out of cash and you die. But as a marketer, you are so far removed from cash collection or burn rates (in most businesses) that this is not helpful.
Understanding the correct metric to use in any given situation is a large part of the skill of a consultant or marketer. I constantly find myself recommending different metrics in different situations.
People like investors and boards care about KPIs (Key Performance Indicators) that demonstrate the health of a company at a glance. These high-level metrics are good for busy executives and remote investors because they guide behaviour for those people:
An example of a company-level KPI for a profitable, growing company like Distilled that doesn’t have a bankroll of investor cash is a (conservative) projection of minimum cash levels over the coming months. Growing a company is cash-intensive, and one of the trickiest parts is funding growth out of operational cashflow. As long as the cash situation looks good, management time should be spent growing the company, but if the cash situation were ever to be poor, there would be nothing more important than resolving that situation.
When we get down into individual marketing campaigns, however, the kind of KPIs beloved of executives become useless. While an executive will focus on whether total online revenue is on-budget (which feeds into the operating model and cashflow mentioned above), knowing that we are above or below budget doesn’t change the day-to-day activities carried out by the marketing team.
Two things go wrong in marketing projects:
So, for ourselves, I always advocate measuring activity and outcomes. Add some KPIs into the mix to communicate effectively with the execs and you are well on your way to an effective reporting pack.
The reason these metrics guide behaviour is that they are ultimately connected to the company’s financial objectives. It’s important that you can see the path from your metrics to those company-wide goals even if there are a bunch of assumptions needed to get there.
At Distilled, we recently started working with an experienced finance guy - his last gig was CFO at a public company - and had a very interesting few days connecting together our various financial reporting. At the end of it, we had a model that connected top-line revenue and costs in the P&L through non-cash balance sheet movements to cashflow. Of course, it bakes in a variety of assumptions (some of which have a critical impact on the outcome - like debtor days).
Even taking into consideration the importance of those key assumptions, it has revolutionised our management accounting to be able to see our financial data all connected together. Of course, we can then both run scenarios on the key assumptions and calibrate them against the real world.
The equivalent assumptions in online marketing are things like conversion rate and churn rate. Of course, in some projects these aren’t assumptions but rather variables - if you are directly seeking to change user behaviour - I’ll talk about this in a little more detail below.
As the usercycle guys put it, “what happens here?”:
The Lean Startup is a methodology designed (as the name implies) for startups, but there are a lot of analogies to marketing campaigns that rely on earned media. Typically, there is a long period of time during which a lot of action generates precious little in the way of end results before (hopefully) the curve starts trending upwards and ultimately (again, hopefully) surpasses any of the ways you could buy new customers.
Eric Ries talks about innovation accounting as a way of defining, measuring, and communicating progress during these long, lonely months. If you are going to succeed as an online marketer, you are going to have to master a similar set of skills.
Our goal during this phase is best described as “learning” - we want to find the things we should be doubling down on, the things to kill before they cost too much money and give ourselves enough evidence to quieten both our own inner demons and those hard-to-convince bosses and clients.
For startups, I’m a big fan of Dave McClure’s pirate metrics - so named after the acronym AARRR:
He argues that your metrics should carefully measure each of these stages in the lifecycle of a customer and, importantly, that you should track them over cohorts of users (we use two-week-long cohorts for DistilledU).
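As a rough sketch of what cohort-based pirate metrics can look like in practice (the stage names are McClure's; the two-week cohort length is the one mentioned above; the event data is entirely hypothetical):

```python
from collections import defaultdict
from datetime import date

# The AARRR stages of Dave McClure's pirate metrics
STAGES = ["acquisition", "activation", "retention", "referral", "revenue"]

def cohort_key(signup: date, cohort_days: int = 14) -> str:
    """Bucket a signup date into a fixed-length cohort (two weeks here,
    as we use for DistilledU); returns the cohort's start date label."""
    start_ordinal = signup.toordinal() - (signup.toordinal() % cohort_days)
    return date.fromordinal(start_ordinal).isoformat()

def stage_counts(events: list[tuple[date, str]]) -> dict[str, dict[str, int]]:
    """Count how many stage events each cohort recorded.

    events: (signup_date, stage_reached) pairs - one per user per stage.
    """
    counts: dict[str, dict[str, int]] = defaultdict(
        lambda: dict.fromkeys(STAGES, 0))
    for signup, stage in events:
        counts[cohort_key(signup)][stage] += 1
    return dict(counts)

# Hypothetical events for two signups a couple of weeks apart
events = [(date(2013, 1, 2), "acquisition"),
          (date(2013, 1, 2), "activation"),
          (date(2013, 1, 20), "acquisition")]
counts = stage_counts(events)  # two cohorts, each tracked stage by stage
```

Reading the same stage across successive cohorts is what tells you whether the funnel is actually improving over time, rather than just growing at the top.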
During the phase where you only have leading indicators, you may get executive pressure to forecast numbers. My approach here is to plug them into some simple assumptions that are easy to highlight and understand as being currently guess-work (“if these pages accrue search traffic at 80% of the average of similar pages, we will grow traffic X% year on year”).
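A minimal sketch of that kind of explicitly-flagged guesswork forecast (every number here is hypothetical, not Distilled's):

```python
def forecast_growth(current_monthly_traffic: float,
                    new_pages: int,
                    avg_visits_per_similar_page: float,
                    discount: float = 0.80) -> float:
    """Year-on-year traffic growth (%) if each new page accrues search
    traffic at `discount` (80% here, as in the example above) of the
    average of similar existing pages.

    All inputs are assumptions that should be highlighted as guesswork.
    """
    added_monthly = new_pages * avg_visits_per_similar_page * discount
    return added_monthly / current_monthly_traffic * 100

# Hypothetical: 50,000 visits/month today, 200 new pages each expected
# to earn 80% of the 150 visits/month that similar pages attract
growth = forecast_growth(50_000, 200, 150)
print(f"{growth:.0f}% year on year")  # → 48% year on year
```

The point is not the precision of the output; it is that each assumption sits in one obvious place where an executive can challenge it.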
Cost per acquisition (CPA) is a critical metric for paid marketing channels where the costs scale linearly (or super-linearly) with conversions. In particular, it passes the “actionability” test I described above. If your CPA is too high you can reduce bids, increase conversion rates or increase customer spend.
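A small sketch of that actionability check (target and spend figures are illustrative, not from any real campaign):

```python
def cpa(spend: float, conversions: int) -> float:
    """Cost per acquisition: total channel spend / conversions won."""
    return spend / conversions

# Illustrative numbers: $5,000 spend, 40 conversions
current = cpa(5000, 40)   # $125 per acquisition
target = 100.0            # assumed target CPA of $100

if current > target:
    # The three levers named above:
    # 1. reduce bids (lowers cost per click)
    # 2. increase conversion rate (more conversions per click)
    # 3. increase customer spend (raises the CPA you can afford)
    overshoot = current / target - 1
    print(f"CPA ${current:.0f} is {overshoot:.0%} over target")
```

Because paid-channel costs scale roughly linearly, the gap between current and target CPA translates directly into how hard you need to pull those levers.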
When we are considering channels with non-linear relationships between cost and conversions, it’s not easy to work out actions from average CPAs. It could be that you need to do more of what you were doing to benefit from flywheels and economies of scale. It could be that you need to throw away the plan and do something different because there is fundamentally no path from here to a profitable campaign.
Much of the artistry present in search and other earned marketing channels comes from the difficulties of working with this uncertainty.
Some tips that I’ve found useful in practice:
Look at the biggest picture you can
Earned channels perform best over longer horizons, when multi-touch conversion is considered and lifetime value is counted appropriately. I would far rather work out whether a five- or six-figure spend brought a big enough total uplift than decide whether an individual blog post (or even a bigger piece of creative) has earned its keep.
Make sure you are considering the profit margin of an incremental sale
It is tempting to think about the marginal profit of a single conversion as:
This is misleading any time you have fixed costs involved. Let's take an extreme example from our business to illustrate this:
In our own business, I tell anyone working with marketing to consider our whole business as 100% gross margin:
In reality, what I’m doing here is conflating two hard-to-measure things - the marginal cost of an incremental sale and the lifetime value of a sale above and beyond the first transaction.
Here’s a step-by-step process to follow:
In order of increasing complexity:
My favourite approach here is to make some simplifying assumptions that we can return to later to sense-check. Let’s think about some assumptions we can make in the most complex situation of building an email list for a consulting business:
The goal of all of this work is to come up with KPIs at the micro-conversion level that correlate with the bigger-picture business goals. You need to trade off “small” (benefits = easier to influence, quicker to change, quicker to measure) against “close to the money” (benefits = speaking the language of management, real business benefits).
In the complex situations, I typically find that the sweet-spot is somewhere between visitor growth (to appropriate pages from a good channel) and micro-conversion growth (i.e. email list growth or contact form submissions).
In the simpler businesses, you can typically get closer to the real business metrics and simply work directly with revenue growth.
This step is probably overkill in many client engagements but it’s important for in-house teams (and those in-house teams should be sharing their models with external teams in my opinion).
The output should be the simplest Excel model you can think of that captures the important business drivers. The important part here is really the planning process rather than the specific plans that come out of it.
Here are most of the inputs I created when building the DistilledU business model before it launched to the public (i.e. while it was still in private beta):
In our case, I pulled a bunch of numbers from Rand’s exceptionally transparent funding post and used them to benchmark against our visitor numbers and conversion rates.
The output was a single sheet Excel model that forecast revenue growth (most of which was to be driven by inbound means):
In truth, pretty much every single assumption in my model is wrong - many of them by quite some distance (including our ability to generate conversions from paid advertising which has been even worse than we forecast). But the planning process was the valuable part, and although the real world is never as neat and tidy, we have ended up not a million miles away:
And now we know where the levers are that we need to pull to get the business results we want (one of which is conversion rate - we’ve already had one successful A/B test that nearly doubled conversion rate thanks to Optimizely - my favourite testing platform - but that’s a story for another day).
Any serious discussion of measurement in marketing needs to understand the lifetime value (LTV) of a conversion. As discussed above, it can be hard enough to work out the immediate value of a micro-conversion, never mind the lifetime value.
Here are some techniques I have used to get to workable LTV numbers:
Assume static churn rates
If you are working with a subscription business, you can estimate LTV as:
monthly average revenue per user (ARPU) / monthly churn rate
So if you make $35/month per user on average and have a churn rate of 9%/month, you can estimate LTV as $35 / 0.09 ≈ $389.
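As a quick sketch, the same calculation in Python (using the illustrative numbers above):

```python
def subscription_ltv(monthly_arpu: float, monthly_churn: float) -> float:
    """Estimate lifetime value as ARPU divided by churn rate.

    A churn of 9%/month implies an average customer lifetime of
    1 / 0.09 ≈ 11.1 months, so LTV = ARPU * lifetime = ARPU / churn.
    """
    return monthly_arpu / monthly_churn

ltv = subscription_ltv(35.0, 0.09)
print(round(ltv))  # → 389
```

The "static" in the heading is the key caveat: this shortcut assumes churn stays constant over the whole customer lifetime, which is rarely quite true.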
Make the numbers work with first purchase only
If you have an efficient enough marketing engine, you may get beyond break-even, on average, on the first purchase alone. In this case, you can build a reporting pack based on first-purchase value (sometimes with some hard-to-estimate factors in the other direction, such as the variable margins mentioned above) and include notes on the additional uplift available from subsequent/repeat purchases. This is the approach I've taken with big-ticket/B2B services with long lead and delivery times: even if there are repeat purchases, they come so far in the future that they are irrelevant on the short-term planning horizon.
Average everything together - assuming all users are similar
If you have access to the right data, you can bucket together large sets of users and their purchases over some large time horizon (12-24 months is sensible for many online businesses), discount appropriately and get to a very rough average LTV. This misses many subtleties and variation in underlying LTV. It’s probably most effective with small-to-mid-ticket basket sizes that aren’t skewed by large repeat purchases in the way that, say, consulting services would be.
I’ve even used this approach to bucket together the “LTV” of email subscribers - many of whom purchase nothing. Doing some back-of-the-envelope calculations led me to a rough value of $6 per email subscriber per year for Distilled, for example. If I were to rely on this for paid marketing, I would need to keep a very close eye on the trends in this value as I artificially added subscribers from a different channel mix than that which grew the list to its current size.
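A minimal sketch of that bucketing approach, assuming you can export (months-since-first-seen, amount) purchase pairs per user; the discount rate and the tiny sample are both hypothetical:

```python
def average_ltv(purchases_by_user: dict[str, list[tuple[int, float]]],
                annual_discount_rate: float = 0.10) -> float:
    """Rough average LTV: discount each user's purchases back to month
    zero, then average across the whole bucket of users.

    purchases_by_user maps a user id to (months_since_first_seen, amount)
    pairs over the chosen horizon (e.g. 12-24 months).
    """
    monthly_discount = (1 + annual_discount_rate) ** (1 / 12)
    totals = []
    for purchases in purchases_by_user.values():
        totals.append(sum(amount / monthly_discount ** month
                          for month, amount in purchases))
    return sum(totals) / len(totals)

# Tiny illustrative bucket, including a user who never purchased -
# the zeros belong in the average, as with email subscribers
users = {
    "a": [(0, 40.0), (6, 40.0)],
    "b": [(3, 120.0)],
    "c": [],
}
print(round(average_ltv(users), 2))
```

As the text notes, this hides all the variation in underlying LTV, so it is only safe when no small group of large repeat purchasers dominates the average.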
Build a simple model
For mid-ticket purchases where basket sizes (and repeat purchasing behaviour) can vary over a wide range, I’ve found it best to build an explicit model based on a simple “propensity to convert again” in each time period after initial purchase.
For those working through this at home, you end up modelling a simple Poisson process, but you can get 80+% of the way there with a simple "x% of prior customers buy again in the subsequent quarter and y% in the quarter after that". I tend to move towards longer time periods when modelling this kind of process, as purchases that fall close together might as well be counted simply as a larger basket.
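That 80% version can be sketched in a few lines; the basket size and repurchase rates below are hypothetical placeholders for the x% and y% in the text:

```python
def repeat_purchase_value(avg_basket: float,
                          repurchase_rates: list[float]) -> float:
    """Expected extra revenue per customer from repeat purchases.

    repurchase_rates[i] is the assumed probability that a prior customer
    buys again in quarter i+1 after the first purchase (x% in the next
    quarter, y% in the quarter after that, and so on).
    """
    return sum(rate * avg_basket for rate in repurchase_rates)

# Hypothetical inputs: $500 average basket; 20% of customers buy again
# in the next quarter and 10% in the quarter after that
extra = repeat_purchase_value(500.0, [0.20, 0.10])
print(extra)  # → 150.0
```

Adding that expected extra revenue to the first-purchase value gives a workable LTV for mid-ticket businesses without needing the full Poisson machinery.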
So you now have access to a bunch of (estimates / modelled versions of) metrics like LTV, churn rate, CPA and have picked AARRR metrics that correlate with future success. Finally (we’re nearly there, I promise) you need to put this together. I suggest that you think about building two different reporting packs:
KPI pack - most likely updated and reviewed quarterly
Brings together big numbers over long time periods - for example, it might contain:
You want to show graphs like this one (revenue from organic search to deep pages within DistilledU):
Project steering pack - most likely reviewed monthly
Based on time-boxed or cohort-based data, focusses on metrics that give you deep insight into what you need to change to do even better in each of the key activities you are undertaking. This is likely to be highly custom to your specific business and marketing campaigns but here are some examples from campaigns I’ve been involved with recently:
Conversion rate optimisation:
The table below shows (a section of) our cohort analysis for DistilledU and the bump in conversion rates to paying and engaging with the content when we announced that we were including all of our conference videos within DistilledU subscriptions:
This was a test - and initially one that we marketed only to our existing community. It wasn’t without its costs (we estimated that it would reduce video sales by $50-100k / year) but the success led to (a) keeping it in place and (b) a successful landing page A/B test that we’re going to write up on our own site soon.
I hope this meander through meaningful metrics has been useful to you. I’d love to hear your experiences in the comments and any thoughts you have on how I can improve any part of my approach.