Three classic measurement blunders

In a meeting today I started to talk about three classic blunders of measuring sustainability: start measuring years after you start executing; choose indicators without knowing how you will use the data; and expect targets to be hit regardless of circumstance and punish individuals for missing goals.

With apologies to The Princess Bride (if you don’t know what I mean, see the clip and then watch the film), here they are.

1. start measuring several years after you started executing
You’re busy. Someone senior has just signed off on that big thing you’ve been trying to get to happen for ages. You’ve got a target of helping X million people do Y. Let’s go and deliver!

A few years later things are going well – perhaps not everything you hoped, but good enough. You realise you’re going to have to report progress – and then you realise you didn’t define what activity on your part counts as ‘helping people do Y’. Also, you don’t have a baseline. You haven’t kept tabs on the costs and benefits (direct, indirect, tangible or intangible), nor collected any ongoing information in a common format.

You have committed the first classic blunder of measurement – you didn’t set up a measurement system at the start.

The most obvious remedy is to set up a measurement system at the start. If you’ve already set off, it’s not too late to start setting up your measurement system.
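To make that concrete, here is a minimal sketch of what setting up at the start could mean: one common record format, agreed on day one, that pins down what counts as a qualifying activity, the baseline, and the costs and benefits you will track. It’s written in Python, and every field name and value is an illustrative assumption rather than a standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MeasurementRecord:
    """One observation in a common format, agreed before delivery starts.
    All field names here are illustrative assumptions, not a standard."""
    when: date
    activity: str         # what was done, per an agreed definition of 'helping people do Y'
    people_helped: int    # progress against the 'X million people' target
    direct_cost: float    # tangible, directly attributable spend
    indirect_cost: float  # overheads and other indirect costs
    notes: str = ""       # intangibles live here until you can quantify them

# Capture the baseline on day one, before any delivery activity happens.
baseline = MeasurementRecord(
    when=date.today(),
    activity="baseline survey",
    people_helped=0,
    direct_cost=0.0,
    indirect_cost=0.0,
    notes="Starting point: no one helped to do Y yet.",
)
```

The code matters less than the discipline: every later record uses the same fields, so reports years from now compare like with like.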

2. choose indicators without knowing how you will use the data
You’re interested in outcomes. You’re not one of those process guys who gets all nerdy and navel-gazing. You put all your resource into measuring the results. Easy.

and/or

You’ve got very limited resource, and limited time to set up. So, you just pick the indicators which are already being collected, either in-house (often finance-related) or by someone else.

When you get the data it tells you that… well, the outcomes are behind what you want. But you don’t know why, so you can’t do anything about it.

The second classic blunder is to choose indicators without thinking through how they will be used. In my view, there are two basic uses of measurement data:

  • external disclosure for transparency and accountability, whether with formal regulators or the informal ‘civil regime’ of other stakeholders.
  • internal decision-making, to improve performance and set better goals.

On external disclosure there are guides, requirements and so on which constrain choice (I know this because I was on several panels that created the third generation of GRI). I’m more interested in metrics for internal decision-making because there are more choices (so people need more help) and because the purpose of the external disclosure is also to change internal decision-making.

My remedy is this: you start with a hypothesis which says (roughly) ‘I do A, which drives B, which drives C (and so on until…) which drives the outcome I want’. (In development circles this is called a LogFrame – see the excellent Sustainability Indicators book for more.)

You then select input, process, output and outcome indicators which tell you whether your hypothesis is actually playing out, as well as the outcome itself. This way you have the opportunity for two levels of learning (there’s a small sketch in code after this list):

  • Single-loop learning: improving performance based on your existing hypothesis. (“We can see A does drive B etc, so let’s do more A.”)
  • Double-loop learning: improving the hypothesis itself. (“Turns out A does drive B, but B doesn’t drive C, so we’ll need to do something else.”)
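Here’s a toy sketch of those two loops in Python. The hypothesis chain and the indicator results are all made up for illustration; none of them come from a real programme.

```python
# A toy logic model: each link in the hypothesis is (driver, driven).
# All names and results below are illustrative assumptions.
hypothesis_chain = [
    ("A: run training sessions", "B: people attend"),
    ("B: people attend", "C: people change behaviour"),
]

# Did each link behave as the hypothesis predicts? (made-up indicator results)
link_holds = {
    ("A: run training sessions", "B: people attend"): True,
    ("B: people attend", "C: people change behaviour"): False,
}

for driver, driven in hypothesis_chain:
    if link_holds[(driver, driven)]:
        # Single-loop learning: the link holds, so do more of the driver.
        print(f"{driver!r} does drive {driven!r} -> do more of it")
    else:
        # Double-loop learning: the link is broken, so revise the hypothesis itself.
        print(f"{driver!r} does NOT drive {driven!r} -> rethink this part of the map")
```

In practice the link results would come from your input, process, output and outcome indicators; the structure is the point – every link in the hypothesis gets its own test, so you know which kind of learning each result calls for.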

The map is not the territory, as anyone who blindly obeys their satnav will tell you. If your map is broadly right, you can drive according to the map. If your map is wrong, you need a new map. And you need a measurement system which can help you tell whether your map is good enough.

The measurement system, then, is a way of delivering organisational learning, the vital ability to adapt as things change.

3. expect targets to be hit regardless of circumstance and punish individuals for missing goals
You are under pressure from your boss, and s/he from their boss, to hit that target. Five years ago you promised that you would help X million people do Y – and it looks like you’re going to be short on Y. So, you put pressure on the people who are supposed to deliver.

A couple of things can happen at this point. Even if you do hit Y – hurray! – there could well be a catch. Perhaps you’ve made a short-term gain but undermined the following years. Perhaps people have lied, or gamed the indicator.

Maybe the pressure you apply makes people think the effort is not worth the candle – “it was always a stretch, I know what I’m being asked to do won’t work, this sustainability stuff gets in the way of my real job anyway, I’ll just ignore that nagging person”.

Third blunder: using the target as a stick and then using the data to punish under-performance. Metrics can set up many unintended consequences and perverse incentives. If you put delivering the target above everything else, you risk people lying and/or disengaging. Just as importantly, you miss the chance to use the data as insight into why you are short: is it really about more effort, or could there be some other reason?

The remedy? I think it has to be about a culture which values learning, where people are accountable for performance, including whether they are improving the hypothesis as they go. Interestingly, it turns out that this quality of learning is particularly important when you are doing something new: you need to reward people when they get better at forecasting what happens next (for more, see the HBR classic ‘Building breakthrough businesses within established organisations’ (£)).
