Measuring the Decision

Measuring the impact of a decision is hard, but critical if we want to become a learning organisation.

Easy to say, but hard to crack

Now comes the crucial feedback loop we prize if we are to make progress: Build/Measure/Learn, then rinse and repeat.

Of course, this is easy to write but the hardest thing to really crack. At EE we still haven't cracked it well enough to get real data on the majority of our APs. So what follows states where we are aiming, rather than what we have accomplished!

In our Advice Process template (see the AP Housekeeping section), we have an area titled 'success metrics'. Its purpose is threefold:

  1. Improve the quality of the thinking that goes into making the decision.

  2. Provide a starting position, or anchor, for the build / measure / learn loop.

  3. Help to de-bias future decisions based on the result of this one.

Measuring impact

A significant part of the individual and organisational learning we get from a decision comes from quantifying its impact.

Having good metrics is important, but they are hard to define. This article on good metrics provides useful guidance, and we are currently experimenting with a revised format for capturing success metrics using a hypothesis-like structure. Please use the following format to define your success metrics (a worked example is here, and an illustrative sketch follows the list):

  • State the hypothesis: Start with a clear hypothesis of the form: IF we do x THEN we will achieve y and will know this BECAUSE of z.

  • Define the timeframe: Set a timeframe to contain the experiment (this reduces risk and investment and allows for a comparison), e.g. 4 weeks.

  • Describe the result: Describe the intended outcome, including an explanation of any terms used.

  • Define the measurement: Describe a measure of the desired direct result* (and how you'll obtain the measurement) in one of two ways, depending on the nature of the AP: a) a comparative numerical measure of the intended direct result (a quantity or ratio of something quantitative, stated in terms of its current and desired state); or b) a quantitative measure of a qualitative test (measured through surveys or interviews), including an initial baseline where the AP relates to a change.

  • Link to a business objective: Describe how the direct result will contribute to EE's Global or a BU's objectives (see your BUL or Exec to find out more).

*Direct result versus non-direct result: in working through example APs we noticed that some success metrics related to future or subsequent outcomes that may depend on the AP decision but are not a direct result of it. We found this confused what was being measured and made it harder to know whether the AP had been successful. For example, we might measure sales leads (a direct result) rather than actual sales (a subsequent outcome).
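
For illustration, here is a hypothetical sketch of the format in use (the scenario, figures, and objective below are invented for this example, not drawn from a real AP):

  • State the hypothesis: IF we publish a fortnightly internal newsletter THEN engagement with company updates will increase and we will know this BECAUSE open rates and survey scores will improve.

  • Define the timeframe: 8 weeks (four issues), giving a before/after comparison.

  • Describe the result: More people read and act on company updates, where 'engagement' means opening the newsletter and clicking at least one link.

  • Define the measurement: a) the open rate, taken from the mailing tool, stated in terms of its current baseline and a desired target; and b) a short pulse survey on how well informed people feel, baselined before the first issue.

  • Link to a business objective: State which Global or BU objective the improved engagement supports (ask your BUL or Exec if unsure).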

Hints & Tips

Beware 'resulting': correlation does not imply causation, so don't create an overly tight relationship between results and decision quality. Just because you had a good (or bad) outcome, it doesn't follow that it was a good (or bad) decision; it could just be down to dumb luck (see that earlier reference to poker).

Break down the decision into a series of small steps: then measure as you go. At decision time you're often information-poor, but more often than not you can break a big decision into a series of smaller ones. If you gain new information along the way and end up abandoning or pivoting as a result, congratulations: you've just made a good decision.
