
Unlocking Success Metrics: Measuring Digital Value with OKRs

Dom Graveson

In today's challenging economic climate, taking a serious approach to measuring value can be a powerful launchpad for bringing your organisation with you on the journey of digital transformation and evolution.

This is the fourth article in the five-part series 'A Value-First Approach', written by Netcel's Director of Strategy & Experience, Dom Graveson. In this series, Dom explores the foundations of taking a value-first approach to digital strategy and implementation, and provides practical advice on building your own measurement framework and sustained support for your digital journey.

The different ways we can measure Digital Value:

You can break down the measurement of the value of your investment in digital experience, and in the enabling technologies in Optimizely and other data and systems integrations, across the following components:

The value of digital experience.

This is typically achieved through a goal-tree-based attribution of value that translates big-picture Objectives into directly measurable Key Results (OKRs). These can span journeys that are both direct and indirect (for example, ecommerce online purchasing is usually more directly attributable than marketing awareness). In this way, we can link behaviour measured through analytics directly to investment in features of the product and experience – the core of effective product ownership.

A simplified goal tree might look something like:

Outcome/ambition: Revenue growth (increased income)

  • Key Result: Increase in conversion rates
  • Key Result: Higher average customer value
  • Key Result: Higher average order value
  • More users spending money
  • More money spent per user
  • Effective up-sell and cross-sell

Outcome/ambition: Customer loyalty (increased income)

  • Key Result: Frequency of visitors returning more than twice in a month
  • Key Result: Dwell time on key pages, subscription to topic-based content feed
  • Returning visitors
  • Engagement on the site


Outcome/ambition: Reduced cost to serve customers (reduced costs)

  • Key Result: Fewer low-value calls such as password resets
  • Key Result: Higher numbers of call-based up-sells and cross-sells of higher-value products and services (revenue growth)
  • Reduced call-centre demand through self-service
  • Redeployment value of your teams doing higher-value work

It should be said that each of these KRs should have a measurable, aspirational target. For example, 'Effective up-sell and cross-sell' would be better worded, and made specific to your organisation, as 'attributed cross-sells and up-sells increase by over 50%'.
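To make this concrete, the sketch below shows one way a branch of a goal tree could be represented in code, with each Key Result carrying a baseline, a measurable target, and its latest measurement. It is a minimal illustration only: the class names, fields, and figures are assumptions made for the example, not part of Optimizely or any particular analytics platform.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KeyResult:
    name: str          # e.g. "Attributed cross-sell and up-sell uplift (%)"
    baseline: float    # value measured before the initiative started
    target: float      # the aspirational, measurable target
    current: float     # the latest measurement from analytics

    def progress(self) -> float:
        """Fraction of the way from baseline to target, clamped to 0..1."""
        if self.target == self.baseline:
            return 1.0
        return max(0.0, min(1.0, (self.current - self.baseline) / (self.target - self.baseline)))

@dataclass
class Objective:
    ambition: str      # big-picture outcome, e.g. "Revenue growth"
    key_results: List[KeyResult] = field(default_factory=list)

    def progress(self) -> float:
        """Average progress across the Key Results on this branch."""
        if not self.key_results:
            return 0.0
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

# One branch of the goal tree above; all figures are illustrative only.
revenue_growth = Objective(
    ambition="Revenue growth",
    key_results=[
        KeyResult("Conversion rate (%)", baseline=2.0, target=3.0, current=2.4),
        KeyResult("Average order value (GBP)", baseline=60.0, target=75.0, current=66.0),
        KeyResult("Attributed cross-sell and up-sell uplift (%)", baseline=0.0, target=50.0, current=12.0),
    ],
)

print(f"{revenue_growth.ambition}: {revenue_growth.progress():.0%} of the way to target")
```

Keeping targets alongside baselines in this way makes it straightforward to report progress per Objective and to spot the Key Results that are falling behind the rest of their branch.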

Delivering great experiences drives great business and audience outcomes when feature decisions are focused on driving the right desired behaviours. These behaviours are unique to your relationship with your audience: from browsing products, to feeling inspired to donate or volunteer, to purchasing add-ons and upgrades, to taking a next step and making an enquiry. Effective measurement is a direct result of having a robust sense of where the value is for your digital experience, rather than relying on generic measures that mean very little when it comes to informing your prioritisation of features. The real insight is in the quality of your traffic (who are they? What do they want? What are they struggling with?); quantity, in many cases, can be a false positive.

You can’t always measure the behavioural outcomes of experience improvements immediately. It takes time. A useful approach is to develop a ‘leading’ and a ‘lagging’ metric for each branch of your goal tree. A leading metric is one that suggests you are heading in the right direction – something you can measure that you hypothesise will lead to the desired outcome. A lagging metric is one that proves the outcome has been achieved. Leading metrics build support and show early traction; lagging metrics provide proof.

For example, if your outcome is to lead a healthier lifestyle, it will take time to prove that has happened. So you might change some behaviours – quit drinking, eat more healthily. These in themselves don’t prove you have a healthier lifestyle, but you can hypothesise that they will contribute to that outcome – measuring these behaviours would be examples of leading metrics. At a later date, however, measuring your lung capacity, or your ability to run a 5k in a particular time, indicates that your objective of a healthier lifestyle has been achieved – this is a lagging metric.

Leading and lagging metrics are important in getting a balanced view of your digital performance, and they give earlier indications than a traditional outcome-only approach, which can take months to deliver useful results. Combine the two to ensure you are building stakeholder support as you proceed and enabling evidence-based experimentation, while reaching for the big-ticket objectives of your organisation.
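As a simple illustration of pairing the two, the sketch below records a leading and a lagging metric for each branch of a goal tree. The branch names and metrics are illustrative assumptions drawn from the examples above; substitute the measures that matter to your own organisation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MetricPair:
    """A leading/lagging metric pair for one branch of the goal tree."""
    branch: str    # the Objective or Key Result this pair supports
    leading: str   # early signal you hypothesise will drive the outcome
    lagging: str   # later measurement that proves the outcome was achieved

# Illustrative pairs only; the right metrics depend on your own goal tree.
metric_pairs: List[MetricPair] = [
    MetricPair(
        branch="Customer loyalty",
        leading="Subscriptions to topic-based content feeds this month",
        lagging="Visitors returning more than twice a month, measured over a quarter",
    ),
    MetricPair(
        branch="Reduced cost to serve",
        leading="Completions of the self-service password-reset journey",
        lagging="Reduction in low-value call-centre contacts, quarter on quarter",
    ),
]

for pair in metric_pairs:
    print(f"{pair.branch}\n  leading: {pair.leading}\n  lagging: {pair.lagging}")
```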

If you want to explore how to measure the impact and value of your digital initiatives - get in touch, we'd love to hear from you.

Read the previous articles in this series here: 


To receive the rest of the series direct to your inbox, join our Community here


Digital success is a journey – find out how we can help you on yours.

Get in touch