
Design and uncertainty

From 3 Horizons and the Hypothesis Prioritisation Canvas to my own triage

There have been a few blog posts recently about how discovery can include design. This coincided with some work I was doing to explain why some design work could go to development even though it hadn't been tested, while other work might not even be appropriate to usability test yet.

Talking about uncertainty

A few people have been writing about uncertainty and delivery, such as Kuba Bartwicki and Steve Messer (and yes, this gets a bit circular, as he mentions me).

There are a few models that I've been using over the last few years to talk about how some things are more uncertain than others.

Three Horizons Model (McKinsey)

This is a model that I've known about since university: I've always liked the McKinsey 3 Horizons model for reminding businesses that they need to plan both for the near future and for what may be happening further ahead, and that the further-out horizons could head off more immediate ideas.

Three Horizons Model: Horizon 1 (core business, now to 1-3 years), Horizon 2 (emerging business), and Horizon 3 (transformation, 5-10 years), with potential disruption at various points.
Example of 3 horizons (source: powerslides.com)

Wardley Maps (Simon Wardley)

Businessman Simon Wardley created Wardley Maps as a way to help businesses answer buy-versus-build and other efficiency-versus-innovation questions.

I particularly like the models he's put out to help teams understand when to use Agile, Lean, and Six Sigma.

Pattern: no one size fits all. Left, at genesis: Agile, built in-house, with a focus on reducing the cost of change. Middle, at product: Lean, off-the-shelf product, with a focus on learning and reducing waste. Right, at commodity: Six Sigma, outsourced, with a focus on reducing deviation.
Model from Wardley about climatic patterns for business (source: Simon Wardley)

Wardley has other metaphors, such as pioneers, settlers and town planners, which are helpful when considering change.

Cynefin (Dave Snowden)

Cynefin (pronounced 'kin-ev-in'; it's Welsh for 'habitat') is a problem-framing model created by former IBM management consultant Dave Snowden.

Cynefin model: confusion at the centre, with the unordered domains on the left and the ordered domains on the right. Bottom left, Chaotic: lacking constraint, decoupled, act-sense-respond, novel practice. Top left, Complex: enabling constraints, loosely coupled, probe-sense-respond, emergent practice. Top right, Complicated: governing constraints, tightly coupled, sense-analyse-respond, good practice. Bottom right, Clear: tightly constrained, no degrees of freedom, sense-categorise-respond, best practice.
Dave Snowden's Cynefin

It's a bit of a weird one to use in a lot of circumstances, but what I find useful is the reminder that sometimes it's better to do something rather than wait to do analysis.

Say, Do and Make Tools (Liz Sanders and Pieter Jan Stappers)

This is a model that doesn't get anywhere near as much attention as I think it should, possibly because it's in a (gasp!) printed book from the 2010s: The Convivial Toolbox. Liz Sanders and Pieter Jan Stappers, drawing on work from the Dutch university TU Delft, use this model to explain why we sometimes need to let people make things just to learn from them.

This is a model that can be reflected in design thinking—though I think that people sometimes miss that design thinking is about learning rather than solutions.

Three pyramids on an axis from surface to deep. What people: say and think; do and use; and, at the bottom, know, feel and dream. Methods: interviews; observations; and, at the bottom, generative design. Knowledge: explicit; observable; and, at the bottom, tacit and latent.
Say, Do and Make Tools

Hypothesis Prioritisation Canvas (Jeff Gothelf)

Looking more to the startup world, I enjoyed a lot of Lean UX as popularised by Jeff Gothelf and Josh Seiden, but the tool I've used the most with designers is Gothelf's Hypothesis Prioritisation Canvas.

Hypothesis Prioritisation Canvas: axes are value and risk. High value/low risk: ship and measure. High value/high risk: test. Low value/low risk: don't test, usually don't build. Low value/high risk: discard.
Hypothesis Prioritisation Framework (source: Jeff Gothelf)

More than anything, it's a way to remind people that not everything has to be tested, though I do find it hard to get much use out of the bottom half of the grid.
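
For what it's worth, the canvas boils down to a simple value/risk lookup. Here's a minimal sketch of the quadrants as described above (my own illustration, not Gothelf's tooling):

```python
def hypothesis_action(high_value: bool, high_risk: bool) -> str:
    """Map a hypothesis onto the quadrants of the canvas described above."""
    if high_value and high_risk:
        return "test"                            # worth learning about before building
    if high_value and not high_risk:
        return "ship and measure"                # build it and watch the data
    if not high_value and high_risk:
        return "discard"                         # not worth the exposure
    return "don't test, usually don't build"     # low stakes either way
```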

There are also models for scoring ideas such as ICE (Impact, Confidence, Effort) and RICE (Reach, Impact, Confidence, Effort), though I feel these are more about deciding how much to invest in bets than about the type of work to be done.
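
For reference, the usual RICE calculation multiplies reach, impact and confidence and divides by effort. A minimal sketch, with made-up numbers:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE score: higher numbers suggest better bets to invest in."""
    return (reach * impact * confidence) / effort

# e.g. reach of 500 users per quarter, impact 2, confidence 0.8, effort 3 person-months
print(rice_score(500, 2, 0.8, 3))  # roughly 266.7
```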

My contribution to the model—design triage

While all of the above models have informed my work, when it comes to helping public sector designers decide when they test and when they ship, I've found a few things missing:

  • some more heuristics to understand when to ship in a world where getting it wrong could mean real harm
  • means to help people understand that it's necessary to do testing on concepts, even just to learn about them (the closest to this is Sanders and Stappers' Say, Do and Make Tools, but it's a bit too 'design lab' for everyday use)
  • general questions to ask to make decisions on what to do

So this is my model as of January 2024.

Model with questions: Is the work validated with users (and can you say this with confidence)? Could a wrong design cause user harm (incorrect data entry, access issues, mistrust)? Are there information gaps about users or technology that affect the design? Answers: ship and measure (high certainty), usability testing (medium certainty), and discovery research and concept testing (low certainty).

It gives people some questions and an order to ask them in, and it reminds them that it's actually cool to test ideas, with the caveat that they really are just ideas and that the testing is as much about generating more ideas in response as anything else.
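
As a rough sketch, the triage can be read as a simple decision function. The mapping from answers to outcomes below is my own assumption about how the questions flow, not something spelled out in the diagram:

```python
def design_triage(validated_with_confidence: bool,
                  could_cause_harm: bool,
                  information_gaps: bool) -> str:
    """A hedged sketch of the triage questions; the exact ordering is assumed."""
    if information_gaps:
        # Big unknowns about users or technology: go back and learn first.
        return "discovery research and concept testing (low certainty)"
    if validated_with_confidence and not could_cause_harm:
        # Evidence in hand and low risk of harm: ship it and measure.
        return "ship and measure (high certainty)"
    # A design exists, but there are open questions or a risk of harm.
    return "usability testing (medium certainty)"
```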

There's a Google Drawing of my model available.