Measure the progress of your design thinking project
Having coached many client project teams in adopting design thinking, I’ve developed leading (quantitative) indicators to measure the progress of each design thinking project. At a glance, they give me a high-level “sense” of how each project is progressing, and they also provide data for both intervention and reporting.
Below is the list of metrics I’ve used, categorised by the Stanford d.school process (a minimal tracking sketch follows the list).
Empathise
Observation
# users observed
Duration of observation sessions
Fieldwork locations of observation sessions
Interview
# empathy probes designed & utilised
# users interviewed
Duration of interview sessions
Fieldwork locations of interview sessions
Immersion
# empathy immersion experiences designed & conducted
Duration of empathy immersion experiences
Define
# observation sessions unpacked
# interview sessions unpacked
# empathy immersion experiences unpacked
# insights - observation
# insights - interview
# insights - immersion
# Points of View crafted
Ideate
# How Might We questions crafted and utilised
# ideas (quantity)
# idea clusters (diversity)
Prototype
# prototypes built
Time utilised to develop prototypes
Test
# test users
Duration of test sessions
Fieldwork locations of test sessions
# test sessions unpacked
# insights - test
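For teams that want a systematic way to capture these counts and durations, here is a minimal tracking sketch in Python. The `PhaseMetrics` class and its field names are my own illustration of how such a record could look, not a prescribed tool or part of the d.school process.

```python
from dataclasses import dataclass, field


@dataclass
class PhaseMetrics:
    """Leading-indicator record for one step of the d.school process.

    The fields mirror the metric list above; all names are illustrative.
    """
    phase: str                                            # e.g. "Empathise", "Ideate"
    counts: dict = field(default_factory=dict)            # e.g. {"users_interviewed": 8}
    durations_hours: dict = field(default_factory=dict)   # e.g. {"interviews": 6.5}
    locations: list = field(default_factory=list)         # fieldwork locations visited


# Example: the Ideate numbers reported by the team discussed below
ideate = PhaseMetrics(
    phase="Ideate",
    counts={"hmw_questions": 2, "ideas": 10, "idea_clusters": 3},
)
print(ideate)
```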
In a recent client engagement, one project team did not see the value of generating a large quantity of diverse ideas in an early iteration cycle. The team reported the following:
2 HMW questions crafted and utilised
10 ideas (quantity)
3 idea clusters (diversity)
These numbers were a red flag worth investigating further. When I checked in with the team, they shared that they had allocated only 15 minutes for both divergence and convergence. With such a short duration, they did not manage to push past obvious ideas and explore options, nor were they able to converge with a clear intention. I therefore recommended that the project team allocate 30 to 45 minutes as a “guide” for the Ideate step as they progressed in their project.
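To make the red flag concrete, here is a simple threshold check. The minimums (`min_ideas`, `min_clusters`, and the ten-ideas-per-HMW ratio) are assumptions I’ve chosen for illustration; calibrate them against your own projects rather than treating them as fixed targets.

```python
def ideation_red_flags(hmw_questions: int, ideas: int, idea_clusters: int,
                       min_ideas: int = 30, min_clusters: int = 5) -> list:
    """Return warnings when ideation metrics look under-diverged.

    All thresholds are illustrative defaults, not fixed targets.
    """
    flags = []
    if ideas < min_ideas:
        flags.append(f"Only {ideas} ideas; expected at least {min_ideas}.")
    if idea_clusters < min_clusters:
        flags.append(f"Only {idea_clusters} idea clusters; diversity looks low.")
    if hmw_questions and ideas / hmw_questions < 10:
        flags.append("Fewer than 10 ideas per HMW question; divergence may be rushed.")
    return flags


# The numbers this team reported trigger all three warnings
print(ideation_red_flags(hmw_questions=2, ideas=10, idea_clusters=3))
```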
With another client, the project team built 3 prototypes and tested them. For each of the prototypes tested, they reported the following:
1 test user per prototype
1 test session unpacked per prototype
2 or 3 test insights per prototype
The team was keen to move forward with iteration based on the outcome of testing each prototype with only 1 test user. Their rationale was that they had gained 8 test insights across the 3 prototypes, which they deemed sufficient to inform their next steps.
Based on the numbers reported, their test sessions seem to have been of “good quality”, each yielding 2 to 3 test insights. However, testing a prototype with only 1 test user (even in early design cycles) might not be sufficient for making an informed decision on next steps for iteration. This also raised a red flag, as the project team argued that the output (i.e. the number of test insights) was more important than the time and effort invested (i.e. the input); some even claimed a high ROI!
Although I do not dispute the quality of the test sessions, I am hesitant to agree that there were sufficient data points to make an informed decision on next steps. The project team nevertheless decided to iterate based on their test insights. This led to mixed findings, and the team struggled to make sense of them and take a stand when they were required to propose a final solution to their project sponsor.
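A companion check on test coverage could have flagged this situation before the team committed to an iteration. The five-user minimum below follows a common usability-testing rule of thumb and is my assumption for illustration, not a number from this engagement.

```python
def test_coverage_red_flags(test_users_per_prototype: dict,
                            min_users: int = 5) -> list:
    """Flag prototypes tested with too few users to support an iteration decision.

    min_users=5 reflects a common usability-testing rule of thumb; adjust to taste.
    """
    return [
        f"Prototype '{name}' was tested with {users} user(s); "
        f"aim for at least {min_users} before deciding next steps."
        for name, users in test_users_per_prototype.items()
        if users < min_users
    ]


# The second team's situation: one test user per prototype
print(test_coverage_red_flags({"A": 1, "B": 1, "C": 1}))
```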
In real-world projects, it is the output that gets reported and receives the focus, rather than the input. However, keep in mind that a project’s output is a function of its input!
So the next time you measure the progress of your design thinking project, also keep an eye on the input metrics (a.k.a. leading indicators), which are just as important as your output metrics. As the saying goes, “garbage in, garbage out”.