
A Strategic Approach to Metrics for User Experience Designers

Carl W. Turner

Journal of Usability Studies, Volume 6, Issue 2, February 2011, pp. 52 - 59

The Trouble With Metrics

In March 2009, Google head designer Doug Bowman resigned his post because he was forced to justify every design decision with metrics. A plaintive blog posting explained his reasons for quitting (Bowman, 2009). Metrics are not a substitute for expert judgment in design and aesthetics, and the Google episode is a case of applying measurement to the wrong thing. People make decisions; metrics are only data. Business decisions, like design decisions, rest not only on data but also on instinct, craft, emotion, and a compelling story.


Sidebar: A Bad Metric That Call Center Managers Love

An example of a bad metric is call containment rate for self-service interactive voice response (IVR) telephony systems (Leppik, 2006). Containment is measured as the number of calls that end in the IVR divided by the total of calls ending in the IVR plus calls routed to a live agent. Focusing on containing calls “in the IVR” pushes managers to make bad decisions about IVR design, e.g., disabling the zero key or otherwise making it difficult for callers to reach a live agent. This often happens over the protests of designers who know that people hang up for a variety of reasons, including getting lost or stuck in the IVR.
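As a sketch of why containment misleads, consider the hypothetical call counts below (illustrative only, not from the article). Every call that ends in the IVR raises the containment rate, whether the caller completed a task or simply gave up and hung up:

    # Hypothetical monthly call counts -- illustrative only
    calls_ended_in_ivr = 7000      # includes successful self-service AND hang-ups
    calls_routed_to_agent = 3000

    containment_rate = calls_ended_in_ivr / (calls_ended_in_ivr + calls_routed_to_agent)
    print(f"Containment rate: {containment_rate:.0%}")  # 70%

    # A caller who gets stuck and hangs up still counts as "contained,"
    # so the metric rewards designs that trap callers in the IVR.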

More appropriate metrics would include cost per call answered (both live and in automation) and customer satisfaction levels for each channel. The designers and the business can then discuss which calls should be automated and which should be handled by agents, and how each service can be designed to meet the objectives behind each metric.
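A rough sketch of the per-channel comparison, again with hypothetical figures, might look like the following. The point is simply that cost and satisfaction are tracked per channel, rather than rewarding containment for its own sake:

    # Hypothetical monthly figures -- illustrative only
    ivr_cost = 50000.0           # platform, licensing, maintenance
    agent_cost = 400000.0        # salaries, telephony, overhead

    ivr_calls_answered = 70000
    agent_calls_answered = 30000

    print(f"Cost per call in automation: ${ivr_cost / ivr_calls_answered:.2f}")      # $0.71
    print(f"Cost per agent-handled call: ${agent_cost / agent_calls_answered:.2f}")  # $13.33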

Sidebar: Metrics Are Very Political

There’s no getting around it. Putting a metric on the company’s scorecard or other top-level list of metrics is political, and so requires a good deal of support and savvy. If you don’t have the political clout to put a metric on the scorecard, make sure your own unit’s metric (a) is closely aligned in support of a valid scorecard metric and (b) pushes you to do things you really want to do. So, if a scorecard metric is “increase customer satisfaction scores for all web applications by 20%,” then make sure to have a unit metric in place to “practice full user-centered design on all significant customer-facing applications.”

