
Resources: UPA 2004 Idea Markets

How does corporate culture affect the reporting of usability results?

Activator: Lori Anschuetz
Senior Usability Specialist
Tec-Ed, Inc.


Executive Summary

This Idea Market discussion quickly shifted from how corporate culture affects usability reporting to how usability groups are responding to the information needs of their internal audiences. MBA-trained executives require numbers to drive business decisions, while design teams want qualitative information to provoke ideas and inspiration. After analyzing their audiences, the attendees are devising programs or employing new methodologies to provide the appropriate usability data.
For many of the attendees, corporate culture is positively affecting the types of usability data they collect and thus the character of results they can report. More important, as they gain the eyes and ears of their executives, they are poised to affect the corporate culture.

Origin of the Question

As a usability consultant, it’s my responsibility to report research results in a way that communicates effectively with the client. Some clients don’t care about how the report is organized, written, or visually presented as long as it addresses the agreed-on research goals. Other clients have strong ideas—as well as model spreadsheets and document templates—about how a report should communicate results.
This Idea Market topic grew out of my experience conducting small-sample usability tests for established usability groups whose reporting requirements are at opposite ends of the spectrum. At one end, the clients concentrate on quantitative results and count everything. Descriptions of problems include the number of participants who had the problem and the percentage of total participants they represent.
At the other end of the spectrum, the clients emphasize the qualitative nature of the research. The focus is on challenging elements in the user interface. If the researcher observes participants having difficulty with the interface, the difficulty should be reported as a problem—the number of participants experiencing the problem doesn’t matter.
I wondered to what extent such diverse reporting requirements might reflect the corporate culture, the usability group’s own philosophy, or something else.

Pressure for Quantitative Data

Pressure from executives

This Idea Market topic drew mainly people who work for larger organizations with established usability groups. To begin the conversation, they shared concerns about management pressure to quantify findings from usability studies with small numbers of participants. They acknowledged that their executives are accustomed to basing business decisions on numbers, but they worried that the executives are asking for something without really understanding how to interpret it.
For example, the attendees noted that they typically conduct exploratory usability testing with 5 to 8 people, yet results from samples of 5 to 20 people are not statistically significant. They know that when they report 5 of 8 participants had a particular problem, they cannot extrapolate that 62 percent of all users will have it. One man remarked that for systems with a high volume of transactions, a 1 or 2 percent change in metrics can be significant; however, when he runs a study with 5 participants, he can “measure” change only in 20 percent increments.
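
A quick way to see why small-sample percentages don’t extrapolate is to put a confidence interval around them. The sketch below is a minimal illustration in Python, not something presented at the session; the wilson_interval function and the 95 percent level are assumptions chosen for the example.

    import math

    def wilson_interval(successes, n, z=1.96):
        """Wilson score interval for a proportion at roughly 95% confidence (z = 1.96)."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return centre - margin, centre + margin

    # 5 of 8 participants had the problem: the point estimate is 62.5 percent,
    # but the plausible range for the wider user population is far broader.
    low, high = wilson_interval(5, 8)
    print(f"5/8 -> {5/8:.0%} observed, 95% interval roughly {low:.0%} to {high:.0%}")

For 5 of 8 participants, the interval spans roughly 31 to 86 percent, which illustrates how little a single small-sample percentage pins down.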
An exception to this pressure for providing quantitative data came from employees of a not-for-profit media organization. They pointed out that their executives aren’t MBAs, but rather creatives who care about whether the user is being entertained or having fun. One of them had just heard Steve Denning’s presentation on “Narrative: The Key to Connecting Communities,” and said she thought a storytelling approach might be effective for her executives.

Pressure from practitioners

The attendees themselves appreciated usability reports that include participant counts and percentages, which build their confidence in the reported results. One woman related her experience with a vendor hired by her company to redesign its website. Because the vendor was evaluating the usability of its own designs, she was asked to review the vendor’s usability reports.
The reports described problems as experienced by “most,” “many,” or “all” participants, with no counts or percentages to clarify this language. The attendee wanted to be sure the results weren’t being skewed in favor of participant satisfaction with the vendor’s design, so she asked for basic statistics to help her understand the report. The vendor’s reply: “Metrics are not part of our methodology.”

Approaches to quantitative data

Six Sigma/benchmarking

Some attendees’ companies have developed, or are developing, usability programs to collect appropriate quantitative data during the product lifecycle. One woman explained how her company uses Six Sigma techniques and benchmarking to “know where we are at and whether we’re going up or down.” The benchmarking usability studies typically involve 50 participants and generate inferential statistics on ease of use, success rates, errors, and so on. After benchmarking a product, the focus shifts to small-sample usability studies to explore and confirm design solutions.
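
To make the benchmarking idea concrete, here is a minimal sketch of the kind of inferential comparison a 50-participant benchmark supports: comparing task success rates between two benchmark rounds with a two-proportion z-test. The function name and the success counts are hypothetical, not figures from the discussion.

    import math

    def two_proportion_z(success_a, n_a, success_b, n_b):
        """Two-proportion z-test: did the task success rate change between benchmark rounds?"""
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value from the standard normal distribution
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical benchmark rounds with 50 participants each
    z, p = two_proportion_z(34, 50, 43, 50)
    print(f"success 68% -> 86%: z = {z:.2f}, p = {p:.3f}")

With 50 participants per round, a difference of this size is detectable; with 5 to 8 participants it would not be, which is why benchmarking and small-sample exploration play separate roles.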

Usability magnitude estimation

One man described how his usability group responded to a recent executive request to build a library of usability scores for the company’s products. The executives wanted one “standard, rolled-up usability score” per product, but didn’t care how the usability group arrived at it.

The group investigated usability magnitude estimation (UME), a new method for measuring usability developed by Mick McGee and described in his HFES 2003 and CHI 2004 papers. As implemented by the attendee’s group, after study participants perform a task with a product, they are asked to rate the usability of the product according to an objective definition of usability. Participants devise their own rating scale with the understanding that if the product usability for a subsequent task is twice (or half) as good, their rating must be twice as high (or half as much). At the end of a study, the participants’ ratings for all tasks are standardized and averaged to create a single usability score.
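
McGee’s papers define the exact UME scoring procedure. As an illustration only, one common way to standardize free-modulus magnitude estimates is geometric-mean normalization, sketched below in Python; the function name and sample ratings are assumptions, and this is not necessarily the standardization the attendee’s group used.

    import math

    def standardize_ume(ratings_by_participant):
        """Geometric-mean normalization of free-modulus magnitude estimates.

        Each participant's ratings are divided by that participant's own geometric
        mean, putting everyone on a comparable ratio scale; the standardized
        per-task values are then averaged into task scores and one overall score.
        """
        standardized = []
        for ratings in ratings_by_participant:   # one list of task ratings per participant
            geo_mean = math.exp(sum(math.log(r) for r in ratings) / len(ratings))
            standardized.append([r / geo_mean for r in ratings])
        n_tasks = len(standardized[0])
        task_scores = [sum(p[t] for p in standardized) / len(standardized)
                       for t in range(n_tasks)]
        return task_scores, sum(task_scores) / n_tasks

    # Hypothetical ratings from three participants on three tasks, each using their own scale
    task_scores, overall = standardize_ume([[10, 20, 5], [100, 150, 40], [3, 6, 2]])
    print(task_scores, round(overall, 2))

Because each participant is normalized against their own scale, the ratio relationships they expressed are preserved while the arbitrary choice of starting number drops out.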

The usability group created a pilot library of UME scores by conducting usability studies with 10 participants for each of the company’s products. The attendee reported that the participant-assigned ratings correlated with the ease of use and success rates observed during the study sessions. He was excited about UME’s potential to support usability comparisons across products, comparisons of the same product over time, and, at a lower level of detail, comparisons of tasks common to multiple products.

Decline of Likert-scale questions

Likert-scale questions have commonly been used to collect quantitative data after tasks and/or at the end of a usability session. However, participants’ answers to these questions are often at odds with their behavior.

The attendees said they do not use Likert-scale questions much, preferring to ask open-ended questions during post-task and post-test debriefings. They suggested the best use of Likert-scale questions might be for determining rank order or making comparisons over time. For example, some companies use Likert-scale questions to have participants rate the ease of every task and satisfaction with time on task in all usability studies.
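
As a small illustration of the comparison-over-time use the attendees described, the sketch below tracks median post-task ease ratings across two rounds of a study and ranks tasks by difficulty. The task names, ratings, and seven-point scale are made up for the example.

    from statistics import median

    # Hypothetical post-task ease ratings (1 = very difficult, 7 = very easy)
    # from two rounds of the same study, keyed by task name.
    round_1 = {"search": [5, 6, 4, 5, 6], "checkout": [2, 3, 2, 4, 3]}
    round_2 = {"search": [6, 6, 5, 7, 6], "checkout": [4, 5, 3, 5, 4]}

    for task in round_1:
        print(f"{task}: median ease {median(round_1[task])} -> {median(round_2[task])}")

    # Rank tasks from hardest to easiest within the latest round
    print("hardest to easiest:", sorted(round_2, key=lambda t: median(round_2[t])))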

Tips for reporting qualitative usability studies

The attendees agreed on the importance of small-sample usability studies, whether conducted independently or as part of a larger usability program. Even with four or five participants, behavioral patterns emerge and participants’ comments converge.

The attendees shared these tried-and-true approaches to reporting results from qualitative usability studies:

  • Make an oral presentation of results to your stakeholders. Be prepared to think on your feet and tailor your message to your audience in real time.
  • Consider taking a “good news, bad news” approach to reporting results. Don’t forget to hail the good news, the features and functions that enhance the user experience. Express bad news as “opportunities,” and offer ways to take advantage of them.
  • Begin your report with an executive summary that essentially says, “Here’s what I think you should do. The end.” The rest of the report becomes supporting documentation.
  • Don’t include counts, percentages, averages, and other numbers in the report unless your audience requests them. The credibility of the usability practitioner may eliminate the need to report such numbers.
  • If you must report basic statistics, always include the confidence level and value range.