
Resources: UPA 2004 Idea Markets

What Big Win Did You Have Last Year?

Activator: Scott A. Butler, Progressive Insurance

Activator Questions

  • What surprised you about your role?
  • What specific activities worked?
  • Will you be spending less time on certain activities?

Two questions that were added as a result of reviewer input follow. In general, participants were not "activated" by these questions, and no one provided answers for them.

  • How did you contribute to your company's bottom line?
  • Do you think your experiences can be generalized to the industry/world?

Summary of Results

Table 1: Frequency of activity types in participant narrative.

Activity Type       Count
Usability Test      9
UI Design           5
Field Inquiry       2
Follow-up           1
Design Guidelines   1
Table 1 above presents a frequency distribution of the types of activities that participants mentioned when recounting their big wins from last year. Overall, I was disappointed with the small number of people who volunteered a "big win." I worry that my topic was boring in comparison to the other engaging topics in the room. I worry even more that many practitioners do not have a "big win" to talk about, whether through a lack of retrospection or a lack of actual success on the job. In any case, four main points can be derived from the participants' data:

1. Usability in the small was predominant: In reviewing results, the first thing that struck me was the degree to which usability testing dominated practitioner narrative. Given Ginny Redish's keynote address that highlighted the distinction between usability-in-the-small and usability-in-the-large, it seems that most of our colleagues are conducting usability in the small, i.e., usability testing. Why is this?

  1. Is this because usability testing is easy to sell to management? 
  2. Is this because most practitioners have a skill set that is limited to usability testing?
  3. Are there organizational factors that limit our contributions to a product development effort to providing only usability testing services?
  4. Is it some combination of the above?

2. We are becoming usability-designer hybrids: Usability testing was frequently paired with UI design tasks, which probably means that, as a profession, we are beginning to drift toward other professions that are usability-designer hybrids (e.g., Technical Writers, Web Experience Developers, and Information Architects).

3. Design guidelines are losing favor: As a practitioner who uses guidelines, I was surprised to hear that other practitioners are moving away from them. One practitioner, a consultant, said that his company does "1-day usability tests" instead of working with guidelines and doing heuristic reviews. While these two activities are not substitutes, perhaps usability tests are easier to sell than a design review, which would account for the shift in service offerings. Another practitioner said that his company writes guidelines, but that they are narrowly focused on a technical platform and an application domain, e.g., "phone menu systems for insurance agents."

4. Consultants know how to sell; corporate practitioners should learn from them: A consistent theme from consultants was that usability testing gave them credibility in the eyes of their customers. These consultants were offering services like UI design training and design services. Usability testing seemed to provide a sort of "seal of approval" in the eyes of their customer. The "1-day test" was a consistent theme among consultants and the beneficial effect of this kind of activity -- for credibility and design ideas -- is something that corporate practitioners should be aware of.

Implications of Results

In Redish's keynote address, she said that some define "usability" as usability testing. The results of this Idea Market make it clear how that can happen since usability testing is the predominant user-centered design activity that usability professionals report performing.

Before continuing, let me make it clear that I think that all of the "big wins" that participants presented were great. These individuals clearly performed user-centered design activities that were regarded as successes in their organization and as a result, these individuals are certainly well-regarded by their customers and/or colleagues in their company's development organizations.

Moving from the individual level to the aggregate state-of-the-practice, I would hope that when asked about a "big win", the modal response for our profession would have moved beyond the usability test. The more compelling responses in this Idea Market topic involved usability testing as a point-of-entry into a team or as the foundation for other UCD activities; these compound/complex responses were what I would characterize as exemplifying "usability-in-the-large."

As the leadership body for the usability profession, the UPA should take steps to move the state of our practice from usability-in-the-small to usability-in-the-large. One way to accomplish this would be to develop conference presentations around a well thought out and comprehensive usability curriculum that provides practitioners with the knowledge, skills, and strategies to grow their usability consultancies beyond usability testing. In 2004 we "connected communities" and in 2005 we are "bridging cultures." It seems like more advanced practitioners are "owning the user" -- perhaps that is our niche in development organizations and some humanist wordsmithing could turn this into the theme for a future conference.   

Raw Data

  • Out of box experience improved via basic usability test; test was multi-day.
  • We stopped doing expert/heuristic reviews and instead do 1-day usability tests.
  • Got CEO to buy in to usability, changing his/her viewpoint from "We don't need to listen to users" to "What are we going to do about that usability report?"
  • Dire usability test results came true and gave the usability team some credibility. The team found out about this trend because they followed-up on their usability report results.
  • Trained a team in UI design and they bought into the process. This would not have happened without iterative usability testing resulting in continuous product improvement.
  • A consultant presented her work at a CHI conference via a case study. Customer was an insurance company and product was a redesigned website used by agents. In this case, the customer released their product from confidentiality because they thought the entire project presented their company in a positive light. The following activities were included in this project:
    • Stakeholder involvement in design and usability tests.
    • Multiple usability tests.
    • No checklists or heuristic reviews were conducted.
    • Paper prototypes made and tested.
    • Field inquiries conducted.
  • Usability test results are included as part of FDA submission for new medical devices. Usability report is templatized so it can "snap into" the FDA submission.
  • Designed/improved voice recognition for internal/external insurance agents.
    • Wrote guidelines for enterprise-wide voice recognition system design.
    • Guidelines came from results of iterative usability tests.
  • Less time spent publishing long lists of problems; instead, practitioner sits down and works with developers.
  • Internet sales site flow was redesigned due to usability test input.
  • Collect feedback and ratings from alpha testing with key customers. As a result of perceived benefit from this kind of activity, practitioner was able to stop writing voluminous product specifications and instead focus on design and user-centered activities. Company hired a lower-salaried staff to take over specification writing activities.

