The UPA Voice
November 2004


Cleaning up for the housekeeper
or why it makes sense to do both Expert Review and Usability Testing

by Kathleen Straub, Ph.D.
Chief Scientist / Executive Director
Human Factors International

Once in a while a client will tilt their head and look at me with one of those smiles. “You want to do an expert review and then also usability testing?” they say. “Is this one of those consulting tricks? Why would I need to do both?”

It’s a fair question. To the casual observer, usability testing and expert review probably look very similar.

  • Both identify and prioritize opportunities to improve the user experience
  • Both evaluate sites/applications at the task level and at the detailed-design/presentation level
  • There is typically overlap in the findings
  • When done properly, both yield concrete recommendations for improving the design

To make matters worse, practitioners, when asked, often offer an opaque, pithy line like: “Expert review is quick and dirty. Usability testing takes longer…but it’s more comprehensive.” That description is right about speed, wrong about spillage, and it misses the point entirely: Expert Review and Usability Testing address different goals. No wonder clients are suspicious.

Speed and Spillage

It is true that an Expert Review (ER) can be executed more quickly than Usability Testing (UT). But that has nothing to do with how comprehensive the method is; it reflects the practical problem of recruiting participants for UT. In an Expert Review, we systematically evaluate sites/applications against an extensive set of best-practice guidelines. In fact, Expert Review is the more comprehensive method.

In contrast, in Usability Testing we observe users doing a specific set of critical and frequent tasks that are central to the business goals of the site. The focus is narrower. But critically, the findings provide insight into the user’s conceptual use model for the site or application. Expert Review can’t do that.

Contrasting UT and ER

In fact, there should be some overlap in the findings of Expert Review and Usability Testing. However, there will also be insights that differ, even with a very seasoned practitioner (Fu, Salvendy & Turley, 2002).

Expert Review examines details of human-computer interaction guided by basic research about how humans interpret, understand and interact with objects in the world. As such, Expert Review exploits our generic understanding of human cognition to identify design/presentation details that may facilitate or impede a user’s progress within a task. These include issues such as affordances (how obvious the right next thing to do is), consistency, and the effectiveness of layout and color in guiding the user experience.

Usability testing identifies gaps between the site model and representative users’ conceptual use models in the specific context of use. Effective usability testing means observing representative users doing things on the site. Users bring unique domain knowledge and experience to the interaction. Designers, even experts, don’t share that perspective.

Expert Review vs. Usability Testing

Addresses this question
  • Expert Review: Is the design optimized based on what we know about how people interact with computers?
  • Usability Testing: Can key users find the information and complete the transaction?

Recommendations derived from...
  • Expert Review: current HCI and cognition research; industry standards and best practices; sector- and user-group-specific experience
  • Usability Testing: direct observations of users doing tasks on the site; analysis of the gap between users’ conceptual model of use and the site model

Complementary benefits
  • Expert Review: focuses on what the design brings to the user
  • Usability Testing: focuses on what users bring to the design

Benefits
  • Expert Review: rapid results; tactical recommendations; comprehensive evaluation
  • Usability Testing: synthesizes recommendations across the task experience; contextualizes recommendations to the specific objectives of the site and the limitations of the users

Some believe that this difference between ER and UT (evaluating it yourself versus observing others) means that less experienced practitioners can do equally effective UT. All you have to do is identify where people stumbled, right?

Anybody can break things

If the goal is just to figure out where things are broken, then it doesn’t take much experience to do usability testing. On that approach, however, there isn’t really all that much value to usability testing either. Do you need someone else to tell you that the transaction is confusing? Not really.

But that’s not really the goal. The goal is to improve the user’s experience. This means that noting where people stumble is important. But, if your usability testing is worth paying for, that’s just a means, not the end.

Within UT, testers should be able to systematically identify points in the task flow or information search where users slow down or stray from the intended path. With this data, the tester should also identify mismatches between how the site/application works and the user’s conceptual use model. For sites and applications, this means exploring the fit between the task model and the user’s expected task flow. On the web, we also compare the navigation structure with the information structure. Once you identify the mismatch, the next step is to design the fix. To ensure this level of result from your usability testing, it is all the more critical to have experienced practitioners.

So, adequate usability testing reports describe where a site or application fails. A good usability testing report describes this in terms of mismatches between the user’s model and the site model. Really good usability testing reports provide guidance on how to minimize (or get rid of) the gaps by implementing fixes that move the site model closer to the user’s mental model. Expert reviews can’t do that. Experts don’t have access to that kind of data without observing users. See the difference?

Cleaning up for the housekeeper

When I was a kid, my friend’s mother always made her clean up her room before the housekeeper came. We were always perplexed by this. Wasn’t that the housekeeper’s job? Actually, the housekeeper was very clear on this for us: She did NOT straighten. She cleaned. She had a specific amount of time set to clean the house. If she spent that time distracted with straightening, she would not get to cleaning.

The same concept applies here. Doing ER is like straightening up before the housekeeper arrives. Conducted first, ER provides feedback that lets the developers ‘tidy up’ the interface so that the usability testing can focus on cleaning. If you don’t straighten first, both the tester and the participants are distracted and waste time. But when the right methods are applied at the right time, the ultimate outcome is a really clean house… er, interface.

So, doing ER followed by UT optimizes the return on the usability investment. ER identifies fundamental or generic challenges within the user experience. Usability Testing highlights contextually specific gaps between the user model and the site model. Executed together, UT builds on the ER, providing complementary feedback that supports focused, actionable design recommendations. Combining the techniques significantly enhances the power of the review.

 

References
Fu, L., Salvendy, G., and Turley, L. (2002). Effectiveness of user testing and heuristic evaluation as a function of performance classification. Behaviour & Information Technology, 21(2), 137–143.

  Usability Professionals' Association
140 N. Bloomingdale Road
Bloomingdale, IL 60108-1017
Tel: +1.630.980.4997
Fax: +1.630.351.8490
UPA: office@upassoc.org
Contact the Voice