
Extremely Rapid Usability Testing

Mark Pawson and Saul Greenberg

Journal of Usability Studies, Volume 4, Issue 3, May 2009, pp. 124-135

Traditional usability testing typically occurs in a laboratory-like setting. Participants are brought into the test environment, a tester provides tasks to the participants, and the participants are instructed to "think aloud" by verbalizing their thoughts as they perform the tasks (e.g., Dumas & Redish, 1999; Nielsen, 1996). Observers watch how the participants interact with the product under test, noting both problems and successes. While a typical usability test normally takes at least one hour to run through several key tasks, it can take many days or weeks to set up (e.g., lab and equipment set up, protocol preparation, recruitment, scheduling, dealing with no-shows, etc.). The key problem is that it may be quite difficult and/or expensive to motivate people, particularly domain experts, to participate in such a study. While this can be mitigated by running the test in the domain expert's workplace, doing so introduces other significant problems, such as disruptions to the expert's actual work.

Another possibility is to use a trade show as a place for conducting usability tests, especially for new versions of a product that would naturally fit a trade show theme. We can consider the benefits of a trade show in light of the five characteristics of usability testing identified by Dumas and Redish (1999):

  1. The primary goal is to improve the usability of a product,
  2. Participants represent real users,
  3. Participants do real tasks,
  4. You observe and record what participants do and say, and
  5. You analyze the data and recommend changes.

A trade show emphasizes characteristics 1, 2, and 3. Characteristic 2 is the one that is maximized: there is a plethora of potential participants, all very real users with domain expertise, not only present but likely willing to participate in the usability test. They should be highly motivated to try out, and thus test, new product versions, and their attendance means they have a large block of time for doing so. Next, a trade show setting sets the scene for characteristic 1 because trade shows largely concern advertising, familiarizing, and ultimately selling a product to potential customers. Product features, usefulness, and usability dominate discussions between participants and booth staff. For characteristic 3, because participants are engaged by the theme of a trade show, they can easily reflect upon the actual tasks that they would want to perform with a product, or critique the tasks they are being asked to do. In turn, the feedback gained is likely highly relevant to real-world use.

Yet there are issues. A trade show is not a laboratory, nor is it a workplace. Trade shows are crowded and bustling venues, where vendors compete with others to attract people to their booths. A trade show exhibit booth is a hectic, noisy, cramped space that exists for three days and could be visited by 500 people or more. Booth visitors can be users, competitors, students, or future customers. Each visitor may spend anywhere from one minute to 60 minutes in a booth. Distractions are rampant. This is not a typical usability test environment! This makes characteristics 4 (observe and record) and 5 (analyze) more problematic for the evaluator, and it constrains the kinds and number of tasks (characteristic 3) that can be done. Yet for companies with limited time and resources to get their product to market, a trade show could offer a realistic way to gather a broad cross-section of domain experts in one place for product testing.

Of course, there are evaluation methods within human computer interaction (HCI) that others developed for time- and resource-limited environments (e.g., Bauersfeld & Halgren, 1996; Gould, 1996; Marty & Twidale, 2005; Millen, 2000; Thomas, 1996), but none specifically address the trade show setting. Gould (1996) was perhaps the earliest advocate of rapid testing. He describes a plethora of highly pragmatic methods that let interface designers quickly gather feedback in various circumstances. His examples include placing interface mockups in an organization's hallway as a means to gather comments from those passing by, and continually demonstrating working pieces of the system to anyone who will take the time to watch. The advent of quick and dirty usability testing methods in the mid 90s formalized many of these processes. Each method was an attempt to decrease the cost of the test (time, dollars, resources, etc.) while maximizing the benefit gained (e.g., identifying large problems and effects, critical events, and interface compliance to usability guidelines) (Nielsen, 1994; Thomas, 1996). Other methods were developed for specific contexts. For example, Marty and Twidale (2005) described a high-speed (30 minute) user testing method for teaching, where the audience can "understand the value of user testing quickly, yet without sacrificing the inherent realism of user testing by relying solely on simulations." Millen (2000) discussed rapid ethnography, a collection of field methods tailored to gain a limited understanding of users and their activities under the tight time pressures of field work.

No method specifically addressed running rapid usability tests in a busy trade show or conference exhibit hall booth. The question remained: how can we use the trade show as a place for conducting usability tests? Consequently, our goal was to see if we could adapt and modify existing usability testing methods to the trade show context, a combination we called extremely rapid usability testing (ERUT). Our experiences with ERUT involved a pragmatic combination of HCI evaluation techniques: questionnaires, co-discovery, storyboarding, and observational think-aloud tests. It was an example of taking formative testing methods and applying them to a particular context of use. We wanted to exploit the "best" of each method, i.e., the portion that delivers the maximum amount of information within the severe limitations of the trade show. ERUT is not a formal or exhaustive usability evaluation of a product, nor a replacement for other methods. Rather, ERUT applies and mixes various informal discount methods to provide insights into the usefulness and usability of primary product features.

ERUT developed opportunistically. This paper's first author, Mark Pawson, and another colleague were invited by Athentech Inc. of Calgary, Alberta to attend the PDN PhotoPlus show in New York to perform rapid usability tests on the Perfectly Clear® digital imaging enhancement software. Pawson already worked as a usability evaluator, and both he and his colleague were experienced in working trade booths from a marketing perspective. We developed ERUT to quickly gather real-world feedback about the usability and usefulness of this product, and to shed light on whether Athentech's unique selling proposition resonated with customers.

In the remainder of this paper, we describe our experiences developing and using ERUT to evaluate Perfectly Clear® at the PDN PhotoPlus trade show. We caution that ERUT as described here is a case study of our experiences and the lessons we learned, rather than a rigid prescription of how to do usability testing in a trade show environment. That is, it can be seen as a starting point for practitioners to adapt usability testing to their own trade show settings.
