
Conducting Iterative Usability Testing on a Web Site: Challenges and Benefits

Jennifer C. Romano Bergstrom, Erica L. Olmsted-Hawala, Jennifer M. Chen, and Elizabeth D. Murphy

Journal of Usability Studies, Volume 7, Issue 1, November 2011, pp. 9 - 30


Introduction

Iterative testing is a well-known technique advocated by many usability practitioners (e.g., Bailey, 1993; Comaford, 1992; Lewis & Rieman, 1993; Mandel, 1997; Nielsen, 1993b). If we assume that stakeholders want users to be successful in using their site, iterative testing prior to launching a Web site should be effective: developers can make quick changes based on users’ interactions with the design and then test the revised design using measures of success (e.g., efficiency and accuracy). Incorporating usability testing early in the design process makes iterative testing possible. In iterative testing, a usability test is conducted with a preliminary version of a product; changes are made based on findings from that study; another round of testing occurs, perhaps with a slightly higher-fidelity product; changes are again made based on the results, and testing continues in this way until the usability goals are achieved or a critical development deadline is reached (Mandel, 1997). In practice, however, practitioners and project managers often find that limited resources (e.g., time and money), management and developer resistance, and the logistics of coordinating multiple tests rule out iterative testing, and that the best they can do is conduct a single usability test. Our experience shows that iterative testing is worthwhile and that its benefits can be realized even when such challenges arise.
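The test-revise-retest cycle described above can be summarized as a simple loop. The following is a minimal, illustrative sketch (not the authors' procedure or any standard implementation); the `run_usability_test`, `revise`, and goal values are hypothetical placeholders:

```python
from dataclasses import dataclass


@dataclass
class Results:
    accuracy: float      # e.g., proportion of tasks completed correctly
    efficiency: float    # e.g., tasks completed per minute
    satisfaction: float  # e.g., mean self-rated satisfaction score


def meets_goals(r: Results, goals: Results) -> bool:
    """True when every tracked metric reaches its target."""
    return (r.accuracy >= goals.accuracy
            and r.efficiency >= goals.efficiency
            and r.satisfaction >= goals.satisfaction)


def iterative_testing(prototype, run_usability_test, revise,
                      goals: Results, max_rounds: int):
    """Test, revise, and retest until usability goals are met or the
    development deadline (modeled here as max_rounds) is reached."""
    for round_num in range(1, max_rounds + 1):
        results = run_usability_test(prototype)
        if meets_goals(results, goals):
            return prototype, results, round_num
        # Fix the findings from this round before testing again,
        # possibly with a higher-fidelity prototype.
        prototype = revise(prototype, results)
    return prototype, results, max_rounds
```

The loop terminates on either of the two conditions the paragraph names: goals achieved, or deadline (round budget) exhausted.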

Although there is ample anecdotal evidence that iterative testing is advantageous, and many books, internal studies, and proceedings papers support its benefits (Bailey, Allan, & Raiello, 1992; Douglass & Hylton, 2010; Dumas & Redish, 1993; George, 2005; Health and Human Services, 2006; Karat, 1989; Lewis, 2006; Medlock, Wixon, McGee, & Welsh, 2005; Nielsen, 1993b; Norman & Murphy, 2004; Rubin & Chisnell, 2008), to date, few empirical studies demonstrating the usefulness of the method have been published in peer-reviewed journals. In the present paper, we demonstrate the value of conducting iterative usability testing by presenting a case study of successive, iterative testing of the U.S. Census Bureau’s American FactFinder (AFF) Web site.

AFF is a free, online information-dissemination tool that allows the public to find, customize, and download the results of surveys and censuses related to the U.S. population and economy (http://factfinder.census.gov or available from the Census Bureau's home page, www.census.gov). Large numbers of people with diverse backgrounds use the site daily. In 2010, the site received an average of 3,018,580 hits per day with an average of 369,683 unique visitors per month (AFF Monthly User Statistics, accessed May 10, 2011 from the U.S. Census Intranet). The volume of data in AFF exceeds 40,000 individual tables organized into detailed coding schemes for over 1,500 population groups, over 80,000 industry codes, and between 2 and 14 million geographic areas. Some of the major functions available to users on the AFF Web site include downloading tables and files, building tables, making comparisons, and viewing information and boundaries on a map.

There is evidence that users have difficulties using the legacy AFF Web site (i.e., the version of AFF that existed when this project began). Throughout its existence (about 11 years), AFF has received daily “feedback” emails from users detailing their problematic experiences with the site. An online “pop-up window” survey administered to randomly selected users in 2010 also identified usability problems. In the delivery of the new AFF Web site, which had to be ready in February 2011 to begin releasing results from the 2010 Census, iterative usability testing was deemed critical for the discovery and remediation of any potential usability issues. The new bookmarking, presentation, and navigation capabilities, as well as new data services, user activity services, logging, and “shopping cart” services, were designed with the intention of making the user experience easier, more efficient, and more satisfying. This series of usability tests was designed to test the usability of the new interface with typical users of the AFF Web site.

The usability team became involved with the project after the requirements-gathering stage was complete. The project manager approached the usability team for advice about usability testing, and the usability team recommended the iterative approach. Because we had done prior work together on the legacy site, the project manager trusted the usability team and our work. We agreed that we would meet regularly to plan the series of tests and, once testing began, to discuss findings and recommendations to improve the site. Usability was included in the contract with the contractor designers and developers and was expected to be incorporated throughout the project. Throughout the series of tests, we worked with the project manager and the designers and developers (henceforth referred to as the AFF team) to design the study. Thus, together we planned to conduct iterative testing, though no one knew how many iterations we would undergo before the launch of the new site. We agreed that we would track usability metrics (accuracy, efficiency, self-rated satisfaction) with the hope and expectation that these would increase across iterations. We encouraged the AFF team to attend the live usability sessions and observe users interacting with the Web site. We all agreed that at least one person from the AFF team would be present for each session.
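Tracking metrics across iterations amounts to summarizing each round's participant data on the same three measures and comparing rounds. The sketch below is purely illustrative; all participant numbers are hypothetical, not data from this study:

```python
def summarize(participants):
    """Mean accuracy, task time, and self-rated satisfaction for one round."""
    n = len(participants)
    return {
        "accuracy": sum(p["accuracy"] for p in participants) / n,
        "mean_time_s": sum(p["time_s"] for p in participants) / n,
        "satisfaction": sum(p["satisfaction"] for p in participants) / n,
    }


# Hypothetical per-participant results from two successive rounds of testing.
rounds = {
    1: [{"accuracy": 0.55, "time_s": 210, "satisfaction": 4},
        {"accuracy": 0.60, "time_s": 195, "satisfaction": 5}],
    2: [{"accuracy": 0.70, "time_s": 160, "satisfaction": 6},
        {"accuracy": 0.75, "time_s": 150, "satisfaction": 6}],
}

summaries = {i: summarize(p) for i, p in rounds.items()}

# Across iterations, one hopes to see accuracy and satisfaction rise
# while task time (an efficiency proxy) falls.
improved = (summaries[2]["accuracy"] > summaries[1]["accuracy"]
            and summaries[2]["mean_time_s"] < summaries[1]["mean_time_s"])
```

Comparing such per-round summaries is what lets a team claim, quantitatively, that the design improved between iterations rather than relying on impressions alone.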
