
Conducting Iterative Usability Testing on a Web Site: Challenges and Benefits

Jennifer C. Romano Bergstrom, Erica L. Olmsted-Hawala, Jennifer M. Chen, and Elizabeth D. Murphy

Journal of Usability Studies, Volume 7, Issue 1, November 2011, pp. 9 - 30



Iteration 2 Cycle 1: Search and Navigation

Iteration 2 was a medium-fidelity usability test of static screen shots of the revised user interface (Romano, Chen, Olmsted-Hawala, & Murphy, 2010). Because the development team had begun coding the interface, we had screen shots to work with and conducted this iteration on a computer screen, which also allowed us to collect eye-tracking data, as discussed below. As in Iteration 1, we examined participants’ success and satisfaction, as measured by their performance and self-rated satisfaction. We repeated three tasks from Iteration 1 and introduced 11 new tasks to examine features that were not present in the Iteration 1 prototypes. As in Iteration 1, the evaluation aimed to identify design features that supported participant success (accuracy) and satisfaction, as well as design features that were problematic. The primary purpose of this second usability test was to see whether participants understood the new AFF Web site’s search and navigation capabilities and some table and map functions that were not available in Iteration 1.

Materials and Testing Procedure

We tested the interface with seven novice and seven expert users. In this round, it was important to recruit both novice and expert users because we were testing some of the site’s functionality. Members of the design team developed the screen shots used in this study. We tested the Main page (right panel of Figure 3), the Table View page (left panel of Figure 5), and the Map View page (left panel of Figure 6). The screen shots were not clickable, but participants were instructed to interact with the static Web pages on the computer screen as if they were part of a fully functioning Web site.

The participant sat in front of an LCD monitor equipped with an eye-tracking machine that was on a table at standard desktop height. During the usability test, the test administrator sat in the control room on the other side of the one-way glass. We kept the test administrator and the participant in separate rooms to avoid confounding the eye-tracking data (e.g., the participant might have looked at the test administrator if she had been sitting beside him or her). The test administrator and the participant communicated via microphones and speakers, and the participant thought aloud while working.4 As with Iteration 1, each session lasted about an hour.

Results

Overall accuracy was still low: The average accuracy score across novice participants was 55%, and across expert participants, it was 56%.  Accuracy scores ranged from zero to 100% across participants and from zero to 100% across tasks (i.e., as in Iteration 1, some participants were unable to complete all tasks).  Satisfaction was also low: The average satisfaction score was 5.69 out of 9, with 1 being low and 9 being high.  The average for novices was 5.49, and the average for experts was 5.89.

Although overall accuracy was still low, accuracy increased for two of the three tasks that were repeated from Iteration 1 (see Table 2).  With this increase in performance, team morale also increased.  It appears that, in Iteration 2, users better understood the areas of the Main page that these tasks targeted.  For the task that decreased in accuracy, we inferred that the lack of guidance on the page played a key role.

Finding 1: There was no direct, useful guidance about how the user could modify tables and maps.

On the Table View page, shown in the left panel of Figure 5, participants were supposed to click the “Enable Table Tools” button in order to modify tables.  Some participants did not use the button at all, and some used it only in later tasks.  According to the eye-tracking data, most participants looked at the button yet never selected it.  As in Iteration 1, extra items on the Table View page, such as the gray line between the table functions and the table (left panel of Figure 5), may have left participants unsure whether parts of the page worked together or separately.  On the Map View page, shown in the left panel of Figure 6, participants said they did not understand the process of mapping their results, and they often clicked on the legend rather than on the modifiable tabs designed for that purpose.  To support users’ understanding of the page, we recommended removing extraneous elements (e.g., the gray line between the button and the table) and moving other elements (e.g., the “Enable Table Tools” button) closer to the table.

Finding 2: Labels and icons were confusing to participants. 

The label on the “Data Classes” tab on the Map View page (left panel of Figure 6) was not clear to participants: they were supposed to click on this tab to change the colors on their map, but only three of the 14 participants did so.  When asked about this label during debriefing, none of the participants could conceptualize what Data Classes would offer.  We recommended using clear, meaningful labels that begin with an action verb, such as “Choose Color and Data Classes” instead of Data Classes and “Find a location,” to give users some sense of what each option might offer.  The developers did not want to change the labels in this way, however, because Data Classes is a label that expert users recognize.  While we recognized this as an important issue, we knew it would come up again in future iterations, so we decided we could address it later.

Participants also said that they did not understand many of the map icons because they were not commonly used icons.  This highlighted a trade-off that was made on this Web development project: industry-leading commercial-off-the-shelf (COTS) software that came with existing icons was used to avoid custom coding and to maintain a clear upgrade path.  While this promoted cost savings and allowed the project to stay current with software developments and improvements, the true trade-off was that users did not understand the icons.  For the unclear icons, we recommended adding hover “tool tips” or “mouse overs” that appear when the cursor is placed over an icon, in addition to changing some of the unfamiliar icons to ones that are more easily recognized.
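To make the tool-tip recommendation concrete, the sketch below shows one way hover tool tips could be attached to icon elements on a Web page. This is our own illustrative example, written in TypeScript, and is not code from AFF or its COTS mapping software; the data attribute, icon names, and label text are hypothetical.

```typescript
// Illustrative sketch only (not the AFF implementation): attach plain-language
// hover "tool tips" to map icons that users may not recognize.
// The data-map-icon attribute and the label text are hypothetical.
const iconDescriptions: Record<string, string> = {
  "zoom-us": "Zoom out to view the entire United States",
  "data-classes": "Choose colors and data classes for the map",
};

document.querySelectorAll<HTMLElement>("[data-map-icon]").forEach((icon) => {
  const key = icon.dataset.mapIcon ?? "";
  const description = iconDescriptions[key];
  if (description) {
    // The native title attribute shows a browser tooltip on mouse over;
    // aria-label exposes the same text to screen readers.
    icon.title = description;
    icon.setAttribute("aria-label", description);
  }
});
```

A richer implementation might display a styled tooltip after a short hover delay, but even the native title attribute gives users the plain-language hint that the unfamiliar icons lacked.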

Plans for Iteration 3 

We met with the AFF team and recapped findings and recommendations from Iteration 2.  The designers and developers recognized the importance of usability testing and followed many recommendations, including changing “Enable Table Tools” to “Modify Table.”  During our meeting, we hashed out ideas on how to improve icons, such as a zoom-out country-view icon on the map that was not easily recognized by participants.  One of the usability team members suggested a United States-shaped icon, and the AFF team liked it and decided to try it.  We then planned for a third round of usability testing of the newly designed site.  See the right panel of Figure 4 for the new Search Results page, the right panel of Figure 5 for the new Table View page, and the right panel of Figure 6 for the new Map View page.


4 While some have found that thinking aloud affects where people look on the computer screen during usability testing (e.g., Eger, Ball, Stevens, & Dodd, 2007), others have found no differences in fixations between concurrent think-aloud and retrospective think-aloud among young and middle-aged adults (e.g., Olmsted-Hawala, Romano Bergstrom, & Hawala, in preparation).
