
Conducting Iterative Usability Testing on a Web Site: Challenges and Benefits

Jennifer C. Romano Bergstrom, Erica L. Olmsted-Hawala, Jennifer M. Chen, and Elizabeth D. Murphy

Journal of Usability Studies, Volume 7, Issue 1, November 2011, pp. 9-30



Iteration 4: All Functions Plus Help Available

Iteration 4 was a usability test of a medium-fidelity prototype, similar to Iteration 3 but with a higher degree of functionality: the back end was working, although not all Census data sets were loaded (Olmsted-Hawala, Romano Bergstrom, & Chen, 2011).  Thirteen tasks were repeated from earlier iterations, and one new task tested new functionality (see Table 2).

As with the other iterations, this evaluation identified design features that supported participant success (accuracy) and satisfaction as well as design features that were problematic.  The primary purpose of the fourth usability test was to assess whether participants understood how to use some of the Web site’s geography-selection functions.

Materials and Testing Procedure

We tested the user interface with eight novice participants.  In this round, we modified tasks to target specific areas where we knew we had data (e.g., a task referred to the year 2005 because data for that year were available and loaded into the system).  We tested the geography overlay (right panel of Figure 7) that would appear once users selected geographies from the Search Results page (left panel of Figure 7).  The testing procedure was identical to that of Iteration 3.

Figure 7. Search Results page from Iterations 2 and 4 (left panel) and geography overlay from Iteration 4 (right panel)

Results

Accuracy dropped from Iteration 3: The average accuracy score across participants and tasks was 52%, with scores ranging from 7% to 96% across participants and from 20% to 86% across tasks.  Satisfaction also dropped from Iteration 3: The average satisfaction score was 5.20 on a 9-point scale, with 1 being low and 9 being high.  We did not have any preconceived ideas about how participants would interact with the new functionality, and we were all disappointed by the decrease in performance.  Prior to this iteration, we had not observed participants interacting with the geographies, and in hindsight, we believe this functionality should have been tested earlier with lower-fidelity prototypes because it is such an important part of the Web site.
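
For readers who want to reproduce this kind of scoring, the overall mean and the two cited ranges fall out of a participant-by-task accuracy matrix.  A minimal sketch in TypeScript, with invented values rather than the study's data:

```typescript
// Illustrative accuracy matrix: rows = participants, columns = tasks.
// Values are hypothetical placeholders, NOT the study's data.
const accuracy: number[][] = [
  [1, 0.5, 0, 1],
  [0, 1, 0.5, 0.5],
  [1, 1, 0, 0],
];

const mean = (xs: number[]): number =>
  xs.reduce((sum, x) => sum + x, 0) / xs.length;

// Per-participant scores: mean across each row (the range "across participants").
const byParticipant = accuracy.map(mean);

// Per-task scores: mean down each column (the range "across tasks").
const byTask = accuracy[0].map((_, t) => mean(accuracy.map((row) => row[t])));

// Overall average accuracy across participants and tasks: mean of all cells.
const overall = mean(accuracy.flat());

console.log(byParticipant, byTask, overall);
```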

As with previous iterations, we assessed the usability of the Web site. In the following sections, we highlight the high-severity issues discovered in this usability test.

Finding 1A: Using the geography overlay to add geographies was confusing for participants.

Participants often experienced difficulty adding geographies.  For example, most participants did not realize that clicking on a state added it to the “Your Selections” box.  Because of this lack of feedback, participants clicked on the state numerous times without noticing that it had already been added to their selections.  Participants did not seem to understand that geography filters needed to be added to the Your Selections box, and some instead tried to add a geography using the geography overlay.  Participants said they expected their actions to load their geography, and they expressed confusion about the outcome.  Participants who had seen the Your Selections box were confused about why their state did not appear there.
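
The article does not prescribe an implementation, but one plausible remedy for this missing feedback is to make each click produce a visible change in the Your Selections box.  A minimal DOM sketch, with the element ID, CSS class, and function name all assumed for illustration:

```typescript
// Hypothetical fix sketch: when a geography is clicked, append it to the
// "Your Selections" list and flash a brief highlight so the addition is
// noticeable. Names here are assumptions, not the tested site's code.
function addGeographySelection(name: string): void {
  const box = document.getElementById("your-selections");
  if (!box) return;

  // A repeat click on an already-selected geography re-flashes the entry
  // instead of silently doing nothing visible.
  const existing = Array.from(box.children).find(
    (child) => child.textContent === name,
  );
  if (existing) {
    existing.classList.add("just-added");
    setTimeout(() => existing.classList.remove("just-added"), 1500);
    return;
  }

  const item = document.createElement("li");
  item.textContent = name;
  item.classList.add("just-added"); // CSS can animate this class
  box.appendChild(item);
  setTimeout(() => item.classList.remove("just-added"), 1500);
}
```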

Finding 1B: Participants did not see the search results that loaded beneath the geography overlay.

The geography overlay that appeared when users clicked on the Geographies tab on the main page was very large, and it obscured the search results beneath it.  When participants added a geography using the overlay, they often missed that the results beneath it had updated in response to their action.  The developers had intended participants to open the overlay and select geographies; they said they thought it would be clear that the overlay covered results that changed when users selected additional filters, and that the additional filters would appear in the Your Selections box (on the upper-left side of the right panel in Figure 7).

This round of testing revealed the geography overlay to be a real “show stopper.”  Participants did not notice the subtle changes that occurred on the screen, and thus they said they thought the filter was not working.  They were unable to progress past this point in selecting filters/options, and this was reflected in the low accuracy and satisfaction scores.  We recommended making the geography overlay smaller, either by making it narrower or by pushing it lower on the page.  We also recommended changing the way all the other filters worked, such that they would all open to the right of the tabs: users expect consistency within a Web site, and this functionality was not consistent among the different filters.
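
As a rough illustration of the sizing recommendation (the function, values, and inline-style approach are assumptions, not the site's actual code), the overlay could be constrained so the updating results remain partly visible:

```typescript
// Hypothetical sketch: cap the overlay's size and push it down so the
// search results beneath it stay partly visible. Values are illustrative.
function constrainGeographyOverlay(overlay: HTMLElement): void {
  overlay.style.maxWidth = "60%";   // narrower, so results peek out beside it
  overlay.style.top = "140px";      // pushed lower on the page
  overlay.style.maxHeight = "70vh"; // never cover the full viewport height
  overlay.style.overflowY = "auto"; // scroll inside the overlay instead
}
```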

Summary

Accuracy for two of the 13 repeated tasks increased from Iteration 2, accuracy for six decreased, and accuracy for two remained the same (see Table 2).  In this iteration, participants were unable to attempt three of the tasks because the show stopper prevented them from reaching deeper levels of the site.  We met with the design team and discussed findings and recommendations from Iteration 4.  This version will be tested in mid-2011 with the live Web site.  We plan to repeat many of the tasks from previous iterations and to continue the iterative process until optimal usability is achieved.

 
