JUS - Journal of Usability Studies
An international peer-reviewed journal

Rent a Car in Just 0, 60, 240 or 1,217 Seconds? Comparative Usability Measurement, CUE-8

Rolf Molich, Jarinee Chattratichart, Veronica Hinkle, Janne Jul Jensen, Jurek Kirakowski, Jeff Sauro, Tomer Sharon, Brian Traynor

Journal of Usability Studies, Volume 6, Issue 1, November 2010, pp. 8 - 24



Key Measurement Results

Table 2. Reported Key Measurement Results for Task 1, Rent a Car—All Times Are in Seconds

Rent a car in xx seconds: M = more research needed, OK = the current statement “Rent a car in just 60 seconds” is OK or defensible, R = rephrase the statement, number = replace "60" with that number. Teams A, C, D, E, H, K, N, and O did not provide confidence intervals in their reports; their confidence intervals were provided after the workshop.

Eleven of the fifteen teams reported the mean (average) of their time-on-task measurements. Three teams (A, D, and F) reported the median and one team (G) reported the geometric mean.
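The choice of central measure matters because task-time distributions are typically right-skewed, so the mean, median, and geometric mean can disagree noticeably. A minimal sketch of the three measures (the times below are made up for illustration; they are not CUE-8 data):

```python
import statistics

# Hypothetical time-on-task observations in seconds (right-skewed,
# as task times usually are) -- NOT actual CUE-8 measurements.
times = [45, 52, 58, 61, 63, 70, 75, 90, 120, 210]

mean = statistics.mean(times)                 # pulled upward by slow outliers
median = statistics.median(times)             # robust to outliers
geo_mean = statistics.geometric_mean(times)   # often used for skewed times

print(f"mean={mean:.1f}s  median={median:.1f}s  geometric mean={geo_mean:.1f}s")
```

For this skewed sample the mean is well above the median, which illustrates why teams reporting different central measures can arrive at quite different headline numbers for the same task.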

Seven of the fifteen teams reported confidence intervals around their average task times. Confidence intervals describe both the location and the precision of an average task-time estimate; they are especially important for conveying the variability that comes with small sample sizes. Eight teams omitted this information (we are unsure whether they calculated it). Some of these teams indicated that they did not compute confidence intervals on a regular basis. Another claim heard among these teams was that “you cannot come to statistically significant conclusions with such small sample sizes.” These teams provided the information after the workshop.

The methods used to compute the confidence intervals for time-on-task were:

Teams L, M, and P did not describe their computation method in their reports; the information was provided after the workshop.
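A common computation method for small samples is a t-based confidence interval around the mean. A dependency-free sketch, with the critical value hardcoded for this sample size (the data and the method choice are illustrative assumptions, not taken from any team's report):

```python
import math
import statistics

# Hypothetical time-on-task sample in seconds -- NOT actual CUE-8 data.
times = [45, 52, 58, 61, 63, 70, 75, 90, 120, 210]

n = len(times)
mean = statistics.mean(times)
sem = statistics.stdev(times) / math.sqrt(n)  # standard error of the mean

# Two-sided 95% critical value of the t distribution with n-1 = 9 df.
# Hardcoded to keep the sketch dependency-free; scipy.stats.t.ppf(0.975, 9)
# would compute it.
t_crit = 2.262

lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"mean = {mean:.1f}s, 95% CI [{lower:.1f}s, {upper:.1f}s]")
```

With only ten observations and a long right tail, the interval is wide, which is exactly the point the confidence interval conveys and a bare average hides.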

Nine of the fifteen teams provided qualitative findings even though qualitative findings were not a subject of this workshop. These teams argued that they gained considerable insight during the measurement sessions into the obstacles that users faced, and that it would be counterproductive not to report this insight.

The format and number of reported qualitative findings varied considerably, from 3 qualitative findings provided by Team G to 79 qualitative comments, including severity classifications, provided by Team N. Qualitative findings were reported in several formats: problem lists, user quotes, summarized narratives, word clouds, and content analysis tables.

“Rent a Car In Just 60 Seconds”

The scenario asked teams to provide data to confirm or disprove the prominent Budget home page statement "Rent a car in just 60 seconds" (see Figure 1) and to suggest an alternative statement, if required.

Table 2, row "Rent a car in xx seconds," shows that there was little agreement, with one team choosing not to respond. Several teams suggested that Budget's statement may indicate a lack of understanding of users' needs, because their studies showed that time was not of the utmost importance to participants—it was more important to get the right car at the right price, and to ensure that the reservation was correct. In other words, Budget's implicit assumption that users value speed above all may be wrong. It was also suggested that the statement could rush users through the process.

Four teams suggested that the current statement was OK or defensible. Team L said, "Since it is theoretically possible to rent a car in less than 60 seconds, it is technically not an incorrect statement and thus it is OK to keep it." Teams E and G argued similarly. Team K suggested that Budget should "focus on improving the efficiency and intuitiveness of the car rental process and pay less emphasis on getting the time you spend on renting a car through the site."

Three teams suggested that additional research was required, but none provided further details about what research was needed.

The five teams that provided specific times suggested values from 90 to 240 seconds; it was not always clear how they arrived at their suggestions.

We have asked Budget to comment on these findings, to no avail.

Reported Time-on-Task

Figure 2. Reported time-on-task for Tasks 1-5. The diamonds show the time-on-task reported by each team. The black vertical lines show the 95% confidence interval reported by each team before the workshop, or computed after the workshop for teams that initially did not provide confidence intervals. The light blue bars show the average of the confidence intervals (CIs) for each task, that is, from the average lower CI limit to the average upper CI limit. The graphs also show that some teams reported completion times that are not centered within the confidence interval, for example, Team A on Task 1; these teams chose to report the median as their central measure.

Satisfaction Measurements

Of the fifteen teams that participated in CUE-8, eight used the System Usability Scale (SUS) as their standardized post-test questionnaire, four used their own in-house questionnaires, and one used a commercially available questionnaire (WAMMI). Of the teams that used SUS, one modified the response scale to seven points instead of five; the data from this team were not used in the SUS analysis. The remaining seven teams kept the original 5-point scale and provided scores by respondent.
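SUS scoring itself is standard: each of the ten items is recoded to a 0-4 contribution and the sum is scaled to 0-100, which is why the modified 7-point version could not be pooled with the others. A minimal sketch (the respondent's answers are illustrative, not CUE-8 data):

```python
def sus_score(responses):
    """Score one respondent's SUS answers.

    responses: ten integers from 1 (strongly disagree) to 5 (strongly agree),
    in questionnaire order; odd-numbered items are positively worded,
    even-numbered items negatively worded. Returns a score from 0 to 100.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items contribute (response - 1); even items contribute (5 - response).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical respondent -- NOT CUE-8 data.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

Note that the standard scoring assumes the original 5-point scale; a 7-point variant would need a different recoding, which is exactly why that team's data were excluded.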
