
The Effect of Experience on System Usability Scale Ratings

Sam McLellan, Andrew Muddimer, and S. Camille Peres

Journal of Usability Studies, Volume 7, Issue 2, February 2012, pp. 56 - 67



Introduction

A quick look at the human-computer interaction literature shows a few recent studies dealing with the longitudinal aspect of usability evaluation—that is, testing over time to take into consideration previous user experience with a product or product versions. For example, testing users over an 8-week period and recording frustration episodes and levels, Mendoza and Novick (2005) found that users' frustration levels decreased significantly over the duration of the study as proficiency levels increased. In a 2005 ACM article entitled “Does Time Heal? A Longitudinal Study of Usability,” Kjeldskov and his co-authors similarly reported that, in relation to problem severity, there was “a significant difference between the mean severity ratings for novices and experts, with the latter generally experiencing the usability problems of the system as less severe” (Kjeldskov, Skov, & Stage, 2005, p. 190). Performing tasks repeatedly with two comparable products, this time over a period of a few days, Vaughan and Dillon (2006) suggested that product comprehension, navigation, and usability were also useful measures for uncovering performance differences between designs over time.

The renewed interest in longitudinal usability stems, in part, from a concerted effort to understand the implications for factors such as user profiles for testing, review methodologies in company development processes, and strategies for analyzing usability results. That understanding is of real, practical benefit to product development teams that iteratively design and review interfaces with customers. Attendees of the 2007 ACM SIGCHI conference workshop entitled “Capturing Longitudinal Usability: What really affects user performance over time?” heard this concern voiced: “Typical usability evaluation methods tend to focus more on ‘first-time’ experiences with products that may arise within the first hour or two, which trends the results more towards ‘discoverability’ or ‘learnability’ problems, rather than true usability problems that may persist over time” (Vaughan & Courage, 2007, pp. 2149-2150).

Software intent and target user base should always inform test participant selection. For example, some software is intended only for infrequent use by first-time users (e.g., Web-based IT systems or installation programs) and should typically support novices by being fast and easy to learn and use. Other applications, such as some of our own oilfield domain applications, are designed for more frequent use by highly experienced domain experts. These applications offer features that may take longer to learn but, over the long run, support expert users in being more effective in doing particular work.

Specifically tasked with assisting product development teams in iteratively designing, evaluating, and quantifying the user experience for suites of product interfaces, our software analysts have used standard survey instruments like the Questionnaire for User Interaction Satisfaction (QUIS; Harper & Norman, 1993) and the System Usability Scale (SUS; Brooke, 1996) to gather quantitative information about product satisfaction that supplements results from more direct product review methods. In 2009, we collected data from 262 users of two oilfield products we were developing. These users had varying degrees of experience with the products, which allowed us to examine the effects of experience on usability ratings. Further, we were able to explore whether these effects differed between the domain products being evaluated.
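For reference, SUS produces a single score from 0 to 100 based on ten alternating positively and negatively worded statements, each rated on a five-point scale (Brooke, 1996). The sketch below shows that standard scoring computation; the function name and the example ratings are illustrative only and are not drawn from our study.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 ratings.

    Per Brooke (1996): odd-numbered (positively worded) items
    contribute (rating - 1), even-numbered (negatively worded) items
    contribute (5 - rating); the sum is scaled by 2.5 to yield a
    score from 0 to 100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten ratings, each from 1 to 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, ... = items 1, 3, ... (positive)
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical example: a fairly favorable response pattern
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 2]))  # 82.5
```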

Lewis (1993) reported differences in user ratings on a questionnaire similar to SUS, the Computer System Usability Questionnaire (CSUQ), that stemmed from the number of years of experience those users had with the computer system. More recently, Sauro (2011a) found, from over 1,100 users visiting some 62 Web sites (airlines, rental cars, retailers, and the like), that users who had been to a Web site previously rated it as much as 11% more usable than users who had never been to the site before rating it with SUS. His examination of 800 users with varying years of experience with common commercial desktop products like Word, Quicken, and Photoshop found the same average difference based on experience—in general, “a user with a lot of prior experience will rate an application as more usable…especially…the case between the users with the most experience and those with the least (or none at all)” (Sauro, 2011b, p. 1).

We were curious whether we would find a similar experience effect, not for off-the-shelf office products or personal Web applications, but for domain specialists using geosciences products in their professional job roles.

 
