The Problem with Usability Change Recommendations
by John Ferrara
Contemporary user testing methods have proven highly effective at identifying problems in computer interfaces. By directly measuring users’ ability to complete key tasks, practitioners can expediently uncover what are often colossal failures of usability that are otherwise difficult to perceive. User testing, then, affords a strong empirical basis for recommending that designers make changes to resolve the problems found.
Most test reports take the additional step of actually suggesting what those changes should be, and it’s at this point that they start running into trouble. While the existence of the problems is based on observational evidence, the efficacy of the proposed changes is not established by the test itself.
This can invite the false impression that the recommendations are determined with the same rigor with which the problems are found. In fact, there is usually no proof that the changes will actually resolve the observed problems. This is an important issue in usability practice.
The point would be academic if it didn’t carry tangible and substantial harms. Change recommendations can be costly to implement, and a failure to demonstrate improvement can be damaging to the reputations of usability professionals and to the quality of user interfaces.
Change recommendations may require a significant investment of time and money to implement. For example, testing of a company’s career website may find that users tend to submit a generic curriculum vitae (CV) for all jobs, even though they may have diverse skill sets that could be better highlighted in customized resumes. The report may recommend a function allowing more than one curriculum vitae to be maintained on the site so each may be customized to different job postings. Building in such functionality could involve the coordinated work of multiple interface design, graphic design, programming, and content editing resources for several weeks or even months.
In an iterative design process, the interface will be evaluated several times: during development and again after deployment. Continuing our example, user testing during development can ensure that the multiple-CV function is itself usable, but what will happen once it's deployed? Subsequent testing may find users disinclined to use the function, preferring to continue sending a single CV for all jobs. The underlying problem (users' unwillingness to maintain a tailored CV for each job) is not resolved by the recommendation.
Subsequent testing cycles that show no improvement, or even degradation, in user performance make it apparent that effort was misdirected. This brings the credibility of the usability professional into question, and may damage his or her ability to influence design decisions. Furthermore, the rate at which a user interface can be improved is stunted in such situations. Usability professionals serve user interests best when design recommendations can be shown to have a positive effect.
Changes can be recommended while avoiding such problems, but doing so requires modifying largely standard existing practices. Usability professionals should properly qualify the recommendations in their reports, and objectively validate those whose benefit is not obvious. One way of doing this is as follows.
Before writing a usability report, divide change recommendations into two groups. The first group contains those that are so clearly beneficial that their efficacy is self-evident. Take, for example, a Web site with a search field of limited length; lengthening it to allow users to submit long strings will obviously improve the field's usability. Include these recommendations in the report, and annotate each as having "Clear benefit."
The second group consists of those changes that do not obviously resolve the identified problem. Such recommendations may be included in the report at the practitioner's discretion, but should always be qualified as "Validation required."
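The two-group categorization above can be sketched as a simple data structure. Everything here is illustrative: the `Recommendation` class, the field names, and the sample entries (drawn from the article's own examples) are assumptions, not part of any standard tool.

```python
from dataclasses import dataclass

# Labels matching the two qualifications described in the article.
CLEAR_BENEFIT = "Clear benefit"
VALIDATION_REQUIRED = "Validation required"

@dataclass
class Recommendation:
    problem: str        # the observed usability problem
    change: str         # the proposed design change
    qualification: str  # CLEAR_BENEFIT or VALIDATION_REQUIRED

recommendations = [
    Recommendation(
        problem="Search field truncates long queries",
        change="Lengthen the field to accept long strings",
        qualification=CLEAR_BENEFIT,
    ),
    Recommendation(
        problem="Users submit one generic CV for all jobs",
        change="Allow multiple CVs to be maintained on the site",
        qualification=VALIDATION_REQUIRED,
    ),
]

# Only the second group moves on to the validation step.
to_validate = [r for r in recommendations
               if r.qualification == VALIDATION_REQUIRED]
print(len(to_validate))  # 1
```

Keeping the qualification attached to each recommendation makes it easy to report both groups together while routing only the "Validation required" items into further analysis.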
Following completion of the report, the next step is to actually validate those recommendations that fell into the second group. There are many ways of achieving this, but any validation method must objectively demonstrate that the proposed change improves user performance on the problematic task.
For example, a GOMS analysis (Goals, Operators, Methods, and Selection Rules) can provide an expedient way of validating an improvement in procedural efficiency over the existing design. Combining such analytical modeling methods with the results of user testing makes a compelling case for change.
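A GOMS analysis at the keystroke level can be reduced to simple arithmetic over standard operator times. The operator estimates below are the widely published Keystroke-Level Model values; the two task breakdowns are invented purely for illustration, not taken from any real study.

```python
# Keystroke-Level Model operator times, in seconds (standard published
# estimates). The task breakdowns below are hypothetical examples.
K = 0.20   # keystroke (average skilled typist)
P = 1.10   # point with the mouse
B = 0.10   # mouse button press
H = 0.40   # home hands between keyboard and mouse
M = 1.35   # mental preparation

def klm_time(operators):
    """Predicted execution time for one method of performing a task."""
    return sum(operators)

# Existing design: user moves to the mouse and clicks through three targets.
existing = klm_time([M, H, P, B, P, B, P, B])

# Proposed design: user types a two-keystroke shortcut.
proposed = klm_time([M, K, K])

print(f"existing: {existing:.2f}s, proposed: {proposed:.2f}s")
# existing: 5.35s, proposed: 1.75s
```

A lower predicted execution time for the proposed design is the analytical evidence of improved procedural efficiency that, combined with the test observations, supports the change.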
Comparative performance analysis is another good approach. This would involve prototyping the recommended state of the interface, then testing user performance of the problematic task against the existing treatment. Testing could be handled by a remotely distributed application that automatically assigns the task and measures user performance.
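A comparative performance analysis boils down to comparing task times between the existing treatment and the prototype. The sketch below, with entirely fabricated sample times standing in for data a remote test harness might collect, computes Welch's t statistic by hand using only the standard library; a real study would also check significance against the appropriate t distribution.

```python
import statistics
from math import sqrt

# Hypothetical task-completion times (seconds): one group of participants
# used the existing design, the other the prototyped recommendation.
existing_times  = [48.2, 55.1, 61.0, 43.8, 57.4, 50.9, 66.3, 52.0]
prototype_times = [39.5, 42.1, 35.8, 44.0, 38.2, 41.7, 36.9, 40.3]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / sqrt(va / len(a) + vb / len(b))

t = welch_t(existing_times, prototype_times)
print(f"mean existing {statistics.mean(existing_times):.1f}s, "
      f"mean prototype {statistics.mean(prototype_times):.1f}s, t = {t:.2f}")
```

A large positive t here would indicate that the prototype's faster mean completion time is unlikely to be noise, which is exactly the objective evidence the validation step calls for.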
Validation gives designers the confidence to implement change recommendations knowing that they will improve the user experience. When validation shows no improvement, the practitioner can avoid recommending changes that would prove futile in subsequent tests. In either case, usability reports become more reliable and persuasive.