Resources: UPA 2004 Idea Markets

How do we perform usability testing when the tasks are so novel that users don’t even know they want to perform them?

Activators: Samuel A. Burns and J. Patrick Williams
School of Information
The University of Texas at Austin

Background & Follow-up Questions:

We recently encountered the problem of how best to evaluate the usability of a technology when little is known about its typical use cases. Our work with a newly developed information visualization tool challenged many of our preconceptions about how to conduct such an evaluation. Exploring these issues with other practitioners will support future research on this and other emerging tools.

In addition to our main question, we offered these follow-up questions:

  • How is this different from testing any application that is approached by a novice?
  • How should one test for discoverability versus usability?
  • Is it a good idea to inform users of features that they don't discover?
  • How do we leverage old metaphors to afford new tasks, especially in light of the impending explosion of ubiquitous computing?
  • How do you test exploratory behavior?

Discussion

As we explored the topic, a general consensus emerged: regardless of how novel a task is, investigators must build as much meaningful context around it as possible in order to support both the validity of the use case and the needs of test participants. This idea applies to the vast majority of usability tests, of course, but as the discussion developed, we focused on the nuances of testing tools that involve novel tasks and explored methods for generating ideas upon which to build meaningful context.

The group agreed that participants must be willing to take risks when engaging in new tasks, and that it is the investigators’ responsibility to create logical, user-driven scenarios that meet participants’ needs for comfort and sense-making. Though developers may see a distinct novelty in the application or be reluctant to place it within a limiting use case, investigators should provide a rich, complete user experience within which test participants’ comfort and familiarity can be maximized. To provide this context, investigators must perform considerable pre-test research, engaging with potential users about their impressions of, ideas about, and expectations of the tool. Rather than performing a general blue-sky test of the technology with a large group of participants, presenting the tool to smaller groups of users in several phased tests might generate more useful comparison data and yield a greater universe of use cases and test scenarios.

Investigators must also identify users who have real-world needs for the task. In the early stages of testing, investigators might use an approach that offers participants a broader understanding of the application’s capabilities. One method for achieving this could be to provide participants with a mental picture of the beginning and end stages of the novel tasks and explore with them how one would go about arriving at the result. In early tests, it may yield rich data to have participants talk through this progression before allowing them to attempt the task.

Focus groups and paired participant testing were also suggested as methods for generating ideas, situations, and constraints for test scenarios. Allowing initial participants to explore, discuss, and criticize the application side-by-side with other participants could generate a larger pool of possible use cases and scenarios than single-participant sessions, as participants would likely build upon the responses of others.

Another method discussed for reducing the novelty of the new task and building comfort involved participants themselves contributing to the context around the task. Investigators can lead test participants in a dialogue about situations in the participants’ own lives suited to the task well before introducing the application. When the dialogue reaches a point at which a participant can envision a need for the task, investigators can introduce the application and let the prior dialogue guide the test session. This method is likely to generate interesting data because users are comparing an imagined application with the actual one, and such dialogue would be of additional value in constructing complete scenarios for later, more structured testing.

Testing tools for information visualization, like the application that inspired our original question, can be problematic because the content displayed by the tool may be of little consequence to the test participant. Several involved in the discussion suggested testing multiple content sets and finding groups of participants with similar levels of content expertise. The results of the tests could then be compared among groups.

Several in the group cited examples of testing they had performed on applications involving novel tasks. Typically, these tests involved advanced features of an application or wholly new technologies. In both cases, the question becomes: What are appropriate methods for “catching up” novice users to the point that they will be ready to use an advanced or novel feature?

Conclusions

Based on our discussion with the group, we devised the following set of tips for testing novel tasks and new technologies:

  • Identify those who will benefit from the new tasks. Examine how they work and consider the situations and problems in which the tasks would add value. Verify and expand on these situations and problems both in one-on-one sessions with the application and in focus groups.
  • Design your test to allow for different groups of users and to enable comparisons among groups in different contexts of use.
  • Users may be more likely to accomplish novel tasks when they understand the possible outcomes and when they have a deep understanding of the context for use. Using familiar content can make these connections more apparent to the user. Providing users with a clear picture of the desired result of a task could also prove helpful.
  • Allow your test to evolve as your understanding of appropriate and typical use cases increases. Strive for highly interactive early tests that emphasize exploration and discovery and that afford the test participant and the evaluator rich dialogue. Build later-phase tests on these more exploratory sessions and retest with a more tightly structured protocol.
  • Using sample content that test participants find meaningful is important when testing tools for information visualization. Example content should engage participants and encourage exploration.

Future Research

These questions arose from our recent work testing the usability of a tool developed by computer scientists at Texas A&M University. The tool uses "streaming collage" to help users gain a more comprehensive understanding of large image collections with which they have no prior experience. We will use the input we received in the Idea Markets session to develop further tests of the tool's interface and are interested to see how the results from the new tests compare with our prior results.
