Resources: UPA 2006 Idea Markets
How do we create realistic usability testing scenarios that engage participants while addressing the issues we need to explore?
Thought Starter Questions
- When have you resorted to a less-than-realistic scenario? How did it affect your results?
- How do you identify participant characteristics in the absence of personas?
- How do you develop scenarios and ensure they are realistic?
- If you have many possible scenarios, how do you decide which to use?
- How do you involve participants in creating scenarios?
- How do you help participants accept scenarios and play their roles?
- What are effective techniques for communicating scenarios?
- How do your approaches or scenarios differ across cultures?
This Idea Market topic drew researchers in usability groups at large B2C and B2B organizations, researchers who are their company's sole usability resource, and researchers working for large consulting firms or on their own. The attendees described two main approaches to creating realistic usability testing scenarios:
- Base the scenarios on prior design research including stakeholder interviews, user interviews, user observation, and contextual inquiry.
- Collaborate with participants to create scenarios in the test session. For example, discuss product use with the participant and select tasks based on responses, or use participants' feedback on tasks to inform scenarios—as well as user profiles and locations—for future tests.
Scenarios are communicated orally, in writing, or both. As long as participants are comfortable critiquing a product, they can “play” themselves rather than assume a role.
Attendees admitted to using unrealistic scenarios, sometimes by design—for example, to collect perception data—and sometimes unintentionally. In the latter case, they tried to adjust the scenario on the fly to prevent suspect data.
The risk of less-than-realistic scenarios
Less-than-realistic scenarios may yield sparse and unreliable data—or no data at all. One attendee described a three-city usability test of a website for older adults. As part of the test, participants were asked to follow a thread on a message board. In the first city, one-third of participants refused to do the task; one participant admonished the researcher, “I have real friends. I don’t need to talk with strangers.” A similar percentage rebelled in the second city. By the last city, the product team had made the task optional. They had already learned that they would need to promote the message board feature to make it more inviting and less scary to its intended users.
Attendees agreed that unrealistic tasks can have a place in usability testing. A task such as “Review this page” helps elicit participant preferences that the product team may have been unaware of. What’s more, you can follow up on these “unrealistic” results by conducting automated usability testing or an online survey to collect additional data.
Personas and participants
To aid in developing scenarios, personas must do more than describe what representative users are. Personas must also describe what representative users do, including how, when, and why they perform certain activities.
Assuming your personas are “deep,” you can use them to do cognitive walkthroughs and mimic users performing tasks with proposed designs. One attendee also proposed having product team members adopt personas for early or mock usability testing, to help fix personas in their minds. Both approaches may reveal additional scenarios for later usability testing with users.
Another attendee lamented that she often discovers during usability testing that the product team’s user profiles are incorrect. For example, while her stakeholders insist that executives use their system, the reality is that executives hand off data entry to administrative assistants who do not understand the system’s decision-support features.
Attendees said they gather scenarios from stakeholders and from user interviews, observation, and contextual inquiry during early design research. For products with global audiences, these studies can reveal whether and how user profiles and tasks differ. For example, one attendee’s on-site observation of botanist-researchers in different countries suggested that, thanks to their shared discipline, their similarities are stronger than any cultural differences.
Because these approaches to developing scenarios can still miss nuances of users’ actual tasks, another attendee said her group tries to gauge the efficacy of scenarios during usability testing. After participants complete a scenario, the researcher asks, “How does this match how you would use this product?” and, if appropriate, “How often do you perform this task?” The answers to these questions help the group decide on key tasks for future testing.
To enlist participants’ help in creating scenarios, attendees suggested using a flexible script or protocol and beginning each session by interviewing the participant about how s/he uses or would use the product. The responses identify tasks the participant might perform later in the session.
You can use the participant’s task list in multiple ways, depending on the goals of the usability test:
- For exploring high-level goals, such as the effectiveness of navigation, ask the participant to choose the tasks to perform.
- To address specific issues, map the participant’s tasks to your prioritized research tasks. First ask the participant to perform tasks that appear on both lists. Then, if the participant seems open to trying new things, ask the participant to perform the remaining research tasks. Otherwise, consider how watching the participant perform his/her other tasks might support the research goals.
- For participants who cannot articulate how they use or would use a product, use previously collected task lists to suggest tasks the participant might try.

One attendee noted that participants may be more receptive to some scenarios—such as setting up a new printer—in their own homes rather than in the usability lab. Participants are more comfortable at home, and she gained additional insights—for example, about which print drivers a participant already had—by conducting usability tests in the field.
Scenarios and role-playing
In the U.S., attendees generally do not ask usability test participants to assume roles; instead, they encourage participants to behave as they normally would when using a product. Otherwise, the data will be suspect.
However, in cultures where participants hesitate to give feedback, you may need to take a different approach, such as the Bollywood method. Named after India’s movie industry, the Bollywood method presents an imaginary situation and asks the participant to play a specific role as s/he uses the product to accomplish some goal. The role-playing gives the participant the freedom to critique the product.
Attendees said they typically communicate scenarios to participants orally and/or in writing. They prefer “progressive disclosure” to extensive description; that is, they first supply some background or context, and then present a short task statement that instructs the participant to do something. After the participant performs that task, they introduce another short task statement, and so on.
Realistic scenarios help motivate participants and ensure the success of usability testing. In organizations that embrace user-centered design, researchers can leverage early research to develop scenarios. Participant behavior and comments during test sessions may lead researchers to refine less-than-optimal scenarios. In this case, researcher and participant should work together to clarify context and identify tasks that tell the user’s story.