Tutorial Details

A Step-by-Step Guide to Online (Unmoderated) Usability Testing

A practical guide to online usability testing for anyone

Online (unmoderated) usability testing is a cost-effective, efficient, and reliable method for collecting qualitative and quantitative data from hundreds or thousands of users simultaneously. It is an increasingly popular method for measuring the holistic user experience. This tutorial will give attendees the skills they need to conduct an online usability study from start to finish - including planning, design, implementation, analysis, and presentation of results.


Tutorial by Bill Albert (Bentley University), Donna Tedesco (Fidelity Investments), Tom Tullis (Fidelity Investments)
Track: Usability Fundamentals (UF)
Time: 9:00am to 5:00pm on Tuesday, June 21, 2011

About the Tutorial

Description: As technology and product design permeate society, measuring usability and the holistic user experience plays an increasingly critical role in the success of products. Users have little patience for poor designs; a competitive marketplace and the speed with which product reviews spread can make or break a product. Online usability testing captures the behavior and perceptions of hundreds or thousands of users simultaneously, bringing together quantitative and qualitative data to inform design decisions. It can serve many goals and purposes, from something as specific as comparing the way a button is worded to benchmarking a product-wide user experience, and it can be used as a stand-alone usability method or in conjunction with other methods such as in-lab testing, heuristic reviews, and focus groups.

Many user experience professionals recognize the potential value of this method but lack the information necessary to design a study and to choose the appropriate tools to implement and analyze it with confidence. The small amount of information available today comes directly from vendor services. The few vendors in this domain have varying selling points and cost structures, which makes it overwhelming to choose a service and difficult to obtain unbiased information about the method. Having conducted hundreds of online studies, the presenters have the knowledge and experience to offer unbiased information about the many services available, as well as tips and tricks for implementing a study that are not offered elsewhere.

The tutorial will follow the four required steps for carrying out an online usability study:

1) Planning the study: identifying target users, type of study, budget and timeline, metrics, recruiting, sampling, and incentives.
2) Designing the study: introducing the study, screening and starter questions, constructing tasks, post-task and post-session questions, and special topics for study design.
3) Piloting, launching, and data preparation: technical checks, usability checks, full pilot checks, timing the launch, monitoring results, and cleaning up and recoding data.
4) Data analysis and presentation: analysis of task success, time, and efficiency; calculating confidence intervals; analysis of click-stream data; segmentation analysis; and identification of usability issues.

The workshop will be primarily lecture-based, with hands-on exercises throughout the session. The exercises focus on designing, launching, and analyzing the results of an online usability study, allowing attendees to apply what they learn about the various phases of online testing throughout the day. There will be three exercises, following Steps 2 – 4. Participants will work in teams of three in order to learn from each other. The presenters will arrange for one of the major vendors in online usability testing to provide a set of test accounts that can be used during the tutorial.

Although no prior experience or knowledge of online usability testing is necessary, attendees should have a basic understanding of the principles of traditional lab usability testing, since online studies build on a foundation of those principles. For example, when discussing task construction, the presenters plan to cover the tips and issues unique to creating tasks for online studies, rather than delving into an extensive background on task construction principles for usability studies in general.
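As a concrete illustration of the analysis step, a confidence interval for a binomial task success rate can be computed with the adjusted Wald (Agresti–Coull) method, which is commonly recommended for the small-to-moderate samples typical of usability studies. This is a minimal sketch, not the presenters' own tooling; the example counts (18 successes out of 25 participants) are hypothetical.

```python
import math

def adjusted_wald_ci(successes, trials, z=1.96):
    """Adjusted Wald (Agresti-Coull) confidence interval for a
    binomial proportion such as a task success rate.
    z=1.96 gives an approximate 95% interval."""
    # Inflate the sample: add z^2/2 successes and z^2 trials,
    # then apply the ordinary Wald formula to the adjusted proportion.
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    lower = max(0.0, p_adj - margin)
    upper = min(1.0, p_adj + margin)
    return p_adj, lower, upper

# Hypothetical example: 18 of 25 participants completed the task.
p_adj, lower, upper = adjusted_wald_ci(18, 25)
print(f"success rate {18/25:.0%}, 95% CI [{lower:.0%}, {upper:.0%}]")
```

The adjustment pulls the interval toward 50%, which keeps it well-behaved even when the observed success rate is near 0% or 100%, where the plain Wald interval breaks down.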

Agenda:

Introductions: 8:30 – 8:45

Introduction to online testing, and overview of exercises: 8:45 – 9:15

Planning a Study: Identifying target users, type of study, budget and timeline, metrics, recruiting, sampling, and incentives: 9:15 – 10:00

Morning break: 10:00 – 10:30

Designing the Study: Introducing the study, screening and starter questions, constructing tasks, post-task and post-session questions, special topics for study design: 10:30 – 11:15

Exercise 1: Designing the Study and discussion: 11:15 – 12:00

Lunch break: 12:00 – 1:30

Review of online tools and discount approaches: 1:30 – 2:00

Piloting, launching, and data preparation: Technical checks, usability checks, full pilot checks, timing the launch, monitoring results, cleaning up and recoding data: 2:00 – 2:30

Exercise 2: Launching the study, and collecting the data: 2:30 – 3:00

Afternoon break: 3:00 – 3:30

Data analysis and presentation: Analysis for task success, time, efficiency; calculating confidence intervals; analysis of click stream data; segmentation analysis; and identification of usability issues: 3:30 – 4:15

Exercise 3: Teams analyze data from their study: 4:15 – 4:45

Group discussion of results: 4:45 – 5:00