User Experience Magazine: Volume 7, Issue 3, 2008
Feature Articles: Usability in Healthcare
Unmoderated remote usability tests involve presenting tasks online to representative users who try to perform them using a website or prototype. Users participate on their own computers, at times convenient for them. All data collection (task success, task times, subjective ratings, comments) is completely automated.
We often use this technique to compare alternative designs for a given website. Since participants work in parallel on the web, it’s not uncommon to collect data from hundreds or even thousands of participants in just a few days. A sample study comparing two very different websites about the Apollo Space Program (the official NASA site and the Wikipedia site) is described. Each participant was given four randomly selected tasks out of the same set of nine to perform using one of the sites.
In about one week, a total of 192 people participated in the study online. We found that users of the Wikipedia site successfully completed significantly more tasks, were marginally faster, rated the tasks as significantly easier, and gave the site significantly better overall ratings. Usability data for each task pointed to areas of the sites that worked particularly well for the users or that were particularly problematic.
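Findings like "significantly more tasks completed" typically rest on a standard comparison of success rates between the two sites. As a rough sketch, a two-proportion z-test is one common way to make that comparison; the counts below are hypothetical, invented for illustration, and are not the actual data from the Apollo study.

```python
# Illustrative two-proportion z-test for comparing task-success rates
# between two sites. Counts are hypothetical, not from the study.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: the success rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)     # pooled success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 300 of 384 tasks succeeded on one site, 240 of 384 on the other.
z, p = two_proportion_z(300, 384, 240, 384)
print(f"z = {z:.2f}, significant at 0.05: {p < 0.05}")
```

With real study data, the same comparison could be run per task to locate the areas of each site that worked well or proved problematic.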
Extensive verbatim comments on both sites helped explain why users encountered some of their problems. Most of the strengths of unmoderated remote usability tests derive from the larger number of participants that they make possible. Most of the weaknesses derive from the fact that there is no opportunity for direct interaction with the participant.
Unmoderated remote testing is not a replacement for traditional lab testing or moderated remote testing, but it can be a very useful adjunct.
Most user experience practitioners realize that they need to get out of the lab and spend more time in “the wild,” where products are actually used. Common methods for capturing user experience, such as in-lab usability testing, contextual interviews, surveys, focus groups, and ethnographic research, are useful but pose challenges: either the validity and generalizability of their results, or the effort and logistical complexity they involve.
Experience sampling captures user experience at the place and time it occurs. Employed over time, it can provide insight into users’ activities and motivations, product learnability, and long-term usage patterns. We now have the technology to conduct experience sampling research using the participant’s own mobile device. This innovative solution allows us to conduct large usability studies of mobile phones and their features through automatic, remote monitoring.
However, the potential of this approach reaches far beyond studying the phone itself; the mobile device can be used to gather and report data on a variety of experiences through texting, pictures, and video. This article discusses how advanced mobile technology, coupled with experience sampling methodologies, can enhance our research toolbox, and describes practical uses of the technique in the field.
By allowing clients to observe research sessions and communicate with the moderator, new remote testing tools enable them to participate directly in remote research. The benefits of this approach are many: research becomes more transparent, which can enhance stakeholder buy-in; clients from various disciplines can contribute context and insight to user comments; clients can ask the moderator to probe unforeseen issues as they arise during testing; and clients can give the moderator live assistance in interpreting or addressing issues that come up with prototypes. Through a series of short case studies, this article discusses the tools, methodologies, and logistics of getting clients involved in research.
Remote unmoderated testing yields the large sample sizes, and thus the statistically significant results, that executives often say they require to make informed decisions about their websites. However, usability practitioners may question whether the large numbers, although seductive, really tell them what’s wrong and what’s right with a site.
This article aims to show how remote unmoderated and lab-based moderated testing differ and how they’re similar. Included are some actual cost figures from the three leading vendors of online test tools. The goal is to help readers decide for themselves which tradeoffs they want to make. Surprisingly, there aren’t that many.
Remote unmoderated user experience research is a valuable methodology to include in any consultant’s toolkit. The ability to conduct task-based research that collects both qualitative and quantitative data enhances return on investment (ROI) significantly. This methodology combines task success, click-stream, and attitudinal data, with all three types collected from the same people at the same time as they interact with a website. It can be used at many stages of a project and for many different types of research.
It is both an effective and efficient method. It allows researchers to include people from around the world, working in their own language and in their own environment. It can collect large, statistically meaningful samples while still screening participants to ensure they fit the profile for the study.
It is cost-efficient: studies can include more participants, be run simultaneously, and require no researcher travel. Most remote systems also provide good analysis tools, or the ability to download data in a variety of formats.
Like other usability methodologies, it allows researchers to look at the behavior of the users through the clickstream data and gain insights into their attitudes. These insights lead directly to improvements in the design of the site.
Remote, unmoderated usability testing does not replace other methods, but with its larger sample sizes, global geographic reach, and ability to run studies quickly, it adds value on its own merits while also providing a cost-effective solution.
As the web becomes a prominent sales and communication channel, user experience measurement becomes increasingly critical. Many professionals wonder how this goal can be accomplished.
In order to make solid decisions and manage user experience (UX) research, you need to follow four basic steps:
- Gather quantifiable data and metadata.
- Test over large samples of geographically dispersed users.
- Identify your industry standards and key performance indicators (KPIs).
- Perform iterative testing, and record the metrics gathered for future analysis and comparison.
The article covers the following major points:
- What? The importance of having metadata to manage usability and UX.
- How? Alternative and complementary methods and resources (such as unmoderated remote user testing).
- Why? Benefits for UX managers and researchers in both local and international testing.
- Examples of how it gets done and the ROI obtained.
When working with or in a global company, it is common to find yourself in a situation where you need to conduct moderated usability testing in multiple locations in a short time. This often means that you cannot perform all the testing by yourself, but need other usability experts or teams to help.
Collaborating with multiple groups provides both opportunities and challenges. Being able to test the product in many regions in a short time is one advantage. Working together with local usability experts allows you to consider and evaluate cultural issues better. The opportunity to discuss the findings with various experts can help you deliver a better report to the customer.
There are also a variety of challenges. Coordinating the tests takes time and effort, and ensuring that the different teams conduct the test sessions and report the findings in a unified way also takes planning. Compiling the final report can likewise take considerable time without a clear plan for how to do it.
This article presents some of the key lessons we learned in trying to capitalize on these opportunities while avoiding the potential pitfalls. It covers planning and briefing the teams you collaborate with, and concludes with suggestions for reporting the findings.
What's Proof Got to Do with It? - Traditional Statistics, Resampling Statistics, and Neural Networking in Usability Design
By Norbert Elliot, Robert Barat, Kamal Joshi
Empirical analysis of pre-design and post-design contexts affords a rich way to demonstrate evidence of usability improvement to clients. In this narrative treatment of three forms of statistical analysis, the authors show that traditional and innovative statistical techniques—whether used in remote or synchronous contexts—can provide invaluable information to design specialists. Such information may be used to strengthen design models, to document evidence of improvement to clients, and to provide a venue for increased design effectiveness.
Think Locally, Test Remotely by Aaron Marcus, Editor in Chief
Remote Testing: Less Time, More Reach, and More Participants by Tomer Sharon, Guest Editor