Quantifying usability and user experience (UX) may be quite tricky, but this doesn’t mean it can’t or shouldn’t be done. Quantifying UX data is certainly not a new goal for UX managers. As Kaplan and Norton put it when discussing the Balanced Scorecard, “You can’t manage what you can’t measure,” a cliché among managers of most activities. The question then is: how do you measure user experience?
Most managers today appreciate user experience and usability as key factors in online success. Further, most brands competing in the web marketplace today have performed user research, including one-on-one usability testing in the laboratory, focus groups, personal interviews, contextual inquiries, etc. While all these research techniques are perfectly valid, they do not allow for appropriate UX quantification and measurement. If it’s so important, shouldn’t we actually be quantifying and measuring UX so we can properly manage it?
Large organizations often make decisions based only on empirical studies. UX managers who want to succeed need to present solid facts and statistics; otherwise, they simply will not be listened to. Design teams need to make the right design decisions and then demonstrate to management, backed by statistical data, that those decisions were the best choices.
Usability professionals generally agree that in usability testing, you don’t need large samples of users to identify most design flaws. What about when it comes to measuring, benchmarking, and using metrics and key performance indicators (KPIs) to make solid strategic decisions? Also, what about obtaining statistically representative results? As Robert Rubinoff wrote in “How to Quantify the User Experience” (Sitepoint.com, April 21, 2004), “Many look to the user experience as an overall indicator of website success. Analyzing how effectively a website provides for a net positive user experience can often turn into a subjective affair, rife with opinion and short on objectivity.”
UX Metadata, KPIs, and Benchmarks
Clients in our work experience often say it loud and clear: “We need to combine the qualitative data from the lab environment’s small sample of users with quantitative data, obtained with larger samples, from many geographical locations.” Another typical comment is: “Okay, so I have a 60 percent success ratio from users
who performed a certain task on my website. Now, is this good or bad? What about my industry standards? Whose benchmark should we be looking at?”
Few UX managers in even fewer organizations actually perform web customer experience management: identifying the current KPIs for their sites and their industry, defining objectives for the site, and managing a process that will meet those objectives. You simply cannot do this without first quantifying some form of usability. What does it mean to “quantify or measure usability and UX”? Basically, it means to know the effectiveness, efficiency, and satisfaction ratios, as well as the expectations, perceptions, and opinions, of larger, statistically representative samples of users. So when you say that 70 percent of users spent more than five minutes trying to complete a task online, you can feel confident that this result is valid. These statistics are your data.
Also, you should feel positive about the fact that the users participating in the study came from many locations, not just the Silicon Valley or New York City areas, for example. These numbers are your metadata (literally, “data about the data”), and they’re especially important when performing international user testing with different interfaces and languages. Quantifying usability and UX has another important component:
benchmarking. If you want to define a benchmark, whether for your own website or within your industry, again you need to rely on statistics. For example, the site that currently sets the benchmark in a specific retail market offers a better customer experience than its competitors because it allows users to perform tasks faster, more effectively, and more efficiently. These are all qualities that can be quantified and measured.
What are the benefits for site managers, and where is their return on investment (ROI)? You already know that providing an excellent online customer experience definitely pays, but in order to demonstrate it to management, you need to quantify it clearly. Benchmarking and obtaining
metadata allow you to set your own standards, to define your user experience goals, and to determine whether you’re meeting your standards.
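A statement like “70 percent of users completed the task” is only as trustworthy as its margin of error, which depends on sample size. As a minimal sketch of this kind of metadata, the Python snippet below estimates the margin of error for an observed success ratio using a normal-approximation confidence interval; the figures (175 successes out of 250 users) are invented for illustration.

```python
import math

def margin_of_error(successes: int, n: int, z: float = 1.96) -> float:
    """Half-width of a normal-approximation confidence interval for an
    observed proportion (z = 1.96 corresponds to 95% confidence)."""
    p = successes / n
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical result: 175 of 250 users completed the task (70 percent).
p_hat = 175 / 250
moe = margin_of_error(175, 250)
print(f"Success ratio: {p_hat:.0%} +/- {moe:.1%}")
# prints "Success ratio: 70% +/- 5.7%"
```

With 250 users, the same 70 percent comes with a margin of error under six points; with 10 lab participants it would be around 28 points, which is exactly why small qualitative samples cannot serve as benchmarks.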
Steps to Quantifying Usability
How do you actually do it? It’s quite different from web analytics, which only tells you what happens with your traffic. From a methodological standpoint, one way to quantify usability by testing large samples of geographically dispersed audiences is unmoderated remote testing (also called “task-based online usability studies” or “automated remote testing”). This methodology perfectly complements qualitative research and allows you to test hundreds of users who participate simultaneously in their natural contexts (home or office), from their own PCs, and from geographically dispersed locations (local or international).
While the users perform the tasks, the software collects several kinds of UX data, including effectiveness ratios (the percentage of users able to complete the tasks successfully), efficiency ratios (time, number of clicks, and problems encountered), and satisfaction ratios (based on how users feel about the interactive experience). See Figures 1 and 2.
On top of these measures, you can collect click streams (the navigation paths users choose to complete tasks), which help you understand user behavior better.
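To make the measures above concrete, here is a minimal sketch of how effectiveness, efficiency, and satisfaction ratios might be computed from raw session records. The `Session` structure, its fields, and the sample values are all invented for the example; they are not the output of any particular testing tool.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    completed: bool      # did the user finish the task?
    seconds: float       # time on task
    clicks: int          # number of clicks used
    satisfaction: int    # post-task rating, e.g. 1 (poor) to 5 (great)
    path: list[str]      # pages visited, in order (the click stream)

def summarize(sessions: list[Session]) -> dict:
    """Aggregate raw sessions into effectiveness, efficiency, and
    satisfaction measures."""
    done = [s for s in sessions if s.completed]
    return {
        "effectiveness": len(done) / len(sessions),    # success ratio
        "avg_seconds": mean(s.seconds for s in done),  # efficiency: time
        "avg_clicks": mean(s.clicks for s in done),    # efficiency: clicks
        "avg_satisfaction": mean(s.satisfaction for s in sessions),
    }

# Hypothetical sessions from three remote participants.
sessions = [
    Session(True, 95.0, 12, 4, ["home", "search", "results", "checkout"]),
    Session(True, 140.0, 18, 3, ["home", "search", "results", "checkout"]),
    Session(False, 300.0, 31, 2, ["home", "faq", "search", "home"]),
]
print(summarize(sessions))
```

Note that the efficiency averages here are computed over successful sessions only, a common convention so that abandoned attempts don’t distort time-on-task; the stored `path` lists are what a click-stream analysis would consume.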
The UX statistics gathered through unmoderated remote testing let you feel confident about the quality of your user experience data, because they come with metadata on statistical significance and margin of error. It’s also a very cost-efficient research method: it saves time and travel costs, as well as user-recruitment and incentive costs. Although qualitative research is still recommended to obtain direct
one-on-one, face-to-face feedback from users, unmoderated remote testing provides UX managers with KPIs and metadata that, realistically, they cannot obtain with any other research method. Since the structure of the study is task-based, it’s much more than an online survey. Users actually interact and try to perform a task, then are asked specific questions about it. You know the user’s intent, and you know why users do what they do.
After hearing a qualitative usability study presentation for the marketing team of one major international airline, the marketing director said, “We just can’t make this kind of decision [redesigning the flight search and check-out process] based exclusively on research with just ten users. We’d need to combine it with quantitative data and test at least 250 users or so.” He also added, “I’d like to see some industry benchmarks, both for the current website design and with the future version we’ll be launching. So for this, I’d need to quantify usability and see if we are doing good or bad, compared to industry standards.” He did not feel comfortable going up to the CEO to request a redesign budget with the evidence from a small sample of users. We then offered him the possibility of quantifying usability problems, along with effectiveness, efficiency, and satisfaction ratios for a sample of 250 users through an unmoderated remote user test. We also tested new samples of 250 users on five competing sites with the same test script. Then we were able to compare each of the competitors side by side and see which was the benchmark and how this airline stood among its competitors. Finally, we contracted to conduct the same research every year so the client has historical data to analyze.
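A side-by-side comparison like the airline study above ultimately comes down to asking whether the gap between two sites’ success ratios is real or just sampling noise. The sketch below uses a standard two-proportion z-test for that question; the figures (150/250 for one site versus 190/250 for a competitor) are invented, not the actual airline data.

```python
import math

def two_proportion_z(s1: int, n1: int, s2: int, n2: int) -> float:
    """z statistic for comparing two observed success ratios."""
    p1, p2 = s1 / n1, s2 / n2
    p = (s1 + s2) / (n1 + n2)  # pooled proportion under "no difference"
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: benchmark competitor at 190/250 (76%) vs. our site at 150/250 (60%).
z = two_proportion_z(190, 250, 150, 250)
print(f"z = {z:.2f}")  # prints "z = 3.83"; |z| > 1.96 means the gap is
                       # significant at the 95% level
```

With 250 users per site, a 16-point gap is far outside what chance alone would produce, which is the kind of evidence a marketing director can take to a CEO.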
Another client had a similar challenge, but with yet another factor to be considered. She wanted to quantify usability for six sites, in six different markets with local users and in the local language—but did not have the budget or time to spend on travel.
The international research was conducted through unmoderated remote testing. Since these tests do not need live moderation, and since users participate simultaneously, the whole project was accomplished in approximately two months, including the process of localizing and translating the original script and gathering the results in each market. With a relatively small budget and timeframe, she was able to collect rich UX data from hundreds of users.
One important thing to mention about these two examples is that for both clients we recommended, and actually ended up conducting, some selected qualitative research after the remote test was finished.
In the second case, the client picked the three most important markets to study. The goal was to run a more targeted and thorough qualitative study in the lab on the main issues found in the previous quantitative study. This way, we were able to get both quantitative and qualitative data, both important to our client.
Quantifying a website’s usability and user experience is an attainable goal for UX managers. More and more client companies understand how usability plays an important role in a website’s success, and they know that there are ways to obtain the necessary data, both qualitative and quantitative, for industry benchmarks and many other measures. They can’t manage it if they don’t measure it.
Retrieved from http://www.uxpamagazine.org/attainable_goal/