The Magazine of the Usability Professionals' Association
By Keisha McKenzie and David A. Edgell
Training users on a new software package can be very expensive and should be a consideration in software development. When users cannot be pulled from their work areas for long periods of time to learn new packages, companies often use a pyramidal training model in which super users at each level receive training and then train others in the level below them. Ease of learning and recall during training is a significant part of usability for such users, and testing with them should take this into account.
While testing a software package for managing academic conferences, we simulated this training scenario. Our test included traditional facilitator and participant roles, but it also let us assess memorability and learnability: near the end of each session, the participant reversed roles and trained one of our team members.
Although fixed roles help to structure the testing process, workplace roles are not always fixed. Our approach in the lab simulated one form of software training used by many organizations by varying participants’ roles in-test.
Some usability methods address collaborative training by adding extra participants to play additional roles—for example, pairing neophytes with experts in collaborative walkthroughs or team tasks. Another method, called “teach-back” or “train-the-trainer,” is used in public health. In this method, the participant teaches the facilitator the content she has learned. When patients can explain the treatment plans or procedures they have just discussed with their doctor or nurse, practitioners can see how much the patients have learned and understood. We applied similar reasoning to our test design.
Tests that yield data on learnability and memorability can provide companies with greater insight from their testing. Companies using teach-back in their own tests might discover how to improve training for real and not just ideal users. Such tests may also help companies to anticipate real users’ issues with learning and recalling product features.
Applying the Method to a Real Test
We recently applied this method in one of our software usability tests. Between 2007 and 2008, members of our American team worked virtually with a German software developer on documenting and testing an academic conference management package. The software is an open source tool and undergoes continuous development. Typical users of this system are faculty members, graduate students, and department staff, and because pilot testing suggested that these groups all use the software similarly, we focused on testing with faculty members.
Our central test question was how usable faculty members found the software while they performed routine conference management tasks. At our institution, previous testing on the software had sought to improve its documentation. Our most recent test, however, focused on the software itself. We wanted to find out how the product’s design and architecture supported or obstructed its use.
For our test, we recruited five full-time faculty members to evaluate the software. All participants had between one and twenty-three years’ experience managing academic conferences, but only two had prior experience using software to support that work. Our pre-test screening ensured that these users were familiar with the tasks involved in administering a conference and managing submissions. Before testing, we also populated the test software with data from a recent conference; because participants knew the subject matter, this data helped create a realistic experience during the test.
Building Learnability and Memorability into a Test
We designed our tests around the following four tasks:
Assign an incoming paper to a reviewer
Create a conference session
Assign an accepted paper to a session
Teach a graduate assistant to assign an incoming paper to a reviewer
In the first test section, one team member performed standard, scripted facilitation and participants worked through three logically related conference management tasks, each building on the last.
In the second part of the test, roles shifted: in the final task, participants taught the facilitator how to complete the first task. This required them to remember how they had completed Task 1 and to demonstrate that recall by teaching someone else. An average of twelve minutes elapsed between participants completing the first task and beginning the fourth, enough time for them to focus on other aspects of the program before trying to teach.
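The elapsed time between tasks can be recovered from session timestamps. A minimal sketch, assuming hypothetical per-participant timestamps pulled from the recordings (the participant IDs, times, and logging format here are illustrative, not the authors’ actual instrumentation):

```python
from statistics import mean

# Hypothetical session timestamps (HH:MM:SS from the start of each session);
# in practice these would be transcribed from the video or a logging tool.
sessions = {
    "P1": {"task1_end": "00:14:30", "task4_start": "00:25:45"},
    "P2": {"task1_end": "00:12:10", "task4_start": "00:24:40"},
    "P3": {"task1_end": "00:16:05", "task4_start": "00:28:20"},
}

def to_seconds(ts: str) -> int:
    # Convert an HH:MM:SS timestamp to seconds.
    h, m, s = (int(part) for part in ts.split(":"))
    return h * 3600 + m * 60 + s

# Minutes between finishing Task 1 and starting Task 4, per participant.
lapses = [
    (to_seconds(v["task4_start"]) - to_seconds(v["task1_end"])) / 60
    for v in sessions.values()
]
print(f"Mean lapse: {mean(lapses):.1f} minutes")  # → Mean lapse: 12.0 minutes
```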
Parsing Learnability and Memorability from Test Data
To support observation and analysis, we video recorded the sessions using two movable cameras and one fixed overhead camera. We focused the movable cameras on the participant and the facilitator so that we could capture their expressions in either role. We also captured screen video while participants used the software.
To analyze the data, team members reviewed the session videos individually, and then met to complete an affinity diagramming session together. We watched the sections of the videos that corresponded to the first and fourth tasks and noted individual points on separate Post-its. We then organized observations into groups of similar content, reviewing and rearranging them into emerging categories.
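Once the Post-it observations are transcribed and coded, the affinity-diagramming tally can be mirrored in code. A minimal sketch, assuming hypothetical coded notes (the theme labels beyond “confidence” and “feedback” are placeholders, not the study’s actual categories):

```python
from collections import Counter

# Hypothetical coded observations: (participant, theme) pairs transcribed
# from Post-it notes after the affinity-diagramming session.
notes = [
    ("P1", "confidence"), ("P1", "feedback"),
    ("P2", "confidence"), ("P2", "navigation"),
    ("P3", "feedback"), ("P3", "confidence"),
]

# Tally how many observations fell under each emerging theme.
theme_counts = Counter(theme for _, theme in notes)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Sorting by frequency surfaces the densest clusters first, which is roughly what physically regrouping the notes accomplishes on a wall.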
Through this inductive process, we developed seven themes in the data, including general efficiency, error, and satisfaction issues.
For example, the data we grouped under “confidence” showed that participants recalled and were able to teach how to assign a paper to a reviewer; it highlighted users’ confidence levels with that specific task. The “feedback” data points, on the other hand, captured participants’ approach to teaching and to offering teacher-style feedback to their “graduate student assistant,” giving us insight into their general attitude toward the new teaching role.
All participants but one moved from confusion to confidence during this round of testing. While working through Task 1, participants tentatively explored the program and expressed their uncertainty with it. By Task 4, however, they confidently gave our team member directions and encouraged her through the task. This learning took place even though more than half of our users found the software complex and non-intuitive: the program has a dense menu structure, some menu labels change between screens, and important buttons are unusually positioned. Without the role reversal method, we would not have been able to isolate and identify the software’s learnability and memorability issues so clearly.
Evaluating the Method
Like all methods, our approach has potential advantages and disadvantages, and usability professionals should weigh these trade-offs, along with a few practical considerations, before applying the method in their own tests.
Applying this method again with modifications could indicate whether participants might perform better on learnability and memorability tasks if given advance notice that they would teach other users. More difficult tasks and more complicated software than those we tested could prove less compatible with this method, and further applications of the method among non-academics would indicate whether it can work as well for corporations as it did in a university setting.
Switching participants’ roles mid-test responds to the fact that workplace roles are much more flexible than traditional lab usability testing acknowledges. Reversing roles in our test allowed participants to demonstrate the kind of routine role-switching that occurs when one user has to train another. Additionally, this method provided us with significant data on learnability and memorability that would otherwise have required recruiting additional participants.
Keisha McKenzie, M.A., is a technical editor and institutional policy writer at Texas Tech University. Now completing her doctoral dissertation in technical communication and rhetoric, she researches how public institutions and governments interact with their users through policy communication.
David A. Edgell has more than ten years’ experience as a technical communication practitioner in healthcare and information technology. He is a Ph.D. student in Texas Tech’s Technical Communication program. His interests include patient education, medical consent, and virtual communities.
This article was originally printed in User Experience Magazine, Volume 8, Issue 3, 2009.
© Usability Professionals' Association