
The Usability of Computerized Card Sorting: A Comparison of Three Applications by Researchers and End Users

Barbara S. Chaparro, Veronica D. Hinkle, and Shannon K. Riley

Journal of Usability Studies, Volume 4, Issue 1, November 2008, pp. 31-48


Study 1

Study 1 examined the usability of the three card sorting applications from a researcher's perspective. The term researcher describes academicians and practitioners who use the card sorting methodology. These users are typically involved in the setup stages of a card sort study and in the analysis once the card sort data are collected.


The following sections provide information about the participants, materials, and procedure used in this study.


Eight participants, ranging in age from 23 to 36 years (M = 29), volunteered for this study. Three male and five female participants were recruited from a doctoral Human Factors graduate program at a Midwestern university. Participants were selected based on their experience using the card sorting technique to aid in information architecture design. All had conducted card sorts with physical index cards and had used the electronic card sort program USort/EZCalc. All participants were frequent computer and Internet users, but none were familiar with any of the card sort programs evaluated.


One Pentium-class computer running Windows XP at 1024 x 768 resolution was used to run the study. Participants were digitally recorded using a Web camera and the software program Morae™ 2.0 to capture and combine both the video footage and the on-screen events of the application for each task. In addition, Morae™ was used to gather performance data, including time on task and navigation for each participant. Two of the card sort programs were web-based (OpenSort and WebSort) and accessed via a campus network T1 line. The third application, CardZort, was a Windows application and was accessed directly from the test computer.


Participants were asked to complete a background questionnaire regarding their computer and Internet habits. They were then asked to complete a series of four tasks representative of those that researchers typically perform when creating and analyzing results for a card sort study. The tasks were as follows:

  1. Enter a prescribed list of 35 items to create a card set for an open card sort exercise.
  2. Find the results from the card sort study to analyze (participants were told to assume the study had concluded, even though they had just completed the task of setting up the cards).
  3. Download and view the results.
  4. Interpret the results.

All participants completed the tasks in all three programs. The tasks were presented in sequential order, while the order of the card sort programs was counterbalanced across participants. After each task, participants rated the difficulty of completing that task (1 = Very Easy, 5 = Very Difficult). After all tasks were finished for a program, participants completed a satisfaction survey (Brooke, 1996) and discussed what they liked and disliked about that card sort program. Measures of task success and time-on-task were collected for Tasks 1-3. Qualitative comments were gathered for Task 4, which asked participants to interpret the results. OpenSort offered three methods of viewing the results, whereas WebSort and CardZort each offered only one. Participants entered a different list of 35 items for each program (Task 1), and the appropriate results were made available for Task 2. After completing the card sort tasks with all three applications, participants ranked their preference among the programs.
