
Creating Effective Decision Aids for Complex Tasks

Caroline Clarke Hayes and Farnaz Akhavi

Journal of Usability Studies, Volume 3, Issue 4, August 2008, pp. 152-172

Laboratory Studies of Decision Aids in Product Design Decision Making

This section briefly summarizes two laboratory studies (Akhavi and Hayes, 2007) that used the decision aids described above to investigate the costs and benefits mechanical designers experience when using these aids versus no aid.

Study 1

In the first study, we asked seven student designers (all at the intermediate level of design expertise) to rank, from best to worst, four design alternatives for the elbow joint on the robotic arm and three design alternatives for a mounting plate (used to attach the arm to the wheelchair). All students had been working on the robot arm design task since the start of the semester, so they were familiar with the task and the criteria for an effective solution. While it might have been desirable in some respects to use subjects with no prior exposure to this particular design task and its alternatives, we needed subjects who were already familiar enough with the task to understand the criteria and the properties of the alternatives, which were non-trivial. Each student individually used the two decision aids described earlier to assist with the ranking: one based on the fuzzy technique and the other on the deterministic technique.

We found that all students produced identical rankings regardless of the decision aid used. However, the fuzzy decision aid required significantly more time on average than the deterministic one: 12.5 minutes versus 7 minutes (p = 0.02). This time difference appeared to result directly from the additional data entry the fuzzy method required: two values for each alternative and criterion, where the deterministic method required only one. After discussions with the students, we concluded that they all produced identical rankings because, for both the elbow joint and the mounting plate, the alternatives differed so clearly in quality that there were obvious winners and losers. In such situations, designers can make choices readily without the assistance, or the overhead, of a tool. The students did, however, comment that they liked the way the tool allowed them to systematically lay out the criteria and value judgments for all alternatives. They printed out the decision matrix produced by the tool and included it in their final project report as a convenient summary justifying their design decisions.
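To make the deterministic technique concrete, the following sketch implements a plain weighted-sum decision matrix in Python. The criteria names, weights, and ratings are hypothetical placeholders, not the values used in the study; the point is only the shape of the computation: one value per alternative per criterion, combined into a single score used for ranking.

    # Minimal weighted-sum decision matrix, the core of the deterministic aid.
    # All criteria, weights, and ratings below are hypothetical examples.
    weights = {"strength": 0.4, "weight": 0.3, "cost": 0.3}

    # One rating per alternative per criterion (higher is better).
    ratings = {
        "elbow design A": {"strength": 8, "weight": 6, "cost": 7},
        "elbow design B": {"strength": 5, "weight": 9, "cost": 4},
        "elbow design C": {"strength": 7, "weight": 5, "cost": 8},
        "elbow design D": {"strength": 4, "weight": 7, "cost": 6},
    }

    def overall_score(r):
        """Weighted sum of one alternative's ratings across all criteria."""
        return sum(weights[c] * r[c] for c in weights)

    # Rank the alternatives from best (highest score) to worst.
    for alt in sorted(ratings, key=lambda a: overall_score(ratings[a]), reverse=True):
        print(f"{alt}: {overall_score(ratings[alt]):.2f}")

Replacing each single rating with a pair of values, as the fuzzy aid required, doubles the data entry, which is consistent with the longer completion times observed.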

The important lesson learned from this first study was that computer decision aids may not add value for all design decisions, particularly if the top alternative can easily be distinguished from the others. For the next study, we designed a situation where the alternatives were very close in quality so that the top alternative was not easy to identify without careful consideration.

Study 2

The second study explored the use of decision aids in the context of the manned lunar excursion vehicle design task.

Subjects

Twenty-six participants took part in the study: eighteen senior undergraduates in a capstone design course and eight engineering design professors. The students were considered intermediate-level designers (not novices), and the professors were considered experts in lunar vehicle design.

Tasks

All students in the class were given the same design task: to create a design for a lunar excursion vehicle. Four teams of students created a total of 12 designs. Each team presented its three designs to the class, and the class decided which was best, which average, and which worst. Next, all 12 designs were re-sorted into three sets of four designs each. Set A contained only the best designs from each team, Set B contained all the average designs, and Set C contained all the worst designs. Thus, each set contained four designs that were similar in quality, so ranking them from best to worst required some thought. Furthermore, because the designs within each set crossed the boundaries of the student teams, none of the students had yet spent time comparing the designs within the new sets. Thus, we created fresh comparison tasks for the students participating in the experiment.
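The re-sorting step amounts to a simple regrouping, sketched below with hypothetical team and design names: the class-judged best design from each team goes into Set A, the average ones into Set B, and the worst into Set C.

    # Regroup 12 designs (4 teams x 3 designs) into quality-matched sets.
    # Team and design names are hypothetical; the order within each list
    # reflects the class's best-to-worst judgment for that team.
    class_ranking = {
        "team 1": ["t1-best", "t1-average", "t1-worst"],
        "team 2": ["t2-best", "t2-average", "t2-worst"],
        "team 3": ["t3-best", "t3-average", "t3-worst"],
        "team 4": ["t4-best", "t4-average", "t4-worst"],
    }

    # Set A: the best design from each team; Set B: the average; Set C: the worst.
    sets = {name: [designs[i] for designs in class_ranking.values()]
            for i, name in enumerate(["A", "B", "C"])}
    print(sets)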

Method and tool inputs

Each subject in the study was then asked to independently apply a different decision-making method to each of the three design sets. The three methods were:

1. No decision aid: subjects ranked the designs unaided, using their own judgment.
2. The deterministic (weighted-sum) decision aid described earlier.
3. The fuzzy decision aid described earlier.

The reader should note that the mathematical methods apply no internally encoded design expertise. The design expertise and judgments come entirely from the human participants; the tools simply provide a systematic structure and procedure to support their comparison process.

Tool outputs

Both mathematical methods computed and displayed a single overall value score for each alternative based on a combination of all criteria. However, the fuzzy method actually produced a probability density function for each design alternative, describing a distribution of probable values. Thus, we could have displayed the results in many ways (e.g., as a function curve or a range), but for simplicity we chose to display only the average of each probability density function.
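The article does not detail the fuzzy computation itself, so the sketch below should be read as one plausible illustration, not the authors' implementation: it assumes each fuzzy rating is a (low, high) pair treated as a uniform range and propagated by Monte Carlo sampling. The weighted sums of the samples approximate each alternative's probability density function of overall value, and only the mean of that distribution is displayed, mirroring the simplification described above. All weights and ratings are hypothetical.

    import random

    # Hypothetical weights; each fuzzy rating is a (low, high) pair rather
    # than the single value the deterministic method requires.
    weights = {"strength": 0.4, "weight": 0.3, "cost": 0.3}
    fuzzy_ratings = {
        "design 1": {"strength": (6, 9), "weight": (5, 7), "cost": (6, 8)},
        "design 2": {"strength": (7, 8), "weight": (4, 9), "cost": (5, 9)},
    }

    def sample_overall(r):
        """Draw one overall value by sampling each criterion's range."""
        return sum(w * random.uniform(*r[c]) for c, w in weights.items())

    # The samples approximate each alternative's value distribution;
    # only the mean is displayed, as in the study.
    for alt, r in fuzzy_ratings.items():
        samples = [sample_overall(r) for _ in range(10_000)]
        print(f"{alt}: mean value {sum(samples) / len(samples):.2f}")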

Ranking

Next, for each method, participants were instructed to rank a set of four design alternatives from best to worst. The order of the methods and their pairing with design sets were systematically varied. The participants had received instruction in their design class on how to use the weighted-sum method to compare and rank design alternatives, using a calculator or spreadsheet to perform the calculations.

Data recorded

The experimenters recorded the following data:
