
Plain Language Makes a Difference When People Vote

Janice (Ginny) Redish, Dana Chisnell, Sharon Laskowski, and Svetlana Lowry

Journal of Usability Studies, Volume 5, Issue 3, May 2010, pp. 81-103



Methods

We collected both performance and preference data.

For performance data, participants voted on two ballots that differed only in the wording and presentation of language (and the names of parties and candidates). To account for practice effects, we counter-balanced the order in which participants voted. (That is, Participant 1 voted Ballot A, then Ballot B. Participant 2 voted Ballot B, then Ballot A, and so on.)
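As a rough sketch of that alternating scheme (an illustration only; the participant numbering and function below are ours, not part of the study materials):

    # Illustrative counterbalancing: odd-numbered participants vote
    # Ballot A first, even-numbered participants vote Ballot B first.
    def ballot_order(participant_number):
        if participant_number % 2 == 1:
            return ("Ballot A", "Ballot B")
        return ("Ballot B", "Ballot A")

    for p in (1, 2, 3, 4):
        print(p, ballot_order(p))   # 1 and 3 get A first; 2 and 4 get B first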

For preference data, after participants voted both ballots, we showed them the comparable pages from the two ballots, asked them to comment on the differences they saw, and then required a forced choice of preference for each set of pages. After they had gone through all the pairs of pages, we asked for an overall preference with the options of Ballot A, Ballot B, no preference, and the reason for their preference.
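A minimal sketch of how such forced-choice responses could be tallied (the data below are invented for illustration; this is not the study's instrument):

    from collections import Counter

    # Hypothetical per-page forced choices for one participant.
    page_choices = ["A", "B", "A", "A", "B"]
    overall = "A"   # overall preference: "A", "B", or "no preference"

    tally = Counter(page_choices)
    print("Pages preferring Ballot A:", tally["A"])
    print("Pages preferring Ballot B:", tally["B"])
    print("Overall preference:", overall)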

Where did we do the study?

We collected data from 45 participants in 3 locations during May and June 2008.

We chose the locations for both geographic spread (Middle Atlantic, South, Midwest) and diversity in the type of community (urban, small town, suburban community with a large minority population). In each location, we held the sessions in the usability center of a university. However, our participants were not students at those universities. They were people who live or work in the local communities. (Some of our participants were taking college classes, but no one was studying at the institution where they came to participate in the study.)

Who participated?

We recruited based on two screening criteria, and all of our participants met both.

Although gender, ethnicity, and age were not screening criteria, we were pleased to achieve diversity in all three.

Because ballots must be understandable and usable by people regardless of their education, we focused on people with a high school education or less, or with some college but no advanced degree. By including people with lower levels of education, we hoped to learn more about issues that other researchers had raised regarding voters with lower education levels (Herrnson et al., 2008; Norden, Creelan, Kimball, & Quesenbery, 2006; Norden, Kimball, Quesenbery, & Chen, 2008).

We succeeded in recruiting based on our study plan, and, indeed, education turned out to be the only participant characteristic that correlated with accuracy in our results. Table 1 shows our participants by education level.

Table 1. Number of participants at each education level (N=45)


*GED = General Educational Development, a series of tests that people can take to show they have the equivalent of a high school education. Many people who drop out of high school take the GED later in life.
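To illustrate the kind of association behind the finding that education correlated with accuracy (a hedged sketch with invented numbers; the study's actual analysis is in the full report), one could rank-correlate education level with accuracy:

    # Hedged sketch: Spearman rank correlation between ordinal education
    # level and voting accuracy. All values below are invented.
    from scipy.stats import spearmanr

    # Hypothetical coding: 1 = less than high school ... 4 = college degree
    education = [1, 2, 2, 3, 3, 4, 4, 4]
    accuracy = [0.72, 0.80, 0.78, 0.85, 0.88, 0.92, 0.90, 0.95]

    rho, p_value = spearmanr(education, accuracy)
    print("Spearman rho = %.2f, p = %.3f" % (rho, p_value))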

How did we recruit participants?

A professional recruiter helped us find appropriate participants, who came to us through several channels.

Some of our participants, therefore, came to us because they responded to a request online. However, not all did. Some came through referrals. For example, one older gentleman had no email address. His niece read about the study and served as our initial contact to him.

Furthermore, even though most of our participants used email, had a cell phone, and were savvy about other technology, their sophistication with technology did not necessarily mean that they understood what a ballot is like, were used to ballots, or could vote accurately.

How were the ballots presented?

The ballots simulated the experience of electronic voting. However, we did not use any of the currently existing Direct Recording Electronic (DRE) voting systems.

Several reasons supported that decision.

Instead, the ballots were programmed into and presented on identical touch-screen tablet PCs: TabletKiosk Sahara Slate L2500s with a 12.1-inch XGA screen.

You can see the setup that we used in Figure 1.


Figure 1. The setup that we used in the study

What were the ballots like?

We adapted our ballots from the NIST "medium complexity" test ballot (NIST, 2007). This is the ballot that Design for Democracy/AIGA used in its project for the Election Assistance Commission (Design for Democracy, 2007).

The NIST test ballot includes straight-party voting and has 12 party-based contests, two retention questions, and six referenda. In some contests, it requires more than one screen of candidates.

We adapted this ballot by slightly reducing the number of party-based contests and referenda, adding a few non-party-based contests, and limiting every contest to a single screen of candidates. Each of the ballots in our study included straight-party voting, ten party-based contests, two non-party-based contests, two retention questions, and three referenda.
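For concreteness, the make-up of each adapted ballot can be captured in a small sketch (the representation is ours; only the counts come from the description above):

    # Structure of each ballot in the study, per the text above.
    from dataclasses import dataclass

    @dataclass
    class BallotStructure:
        straight_party: bool = True
        party_based_contests: int = 10
        non_party_based_contests: int = 2
        retention_questions: int = 2
        referenda: int = 3

    b = BallotStructure()
    print("Items after the straight-party screen:",
          b.party_based_contests + b.non_party_based_contests
          + b.retention_questions + b.referenda)   # 17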

The screen design and navigation were identical for both ballots.

We used the same typeface and type size in both ballots, and we followed best practices in information design. The political parties were indicated by color names to avoid any bias for or against actual parties; we did not name any party either Red or Blue.

Candidates’ names were made up but resembled a range of typical names. Research by the ACCURATE group at Rice University has shown that study participants vote just as accurately, and are not put off, when ballots use made-up names as when they use names of people they recognize (Everett, Byrne, & Greene, 2006).

You can see the ballots in the full report at the NIST web site (Redish et al., 2008, Appendices 2 and 3, 118-171). The ballots are also available separately at http://www.nist.gov/itl/vote/upload/Ballot-A.pdf and http://www.nist.gov/itl/vote/upload/Ballot-B.pdf.

What happened in each session?

Each participant came for an individual one-hour session. Actual sessions ranged from about 45 minutes to about 70 minutes.

Each session moved through the same major parts: voting both ballots and then comparing pairs of pages for preference, as described earlier.

Each person received $75 in cash for participating in the study.

What tasks and directions did we give participants as voters?

Just before they voted each ballot, we gave participants a sheet of specific directions to work with. This sheet told participants what party to vote for, what party-based contests to change, which contest to write in a candidate, and how to vote in all the non-party-based contests and for all the amendments/measures. Participants read through the directions for each ballot just before starting to vote on that ballot. They also kept the directions with them to refer to as they went through the ballot. When participants were at the Summary/Review screen at the end of each ballot, we gave them two additional directions that caused them to go back and vote differently in two contests.

We couched each direction in a sentence that put participants into the voting role. For example, the direction for the task of writing in a candidate for Ballot A was “Even though you voted for everyone in the Tan party, for Registrar of Deeds, you want Herbert Liddicoat. Vote for him.”

When they got to the Registrar of Deeds contest on Ballot A, participants saw that Herbert Liddicoat was not on the ballot. They then had to (a) realize that they needed to write him in and (b) succeed at writing in his name in the right way.

This way of giving directions to participants is typical of research on ballots (Conrad et al., 2006; Everett et al., 2006; Greene, Byrne, & Everett, 2006; Selker, Goler, & Wilde, 2005; among others). Giving participants these directions was necessary to measure accuracy.
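As a sketch of how such directions support accuracy scoring (the scoring function is an assumption of ours, not the study's instrument), each cast vote can be compared with the directed vote, contest by contest:

    # Hypothetical accuracy scoring: compare cast votes against the
    # written directions. Contest names echo the example in the text.
    directed = {
        "Governor": "Tan party candidate",
        "Registrar of Deeds": "Herbert Liddicoat (write-in)",
    }
    cast = {
        "Governor": "Tan party candidate",
        "Registrar of Deeds": "left blank",   # a missed write-in counts as an error
    }

    correct = sum(1 for contest, intent in directed.items()
                  if cast.get(contest) == intent)
    print("Accuracy: %d of %d contests" % (correct, len(directed)))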

The directions for Ballot A and Ballot B were identical except for the names of the parties and candidates.

Figure 2 lists the tasks participants did (the different voting behaviors). These are not the specific directions that participants were given. For the specific directions that participants worked with, see the full report (Redish et al., 2008, Appendix 7, 183-185).

Vote for all the candidates from one party at the same time (straight-party).

Review the straight-party candidates to accomplish some of the other directions, leaving some alone and changing others per the directions.

Write in a candidate instead of their party’s candidate.

Change a vote from the candidate of their party to the candidate of another party in a “vote for one” contest.

Change votes in a “vote for no more than four” contest. (This and the previous two tasks required “deselecting” one or more of their party’s candidates if they had successfully voted straight-party; see the selection-logic sketch after this list.)

Skip a contest.

Vote per the directions in several non-party-based contests and for three amendments/measures. (The language of the directions carefully avoided exactly matching the wording of either ballot.)

Go back and vote the skipped contest from the Summary/Review page.

Change a vote from the Summary/Review page. (This and the previous task were directions given on paper to the participant at the appropriate time—when the participant was on the Summary/Review page.)

Cast the vote and confirm the casting.

Figure 2. List of voting behaviors in the study (To accomplish this list of behaviors, voters worked with a set of directions, as described in the text of this article.)
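The “vote for no more than four” behavior in Figure 2 depends on selection logic like the following minimal sketch (not the code of the study's ballots; the candidate names are invented): once the limit is reached, a new selection is refused until the voter deselects someone.

    # Illustrative toggle logic for a "vote for no more than N" contest.
    def toggle(selected, candidate, limit=4):
        updated = set(selected)
        if candidate in updated:
            updated.remove(candidate)        # tap again to deselect
        elif len(updated) < limit:
            updated.add(candidate)           # select while under the limit
        # else: refused; the voter must deselect a candidate first
        return updated

    choices = set()
    for tap in ["Ames", "Bell", "Cato", "Dunn", "Elk"]:
        choices = toggle(choices, tap)
    print(sorted(choices))   # the fifth tap ("Elk") was refused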
