JUS - Journal of Usability Studies
An international peer-reviewed journal

Plain Language Makes a Difference When People Vote

Janice (Ginny) Redish, Dana Chisnell, Sharon Laskowski, and Svetlana Lowry

Journal of Usability Studies, Volume 5, Issue 3, May 2010, pp. 81 - 103


Results

We set out to answer three questions: Do voters vote more accurately with plain language instructions than with traditional instructions? Do voters recognize the difference in language between the two ballots? Do voters prefer one ballot over the other?

The answer to all three questions for this study is "yes."

Participants voted more accurately on the ballot with plain language instructions

Each ballot in this study had 18 pages where participants voted (plus 8 other non-voting pages). We gave participants explicit directions for voting on 11 of those 18 pages. For the other 7 pages, we gave no directions; the absence of directions for those specific contests was, in fact, an implicit direction not to change votes on those 7 pages.

Table 2 shows the correct and incorrect votes on the two ballots: Ballot A with traditional language instructions and Ballot B with plain language instructions.

Table 2. Participants voted more accurately on Ballot B, the plain language ballot (45 participants, 18 possible correct votes on each of two ballots).


A within-subjects (or repeated measures) analysis of variance (ANOVA) comparing the number of correct votes on the two ballots showed that the difference in accuracy is marginally statistically significant (Ballot A mean of 15.5; Ballot B mean of 16.1; F(1, 43) = 3.413, p < .071).
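With only two within-subject conditions, a repeated-measures ANOVA is mathematically equivalent to a paired t-test, with F = t² and the same error degrees of freedom. As a quick sanity check on the statistic reported above (a sketch, not the authors' analysis):

```python
import math

# With two within-subject conditions, repeated-measures ANOVA and the
# paired t-test are equivalent: F = t^2, with the same error df (43 here).
F = 3.413           # reported F(1, 43) for the accuracy difference
t = math.sqrt(F)    # the corresponding paired t statistic
print(round(t, 2))  # about 1.85, on 43 degrees of freedom
```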

Using the plain language instructions first helped participants when they got to the ballot with traditional instructions

The number of correct votes on the plain language ballot (Ballot B) differed very little whether participants voted it first or second. However, the number of correct votes on the traditional language ballot (Ballot A) increased from 14.4 for participants who voted Ballot A first to 16.3 for participants who voted Ballot A second. The interaction between which ballot was voted first and the total number of correct items on a given ballot is statistically significant (F(1, 43) = 23.057, p < .001). As Figure 3 shows, using the plain language instructions first helped participants when they got to the ballot with traditional instructions. The reverse order effect (traditional instructions helping on the plain language ballot) was not nearly as strong.


Figure 3. Participants who worked with B first (plain language ballot) did better on A (traditional language ballot) than participants who worked with A first.

Education level made a difference in how accurately different groups of participants voted

We looked at correlations of accuracy with location (our three geographic sites) and with participants’ characteristics (gender, age, voting experience, and education level). Location, gender, age, and voting experience were not statistically significant differentiators of accuracy. Education was. Less education was associated with more errors (r = -.419, p < .004, effect size R² = 0.176).
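The reported effect size follows directly from the correlation: for a Pearson correlation, R² is simply r squared. A quick check of the numbers above:

```python
# Effect size for a Pearson correlation is the square of r.
r = -0.419                  # correlation between education and errors (reported above)
r_squared = r ** 2
print(round(r_squared, 3))  # 0.176: education level accounts for about
                            # 17.6% of the variance in voting accuracy
```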

Participants recognized the difference in language

The answer to our second question, "Do voters recognize the difference in language between the two ballots?" is also "Yes."

After voting both ballots, the participant moved with the moderator to another table. The moderator had two stacks of printed pages, a stack from each ballot. The moderator worked with the participant using the section of the test script that you see in Figure 4.

Thank you very much for voting both of those ballots. I would like to go over them in more detail with you now. I am going to show you some of the pages from the two ballots. I will show you the same page from both ballots at one time. On each set of pages, I want you to compare the instructions and comment on them.

[The moderator then turns over the first page of instructions for both ballots – always with the ballot the participant voted first on the left – and points out which is Ballot A and which is Ballot B. Every page also has A or B in a large letter on the top of the page. The moderator continues:]

Notice that the instructions on these pages are different. Please compare them and comment on them.

[When the participant stops commenting, the moderator continues:]

Thank you for your comments. Do you have anything else you would like to say about these two pages?

[When the participant indicates that she or he has no more comments, if the participant has not clearly expressed a preference yet, the moderator asks:]

If you had to choose one of these two pages for a ballot, which would you choose?

Figure 4. An excerpt from the test script showing how the moderator worked with the participant in the preference part of the session

Although we did not use the words "plain language," "language," or "design" when inviting participants to comment—nor at any time during the session—their comments clearly indicated that they recognized and were reacting to the difference in the wording and presentation of the instructions.

The following are just a few typical examples of what participants said:

Comparing the instructions to voters (at the beginning of each ballot)

Participant A3
About Ballot A: I don't like the paragraph being so large and all together.
About Ballot B: I like the bullets and that the important points are in bold.

Participant A6
About Ballot A: The paragraph form is so long. I gotta read all of this.
About Ballot B: I prefer this; it's less wordy.

Participant B17
About Ballot A: When I first read this, I was overwhelmed. I had to read it three times. There was so much to remember.

Comparing the pages about State Supreme Court Chief Justice where A uses "Retention Question" and "retain" and B names the office and uses "keep"

Participant A4
"Keep" is short and sweet compared to "retain." Some people might not know what that ["retain"] means.

Participant C32
"To keep." Yes, yes, I do [want to keep her]. Like I'm thinking 30 seconds less.

Comparing "accept/reject" to "for/against" as choices for measures:

Participant B15
I prefer "for/against"; they are simpler words.

Participant B23
I prefer "for/against"; it's what a normal voter would say; it's a more commoners' level.

Participant C35
"For/against" are more common words than "accept/reject."

Participants overwhelmingly preferred the plain language instructions

Both in the page-by-page comparison and in their final, overall assessment, participants chose the plain language ballot most of the time. On 12 of the 16 pages in the comparison, participants selected the Ballot B page more than 60% of the time. For those pages, the participants’ choice of B ranged from 64% to 98%.

The page with the highest preference for Ballot B was the final Thank you page. Ballot A just said, "Thank you." Ballot B said, "Thank you," and then added, "Your vote has been recorded. Thank you for voting." Participants overwhelmingly wanted to know that their vote had been recorded.

Participant A8
It's courteous, telling you it's recorded.

Participant B25
It makes you feel good. You feel better leaving. You know what happened.

In addition to the pages described earlier (initial instructions to voters, "keep" versus "retain," and "for/against" versus "accept/reject"), another page where participants significantly preferred Ballot B was the screen for writing in a candidate. On Ballot A, the page had a touch-screen keyboard and very minimal instructions. On Ballot B, the page had detailed instructions—well written, well spaced, with clear steps, and with pictures color-coded to match the action buttons (e.g., accept or cancel). The more detailed instructions were preferred by 87% of the participants (39 of 45).

Participant A5
[B is] more user-friendly; it tells you what to do if you make a mistake.

Participant B26
[B]; It's more in detail; it tells you what it really wants you to do.

On 4 of the 16 pages in the comparison, the participants’ choice was very close between the two ballots, and on 3 of those 4 pages, Ballot A was preferred slightly more often (by 51% to 56% of the participants). Three of the pages that were very close in preference had only an instruction about the maximum number of candidates a voter could choose. For example, in the contest for County Commissioners, 23 of 45 participants preferred the Ballot A instruction "Vote for no more than five," while 22 of 45 preferred the Ballot B instruction, "Vote for one, two, three, four, or five."

The page that received the highest percentage preference for the Ballot A version was the page for the President/Vice President contest, where Ballot A was, in fact, more explicit than Ballot B. Ballot B just said, "Vote for one." Ballot A said, "Vote for one. (A vote for the candidates will actually be a vote for their electors.)" We put the extra wording on Ballot A because we expected people to find it both difficult to understand and unnecessary. And indeed, 44% (20 of 45) of participants had a negative reaction to the extra sentence on Ballot A.

Participant B28
You don't really need all that.

Participant C35
It's information I don't care about. It just confused me more.

But 56% (25 of 45) thought people would want the extra fact.

Participant A9
I'm not sure it's necessary, but in the interest of full disclosure, it's more accurate.

Participant C39
It's better to have more information.

For detailed statistics and discussion of participants' page-by-page preferences, see the full report (Redish et al., 2008, Part 4, 72-102).

A large majority (82%) of participants chose Ballot B for their overall preference

The answer to our third question, “Do voters prefer one ballot over the other?” is a resounding “Yes” in favor of Ballot B, the ballot with plain language instructions. Eighty-two percent (37 of 45 participants) chose Ballot B for their overall preference. Just 9% (4 of 45) chose Ballot A, and 9% (4 of 45) chose “no preference.” The choice of the plain language instructions for ballots is statistically significant (p < .001).
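The article does not say which test produced the p < .001 above. One plausible check is an exact binomial (sign) test on the 41 participants who expressed a preference, under the null hypothesis that each ballot is equally likely to be chosen (a sketch, not necessarily the authors' method):

```python
from math import comb

# Exact two-sided binomial (sign) test; the 4 "no preference"
# responses are dropped before testing.
prefer_b, prefer_a = 37, 4
n = prefer_b + prefer_a             # 41 participants with a preference
k = max(prefer_b, prefer_a)         # 37 chose the more-preferred ballot

# P(X >= k) when each choice is a fair coin flip (p = 0.5)
one_sided = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
p_value = min(1.0, 2 * one_sided)   # two-sided
print(p_value < 0.001)              # True: consistent with the reported result
```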

This study allowed us to observe as well as count errors

Most research studies about voting look at residual votes (undervotes and overvotes) as errors. However, those researchers are reviewing ballots after an election. They rarely know why the errors happened. Did voters simply choose not to vote in a particular contest? Did they not understand the instructions on the ballot or in the help? Was the design hindering them? What specifically about the language or design was a problem? Research that focuses on already-cast ballots can only speculate. (See, for example, Alvarez, Goodrich, Hall, Kiewiet, & Sled, 2004; Kimball & Kropf, 2005; Norden, Kimball, Quesenbery, & Chen, 2008.)

In this study, we were able to observe people as they voted. Just by observing the act of voting, we learned a lot about when and how our participants had trouble with these ballots. In addition, many participants talked as they were voting about what they were doing and why they did what they did. These observations along with the error data help us to understand why participants had problems with both ballots.

Participants still had problems with both ballots

Plain language was better. But even the plain language ballot could not overcome some problems participants had. On both ballots, participants ran into the problems described in the following sections.

Straight-party voting is confusing

Participants were more likely to correctly select straight-party voting on Ballot B (84.4% correct) than on Ballot A (77.8% correct). However, that still leaves a high error rate on both ballots, as you can see in Table 3.

One participant on Ballot A chose the wrong party. All the other errors in Table 3 for the first straight-party page came from participants not choosing a party when we directed them to vote straight-party and then change a few party-based contests. In a real election, not voting straight-party would not be an error; voting contest by contest is acceptable. We coded it as an error because it was contrary to our directions and indicated that the language on the ballot was not helping people understand the options for, and implications of, voting straight-party and then changing party-based contests.

The "errors" for the second straight-party page were from our observations. The second straight-party page only asked for a navigation choice—skip to the first non-party-based contest or go through the party-based contests to review and possibly change a party-based vote.

Table 3. Participants did not make correct choices on the straight-party screens


In a recent study that focused on straight-party voting (SPV), Campbell and Byrne also found problems. "Voters had significant difficulty in interpreting SPV ballots and were reluctant to generate them, though this was improved when ballots had more clear and detailed instructions. Participants also tended to believe that SPV should not work the way they believed it had worked on ballots they had previously seen" (Campbell & Byrne, 2009, p. 718).

Campbell and Byrne are continuing to study straight-party voting.

We speculate that the result in our study comes from one or more of the following reasons:

Many voters do not understand levels of government

A second problem participants had with both ballots was changing the wrong contest. They mistook the U.S. Senate contest for the State Senate contest we directed them to change. They mistook the County Commissioners contest for the City Council contest we directed them to change. On the ballot they were voting, the U.S. Senate contest came before the State Senate contest. The County Commissioners contest came before the City Council contest.

To a certain extent, this problem might not arise in a real election where people know for whom they want to vote and know what roles those people have. Participants in our study were voting according to directions we gave them for people whose names and roles were new to them.

However, from comments participants made and our observations as well as the error data, we think the following reasons contributed at least somewhat to this problem:

Red boxes on the Summary/Review page confused some voters

A final problem that we must discuss from this study is what happened on the page that participants came to after they finished voting. Ballots A and B both had a page that showed participants how they had voted. On that page, contests in which they had not voted for the maximum number possible were shown in red. (See Figure 5.) This is a common graphical/interaction treatment for undervoted contests in electronic voting.

From our notes and reviews of the video recordings, 22 participants (49%) had no questions or problems on the Summary/Review page for either Ballot A or Ballot B. They were able to reach the end of the ballot having marked the choices as they intended and were ready to cast their ballots. Of those who had no observable questions or problems, 7 voted on Ballot A first and 15 voted on Ballot B first. This suggests that the instructions on Ballot B were more helpful to participants than the instructions on Ballot A.

However, more than half of the participants (23, or 51%) did have questions or problems on the Summary/Review page on at least one of the ballots. This is a disturbing number.

These problems were overwhelmingly related to resolving votes shown in the red boxes. Observational data tells us that 17 participants (37.8%) verbalized questions or concerns about the red boxes. (Note that because of errors they made while voting, some participants had much more red on the Summary/Review page than Figure 5 shows.)

This participant's comment sums up the problem many participants had:

Participant B26

[Reads the instruction about red messages.] But I did. I did what it told me to do. … I voted for the number of candidates. I'm concerned that it should have turned to blue. That would make me sure that I did the right thing. I wouldn't vote because [the red] is telling me I'm not doing the right thing.

Participants went to extraordinary lengths to get red boxes to turn to blue. They voted for candidates they did not really want or wrote in people to fill out the correct total, including adding blank write-in votes or writing in names they knew were fake, celebrities they knew were not running, or their own or friends' names.


Figure 5. After voting, participants came to a Summary/Review page that showed how they had voted. The pages in this figure—Ballot A on the left and Ballot B on the right—show red for the two contests where participants were directed to vote for fewer than the maximum or none at all. (The page that the participant saw may have shown different contests in red, depending on how that participant had actually voted.)

In the end, following our further direction given on this page, participants could have cleared the red from one of these two contests—voting for the Water Commissioners, a contest we had earlier directed them to skip. However, following our directions, they would still have undervoted that contest by voting for only one (not two) Water Commissioners.

Also, by our directions, participants should have left the County Commissioners contest in red (undervoted). To turn the box blue, participants would have had to vote for five people for County Commissioner, but only three candidates belonged to the political party that participants were voting for (Tan party on A, Lime party on B). Because we directed participants to vote straight-party and never directed them to change that straight-party vote for County Commissioners, a correct vote left the County Commissioners box red.

In our recommendations, we suggest better instructions for the Review page. We also suggest putting more information in the boxes, as we show in Figure 6, telling people for whom they have voted, those people's parties (in party-based contests), and how many more people they could vote for when they have undervoted.


Figure 6. More information in the box for each contest on the Review page may help people better understand when they have undervoted a contest and what their options are.

We believe that better instructions and more information will help. We also recommend using a softer color, such as amber, for contests where the person has voted for fewer than the maximum number, and a toned-down red for contests where the person has not voted at all. For many of our participants, it was clearly the "redness" of the signal that caused them to go to extremes to "make it right."

Participant A2, after voting for one water commissioner
Why is it still red?

Participant A3, on seeing the red boxes
There's something I did wrong.

Participant A13, after voting for one water commissioner
Why is it red? Why is it still red? So I have to vote for two.
