Journal of Usability Studies
An international peer-reviewed journal

User Experience and Accessibility: An Analysis of County Web Portals

Norman E. Youngblood and Susan A. Youngblood

Journal of Usability Studies, Volume 9, Issue 1, November 2013, pp. 25 - 41


Civic websites, such as e-government sites, are critical for fostering civic participation and for “mak[ing] opportunities for democratic engagement” (Salvo, 2004, p. 59). One example of these ideas is the one-stop, local e-government portal (Ho, 2002), in which a local government (i.e., municipalities, counties, and other small subdivisions of state governments) consolidates all its services and information into a single, coherent site, rather than spreading it across multiple agency-specific sites. For example, the site for Jefferson County, Alabama, provides one-stop access to 34 county departments, services, boards, and offices, such as the county attorney, the tax assessor, and family court. Ho (2002) argued these web-based government services open the door for using a customer-oriented approach to focus on end-user “concerns and needs” to both engage and empower citizens (p. 435). County governments continue to move from being “administrative appendages” of state-level government to providing a wide range of services (Benton, 2002; Bowman & Kearny, 2010, p. 274) and having stronger policy-making influence (Bowman & Kearny, 2010), so it is important to develop sites that are easy to use and that provide access in the ways users need. These portal sites should be usable and accessible; ideally, they will even be poised to meet the needs of users with mobile devices.

An e-government web portal provides users with a central point of entry to a government’s web presence, organizing links to its agencies and, in many cases, other governments. As an example, the web portal of the United States, USA.gov, includes links to state government websites, and many state government portals provide links to both local websites, such as those of municipalities and counties, and to national government websites. Ho (2002) argued that portals engage and empower citizens, even if institutional issues and the digital divide sometimes slow the process. Huang (2006), in a study of U.S. county e-government portals, noted that 56.3% of counties had moved to a portal-based model and that digital divide demographics, including educational level and income level, seemed to play a role in county adoption of portals. County portal adoption rates varied widely by state, ranging from a high of 100% in Delaware to a low of 10.6% in South Dakota. Huang also found a relatively low adoption rate of advanced e-government services, particularly transactional services. As examples, the most commonly adopted transactional service, collecting taxes, occurred on barely 32% of county websites, and only 14.5% of portals allowed citizens to conduct vital records transactions (birth, marriage, and death). Both of these studies helped establish local e-government benchmarks.

Feature richness is part of quality—and it has been used as a criterion in research on local government web development (e.g., Cassell & Mullaly, 2012)—but users must be able to reach features easily to take advantage of them. Given that users underuse services that are already available online (e.g., Baker, 2009), Scott (2005) emphasized the need for local governments to maintain quality websites to encourage use. Scott’s (2005) measures of quality e-government included usability, which is often defined as “the extent to which a product [such as a website] can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” (ISO 9241, 1998). In other words, usability focuses on user experiences. Accessibility, the usability of a website by users with disabilities (Petrie & Kheir, 2007), has also been used as a quality measure in e-government (e.g., West, 2005; 2006; 2007; 2008).

Although corporate usability research is often proprietary and unpublished in journals (Sullivan, 2006), the results filter into best practices in information architecture and design, resulting in a number of free-to-access design standards and techniques. Government-oriented materials are available to government designers in a number of forms, including videos on test setup, scripts, and templates (U.S. General Services Administration, 2013a). Through its website HowTo.gov, the U.S. General Services Administration (2013b) also provides Listserv forum access and an online community, both open to government at all levels. Usability.gov—the self-proclaimed “one-stop source for government web designers to learn how to make websites more usable, useful, and accessible” (U.S. Department of Health and Human Services, n.d.)—makes many resources available, including usability and accessibility articles, instructions for conducting usability tests, and the U.S. Department of Health and Human Services’ (2006) Research-Based Web Design and Usability Guidelines. Even communities that have little or no funding for testing should review their sites to make sure they adhere to basic usability and accessibility standards, such as having a “Home” button at the top left of the page and having alternative text for images.

Alabama is an ideal state with which to examine these e-government issues at the county level, issues that could affect resident access to services and influence on policy making. At the state level, e-government in Alabama has made significant progress. West’s (2005; 2006; 2007) early studies of state-level e-government websites revealed significant usability and accessibility problems, with West ranking Alabama in the bottom 10% of state-level e-government in the United States. In 2008, however, West found that Alabama had made significant strides in state-level e-government, rising to the number 3 position nationally, though accessibility compliance has remained a problem (West, 2008). While Alabama e-government has improved tremendously at the state level, these improvements have not filtered down to the local level (such as municipalities, e.g., the consolidated cities and incorporated places that are parts of counties), and many Alabama municipal websites have significant usability and accessibility issues (Youngblood & Mackiewicz, 2012).

This study addressed the adoption of county portals, used automated and heuristic evaluation of usability and accessibility characteristics to assess whether county web portals in Alabama meet a selection of these standards, and examined whether the counties’ portals are poised to take advantage of new communication tools such as mobile devices. Practices at broader levels of government have the power to influence—for better or for worse—practices at more local levels (e.g., as when municipalities look to county portals as models).

Usability in e-Government

Website user experience focuses on site usability and readiness to embrace and adapt to new communication technologies such as mobile devices. Usability has several important components, including user satisfaction (Zappen, Harrison, & Watson, 2008) and site effectiveness, the concept that users of a site should be able to surf, to search for a known item, and to accomplish tasks (Baker, 2009), such as finding a county’s public notices or accessing local codes. Usability is particularly important for transactional user experiences, including two-way communication. In the case of e-government sites, these experiences can include providing feedback and requesting documents. For two-way communication to be effective, the user must be able to navigate the site to find these features. When the user is able to easily accomplish these tasks, it not only encourages the use of the site, but can help foster trust and helps make users feel that “their input is valued” (Williams, 2009, p. 461). Parent, Vandebeek, and Gemino (2005) argued that trust is a critical component of e-government and that citizens need “a priori trust in government” to trust e-government (p. 732)—in other words, they will not trust an online presence if they distrust its brick-and-mortar counterpart. Huang, Brooks, and Chen (2009) found that a user’s e-government experience is influenced by the nature of the government entity’s online presence, particularly usability. This matches the findings of Fogg et al. (2003) that the strongest influences on a corporate site’s credibility are its design’s look and the information design and structure. Thus, the relationship is complex: Good design fosters credibility, and credibility gives users the security they need to participate in e-government.

Because web page design changes over time, usability attributes are defined by experience. For instance, icons for navigation—such as a house for home—have gone out of fashion, though such icons are addressed in specific usability guidelines from the late 1990s (e.g., Borges, Morales, & Rodríguez, 1996). Trends come and go, so usability taxonomies do well to rely on user experiences. Usability expert Nielsen’s (1996) taxonomy of usability attributes—learnability, efficiency, memorability, errors (as few system errors as possible and allowing users to recover easily), and satisfaction—has been widely adopted, though other taxonomies exist (e.g., Alonso-Ríos, Vázquez-García, Mosqueira-Rey, & Moret-Bonillo, 2009). At a given point in time, these attributes can be distilled into research-based and best-practice-based heuristics (Cappel & Huang, 2007; Nielsen & Tahir, 2002; Pearrow, 2000). For instance, learning to use a new website is easier if the site follows other current practices in layout and architecture. U.S. users look for a logo at the top and the main navigation at the top or left, and they expect the logo to be clickable (Cappel & Huang, 2007; DeWitt, 2010). Usability research suggests following conventions such as the placement of “Home” at the top left, but otherwise, menu orientation and link order are less important than factors such as link taxonomy (Cappel & Huang, 2007; DeWitt, 2010). However, long menus (more than 10 links) and vertical menus that extend below the fold can be a problem (DeWitt, 2010). Breaking such conventions makes users look harder for information.
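These layout conventions can be sketched in markup. The fragment below is purely illustrative (the county name, file paths, and link labels are invented, not drawn from any site in the study): a clickable logo at the top left returns users to the home page, and the main menu is short and stays above the fold.

```html
<!-- Illustrative sketch of conventional page-top layout:
     clickable logo at the top left, short main menu on top. -->
<header>
  <a href="/">
    <img src="/images/county-seal.png" alt="Example County home">
  </a>
  <nav>
    <ul>
      <li><a href="/">Home</a></li>
      <li><a href="/departments/">Departments</a></li>
      <li><a href="/services/">Services</a></li>
      <li><a href="/contact/">Contact</a></li>
    </ul>
  </nav>
</header>
```

Keeping the menu to a handful of links avoids the long-menu problems noted above, and wrapping the logo in a link to the home page matches the convention users expect.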

Accessibility in e-Government

In an abstract sense, many community leaders are aware that their constituencies include a number of people with permanent and short-term disabilities, be they sensory, cognitive, motor, or other disabilities. However, individuals with disabilities can be less visible in the population than their non-disabled counterparts. The Kessler Foundation and National Organization on Disabilities (2010) found that 79% of working-age adults (18-64 years old) with disabilities are not employed (this includes those not looking for work for a variety of reasons), compared to 41% of non-disabled individuals. Furthermore, people with disabilities go to restaurants and religious services less often than people without disabilities, contributing to their under-visibility and to the risk that they become an afterthought when counties provide routine online services.

Accessibility, or making information accessible for users with disabilities, is critical not only for legal reasons but, more importantly, for ethical reasons. The large number of individuals who have visual, hearing, motor, and cognitive disabilities should not be excluded from e-government. In Alabama, over 14% of the non-institutionalized, working-age population (i.e., over 422,000 people) has a disability of some sort (U.S. Census Bureau, 2010b). Visual, hearing, motor, and cognitive disabilities affect Internet use (WebAIM, n.d.-a), and 54% of people with disabilities use the Internet (Kessler Foundation and National Organization on Disabilities, 2010). In short, nearly 228,000 working-age Alabama residents with disabilities are likely online. And some populations are affected more than others, especially Alabama African Americans, whose per capita rates for serious vision problems exceed those of Alabama Caucasians (Bronstein & Morrisey, 2000).

For the Internet to be a valuable source of information and assistance, residents must fully and easily be able to access online resources. When accessibility measures, such as supplying alternative text describing an image for visually impaired users, are missing, users with disabilities are excluded from accessing and participating in e-government. Furthermore, because accessibility typically improves usability (Theofanos & Redish, 2003; World Wide Web Consortium, 2010), including on mobile devices (World Wide Web Consortium, 2010), an inaccessible site also can make non-disabled users expend unnecessary effort.

Users expect—and have a right to—accessible content. These expectations are not always met; many municipal sites do not meet standards (Evans-Cowley, 2006), Alabama sites included (Youngblood & Mackiewicz, 2012). Even a number of state (Fagan & Fagan, 2004) and federal sites (Olalere & Lazar, 2011) do not consistently adhere to all of the standards. This problem includes Alabama (e.g., Potter, 2002; West, 2005, 2006, 2007, 2008). Consequently, Alabama residents, like residents of other states, do not have full access to online information and services.

Automated accessibility evaluation (e.g., West 2008) can be used to look at a large number of sites in a short timeframe to capture a snapshot of design for accessibility. It helps identify the extent of accessibility problems, the types of problems (e.g., empty links), and the recurrence within each type (e.g., five empty links on a page), identifying critical issues meriting future research. Furthermore, automated review employing free, easily accessed, widely recognized, and easy-to-use tools can indicate the level of attention designers of a given site have paid to accessibility: If an automated evaluation reveals 15 serious accessibility problems, in our opinion, the site has fundamental accessibility flaws. Given that designers have easy access to the same and additional tools, these flaws indicate lax attention to designing for users with disabilities. An automated analysis that uncovers serious problems suggests that human site review such as expert analysis and testing—particularly by participants with disabilities—in combination with other methods (e.g., Jaeger, 2006; Olalere & Lazar, 2011) might be called for in future studies.

This study uses WAVE (WebAIM, n.d.-b), a free automated tool for evaluating site accessibility that has been used as an indicator of web page accessibility in a range of studies, including in e-government research (West, 2008; Youngblood & Mackiewicz, 2012). While early accessibility studies often relied on Watchfire’s Bobby for this type of testing, Bobby has been discontinued as a free service, and WAVE has become a common substitute (e.g., West, 2008). WAVE tests for accessibility problems in the code of a website, including legal violations, such as failing to comply with Section 508 of the Rehabilitation Act Amendments of 1998. The following is an example of a Section 508 standard that websites must comply with: “A text equivalent for every non-text element shall be provided (e.g., via ‘alt’, ‘longdesc’, or in element content)” (§1194.22[a]). WAVE checks for likely problems, possible problems, and good practices (e.g., “structural, semantic, [or] navigational elements,” such as heading tags), labeling them, respectively, as errors, alerts, and features. When a user enters a URL, WAVE follows the URL, checks the page for accessibility, generates a version of the original web page with an overlay of descriptive icons, and produces a count of errors. It identifies as errors the 20 types of coding problems that “will almost certainly cause accessibility issues” (WebAIM, n.d.-c, the WAVE 4.0 Icons, Titles, and Descriptions section), such as missing alternative text and missing form labels (see Table 4 for the full list).
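To illustrate the kinds of coding problems involved (the fragment below is hypothetical, not taken from any site in the study), the first three lines would each trigger a WAVE error (missing alternative text, an empty link, and an unlabeled form field), while the corrected markup that follows addresses each problem:

```html
<!-- Hypothetical markup that WAVE would flag as errors: -->
<img src="seal.png">                 <!-- image with no alt text -->
<a href="taxes.html"></a>            <!-- link with no link text -->
<input type="text" name="parcel">    <!-- form field with no label -->

<!-- Corrected markup: -->
<img src="seal.png" alt="Example County seal">
<a href="taxes.html">Pay property taxes</a>
<label for="parcel">Parcel number</label>
<input type="text" name="parcel" id="parcel">
```

The alt attribute in the corrected version supplies the text equivalent that §1194.22(a) requires; the link text and label give screen reader users something meaningful to announce.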

We focused on the number of WAVE errors rather than the number of WAVE alerts for potentially problematic HTML, script, and media. Coding that triggers alerts is not consistently problematic: For instance, something that WAVE marks as a possible heading might not be a heading. In contrast, a piece of code that WAVE marks as an error will most likely cause a problem. Many of the errors cause problems for users with vision impairments, such as missing alternative text for an image, empty links, missing page titles, and blinking text. WAVE errors serve as a litmus test for a site designer’s attention to accessibility issues—even though a WAVE test is not as comprehensive or as sensitive as detailed expert review or user testing—and the number of errors is a consistent measure that has been used for comparison of sites in a range of disciplines (e.g., Muswazi, 2009; West, 2008; Youngblood & Mackiewicz, 2012). That said, as an automated tool, WAVE cannot make judgment calls. For example, it can detect the presence or absence of the ALT attribute for an image, but it cannot tell whether the text is actually useful, i.e., whether it provides a usable description of an image rather than reading “image.” In an ideal world, designers would include users with disabilities and/or expert evaluation as part of the development process.

Mobile Devices and the Digital Divide

Part of creating a positive user experience is enabling users to access sites in the ways they prefer or are equipped to access them, and a growing body of users employ cell phones to access websites. Accessible designs often help make websites more usable and portable, but media-specific style sheets (such as a style sheet that changes the page for mobile devices or for printing) are also important. These style sheets allow designers to provide alternative instructions in the code that help adapt the design of a web page based on how the user is accessing the page. For example, a designer might have separate style sheets for a regular computer screen, for a mobile device, and for printing. Without a media-specific style sheet, an otherwise attractive and usable site can become unusable on a mobile device. Some devices, such as the iPhone, allow users to scroll and zoom, but these activities become cumbersome, especially on a complex page with multiple menus that was designed for a 1024 x 768 pixel display.
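A minimal sketch of this technique follows; the file names and selector are invented for illustration. The media attribute on a link element selects an entire style sheet by context, and @media rules inside a style sheet adapt individual rules for print or for small screens:

```html
<head>
  <!-- Separate style sheets selected by the media attribute
       (file names are illustrative): -->
  <link rel="stylesheet" media="screen" href="screen.css">
  <link rel="stylesheet" media="print" href="print.css">

  <!-- Or, within a single style sheet, @media rules adapt the page: -->
  <style>
    /* Hide the navigation menu when the page is printed */
    @media print {
      #main-nav { display: none; }
    }
    /* Collapse to a single column on narrow mobile screens */
    @media screen and (max-width: 480px) {
      #main-nav { float: none; width: 100%; }
    }
  </style>
</head>
```

Either approach lets one set of HTML serve desktop, print, and mobile contexts without forcing mobile users to pan and zoom across a layout built for a 1024 x 768 display.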

Lack of mobile access is a problem not only for urban, affluent users, but also for users affected by the digital divide. Smith’s (2011) Pew Research Center survey found that 45% of cell-phone-owning adults surveyed (in both Spanish and English) use cell phones to access the Internet. Some groups are significantly more likely to use cell phones to access the Internet: Only 39% of non-Hispanic White cell phone owners use their phones to access the Internet, as opposed to 56% of non-Hispanic Blacks and 51% of Hispanics. Although rural users are less likely on the whole to use cell phones this way, the demographics in Alabama suggest that residents would benefit from web pages designed for mobile access. Alabama has more than double the national-average proportion of non-Hispanic Black residents—26%, as opposed to 12.2% nationally (U.S. Census Bureau, 2010a; U.S. Census Bureau, 2010b; U.S. Census Bureau, 2010c). Even with a comparatively low Hispanic population, the combined proportion of these minority groups in Alabama exceeds the national average by 1.4 percentage points.

With an increasing percentage of Americans accessing the web via mobile devices, it is important that governments at all levels ensure that users can access e-government information on these devices. Prior studies (e.g., Shareef, Kumar, Kumar & Dwivedi, 2011) have called for researchers to begin examining how governments are leveraging mobile devices in delivering information and providing services.


Prior studies have found positive correlations between the demographic variables of county population, per-capita income, and median household income and the adoption of e-government services nationally (Huang, 2006) but no significant correlations between these demographics and website usability at the municipal level in Alabama (Youngblood & Mackiewicz, 2012). This study focused on the following four hypotheses.

Portal adoption

Higher numbers of residents might require more coordinated websites to give users a better experience finding the resources they need in a complex local government, and higher income (as it manifests in budgets) might facilitate portals. Huang (2006) found a correlation at the national level between portal adoption and higher county population, per capita income, and median household income. We hypothesized that these correlations would hold true for contemporary Alabama county portals.


Usability

As sites grow in complexity, structuring information can become more difficult. For instance, a site with a single page and links to two county departments and three county services (e.g., a local hospital) presents less of a navigation challenge to a designer than a site with 40 pages and links to 23 county departments and services. If population and income were to provide more opportunities for complexity—more local services and greater funding for departmental sites and the county portals themselves—the resulting complexity could pose design challenges. Thus, we hypothesized that there would be no correlation between county web portal usability (using basic, broadly accepted standards such as Cappel & Huang, 2007; West, 2008; Youngblood & Mackiewicz, 2012) and either county population or income.

Best coding practices

Best-practice coding, including accessibility standards, valid HTML, and the use of external style sheets (such as Cappel & Huang, 2007; WebAIM, n.d.-d; West, 2008; Youngblood & Mackiewicz, 2012), facilitates user access. Again, the demographic variables above may generate both higher demand for quality sites and the resources to produce them. We hypothesized that there would be a positive correlation between best practices in coding and each demographic variable.

New communication technologies

The widespread adoption of mobile Internet access by the general population is still relatively recent. Given that recency, coupled with Huang’s (2006) finding that counties tend to have low adoption rates of advanced services such as transactional services, regardless of demographics, we hypothesized that there would be no correlation between the adoption of new communication technologies and either county population or income.

