The UPA Voice
April 2004


UK accessibility investigation of 1000 web sites - results released April 14th 2004

Jon Dodd

Bunnyfoot Universality

An investigation of 1000 UK web sites, carried out on behalf of the Disability Rights Commission (DRC), reveals unacceptably poor (in fact woeful) accessibility. At least 81% of sites failed to meet the minimum accessibility standard, and the true figure is likely to be even higher.

Background

The 'formal investigation', launched in March 2003, is the most comprehensive research to date into the discrimination faced by disabled people on the Internet. You can download the full report from the DRC site (unfortunately, at the time of writing, an accessible HTML version is not yet available). The investigation was performed by the Centre for Human Computer Interaction Design at City University, London.

The investigators performed automated accessibility checking of 1000 home pages across 5 sectors (government, business, e-commerce, entertainment, web services), and then, for 100 sites, more in-depth manual checking and user testing. The user testing involved 50 disabled web users. The user group included blind and partially sighted people, people who were profoundly deaf or hard of hearing, people with specific learning difficulties such as dyslexia, and people with physical impairments. Each user evaluated 10 sites and was asked to complete 2 relatively simple tasks per site (e.g. find the rate of interest on a banking site).

Researchers also canvassed the views of more than 700 businesses which might commission web sites, and nearly 400 web site developers. This survey was backed up by interviews with 25 of the respondent businesses.

Results highlights and comment

Automated checking against WAI WCAG 1.0 - 1000 home pages

Note: automated checking tests only a subset of the 65 Web Accessibility Initiative (WAI) Web Content Accessibility Guidelines version 1.0 (WCAG 1.0) checkpoints; to confirm compliance, manual checks must also be performed.

  • 81% of home pages failed to achieve the lowest level of accessibility compliance - single-A (priority 1).
  • Only 6 out of 1000 passed the automated checking to double-A (priority 2)
  • No sites passed triple-A (priority 3)

Further analyses on the automated checking revealed that:

  • On average 8 different WCAG 1.0 checkpoints were violated per page
  • On average 108 separate violations (including multiple violations of the same checkpoint) were present per page

Comment:

Unfortunately, the true failure rate is likely to be even higher than 81%:

  • Manual checks would be likely to reveal further violations of checkpoints not revealed by automated testing
    • Indeed manual checks of the 6 sites that passed automated checking to level double-A revealed that in fact only 2 really passed
  • Automated checks can produce false positives (e.g. an image of a cat with alt text 'dog' would pass)
  • Other pages rather than the home page are likely to contain accessibility violations
    • More care is likely to be taken on home pages than those deeper in
    • Some pages are likely to contain more complex elements such as forms and data tables that require attention in order to pass certain checkpoints
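The false-positive problem noted above can be made concrete with a minimal sketch (in Python, using only the standard library; this is an illustration, not one of the tools actually used in the study) of an automated check for WCAG 1.0 checkpoint 1.1, which requires a text equivalent for every non-text element:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Naive automated check for WCAG 1.0 checkpoint 1.1: flag any
    <img> tag that has no alt attribute at all. Note what it cannot
    do: judge whether the alt text is *meaningful*, which is why a
    cat photo with alt="dog" sails through automated checking."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            # Record the image source (or a placeholder) as a violation
            self.violations.append(attrs.get("src", "<unknown>"))

checker = AltTextChecker()
checker.feed('<img src="cat.jpg" alt="dog"><img src="logo.gif">')
print(checker.violations)  # ['logo.gif'] - the wrong alt text on
                           # cat.jpg passes unnoticed
```

Only the missing attribute is caught; the misleading one is invisible to the machine, which is exactly why the report pairs automated checking with manual review and user testing.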

User testing of 100 sites

50 people from the test user panel completed a total of 913 tasks (each user was asked to attempt 2 relatively straightforward tasks on each of 10 sites). Many of the tests were performed by the users themselves at home (or at their place of work); a subset, however, was performed in a usability laboratory, observed by the researchers. The researchers compared the results of the individual and lab tests, concluded that they were equivalent, and therefore pooled all tasks. The exception was that at-home users could not always be certain whether they had completed a task successfully; tasks with uncertain outcomes were discounted, leaving a total sample of 769 tasks. Individual accessibility and usability problems were noted during the testing sessions and categorized by the researchers.
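The task-pool arithmetic above can be checked directly (a trivial sketch; the 913 and 769 figures are taken from the report, the rest follows from the study design):

```python
# Planned task pool: 50 users x 10 sites x 2 tasks per site
users, sites_per_user, tasks_per_site = 50, 10, 2
planned = users * sites_per_user * tasks_per_site

completed = 913  # tasks actually recorded in the study
analysed = 769   # tasks remaining after uncertain outcomes were discounted

print(planned)               # 1000 tasks planned
print(completed - analysed)  # 144 tasks discounted as uncertain
```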

'Control testing' was performed on 6 sites. 3 'high' and 3 'low' accessibility rated sites were tested by both blind screen reader users and matched (in terms of web experience etc.) sighted users with no disabilities (control users).

Main findings:

  • 24% of all tasks could not be completed by users with disabilities
    • Blind testers had significantly more difficulty than other impairment groups
      • A 47% failure rate, compared with an average success rate of 82% (i.e. an 18% failure rate) across the other groups
  • A total of 585 accessibility and usability problems were identified during the testing. The researchers assert that:
    • 55% relate to WCAG 1.0 checkpoints
    • 45% did not relate to checkpoints
    • 8 checkpoints accounted for the majority of problems, and these should therefore be treated as a priority
  • 'Control testing' - good accessibility results in a 'usability bonus'
    • On 'high' accessibility sites all test users completed nearly all tasks. On 'low' accessibility sites the control users completed all tasks, while blind users completed only 67%.
    • Blind users took on average 3 times longer to complete tasks than control users
    • Both user groups required approximately 50% more time to complete tasks on low accessibility sites compared to the high accessibility sites
    • Control group users were 5 times quicker on high accessibility sites than blind users on low accessibility sites

Comment:

The tasks on the whole were relatively simple in scope. Testing of more complex (but equally representative) tasks such as site/service registration, account maintenance etc. would be likely to increase the failure rate such that the situation might be even worse.

There is some dispute over the 45% of issues that did not relate to WAI checkpoints, and an initial response by the WAI attempts to clarify the issue. The WAI's analysis of the available data suggests that 95% of the issues are in fact covered by WAI guidelines - 77% by WCAG 1.0 and a further 18% by the User Agent Accessibility Guidelines 1.0 (which, together with WCAG 1.0 and the Authoring Tool Accessibility Guidelines 1.0, form a complementary set of guidelines from the WAI).

Essentially, the WAI guidelines taken as a whole do appear to cover the vast majority of the issues uncovered in the DRC investigation: if the WAI guidelines are followed, good levels of accessibility are achieved. The disparity in the figures for issues that relate directly to WCAG 1.0 checkpoints (55% v 77%) reveals one of the known problems with this 1999 version of the guidelines, namely that some checkpoints are open to interpretation and/or difficult to assess objectively. The new version, Web Content Accessibility Guidelines 2.0 (currently a working draft), seeks to address this as well as providing other improvements. Both the WAI and the DRC are keen to point out that developers should follow the WCAG 1.0 guidelines for site design, but should not follow them in isolation: user testing, they both agree, is very, very important.

Gaining better usability for all users by implementing better accessibility (the 'usability bonus') has long been an anecdotal assertion (one which I would certainly make), and the results from the DRC study support it. However, there are likely to be many factors at play. A trivial example: a site owner who cares about accessibility is probably more likely to care about usability and general site quality too, and so may have arrived at a more usable site independently of the accessibility work. It would be good to see a more focused study of this, testing across more sites, with more users and more user categories.

Survey

Questionnaires were sent to 712 web site commissioners and 388 web development agencies, and interviews were conducted with 21 commissioners and 25 agencies.

  • 9% of commissioners and 6% of agencies responded to the questionnaires - the researchers interpreted this result alone as suggesting a relatively low level of interest in accessibility issues
  • Levels of awareness of accessibility issues, and of appropriate action, were found to be much higher in large organizations (>250 employees) than in smaller ones (though even the apparently high awareness in large organizations does not seem to have translated into action)
  • The main perceived barriers towards achieving accessibility were:
    • Perceived cost (money, time, staff resources)
    • Low level of knowledge about the issues and a lack of simple guidelines, expertise and skill
    • Conflict between accessibility and aesthetic and creative considerations
    • General lack of awareness of the importance of the issue
  • 58% of agencies claimed to discuss accessibility with their clients, but only 31% of clients showed a positive attitude towards it. The most commonly successful argument for accessibility to clients was an increase in potential audience
  • Levels of accessibility expertise amongst developers were found to be low

Comment

It is my experience (and that of many people involved with accessibility) that, unfortunately, the threat of legal action is the prime motivator for the majority of organizations - a different picture from the survey results above. Indeed, when the DRC investigation was first announced it created much interest because it was mistakenly reported as a naming-and-shaming exercise. A report with more bite might have created more impact, but the express aim of the DRC was to establish the current state of play and to raise awareness of and support for better levels of accessibility, rather than to hit people with legal proceedings. Clearly, increased awareness and knowledge is what is required. NB: the DRC indicated that if organizations did not respond to 'encouragement', it would support more formal action being taken against individual organizations in the future.

Recommendations

The DRC produced a number of recommendations in the report (not detailed here; please read them for yourself), many centred on raising awareness in organizations, providing support and education for developers and for people using assistive technology, and making certain policy changes. There were also a number of specific recommendations and observations that might be useful to those involved in web accessibility work.

It was recommended that web site developers should involve disabled users from an early stage in the design process - certainly something to be advocated. Unfortunately, as many of us know, at present it is often difficult enough to get any user involvement included at all; we can hope, though, that this may help nudge people in the right direction.

One of the most interesting recommendations was that the UK Government should promote a formal accreditation process and a certification scheme resulting in an accessibility 'kite mark'. This would certainly be valuable but it would be a highly complex and difficult proposition to set up, maintain and properly police - it will be interesting to see how things develop on this front in the near future.

Conclusion

Formal confirmation of what many of us knew already: general levels of accessibility suck, and suck badly. Levels of accessibility in other countries are likely to be similar (although Section 508 in the US is likely to have prompted some improvement). Hopefully the investigation will raise awareness and stimulate action. Let's all get accessible!

 

  Usability Professionals' Association
140 N. Bloomingdale Road
Bloomingdale, IL 60108-1017
Tel: +1.630.980.4997
Fax: +1.630.351.8490
UPA: office@upassoc.org
Contact the Voice