
Evaluating accessibility

It is good practice to include evaluation in the development of any product or resource, but it is particularly important to evaluate accessibility because of the difficulties associated with the use of accessibility guidelines, described earlier in ‘Design guidelines and their limitations’.

Accessibility can be evaluated in different ways:

  1. Testing with disabled users.

  2. Testing by accessibility experts.

  3. Assessing conformance to checklists/guidelines, including the use of automated checkers.

  4. Testing with assistive technologies.

Approach 1, ‘Testing with disabled users’, is the best way of getting first-hand feedback from disabled users, but it can be costly and time-consuming to arrange. Approach 2, ‘Testing by accessibility experts’, can provide feedback from the perspectives of all the disability groups, but can also be costly. Therefore, Approach 3, ‘Assessing conformance to checklists/guidelines’, and Approach 4, ‘Testing with assistive technologies’, are probably more appropriate for teachers, designers, and developers who do not have easy access to users or experts.

You have already read about assistive technologies and used simulators in the earlier activity, ‘Introducing accessibility and assistive technology’. Therefore, this activity will focus on the use of automated accessibility checkers to assess websites' conformance to guidelines. The difficulties with the use of guidelines were discussed earlier, in particular the need for background knowledge or experience of disability in order to use guidelines effectively. The same difficulty applies to the use of automated checkers and to simulating users' experience. This means that any outcomes of evaluation should be treated with care and, if possible, confirmed by an accessibility expert or by a representative sample of users.

Aim of accessibility evaluation

The aim of an accessibility evaluation is to assess the extent to which a teaching resource is accessible, not simply to decide whether it is or is not accessible. In other words, the question to ask is ‘To what extent is this product accessible to people with a range of disabilities?’ rather than ‘Is this product accessible?’ An accessibility evaluation should assess both technical accessibility and usable accessibility.

When to evaluate accessibility

Technical and usable accessibility should be evaluated throughout the design life cycle, just as general usability should be. As with usability, the earlier in the process accessibility is evaluated, the more likely it is that the final product will be both technically and usably accessible. Accessibility can be evaluated or tested in early ideas and paper designs as well as in prototype systems, and different aspects of accessibility can be evaluated at these different stages. For example, the general acceptability of a design idea to a disabled user can be evaluated early in the design process, and the feasibility of keyboard-only operation can be evaluated with a paper design. However, technical compatibility with assistive technology can be tested (as opposed to simulated) only after a prototype has been developed. Furthermore, like usability, accessibility is evaluated in different ways at different stages; for example, it may be useful to bring in an accessibility expert to evaluate design ideas or an early prototype, and then to conduct user testing with disabled users once a more advanced prototype has been developed.

If you are working with designers and developers who have little or no knowledge of accessibility issues, they need to be aware that fundamental decisions, such as which development environment the system will be programmed in, could have important accessibility implications and should be tested as part of the decision process.

How to evaluate accessibility

Accessibility guidelines and checklists can be used to evaluate a design or prototype. Despite the difficulties associated with the use of guidelines, they can be a useful tool for getting general insight into the accessibility of a website or system. As we discussed earlier, the main limitation of the use of guidelines or checklists is the fact that background knowledge of disability and assistive technology is required in order to interpret and apply such guidelines effectively.

Once a prototype of the application has been developed, basic accessibility testing can be conducted by simulating the interaction between a disabled user and the application. For example, switching off the monitor, unplugging the mouse, and using a screen reader can give useful insight into how a blind person might use an application. You can also gain an impression of how the pages might be presented to different users by viewing pages with a text-browser emulator (such as Lynx viewer), by adjusting browser settings (such as loading a page without the images, or ignoring page styles) or by adjusting operating system settings (such as adjusting colour settings).
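
To illustrate, the short sketch below (written in Python, using only the standard library) approximates this kind of text-only view: it fetches a page and prints just its text content plus the alt text of any images, ignoring styles and scripts. The URL is a placeholder, and the sketch is no substitute for a real text-browser emulator such as Lynx viewer, but it can make it easier to see what is lost when images carry no text alternative.

  # Rough approximation of a text-only view of a page (Python 3, standard library).
  from html.parser import HTMLParser
  from urllib.request import urlopen

  class TextOnlyView(HTMLParser):
      def __init__(self):
          super().__init__()
          self.skip_depth = 0  # non-zero while inside <script> or <style>

      def handle_starttag(self, tag, attrs):
          if tag in ("script", "style"):
              self.skip_depth += 1
          elif tag == "img":
              alt = dict(attrs).get("alt")
              # An image with no alt text leaves a gap in the text-only view.
              print(f"[image: {alt}]" if alt else "[image: NO TEXT ALTERNATIVE]")

      def handle_endtag(self, tag):
          if tag in ("script", "style") and self.skip_depth:
              self.skip_depth -= 1

      def handle_data(self, data):
          if not self.skip_depth and data.strip():
              print(data.strip())

  page = urlopen("https://example.com").read().decode("utf-8", "replace")  # placeholder URL
  TextOnlyView().feed(page)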

However, these techniques do not provide a full picture of the experience of users with disabilities. For example, sighted people cannot easily ‘switch off’ their visual memory of the layout of the screen, nor can they experience what it is like to interact with a computer for extensive periods using, for example, voice recognition software or switch-controlled software.

Using automated checking tools

Another approach to testing website accessibility is the use of automated tools. These tools are useful for obtaining a quick overview of a site's accessibility and for checking technical accessibility. Examples of automated accessibility checking tools include:

  • WebXACT: A free online service that allows people to test single web pages for quality, accessibility, and privacy issues. It is provided by Watchfire, who also offer Bobby, a desktop application that allows the user to check a whole website rather than single pages.

  • WAVE: Developed at the Temple University Institute on Disabilities, based on the work of Dr Leonard Kasday, and now maintained by WebAIM (Web Accessibility In Mind) at the Center for Persons with Disabilities (CPD) at Utah State University.

These automated checking tools have been developed to assess a website's conformance to various web accessibility guidelines and standards, such as the Web Content Accessibility Guidelines (Chisholm et al., 1999). A range of tools is available, each with different functions. Some tools are available online and can be used to check one URL at a time; others are available as applications that run on the developer's computer. The online tools are often free to use, while the downloadable applications usually have to be paid for. Some tools provide technical guidance on resolving accessibility problems, while others just offer a report of the accessibility problems found. Some tools provide a logo that can be placed on sites that have been ‘approved’ by the tool.
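
As a very rough illustration of the principle behind such tools, the Python sketch below applies two simplified rules derived from the Web Content Accessibility Guidelines to a single page: every image should have an alt attribute, and the document should declare its natural language. This is only a sketch under those assumptions; real checkers apply many more rules and relate their findings back to specific guideline checkpoints.

  # Minimal sketch of an automated conformance check (Python 3, standard library).
  from html.parser import HTMLParser
  from urllib.request import urlopen

  class ConformanceCheck(HTMLParser):
      def __init__(self):
          super().__init__()
          self.problems = []
          self.lang_declared = False

      def handle_starttag(self, tag, attrs):
          attrs = dict(attrs)
          if tag == "html" and attrs.get("lang"):
              self.lang_declared = True  # the page declares its language
          if tag == "img" and "alt" not in attrs:
              self.problems.append("image with no alt attribute")

      def report(self):
          if not self.lang_declared:
              self.problems.append("document language not declared on the html element")
          if not self.problems:
              print("No problems found by these (very limited) rules.")
          for problem in self.problems:
              print("PROBLEM:", problem)

  checker = ConformanceCheck()
  checker.feed(urlopen("https://example.com").read().decode("utf-8", "replace"))  # placeholder URL
  checker.report()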

However, automated checking tools do have limitations. One limitation acknowledged by the developers of the tools is that some aspects of a site cannot be checked automatically. The tools will indicate in their feedback which aspects require human judgement, such as whether images have appropriate labels. For example, when checking accessibility, WebXACT reports ‘errors’ for aspects that can be detected automatically and ‘warnings’ for aspects that require human judgement. However, as discussed earlier, this kind of judgement can require some background knowledge of disability and assistive technology in order to make appropriate decisions. A further limitation of automated checking tools is that the guidelines on which the tools are based have not themselves been empirically evaluated; the tools therefore inherit this limitation.
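
The error/warning distinction can be sketched with a single, hypothetical rule about image labels: a missing alt attribute can be reported automatically as an error, whereas deciding whether existing alt text actually describes the image needs human judgement, so the most a tool can do is raise a warning. The function below is illustrative only; the list of ‘suspect’ values is an assumption, not taken from any particular tool.

  # Illustrative split between machine-detectable errors and human-judgement warnings.
  SUSPECT_ALT_VALUES = {"", "image", "picture", "photo", "graphic", "spacer"}

  def classify_alt_text(alt):
      """Return (severity, message) for the alt attribute of an image."""
      if alt is None:
          # Detectable automatically: the attribute simply is not there.
          return "error", "image has no alt attribute"
      if alt.strip().lower() in SUSPECT_ALT_VALUES or alt.lower().endswith((".jpg", ".png", ".gif")):
          # Cannot be decided automatically: flag it for a person to check.
          return "warning", f"alt text {alt!r} may not describe the image"
      return "ok", "alt text present, but a person should still confirm it is appropriate"

  print(classify_alt_text(None))         # ('error', ...)
  print(classify_alt_text("photo.jpg"))  # ('warning', ...)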

There is some literature that discusses the use of automated tools in the evaluation of accessibility. For example, Ivory et al. (2003) take a detailed look at the use of automated tools for evaluating accessibility. Rowan et al. (2000) have developed an evaluation methodology involving automated tools. Blair (2004) reviews the features of some tools, including an assessment of which tools are most suited to different types of users, such as web developers, accessibility experts, or those with little technical expertise. The Disability Rights Commission (2004) report on the accessibility of public websites used automated testing together with user testing. Interestingly, the Commission reported that, when it compared the findings from automated testing with those from user testing, there was no correlation. It suggested this was because the accessibility problems identified by users were not problems that could be found through automated testing.

Source: http://www.open.edu/openlearn/education/professional-development-education/accessibility-elearning/content-section-5.4.4