Many governments and organisations now require websites to be accessible, and when it comes to determining whether these requirements have been met, they often rely on recognised checklists of accessibility criteria such as WCAG 2.0 or Section 508. These checklists are a useful way of indicating whether a site complies with the required criteria. However, they don’t usually provide much additional information when a site does not comply – such as the likely impact this may have for web users with disabilities.
I don’t propose discussing the merits of using conformance criteria and/or user-testing when determining the accessibility of websites, as I canvassed this issue in an earlier post, “Measuring Accessibility.” I will only say that I know I’m not alone in feeling frustrated (annoyed) when I see sites that are generally pretty accessible being condemned as “inaccessible” just because of a couple of minor failures to fully comply with one or two WCAG 2 Success Criteria. Likewise, seeing sites boldly proclaimed as fully “accessible” based solely on the experience of one person using one screen reader.
Many website regulators, be they government or commercial agencies, often want a simple declaration about the accessibility of a website: Is it accessible or not? Does it comply or not with the accessibility guidelines that they are required to meet? This is the reality that faces most web accessibility professionals, as is the awareness that it is virtually impossible to make a modern website that will be totally accessible to everyone. Compliance with a set of predetermined guidelines, no matter how comprehensive they might be, is no guarantee of accessibility, a fact well recognised by the W3C in the introduction to WCAG 2.0:
“Accessibility involves a wide range of disabilities, including visual, auditory, physical, speech, cognitive, language, learning, and neurological disabilities. Although these guidelines cover a wide range of issues, they are not able to address the needs of people with all types, degrees, and combinations of disability.”
I know many web accessibility professionals move beyond the ‘comply’ or ‘not-comply’ paradigm by providing an indication of the likely impact particular accessibility issues might have for different users. However, this is not always the case. In addition, organisations appear to be increasingly asking people with limited expertise in the area of accessibility to determine whether a site is accessible or not. This determination is often based only on whether or not the site complies with, for example, the WCAG 2.0 Success Criteria after testing with an automated tool.
The aim of this article is to contribute to the discussion about how to measure the accessibility of websites, for I know I am not alone in feeling concern about the current situation. To this end, I put forward a suggested scoring system with the extremely unimaginative title of “Accessibility Barrier Score”. This is just a suggestion that will hopefully be discussed, and not some form of prescribed approach. I am also mindful of the slightly amusing irony of suggesting a checklist to help overcome an obsession with using checklists to determine accessibility, but I hope you will bear with me and continue reading.
At the outset, I would like to make it very clear that this is not intended to be a criticism of WCAG 2.0, for in fact I am a strong supporter. Rather what I am suggesting is a system of identifying potential accessibility barriers and their likely severity. I would like to acknowledge the work of Giorgio Brajnik from Universita di Udine in Italy, and the information and inspiration I have drawn from it, in particular his article “Barrier Walkthrough”. I would also like to thank Sarah Bourne, Russ Weakley, Andrew Downie, Steve Faulkner and Janet Parker for their suggestions, criticisms and advice in preparing this article, but any blame for stupidity or inaccuracy should be directed at me and not them.
Access Barrier Scores (ABS) system
The suggested Access Barrier Scores (ABS) system assumes the person using the system has some knowledge of website accessibility and how assistive (adaptive) technologies are used by people with disabilities to access website content.
Needless to say, the process for determining a barrier score is subjective (more on this later), and it is envisaged the ABS will be used in conjunction with a recognised list of guidelines or recommendations relating to web content accessibility, such as WCAG 2.0. It is also anticipated the reviewer will probably use a range of accessibility evaluation tools (e.g. aViewer, Colour Contrast Analyser, etc.) and some assistive technologies such as a screen reader.
The overall aim of the ABS system is to provide a measure of how often barriers to accessibility occur in reviewed web page(s) and the likely seriousness of those barriers. To achieve this, a range of common accessibility barriers is considered and the incidence (or frequency) and severity of each barrier is scored. These scores can then be used by the owners and developers of sites to identify and prioritize those issues that need to be remediated.
The ABS is a checklist with six columns:
1. Barrier Description: Describes the potential access barrier. A suggested list of barriers appears later in this article.
2. Reference: Accessibility guidelines or criteria relating to the barrier. In this example, WCAG 2.0 Success Criteria.
3. Incidence: A measure of how frequently the use of a site component does not meet the relevant accessibility requirements. NOTE: This is not a raw measure of how often an accessibility guideline such as WCAG 1.1.1 Non-text Content is not complied with, but rather an estimation of the percentage of times on a page (or in a site) that a particular requirement is not met. The result is presented in a five-point scoring system:
0 – There is no incidence or occurrence of a failure to make the component accessible.
1 – The use of the page component or element causes access problems up to 25% of the time.
2 – The use of the page component or element causes access problems more than 25% and up to 50% of the time.
3 – The use of the page component or element causes access problems more than 50% and up to 75% of the time.
4 – The use of the page component or element causes access problems more than 75% of the time.
Two examples: First, if there are 10 images and 4 have no alt text, the lack of a text alternative could cause an accessibility problem 40% of the time images are used, so the Incidence score would be 2.
Second, if a site has just one CAPTCHA and it is inaccessible, then 100% of the uses of CAPTCHA could cause a problem, so the Incidence score would be 4.
4. Severity: Rates the likely impact that the barrier might present for someone with a disability. NOTE: This refers to the likely impact for those people that will be affected by the barrier. The impact is rated with a score of 1 to 5, where 1 is a minor inconvenience, and 5 indicates someone would be totally prevented from accessing the site content or functionality. Allocation of the severity rating will of course be subjective, and this issue is discussed later in the article.
5. Remediation priority: This is derived from the Incidence and Severity scores. It aims to prioritize the accessibility barriers so that those which are likely to have the greatest impact can be identified and addressed first. Each potential barrier is given one of the following six ratings (see the attached ABS Excel file):
Critical: Any barrier that has a severity score of 5 (regardless of the incidence score).
Very High: Any barrier where the severity score is 4 regardless of the incidence score. And, any barrier where the result of multiplying the incidence and severity scores is equal to or greater than 9.
High: Any barrier where the result of multiplying the incidence and severity scores is equal to or greater than 6; and less than 9 (but excluding any barrier which has a severity of 4 or 5).
Medium: Any barrier where the result of multiplying the incidence and severity scores is equal to or greater than 3; and less than 6 (but excluding any barrier which has a severity of 4 or 5).
Low: Any barrier where the result of multiplying the incidence and severity scores is less than 3.
None: Any barrier that has an incidence score of 0 (regardless of the severity score).
6. Comments: Section for comments by the accessibility reviewer.
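Because the Incidence and Remediation Priority rules are mechanical, they can be sketched in code. The following Python is purely illustrative (the attached Excel file, not this sketch, is the reference implementation). Note that the published incidence bands touch at their 25/50/75% boundaries, and the Critical and None rules overlap when a severity-5 barrier has an incidence of 0; the boundary handling below is my own assumption, with the None rule winning on the reasoning that a barrier that never occurs needs no remediation.

```python
def incidence_score(failures: int, total: int) -> int:
    """Five-point Incidence score from the proportion of inaccessible uses."""
    if total == 0 or failures == 0:
        return 0                 # no occurrence of a failure
    pct = 100 * failures / total
    if pct <= 25:
        return 1                 # problems up to 25% of the time
    if pct <= 50:
        return 2                 # more than 25% and up to 50%
    if pct <= 75:
        return 3                 # more than 50% and up to 75%
    return 4                     # more than 75% of the time

def remediation_priority(incidence: int, severity: float) -> str:
    """Six-level Remediation Priority from the Incidence and Severity scores."""
    if incidence == 0:
        return "None"            # barrier never occurs (assumption: trumps severity)
    if severity == 5:
        return "Critical"        # regardless of incidence
    if severity == 4:
        return "Very High"       # regardless of incidence
    product = incidence * severity
    if product >= 9:
        return "Very High"
    if product >= 6:
        return "High"
    if product >= 3:
        return "Medium"
    return "Low"

# The two incidence examples from the article:
print(incidence_score(4, 10))      # 4 of 10 images lack alt text -> 2
print(incidence_score(1, 1))       # the only CAPTCHA is inaccessible -> 4
print(remediation_priority(4, 5))  # -> Critical
```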
The hope is that these six columns combined will provide those who are responsible for ensuring the accessibility of a website with a useful tool that will allow them to easily determine how often a particular barrier to accessibility occurs, how serious the barrier is, and which barriers should be given the highest priority for remediation.
Deciding on the number and nature of issues to include in the list of potential accessibility barriers is a juggling act. It requires balancing the need for a list that comprehensively addresses every possible barrier with the desire to have a list that is not so long that it becomes off-putting and in a sense a barrier to its very use.
I initially wanted to suggest a list that contained no more than 20 items, but this turned out to be just not possible. After some deliberation I ended up with the following 26 suggested common barriers to accessibility, but these are just my opinions and it would be great to get the opinions of others.
IMAGES & COLOUR
- Images without appropriate text alternatives (alt text).
- Complex images or graphs without equivalent text alternative.
- Use of background (CSS) images for informative/functional content without an accessible text alternative.
- Use of CAPTCHA without describing the purpose and/or providing alternatives for different disabilities.
- Use of colour as the only way of conveying information or functionality.
- Insufficient colour contrast between foreground (text) and background.
STRUCTURE & NAVIGATION
- Failure to use appropriate mark-up for headings and sub-headings that conveys the structure of the document (e.g. h1 – h6).
- Poor use of layout tables.
- Text cannot be resized, or resizing text causes a loss of content or functionality.
- Unable to access and/or operate all page content with the keyboard.
- Purpose/destination of links is not clear.
- Unable to visually identify when a page component receives focus via a keyboard action.
VIDEO & AUDIO
- Pre-recorded audio-only or video-only material without an accessible alternative that presents equivalent information.
- Pre-recorded synchronised media (visual and audio content) without captions for the audio content.
- Pre-recorded synchronised media (visual and audio content) without an accessible alternative for the video content.
- Pre-recorded synchronised media (visual and audio content) without sign language interpretation for the audio content.
- Unable to stop or control audio content that plays automatically.
FORMS
- Unable to programmatically identify form inputs (e.g. through use of explicitly associated labels or title attributes).
- Mandatory form fields are not easily identified.
- Insufficient time to complete a form and failure to notify time limits.
- When an error is made completing a form, users are not able to easily identify and locate the error, and adequate suggestions for correcting it are not provided.
TABLES
- Difficult to identify data table aim or purpose (e.g. fails to use caption and/or summary).
- Unable to programmatically associate data cells with relevant row and column headers (e.g. fails to use TH and/or id and headers).
LANGUAGE & UNDERSTANDING
- Page headings, sub-headings, and form labels and instructions are unclear or difficult to understand.
- No explanation or definition is provided for unusual words and abbreviations.
- Failure to use language that is appropriate for the reading-level of the intended audience.
The attached Access Barrier Scores Excel file contains an ABS checklist with six columns. The checklist is provided as an Excel file so that it is easy for others to add and remove barriers as they wish. The references used in this example are WCAG 2.0 Success Criteria, but they could be replaced with another standard.
The ABS Excel file should automatically generate the results for the Remediation Priority column based on what is entered into the Incidence and Severity columns.
Questions of subjectivity
Ideally, any process which aims to determine whether a guideline or criterion has been complied with should be as objective and repeatable as possible, and this is even more important when the outcome of a court case may rest on the results. However, in spite of the best efforts of the W3C Web Accessibility Initiative, it is often not possible to obtain completely objective and repeatable results when determining whether something is accessible, or whether a WCAG Success Criterion has been complied with. Many times, evaluators need to make subjective (human) decisions: for example, should an image have a null alt or a text alternative, or is the text alternative a satisfactory equivalent for the image?
Clearly, the ABS system I have outlined raises questions of subjectivity. At the most basic level, deciding on which accessibility barriers to include is subjective. When it comes to using the checklist, deciding the incidence score is also likely to be subjective to some extent, notwithstanding the suggested percentage of occurrences for allocating the score as outlined earlier.
The greatest area of subjectivity, however, is probably associated with allocating a severity score. Ultimately, determining the likely severity of any particular barrier will be a human judgement and, as such, is always liable to be influenced by the abilities, experiences, knowledge and foibles of the person making the decision. For example, if we take just three potential barriers that all relate to vision – text alternatives for images, colour contrast, and focus visible – the severity score given to each of these may vary greatly depending on your starting point. If you are solely concerned with the ability of screen reader users to use the web, the failure to include text alternatives is a major potential barrier, whereas contrast ratio and focus visible are not barriers at all. On the other hand, if your concern relates primarily to diminished colour vision, contrast ratio and focus visible will be more important than text alternatives. And, for all web users apart from those who are unable to perceive content visually, a failure to make focus visible is likely to be a significant barrier.
The subjective nature of determining the severity of an accessibility barrier is one of the reasons why I believe it is important for anyone using the suggested ABS system (or any other process of accessibility evaluation) to have some knowledge of accessibility and assistive technologies. I provide the following as a general indication of how I would allocate severity scores, while recognising some issues that I might describe as ‘very minor’ could potentially prevent someone from accessing or using a page. As mentioned, these are subjective judgements and I know others may not agree, and in some cases strongly disagree, so I would very much like to hear what you think.
Severity score examples:
1. Very minor inconvenience: Not likely to prevent anyone from accessing content and is not likely to reduce the ability of people to use a page. For example:
- Failure to identify sub-sub-sub headings with H4 (but all other headings are appropriate).
- Images that should be ignored by screen readers have an alt that is not a null alt (e.g. alt=”line” or alt=”line.jpg”).
2. Minor inconvenience: Not likely to prevent anyone from accessing content, but could affect the ability of some people to use a page. For example:
- Failure to identify sub-headings with H2 (but main heading(s) use H1).
- Colour contrast ratio for normal-size incidental text (i.e. not important for understanding or functionality) is slightly below the recommended minimum of 4.5:1 (between 4.0:1 and 4.5:1).
- Colour contrast ratio for large-scale text is between 2.7:1 and 3.0:1.
- Link text alone is not meaningful (but destination can be determined from context).
- Decorative and other images, which could be ignored by screen readers, have no alt attributes.
- Non-essential form inputs (without title attributes) which use the label element but not the ‘for’ attribute.
3. Average inconvenience: Not likely to prevent anyone from accessing content, but will reduce the ability of people to use a page. For example:
- Complete failure to use header elements.
- All images have alt attributes, but text alternatives for content images (not functional images) are inconsistent.
- Non-essential forms with descriptive labels, but the label element is not used and there are no input title attributes.
- Link text which is not meaningful (e.g. more) and where it is not possible to programmatically determine the meaning from the context.
4. Major inconvenience: May prevent some people from accessing or using page content. For example:
- Important form inputs without title attributes or explicitly associated labels.
- Content images (which should be presented by screen readers) without alt attributes and adequate text alternatives.
- Colour contrast ratio of normal-size page text is between 2.5:1 and 3.2:1.
- Colour contrast ratio of large-scale text is between 1.8:1 and 2.2:1.
5. Extreme inconvenience: Will prevent access to sections of the site or the ability to perform required functions. For example:
- CAPTCHA without any alternative.
- Functional images (e.g. navigation items, buttons) without text alternatives.
- Significant functional components that are mouse-dependent.
- Login form inputs that cannot be programmatically identified.
- Data table mark-up that does not allow data cells to be programmatically associated with required column and/or row headings.
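Several of the severity examples above hinge on specific contrast ratios such as 4.5:1 or 3.0:1. For readers unfamiliar with where these numbers come from, WCAG 2.0 defines the contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colours. A minimal Python sketch of that calculation (tools such as the Colour Contrast Analyser do this for you):

```python
def relative_luminance(hex_colour: str) -> float:
    """Relative luminance of an sRGB colour, per the WCAG 2.0 definition."""
    def channel(c: int) -> float:
        s = c / 255
        # Linearise the gamma-encoded sRGB channel value.
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG 2.0 contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter colour first."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 1))  # black on white -> 21.0
print(round(contrast_ratio("#777777", "#ffffff"), 2))  # mid grey on white
```

Grey text (#777777) on white comes out just under the 4.5:1 minimum for normal-size text, which is exactly the kind of near-miss the severity-2 examples above describe.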
While deciding the individual scores for each barrier will involve some subjective decisions, I hope that using two scores (Incidence and Severity) in the ABS system will help iron out some of the subjective differences between different evaluators.
As previously indicated, the aim of this article is to suggest a system such as the proposed ABS, which could help experienced accessibility evaluators indicate the relative severity of accessibility issues in web content. The idea is that the ABS would be used in conjunction with established processes for determining the level of compliance with required accessibility guidelines or criteria. It is not intended to be a replacement for a comprehensive program of user-testing.
A typical checklist-style evaluation requires the accessibility evaluator to consider the content of a web page (or pages) with the aim of determining the extent of compliance with required guidelines or criteria. The ABS process suggests that while undertaking the compliance evaluation, the evaluator approaches the content with an awareness of the likely problems people with different needs and limitations may experience. In this regard, the evaluator “walks through” the content replicating, as much as possible, the behaviour of people with different limitations, for example using the keyboard instead of the mouse, turning off images, and increasing the size of text on the page. They also use a variety of tools to highlight accessibility-related page components and APIs, and use the page with a screen reader such as JAWS or NVDA.
When a potential barrier is identified (for example, images without text alternatives), the evaluator estimates the percentage of times that page component is used in a way that will cause an accessibility problem (i.e. what percentage of images have missing alts) and the likely impact the barrier will have for those susceptible to it (e.g. are the missing alts essential for a screen reader user to understand or use the site content).
When the Incidence and Severity scores are entered into the attached Excel worksheet, a Remediation Priority rating is generated based on the entered scores. The Remediation Priority rating aims to provide an indication of how significant a potential accessibility barrier might be and, by association, how significant the related failures to comply with designated accessibility guidelines are. Combined, the Incidence, Severity and Remediation Priority results for each identified access barrier could help those responsible for the accessibility of a website to more effectively target their efforts.
Imagine a simple page that includes the following content:
- A CAPTCHA without any alternative modality at all, which is essential for progressing to the next page (the only CAPTCHA used (100%) does not provide an alternative so the Incidence score is 4).
- Five images: three content images have good alts; one content image, which does not relate to navigation or functionality, has no alt; and for the final (decorative) image alt=”line.jpg” (five images, two (40%) with accessibility issues so the Incidence score is 2).
- The ‘creative’ use of panels of background colours behind sections of the page means that about 60-70% of the content text has contrast ratios of between 3.5:1 and 4.5:1 (Incidence score is 3, but judgement made to allocate a Severity score of 2.5).
- The page contains 5 links. With 4 links it is possible to determine the link purpose from either the link text or an adjacent sentence, but one link says “details” and it is not possible to determine the meaning from the context (five links, but with one (20%) it is not possible to determine the purpose so the Incidence score is 1).
- A main page heading with H1, three sub-headings with H2, but there is also a sub-sub heading that is not contained in a header element (i.e. no H3) (five headings, but one (20%) heading fails to use H3 so the Incidence score is 1).
I envisage the proposed ABS being used to rate these issues in the following way:
- Use of CAPTCHA without describing the purpose and/or providing alternatives for different disabilities
- Images without appropriate text alternatives
- Insufficient colour contrast between foreground (text) and background
- Purpose or destination of link is not clear
- Failure to use mark-up for headings that conveys the document structure
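As a concrete illustration, the five issues above can be run through the Remediation Priority rules described earlier. In the sketch below, the Incidence scores are taken from the example page, but every Severity value other than the 2.5 stated for colour contrast is my own illustrative assumption, loosely guided by the severity examples given earlier in this article.

```python
# (name, incidence, severity) for each barrier on the example page.
issues = [
    ("CAPTCHA without alternative",       4, 5),    # assumed severity
    ("Images without text alternatives",  2, 2),    # assumed severity
    ("Insufficient colour contrast",      3, 2.5),  # severity stated in the text
    ("Link purpose not clear",            1, 3),    # assumed severity
    ("Missing heading mark-up",           1, 1),    # assumed severity
]

def remediation_priority(incidence, severity):
    """Remediation Priority per the six rules described earlier."""
    if incidence == 0:
        return "None"
    if severity == 5:
        return "Critical"
    if severity == 4:
        return "Very High"
    product = incidence * severity
    if product >= 9:
        return "Very High"
    if product >= 6:
        return "High"
    if product >= 3:
        return "Medium"
    return "Low"

for name, inc, sev in issues:
    print(f"{name}: {remediation_priority(inc, sev)}")
```

With these assumed severities, the CAPTCHA comes out Critical, the colour contrast High, the image and link issues Medium, and the heading issue Low, which matches the relative ordering discussed below.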
In this very simple example, I feel the final Remediation Priority scores provide a reasonable indication of the likely impact of these failures to comply with specific WCAG 2.0 Success Criteria. Clearly, the failure to provide an alternative for the CAPTCHA is the most serious issue and is likely to pose the greatest barrier, even though there is only one CAPTCHA. At the other end of the scale, the failure to use a header element for just one sub-sub heading, while being a non-compliance issue, is not likely to pose a significant barrier to anyone.
The scores for the remaining three items (image alternatives, links and colour contrast) are interesting, in part because failure to provide text alternatives for non-text content and failure to provide link purpose (in context) are Level A issues, whereas insufficient colour contrast is a Level AA issue.
The failure to provide a text alternative for one content image that is not required for navigation or function within the site, and the failure to use a null alt for a decorative image, while not complying with Success Criterion 1.1.1, are not likely to pose a significant barrier to many users who are unable to perceive images. Similarly, an inability to programmatically determine the purpose of one link out of five, while being an irritant for some users, is not likely to prevent anyone from using the page/site.
On the other hand, even though the contrast ratio for paragraph text is not a lot lower than what’s required by Success Criterion 1.4.3, the fact that it relates to 60% – 70% of the text on the page means that it could be a persistent problem for some users and is likely to present a greater overall barrier than the failures to comply with either 1.1.1 or 2.4.4.
If you got this far, many thanks for persevering, and apologies for the length, as this article turned out to be much longer than I expected when I started. The Access Barrier Score system I’ve outlined is a suggested technique for helping to address what I believe is a worrying tendency to judge the accessibility of web content solely on the basis of whether or not (yes or no) it complies with a set of guidelines or criteria such as WCAG 2.0.
No doubt, the suggested ABS system has some rough edges. My hope is that something like this could be used by experienced accessibility evaluators, in conjunction with recognised accessibility guidelines like WCAG 2.0, to help the owners and developers of sites identify and prioritize accessibility issues. I also believe the ABS remediation results could help those responsible for a suite of sites (e.g. government agencies, educational institutions, large corporations) to set accessibility targets and provide a standardised method of monitoring the progress of those sites as they move towards meeting those targets.
PS: I had a few problems entering this into WordPress, so I hope it has remained reasonably accessible.