Accessibility Barrier Scores

Many governments and organisations now require websites to be accessible, and when it comes to determining whether these requirements have been met, they often rely on recognised checklists of accessibility criteria such as WCAG 2.0 or Section 508. These checklists are a useful way of indicating whether a site complies with the required criteria. However, they don’t usually provide much additional information when a site does not comply – such as the likely impact this may have for web users with disabilities.

I don’t propose discussing the merits of using conformance criteria and/or user-testing when determining the accessibility of websites, as I canvassed this issue in an earlier post, “Measuring Accessibility.” I will say only that I know I’m not alone in feeling frustrated (annoyed) when I see sites that are generally pretty accessible being condemned as “inaccessible” just because of a couple of minor failures to fully comply with one or two WCAG 2 Success Criteria. Likewise, seeing sites boldly proclaimed as fully “accessible” based solely on the experience of one person using one screen reader.

Many website regulators, be they government or commercial agencies, often want a simple declaration about the accessibility of a website: Is it accessible or not? Does it comply with the accessibility guidelines it is required to meet? This is the reality that faces most web accessibility professionals, as is the awareness that it is virtually impossible to make a modern website that is totally accessible to everyone. Compliance with a set of predetermined guidelines, no matter how comprehensive they might be, is no guarantee of accessibility, a fact well recognised by the W3C in the introduction to WCAG 2.0:

“Accessibility involves a wide range of disabilities, including visual, auditory, physical, speech, cognitive, language, learning, and neurological disabilities. Although these guidelines cover a wide range of issues, they are not able to address the needs of people with all types, degrees, and combinations of disability.”

I know many web accessibility professionals move beyond the ‘comply’ or ‘not-comply’ paradigm by providing an indication of the likely impact particular accessibility issues might have for different users. However, this is not always the case. In addition, organisations appear increasingly to be asking people with limited accessibility expertise to determine whether a site is accessible. This determination is often based solely on whether the site complies with, for example, the WCAG 2.0 Success Criteria after testing with an automated tool.

The aim of this article is to contribute to the discussion about how to measure the accessibility of websites, for I know I am not alone in feeling concern about the current situation. To this end, I put forward a suggested scoring system, with the extremely unimaginative title of “Accessibility Barrier Score”. This is just a suggestion that I hope will be discussed, not some form of prescribed approach. I am also mindful of the slightly amusing irony of suggesting a checklist to help overcome an obsession with using checklists to determine accessibility, but I hope you will bear with me and continue reading.

At the outset, I would like to make it very clear that this is not intended to be a criticism of WCAG 2.0, for in fact I am a strong supporter. Rather what I am suggesting is a system of identifying potential accessibility barriers and their likely severity. I would like to acknowledge the work of Giorgio Brajnik from Universita di Udine in Italy, and the information and inspiration I have drawn from it, in particular his article “Barrier Walkthrough”. I would also like to thank Sarah Bourne, Russ Weakley, Andrew Downie, Steve Faulkner and Janet Parker for their suggestions, criticisms and advice in preparing this article, but any blame for stupidity or inaccuracy should be directed at me and not them.

Access Barrier Scores (ABS) system

The suggested Access Barrier Scores (ABS) system assumes the person using the system has some knowledge of website accessibility and how assistive (adaptive) technologies are used by people with disabilities to access website content.

Needless to say, the process for determining a barrier score is subjective (more on this later) and it is envisaged the ABS will be used in conjunction with a recognised list of guidelines or recommendations relating to web content accessibility such as WCAG 2.0. It is also anticipated the reviewer will probably use a range of accessibility evaluation tools (e.g. aViewer, Colour Contrast Analyser etc) and some assistive technologies such as a screen reader.

The overall aim of the ABS system is to provide a measure of how often barriers to accessibility occur in reviewed web page(s) and the likely seriousness of those barriers. To achieve this, a range of common accessibility barriers is considered and the incidence (or frequency) and severity of each barrier is scored. These scores can then be used by the owners and developers of sites to identify and prioritize those issues that need to be remediated.

ABS components

The ABS is a checklist with six columns:

1. Barrier Description: Describes the potential access barrier. A suggested list of barriers appears later in this article.

2. Reference: Accessibility guidelines or criteria relating to the barrier. In this example WCAG 2.0 Success Criteria.

3. Incidence: A measure of how frequently the use of a site component does not meet the relevant accessibility requirements. NOTE: This is not a raw measure of how often an accessibility guideline such as WCAG 1.1.1 Non-text Content is not complied with, but rather an estimation of the percentage of times on a page (or in a site) a particular requirement is not met. The result is presented in a five-point scoring system:

0 – There is no incidence or occurrence of a failure to make the component accessible.
1 – The use of the page component or element causes access problems up to 25% of the time.
2 – The use of the page component or element causes access problems between 25% and 50% of the time.
3 – The use of the page component or element causes access problems between 50% and 75% of the time.
4 – The use of the page component or element causes access problems more than 75% of the time.

Two examples: First, if there are 10 images and 4 have no alt text, the lack of a text alternative could cause an accessibility problem 40% of the time images are used, so the Incidence score would be 2.

Second, if a site has just one CAPTCHA and it is inaccessible, then 100% of the times the CAPTCHA is used could cause a problem, so the Incidence score would be 4.
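The two worked examples suggest the Incidence banding can be sketched as a small helper function. This is only an illustration of the thresholds as described above; the treatment of exact boundary values (e.g. precisely 25%) is my assumption, since the article leaves that open:

```python
def incidence_score(failures, uses):
    """Map the proportion of inaccessible uses of a page component
    to the 0-4 Incidence band described above."""
    if uses == 0 or failures == 0:
        return 0          # no occurrence of a failure
    pct = 100.0 * failures / uses
    if pct <= 25:
        return 1          # access problems up to 25% of the time
    if pct <= 50:
        return 2          # between 25% and 50%
    if pct <= 75:
        return 3          # between 50% and 75%
    return 4              # more than 75%

# The two examples above: 4 of 10 images lack alt text -> 2;
# the site's only CAPTCHA is inaccessible -> 4.
print(incidence_score(4, 10), incidence_score(1, 1))
```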

4. Severity: Rates the likely impact that the barrier might present for someone with a disability. NOTE: This refers to the likely impact for those people that will be affected by the barrier. The impact is rated with a score of 1 to 5, where 1 is a minor inconvenience, and 5 indicates someone would be totally prevented from accessing the site content or functionality. Allocation of the severity rating will of course be subjective, and this issue is discussed later in the article.

5. Remediation priority: This is derived from the Incidence and Severity scores. It aims to prioritize the accessibility barriers so that those which are likely to have the greatest impact can be identified and addressed first. Each potential barrier is given one of the six following ratings (see attached ABS Excel file):

Critical: Any barrier that has a severity score of 5 (regardless of the incidence score).
Very High: Any barrier where the severity score is 4 (regardless of the incidence score), and any barrier where the result of multiplying the incidence and severity scores is equal to or greater than 9.
High: Any barrier where the result of multiplying the incidence and severity scores is equal to or greater than 6 and less than 9 (but excluding any barrier which has a severity of 4 or 5).
Medium: Any barrier where the result of multiplying the incidence and severity scores is equal to or greater than 3 and less than 6 (but excluding any barrier which has a severity of 4 or 5).
Low: Any barrier where the result of multiplying the incidence and severity scores is less than 3.
None: Any barrier that has an incidence score of 0 (regardless of the severity score).
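As a sketch, the six ratings can be derived mechanically from the two scores. Note that the “Critical” and “None” rules overlap when severity is 5 and incidence is 0; this sketch resolves the overlap by checking incidence first, which is one possible reading rather than anything the article prescribes:

```python
def remediation_priority(incidence, severity):
    """Derive the Remediation Priority rating from the Incidence (0-4)
    and Severity (1-5) scores, per the six rules above."""
    if incidence == 0:
        return "None"        # barrier never occurs on the page
    if severity == 5:
        return "Critical"
    if severity == 4:
        return "Very High"
    product = incidence * severity
    if product >= 9:
        return "Very High"
    if product >= 6:
        return "High"
    if product >= 3:
        return "Medium"
    return "Low"
```

Because severity 4 and 5 are handled before the products are compared, the “excluding severity 4 or 5” clauses of the High and Medium rules are satisfied automatically.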

6. Comments: Section for comments by the accessibility reviewer.

The hope is that these six columns combined will provide those who are responsible for ensuring the accessibility of a website with a useful tool that will allow them to easily determine how often a particular barrier to accessibility occurs, how serious the barrier is, and which barriers should be given the highest priority for remediation.

Proposed barriers

Deciding on the number and nature of issues to include in the list of potential accessibility barriers is a juggling act. It requires balancing the need for a list that comprehensively addresses every possible barrier with the desire to have a list that is not so long that it becomes off-putting and in a sense a barrier to its very use.

I initially wanted to suggest a list that contained no more than 20 items, but this turned out to be just not possible. After some deliberation I ended up with the following 26 suggested common barriers to accessibility, but these are just my opinions and it would be great to get the opinions of others.

IMAGES & COLOUR

  1. Images without appropriate text alternatives (alt text).
  2. Complex images or graphs without an equivalent text alternative.
  3. Use of background (CSS) images for informative/functional content without an accessible text alternative.
  4. Use of CAPTCHA without describing the purpose and/or providing alternatives for different disabilities.
  5. Use of colour as the only way of conveying information or functionality.
  6. Insufficient colour contrast between foreground (text) and background.

STRUCTURE & NAVIGATION

  1. Failure to use appropriate mark-up for headings and sub-headings that conveys the structure of the document (e.g. h1 – h6).
  2. Poor use of layout tables.
  3. Unable to increase text size or resizing text causes a loss of content or functionality.
  4. Unable to access and/or operate all page content with the keyboard.
  5. Purpose/destination of links is not clear.
  6. Unable to visually identify when a page component receives focus via a keyboard action.

MULTIMEDIA

  1. Pre-recorded audio-only or video-only material without an accessible alternative that presents equivalent information.
  2. Pre-recorded synchronised media (visual and audio content) without captions for the audio content.
  3. Pre-recorded synchronised media (visual and audio content) without an accessible alternative for the video content.
  4. Pre-recorded synchronised media (visual and audio content) without sign language interpretation for the audio content.
  5. Unable to stop or control audio content that plays automatically.

FORMS

  1. Unable to programmatically identify form inputs (e.g. through use of explicitly associated labels or title attributes).
  2. Mandatory form fields are not easily identified.
  3. Insufficient time to complete a form and failure to notify time limits.
  4. When an error is made completing a form, users are not able to easily identify and locate the error, and adequate suggestions for correcting it are not provided.

DATA TABLES

  1. Difficult to identify data table aim or purpose (e.g. fails to use caption and/or summary).
  2. Unable to programmatically associate data cells with relevant row and column headers (e.g. fails to use TH and/or id and headers).

LANGUAGE & UNDERSTANDING

  1. Page headings, sub-headings, form labels and instructions are unclear or difficult to understand.
  2. No explanation or definition is provided for unusual words and abbreviations.
  3. Failure to use language that is appropriate for the reading-level of the intended audience.

The attached Access Barrier Scores Excel file contains an ABS checklist with six columns. The checklist is provided as an Excel file so that it will be easy for others to add and remove barriers as they wish. The references used in this example are WCAG 2.0 Success Criteria, but they could be replaced with another standard.

The ABS Excel file should automatically generate the results for the Remediation Priority column based on what is entered into the Incidence and Severity columns.

Questions of subjectivity

Ideally, any process which aims to determine whether a guideline or criterion has been complied with should be as objective and repeatable as possible, and this is even more important when the outcome of a court case may rest on the results. However, in spite of the best efforts of the W3C Web Accessibility Initiative, it is often not possible to obtain completely objective and repeatable results when it comes to determining whether something is accessible, or whether a WCAG Success Criterion has been complied with. Many times, evaluators need to make subjective (human) decisions: for example, whether an image should have a null alt or a text alternative, or whether a text alternative is a satisfactory equivalent for the image.

Clearly, the ABS system I have outlined raises questions of subjectivity. At the most basic level, deciding on which accessibility barriers to include is subjective. When it comes to using the checklist, deciding the incidence score is also likely to be subjective to some extent, notwithstanding the suggested percentage of occurrences for allocating the score as outlined earlier.

The greatest area of subjectivity, however, is probably associated with allocating a severity score. Ultimately, determining the likely severity of any particular barrier will be a human judgement, and as such is always liable to be influenced by the abilities, experiences, knowledge and foibles of the person making the decision. For example, take just three potential barriers that all relate to vision: text alternatives for images, colour contrast, and focus visible. The severity score given to each of these may vary greatly depending on your starting point. If you are solely concerned with the ability of screen reader users to use the web, the failure to include text alternatives is a major potential barrier, whereas contrast ratio and focus visible are not barriers at all. On the other hand, if your concern relates primarily to diminished colour vision, contrast ratio and focus visible will be more important than text alternatives. And, for all web users apart from those who are unable to perceive content visually, a failure to make focus visible is likely to be a significant barrier.

The subjective nature of determining the severity of an accessibility barrier is one of the reasons why I believe it is important for anyone using the suggested ABS system (or any other process of accessibility evaluation) to have some knowledge of accessibility and assistive technologies. I provide the following as a general indication of how I would allocate severity scores, while recognising some issues that I might describe as ‘very minor’ could potentially prevent someone from accessing or using a page. As mentioned, these are subjective judgements and I know others may not agree, and in some cases strongly disagree, so I would very much like to hear what you think.

Severity score examples:

1. Very minor inconvenience: Not likely to prevent anyone from accessing content and is not likely to reduce the ability of people to use a page. For example:

  • Failure to identify sub-sub-sub headings with H4 (but all other headings are appropriate).
  • Images that should be ignored by screen readers have an alt that is not a null alt (e.g. alt="line" or alt="line.jpg").

2. Minor inconvenience: Not likely to prevent anyone from accessing content, but could affect the ability of some people to use a page. For example:

  • Failure to identify sub-headings with H2 (but main heading(s) use H1).
  • Colour contrast ratio for normal-size incidental text (i.e. not important for understanding or functionality) is between 4.0:1 and the recommended minimum ratio of 4.5:1.
  • Colour contrast ratio for large-scale text is between 2.7:1 and 3.0:1.
  • Link text alone is not meaningful (but destination can be determined from context).
  • Decorative and other images, which could be ignored by screen readers, have no alt attributes.
  • Non-essential form inputs (without title attributes) which use the label element but not the ‘for’ attribute.

3. Average inconvenience: Not likely to prevent anyone from accessing content, but will reduce the ability of people to use a page. For example:

  • Complete failure to use header elements.
  • All images have alt attributes, but text alternatives for content images (not functional images) are inconsistent.
  • Non-essential forms with descriptive labels, but the label element is not used and there are no input title attributes.
  • Link text which is not meaningful (e.g. more) and where it is not possible to programmatically determine the meaning from the context.

4. Major inconvenience: May prevent some people from accessing or using page content. For example:

  • Important form inputs without title attributes or explicitly associated labels.
  • Content images (which should be presented by screen readers) without alt attributes and adequate text alternatives.
  • Colour contrast ratio of normal-size page text is between 2.5:1 and 3.2:1.
  • Colour contrast ratio of large-scale text is between 1.8:1 and 2.2:1.

5. Extreme inconvenience: Will prevent access to sections of the site or the ability to perform required functions. For example:

  • CAPTCHA without any alternative.
  • Functional images (e.g. navigation items, buttons) without text alternatives.
  • Significant functional components that are mouse-dependent.
  • Login form inputs that cannot be programmatically identified.
  • Data table mark-up that does not allow data cells to be programmatically associated with required column and/or row headings.

While deciding the individual scores for each barrier will involve some subjective decisions, I hope that using two scores (Incidence and Severity) in the ABS system will help iron out some of the subjective differences between different evaluators.

ABS Process

As previously indicated, the aim of this article is to suggest a system such as the proposed ABS, which could help experienced accessibility evaluators indicate the relative severity of accessibility issues in web content. The idea is that the ABS would be used in conjunction with established processes for determining the level of compliance with required accessibility guidelines or criteria. It is not intended to be a replacement for a comprehensive program of user-testing.

A typical checklist-style evaluation requires the accessibility evaluator to consider the content of a web page(s) with the aim of determining the extent of compliance with required guidelines or criteria. The ABS process suggests that while undertaking the compliance evaluation, the evaluator approaches the content with an awareness of the likely problems people with different needs and limitations may experience. In this regard, the evaluator “walks through” the content replicating, as much as possible, the behaviour of people with different limitations, for example using the keyboard instead of the mouse, turning off images, and increasing the size of text on the page. They also use a variety of tools to highlight accessibility related page components and APIs, and use the page with a screen reader such as JAWS or NVDA.

When a potential barrier is identified (for example, images without text alternatives), the evaluator estimates the percentage of times that page component is used in a way that will cause an accessibility problem (i.e. what percentage of images have missing alts) and the likely impact the barrier will have for those susceptible to it (e.g. are the missing alts essential for a screen reader user to understand or use the site content).

When the Incidence and Severity scores are entered into the attached Excel worksheet, a Remediation Priority rating is generated based on the entered scores. The Remediation Priority rating aims to provide an indication of how significant a potential accessibility barrier might be and, by association, the related failures to comply with designated accessibility guidelines. Combined, the Incidence, Severity and Remediation Priority results for each identified access barrier could help those responsible for the accessibility of a website to more effectively target their efforts.


Imagine a simple page that includes the following content:

  • A CAPTCHA without any alternative modality at all, which is essential for progressing to the next page (the only CAPTCHA used (100%) does not provide an alternative so the Incidence score is 4).
  • Five images: three content images have good alts; one content image, which does not relate to navigation or functionality, has no alt; and the final (decorative) image has alt="line.jpg" (five images, two (40%) with accessibility issues, so the Incidence score is 2).
  • The ‘creative’ use of panels of background colours behind sections of the page means that about 60-70% of the content text has contrast ratios of between 3.5:1 and 4.5:1 (Incidence score is 3, but judgement made to allocate a Severity score of 2.5).
  • The page contains 5 links. With 4 links it is possible to determine the link purpose from either the link text or an adjacent sentence, but one link says “details” and it is not possible to determine the meaning from the context (five links, but with one (20%) it is not possible to determine the purpose so the Incidence score is 1).
  • A main page heading with H1, three sub-headings with H2, but there is also a sub-sub heading that is not contained in a header element (i.e. no H3) (five headings, but one (20%) heading fails to use H3 so the Incidence score is 1).

I envisage the proposed ABS being used to rate these issues in the following way:

Barrier Description | Reference (WCAG 2.0) | Incidence (0-4) | Severity (1-5) | Remediation priority
Use of CAPTCHA without describing the purpose and/or providing alternatives for different disabilities | 1.1.1 | 4 | 5 | Critical
Images without appropriate text alternatives | 1.1.1 | 2 | 4 | Very High
Insufficient colour contrast between foreground (text) and background | 1.4.3 | 3 | 2.5 | High
Purpose or destination of link is not clear | 2.4.4, 2.4.9 | 1 | 3 | Medium
Failure to use mark-up for headings that conveys the document structure | 1.3.1 | 1 | 2 | Low
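Assuming the scoring rules described earlier in the article, the priority column for this example page can be reproduced in a few lines. The barrier names here are my shortened labels, and the rule logic restates the six ratings rather than anything new:

```python
def priority(incidence, severity):
    # Rules from the article: incidence 0 -> None; severity 5 -> Critical;
    # severity 4 or product >= 9 -> Very High; then product bands at 6 and 3.
    if incidence == 0:
        return "None"
    if severity == 5:
        return "Critical"
    if severity == 4 or incidence * severity >= 9:
        return "Very High"
    if incidence * severity >= 6:
        return "High"
    if incidence * severity >= 3:
        return "Medium"
    return "Low"

example_page = [
    ("CAPTCHA without alternatives",     4, 5),
    ("Images without text alternatives", 2, 4),
    ("Insufficient colour contrast",     3, 2.5),
    ("Link purpose not clear",           1, 3),
    ("Heading mark-up incomplete",       1, 2),
]
for barrier, inc, sev in example_page:
    print(f"{barrier}: {priority(inc, sev)}")
```

Run as-is, this prints the Critical / Very High / High / Medium / Low column of the table above.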

In this very simple example, I feel the final Remediation Priority scores provide a reasonable indication of the likely impact of these failures to comply with specific WCAG 2.0 Success Criteria. Clearly the failure to provide an alternative for the CAPTCHA is the most serious issue and is likely to pose the greatest barrier, even though there is only one CAPTCHA. At the other end of the scale, the failure to use a header element for just one sub-sub heading, while being a non-compliance issue, is not likely to pose a significant barrier to anyone.

The scores for the remaining three items (image alternatives, links and colour contrast) are interesting, in part because failures to meet “Non-text Content” and “Link Purpose (In Context)” are Level A issues, whereas insufficient colour contrast is a Level AA issue.

The failure to provide a text alternative for one content image that is not required for navigation or function within the site, and the failure to use a null alt for a decorative image, while not complying with Success Criterion 1.1.1, are not likely to pose a significant barrier to many users who are unable to perceive images. Similarly, an inability to programmatically determine the purpose of one link out of five, while being an irritant for some users, is not likely to prevent anyone from using the page/site.

On the other hand, even though the contrast ratio for paragraph text is not a lot lower than what is required by Success Criterion 1.4.3, the fact that it relates to 60% – 70% of the text on the page means that it could be a persistent problem for some users and is likely to present a greater overall barrier than the failures to comply with either 1.1.1 or 2.4.4.


If you got this far, many thanks for persevering, and apologies for the length: this article turned out to be much longer than I expected when I started. The Access Barrier Score system I’ve outlined is a suggested technique for helping to address what I believe is a worrying tendency to judge the accessibility of web content solely on the basis of whether or not (yes or no) it complies with a set of guidelines or criteria such as WCAG 2.0.

No doubt, the suggested ABS system has some rough edges. My hope is that something like this could be used by experienced accessibility evaluators, in conjunction with recognised accessibility guidelines like WCAG 2.0, to help the owners and developers of sites to identify and prioritize accessibility issues. I also believe the ABS remediation results could help those responsible for a suite of sites (e.g. government agencies, educational institutions, large corporations) to set accessibility targets and provide a standardised method of monitoring the progress of those sites as they move towards meeting those targets.

PS: I had a few problems entering this into WordPress, so I hope it has remained reasonably accessible.


  1. Our best practices currently have fields for severity, noticeability and tractability, which allows us to weight and prioritize them so that developers can fix the highest-impact issues first. Our prioritization matrix is built into our Accessibility Management Platform so that we can intelligently prioritize issues to give stakeholders guidance on the order in which issues should be addressed.

  2. Hi Roger

    Do you think I could provide a link to this article on my new website (due to be launched next week)? I mentioned it on Twitter (you probably saw already). I think your arguments are really valid and I’d like to be able to expose it a bit more.


  3. An excellent article, and very timely for members of the W3C WCAG 2.0 Evaluation Methodology Task Force (EVAL TF).

    I wonder how one will set the incidence rate for some barriers / SC such as those related to headings. For example, your “Failure to use appropriate mark-up for headings and sub-headings that conveys the structure of the document (e.g. h1 – h6).” singles out one aspect of the use (or absence of use) of headings. In other cases, such as a warbled heading structure with wrong or inconsistent levels, it would be tricky to determine the incidence score since the page component is, in a sense, the entire heading structure and its deficiencies would need to be transformed into one of the five values, 0-4. Doable, but it also adds an element of subjectivity.

    Another increasingly critical aspect that seems not yet covered in the list of barriers is element order, a problem increasingly common with the use of dynamically generated content (lightboxes and the like). This would probably fit under your section STRUCTURE & NAVIGATION and maps onto SC 1.3.2 “Meaningful Sequence”. We find in testing that lightboxes are often positioned far away from the point that calls them up, and without a scripted focus reset. While elements in such dynamic content may be in principle keyboard-accessible, their remoteness from the current tab position makes them very hard to access (or get rid of). We all know the problem of the invisible focus trundling through the dark grey recesses behind the lightbox. This might be listed as a further type of barrier.


    • Thanks Detlev for your kind words and for referring the article to the EVAL TF.
      With reference to your comments about poor use of heading structure (BTW love your use of the word warbled for this). I feel the suggested barrier probably covers this – if the heading structure on all pages is so warbled that it makes no semantic sense at all, I would suggest an Incidence score of 4 and Severity score of 3 would be appropriate (because even poor use of headers does bring some benefit). However, if the header element wasn’t used at all on any page then the Incidence score would remain the same but the Severity score would be 4.
      With regard to your comment about lightboxes (overlays) – good point. I think the suggested barrier “Unable to access and/or operate all page content with the keyboard” may not cover this adequately.

  4. Note accessibility is not just about accessing the content; it is also about having confidence in knowing that you have access to ALL the content (its importance, and context). Images whose alt text is just “line” or “line.jpg” can leave the user questioning what that image may actually be, its purpose, and of course the frustration of not knowing if it is important; for example, line.jpg could be an image of a line, an image of a line chart/graph, a linear table, baseline results, etc. Similarly, colour contrast for incidental text may seem inconsequential, but a user who only sees a blur does not know that it is incidental!

    • I agree Steve and this is why I suggest the barrier scores should be used by someone who is familiar with accessibility and the needs of people with different abilities, and also in conjunction with some acknowledged standard such as WCAG 2.0.

  5. Bravo,Roger! This is a very positive response to the problem of people with little real knowledge passing instant judgements on accessibility.

    One such expert warned me that someone could lose their job for not putting in ALT text for an image. What you’ve proposed is sensible and useful.

  6. Remediation priorities “Critical” and “None” are incompatible. They can both apply at the same time in non-realistic hypothetical situations (such as non-public functionality being completely inaccessible).
    So no it doesn’t “matter”, but you might as well make it logical by writing:
    Critical: Any barrier that has a severity score of 5 and a non-zero incidence score.

    As a bonus, it’s shorter than the original.

  7. Roger, there is much to be said for this approach. I’d like to take it one step further by pointing out that both the incidence and significance of a barrier depend on the disability being considered.

    Color contrast is one such issue. It is of no consequence to people who are totally blind, but it is supremely important to people with partial vision. Still, it affects different people in different ways. Many people who are sensitive to issues of contrast, it seems, need high contrast. But people with low moderate vision, who must use one method or another to magnify the text highly, actually need low contrast—a white background can cause great distress when you must keep your eyes just a few centimeters from the screen so you can read.

    That’s just one of many conceivable instances in which a method that improves accessibility for one group of people will create a barrier for others.

    So to truly be useful, an assessment of accessibility should define these points:

    the task being considered
    the nature of the barrier
    the severity of the barrier
    the type of disability a person must have to be affected by this barrier
    perhaps even the type of assistive technology the reviewer assumes the person is using
    the skill level of the users affected by the barrier (novice? expert? expert in the subject area, but novice to assistive technology?)
    the frequency with which these folks are likely to encounter this barrier

    This might seem to be an overwhelming degree of detail, but note that an answer to one question might answer others, as well. For example:

    An application that is inaccessible except through the mouse would affect (nearly) all people with visual disabilities or mobility impairments—but would have no impact on people with cognitive disabilities or the Deaf.
    A site that requires visitors to find and click a randomly positioned Easter egg before they can get beyond a splash page would be inaccessible to just about everybody. (I hope I didn’t just inspire the hottest new trend among would-be designers.)

    This approach would require adding two or three columns to your proposed worksheet. I see several significant advantages:

    I would feel less hesitant to publish my assessment of a site’s accessibility, because I could clearly show the limitations of my expertise (or at least of this assessment).
    Even if I hadn’t directly considered your situation, you would have greater information about my assessment’s relevance to you.
    On more websites, we might see reasonable commitments to continuously improving accessibility rather than unsupported claims that the content is fully accessible.

    I’m glad you started this discussion. Great idea!

  8. I wonder if it would be worth doing some polling of people with various disabilities or who use the different ATs to find out how they’d rate the severity of each of the WCAG2 criteria? It’d be good to have a severity rating based on numbers rather than my (non-AT-using) opinion.

  9. Electronic WOFT

    decisions regarding levels of conformance factor in – among other things – impact on users. why, then, is it necessary for yet another level of abstraction, yet another set of guidelines, and another opportunity for web accessibility to be de-prioritised? Did the W3C get it so wrong? So-called ‘cosmetic’ issues are resolved in functional test cycles – why wouldn’t inappropriate text alternatives be? Are users with disability somehow deserving of a lesser user experience? Resolution priority is not always determined by the level of impact on users either …
