Changes in web-user behaviour

Over the last few years, it has become much easier for people in Australia (and many similar countries) to go online. The relative cost of computers and internet connections has fallen significantly, and free internet access is now available in many libraries, community centres and social spaces. However, we shouldn’t lose sight of the fact that not everyone can participate in this online revolution; a combination of geographical, financial, psychosocial, physical and cognitive factors means there is still a great gap between the digital haves and have-nots.

The total number of internet users is increasing all the time, but perhaps more important is the increased diversity of people who are online. It appears, however, that many website owners and developers are either reluctant, or unable, to make sites that cater to the needs of these different sections of the community.

Two projects compared

In 2007, I was asked to test the usability of a website for people who were homeless or in public housing. (It was expected the site would be primarily accessed from computers and/or kiosks in government offices and welfare organisations.) Five years later, in 2012, I had an opportunity to test another site that had been prepared for a target audience who were similarly marginalised in terms of education and internet experience. Most of the test participants for the two sites had lower-level reading abilities and very limited access to computers when compared to the general community. (For reasons of privacy and client confidentiality I cannot provide specific details.)

While all the 2007 test participants had used the internet at least once, their use was generally very limited, with only 28% reporting they used the internet for 5 hours a week or more. However, after being introduced to the test site they appeared to be genuinely interested in using it to find information and actively explored the various navigation options, and most appeared to develop an understanding of the site navigation system with relative ease. Comments from participants included:

  • I just love using the computer – I didn’t realise there was so much you could do.
  • I thought it was tricky at the beginning because there is all these different sections to look at for where to go (for information), but it’s good.

As might be expected, all the test subjects in the 2012 project had greater opportunities to access the web than those in 2007: 50% of the participants reported using the internet every day and only 20% said they used it once a week or less. Most went online via computers owned by family members and friends, and/or with computers and free internet access provided by various community centres.

All participants for the 2012 review were recruited on the basis that they had used the internet before, but several maintained they had never used the web, even though, as one commented, “… but I use Facebook all the time to keep in touch.” For most of the others, web use other than Facebook was restricted to finding specific information (almost exclusively) via Google, or to visiting one or two other regular sites, usually to obtain sporting results.

Changes in web understanding and behaviour

Compared to the 2007 participants, those in 2012 appeared to have less interest, or more difficulty, in learning the basic concepts of website navigation. Several were confused by the common navigation terms “Home” and “About”, and tended to see these terms as relating directly to themselves or their situation. Comments included:

  • Home is that Australia or where you live?
  • Is Home something to do with the home you came from?
  • What is About? Is that about the information you are looking for?
  • What’s the difference between Home and About?

This apparent decline in the ability to understand, or willingness to learn, the navigation systems common to many sites may not be confined to novice web users or people with lower-level reading skills like those involved in the 2012 project. Instead, an emerging ‘Facebook effect’ may help explain why some regular web users today are less likely to participate in the exploratory, web-surfing behaviour of the past.

Facebook effect

Facebook recently announced its one billionth account; that is a lot of people, even if not all of these accounts represent separate individuals. On Facebook (and other social networking sites such as LinkedIn) the focus is on the individual and their friends, and the navigation systems reflect this focus, with the link Home taking the user back to their first page, or Wall.

It is possible that the growing use of social-networking sites may be contributing to a general decline in how well some web users understand the basic structure and navigation systems of sites. This process is likely to be exacerbated by the increasing use of application-based interfaces for web activities ranging from banking to train timetables.

Googlefication

Another possible reason for the change in web behaviour over the five years between these two projects is the growing reliance on Google to find information.

Over the years, I have noticed that many web users are either “surfers” or “searchers”. Of course, this is not a hard and fast rule, and all of us probably indulge in both at times, but it seems that when seeking information some people are more likely to surf from site to site and use the navigation within sites, whereas with others the first inclination is to search.

Google has become the colossus of search engines. In 2004, Google accounted for less than 50% of all search engine use, and the next most popular at the time, Yahoo, was at 26%. By 2012, 83% of searchers used Google, with Yahoo a very distant second at just 6% (Pew, “Search engine use over time”).

Google now logs about 2 billion search requests a day, from approximately 300 million people, and for an increasing number of people Google is becoming the standard entry point to pages deep within websites. Why bother learning how to use a site to find what you want when Google will do it for you? To quote one recent test participant:

  • I normally just type the words in Google and it comes up. I always select one of the top results, because if I type a good question it will be there.

Conclusion

It seems to me that, for at least some sections of the web community, the mental model they have of the web today may be very different to the one they had a few years ago. This could be contributing to an impaired understanding of the structure of conventional sites, and difficulty in using the navigation systems they contain.

At the same time, an increasing number of web users are not using internal information retrieval mechanisms to locate information within a site, turning instead to external search engines (mainly Google) as a way of providing quick and direct access to resources deep within sites.

A combination of this growing reliance on Google and the suggested ‘Facebook effect’ may mean it is time to reconsider some basic usability and accessibility principles, and the potential impact these changes could have on web users with cognitive impairments and/or limited internet experience. Furthermore, I believe it may have more general implications for WCAG 2, in particular, “Guideline 2.4 Navigable: Provide ways to help users navigate, find content, and determine where they are.”

In a practical sense I think there are a number of issues that need to be considered:

Should we continue to use common navigation labels like “Home” and “About”? Depending on the primary audience for a site, perhaps we should be more specific, for example “[Organisation name]” and “About Us” or “About [organisation name]”.

Perhaps we should pay more attention to Success Criterion 2.4.8 (Location: Information about the user’s location within a set of Web pages is available.) and the associated Technique G65 about the use of breadcrumbs. This Success Criterion is at Level AAA and so is often ignored, but since users are increasingly going directly to internal site pages, maybe more emphasis should be placed on helping them determine where they are within a site.
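For example, a G65-style breadcrumb trail is little more than a short run of links near the top of the page. A minimal sketch (the organisation and page names are invented):

    <p class="breadcrumb">You are here:
      <a href="/">Acme Housing</a> &gt;
      <a href="/services/">Services</a> &gt;
      Emergency accommodation
    </p>

Someone arriving deep in the site from a search result can see at a glance where the page sits, and can move up the hierarchy with a single link.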

In WCAG 2 the use of metadata is strongly advocated, but this is mainly from the perspective of helping people find conforming alternate versions of pages or content (Appendix C). Perhaps we could also use metadata to communicate the location of primary versions of content within the site structure in a way that is machine readable.
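To make this a little more concrete, here is one possible sketch, and it is only that: the schema.org vocabulary includes a WebPage type with a breadcrumb property, which could carry the same trail in a machine-readable form (names again invented):

    <body itemscope itemtype="http://schema.org/WebPage">
      <!-- ... -->
      <p class="breadcrumb" itemprop="breadcrumb">
        <a href="/">Acme Housing</a> &gt;
        <a href="/services/">Services</a> &gt;
        Emergency accommodation
      </p>
      <!-- ... -->
    </body>

Some search engines already use mark-up along these lines to display breadcrumb trails within their results, which goes part of the way towards the shift in focus I am suggesting.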

And finally, the art of Search Engine Optimisation, dark or not:  While the SEO marketing imperative is to get the highest possible ranking for an organisation in search engine results, maybe it is time for a slight shift in focus towards also providing users with more help in locating information within sites.

The New Site

For many years the material I prepared was published only on the Web Usability site (www.usability.com.au). However, a couple of years back I decided to write an online novel and so set up DingoAccess (www.dingoaccess.com) as a personal blog using WordPress. I soon found the WordPress publishing interface much easier to use, and so increasingly put other material on the blog, including usability and accessibility articles as well as videos and research reports.

Spreading material that might interest other people concerned about website usability and accessibility over two different sites was clearly not the most usable or accessible approach. Well, this craziness has finally come to an end with the help of a couple of friends, Russ Weakley and David McDonald, who helped prepare this new Web Usability site using WordPress.

I plan to continue using the DingoAccess blog, but mainly for personal posts and for information relating specifically to accessibility. The two sites will be integrated, and all the web-related material that I prepare, including videos and articles about usability and accessibility, will now be on the Web Usability site.

In early 2011, we researched the use of the internet by people over the age of 60. The findings were presented in a paper at the CSUN2011 conference, Improving Web Accessibility for the Elderly. One of the most striking findings was that about 60% of the research participants had no idea how they could change the size or colour of page content. As a result, I have included what I hope will be some useful advice and videos about how to make web content easier to read. This information is available from the link Having trouble reading this page? in the page footer. Thanks to Russ Weakley and Janet Parker for help in preparing the videos.

No doubt this new site will contain some bugs which could cause minor or major irritation. If you come across any problems or difficulties please let me know so that I can fix them.

I hope you like the new site and find the information it contains useful.


Is PDF accessible in Australia?

More than two years ago I wrote about WCAG 2.0 and Accessibility Supported, and my fear that “the concept of ‘accessibility supported’ is not fully understood”. I believe that this “could put at risk the whole move to improve the accessibility of the web.” I am concerned that mixed messages relating to the status of PDF as a “web content technology” are still causing problems, within Australia at least.

I have presented many workshops about web accessibility and WCAG 2 compliance, and the issue of “accessibility supported” is at the heart of some of the most common questions I get asked. Specifically, many developers want to know if they are still required to provide an accessible alternative for the PDF and/or JavaScript they include in a site. When answering questions like this I stress, of course, the need to ensure all content is as accessible as possible, and point to the five Conformance Requirements of WCAG 2.0. But at a practical level this doesn’t fully answer the question, because Conformance Requirement 4 states, “Only accessibility-supported ways of using technologies are relied upon to satisfy the success criteria”.

Now this would all be fine and dandy if developers were able to clearly identify which web content technologies, when used appropriately, are sufficiently supported by assistive (adaptive) technologies to be considered “accessibility supported” within the meaning of WCAG 2.0. However, when WCAG 2.0 was released in December 2008, the WCAG Working Group and the W3C effectively side-stepped this question, and more than three years later they continue to do so.

“The Working Group, therefore, limited itself to defining what constituted support and defers the judgment of how much, how many, or which AT must support a technology to the community and to entities closer to each situation that set requirements for an organization, purchase, community, etc.”

Understanding Accessibility Support

In Australia, this effectively means handing the decision over to the government regulators, most importantly the Australian Human Rights Commission and the Australian Government Information Management Office (AGIMO).

Changing attitudes

When it comes to all forms of disability rights in Australia, the Australian Human Rights Commission plays a key role in implementing the Disability Discrimination Act 1992, which it does (in part) through the issuing of Standards, Guidelines and Advisory Notes. The Advisory Note relating to web content accessibility now references WCAG 2.0 and advises all sites to move to Level AA compliance over the next few years. The Advisory Note doesn’t explicitly preclude the use of JavaScript (or Flash for that matter), but it does require that they be implemented in a way that is accessible. The attitude to the use of PDF, however, is significantly different:

“The Commission’s advice, current October 2010, is therefore that PDF cannot be regarded as a sufficiently accessible format to provide a user experience for a person with a disability that is equivalent to that available to a person without a disability, and which is also equivalent to that obtained from using the document marked up in traditional HTML.”

World Wide Web Access: Disability Discrimination Act Advisory Note (Version 4)

The websites of all government agencies in Australia are required to transition to WCAG 2.0, Level AA compliance, by the end of 2014. The Australian Government Information Management Office (AGIMO) is managing this transition through the National Transition Strategy (NTS), and the Government Web Guide provides an overview of what is required in terms of accessibility and which web content technologies can be used:

“Web technologies that claim accessibility support must prove WCAG 2.0 conformance through the use of WCAG 2.0 sufficient techniques.

Agencies are reminded that it is still a requirement to publish an alternative to all PDF documents (preferably in HTML). … Agencies must abide by the Australian Human Rights Commission’s Disability Discrimination Act Advisory Notes in order to mitigate risk of disability discrimination complaint.”

Australian Government Web Guide: Accessibility (April 2011)

None of these documents appears to explicitly address the question of which web content technologies can be considered “accessibility supported”. However, with the exception of PDF, the Advisory Note and the Web Guide both seem to suggest that any web technology, including JavaScript, can be considered “accessibility supported” if, for each relevant Success Criterion, there are recognised W3C Sufficient Techniques relating to that technology.

While I didn’t fully agree with excluding PDF, that all seemed reasonably clear until January this year when AGIMO posted an article on their blog about the Release of WCAG 2.0 Techniques for PDF. The main article is short, comprising an outline of the situation and various resources. However, in the comments, various people ask if this means that PDF (when used appropriately) can now be considered “accessibility supported”, to the extent that it is no longer necessary to provide multiple accessible formats. In answering this question, Jacqui van Teulingen, the Director of AGIMO wrote in part:

“As stated, the PDF Sufficient Techniques are now available, so technically an agency can rely on PDF by using the WCAG 2.0 PDF Sufficient Techniques and all applicable General Techniques, and will be considered to be complying with the NTS.”

Release of WCAG 2.0 Techniques for PDF (January 2012)

On the face of it, this comment seems to be at odds with the directive in the Australian Government Web Guide and the advice provided in the Human Rights Commission Advisory Note relating to web content accessibility. And, I suspect, once again there are developers, particularly those working on sites for government agencies, left wondering, or maybe even wandering…

We need clarity

This issue is not just about the use of PDF, but rather the process in Australia for determining those “web content technologies” that are considered acceptable and those that are not. I find it really hard to understand how Flash can be declared acceptable but PDF unacceptable, when both can cause significant accessibility problems when used inappropriately. Surely, it should not be a question of what technology is used, but how it is used.

When it comes to “accessibility supported”, rather than precluding some web content technologies and not others, I believe the authorities should rely on the existence of Sufficient Techniques for each relevant Success Criterion as the main determinant of whether the use of a particular technology is accessible. That means, for example, if a web document has an image, there is a technique that allows an accessible alternative for that image; and if there are headings, there is a technique that allows them to be presented by different user agents, including commonly used assistive technologies.

I think the Australian authorities should consider the approach to the issue of “accessibility supported” (WCAG Conformance Requirement 4) as outlined in the Government of Canada Standard on Web Accessibility:

“Conformance requirement 4 (Only Accessibility-Supported Ways of Using Technologies) defines the ways of using technologies that can be relied upon to satisfy the success criteria. It can only be met by use of the following technologies:

  • XHTML 1.0 or later excluding deprecated elements and attributes,
  • HTML 4.01 excluding deprecated elements and attributes,
  • HTML5 or later excluding obsolete features, or
  • Technologies with sufficient techniques (specific to each technology) to meet all applicable success criteria.”

Government of Canada Standard on Web Accessibility (1 August 2011)

I believe it is time the Australian Government Information Management Office and the Human Rights Commission fully embrace both the spirit and the recommendations of WCAG 2.0. The accessibility of websites should be determined by how well they satisfy the five WCAG 2.0 Conformance Requirements regardless of the web content technology used.

JAWS 11 and IE 9

I recently had reason to investigate why someone using JAWS 11 with Windows 7 (64 bit) and Internet Explorer 9 was unable to identify or select checkboxes in a particular form. I quickly found that the problems were not restricted to this form, and so I initially thought it might have something to do with JAWS or Windows 7. However, after some testing and digging I found the culprit was Internet Explorer 9, and although the problem (bug) is recognised, it does not appear to be well publicised or known: hence this post.

I test the accessibility of websites and although I can use various screen readers I don’t consider myself to be an expert in their use. For the purpose of this story, I should add that I am sighted, so I don’t need to rely on a screen reader and I am able to see things that may not be reported by a screen reader.

When I first tried to use the form with JAWS 11 and IE9, I noticed that in Virtual PC mode, the checkboxes appeared to be basically ignored when I just let the page read or if I arrowed down the page. I checked a different form with checkboxes and radio buttons, and once again the form inputs were not identified in the usual fashion. In effect, someone relying on the screen reader would not know there were checkboxes or radio buttons on the page.

I then asked a friend, Andrew Downie, who is a competent screen reader user, to look at the forms. All up, Andrew and I tested the two forms on several computers using either Windows XP or Windows 7; IE 8, IE 9 and Firefox 9.01 browsers; and the screen readers JAWS (various versions), NVDA 2011.2 and Window-Eyes 7.5.2.

We found the forms behaved pretty much as you would expect with all combinations of operating systems, browsers and screen readers, apart from JAWS 11 and IE 9 (only tested with Windows 7).

We also found some other anomalies when using the forms with JAWS 11 and IE 9. When tabbing down the page from one checkbox or radio button to the next, the controls themselves were not identified, but the content of an explicitly associated label would be voiced. Interestingly, if the checkbox had no label but had a title attribute, the title was ignored. Even though it was possible to tab onto a checkbox (or radio button), we found that we could not select the item in the usual way (by pressing the space bar). Sometimes, when attempting to select a checkbox, the browser would return to a previously visited page or just shut down.
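For anyone wanting to replicate the problem, the behaviour can be seen with quite unremarkable markup along these lines (a minimal sketch with invented field names; the forms we actually tested were more complex):

    <form action="/apply" method="post">
      <!-- Explicitly associated label: the label text was voiced,
           but the checkbox itself was not identified -->
      <input type="checkbox" name="newsletter" id="newsletter">
      <label for="newsletter">Send me the newsletter</label>

      <!-- Title attribute instead of a label: ignored entirely
           by JAWS 11 with IE 9 -->
      <input type="checkbox" name="terms" title="I accept the terms">
    </form>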

Clearly, the inability to identify and select checkboxes or radio buttons could be a serious problem for anyone who uses IE 9 and relies on JAWS 11, and yet I can’t remember ever reading or hearing anything about it. With the aid of Google I found a few mentions of this problem, including the following release note for IE 9 from Microsoft:

Problems reading web content using JAWS Virtual PC mode.

When reading web content using JAWS Virtual PC mode in JAWS 11, there are two issues you might notice. One issue is that some webpage content and some form controls such as radio buttons or check boxes are missing. The other issue you might notice is multiple blank lines and space characters when reading webpages. These two issues are resolved in the latest JAWS 12 release. Using Compatibility View will resolve this issue in JAWS 11.

http://msdn.microsoft.com/en-us/ie/ff959805#_Accessibility_considerations

I have re-tested the forms using IE 9 Compatibility View with JAWS 11 and they perform normally.
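Until users move to JAWS 12, site owners who control their own markup could also trigger Compatibility View from the page itself. One way, as a sketch, is the X-UA-Compatible meta element (an equivalent HTTP response header is also possible):

    <head>
      <!-- Ask IE 9 to render the page as IE 8 (Compatibility View) -->
      <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE8">
    </head>

Forcing an older rendering mode has obvious costs for other visitors, so this is very much a workaround rather than a fix.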

Accessibility Barrier Scores

Many governments and organisations now require websites to be accessible, and when it comes to determining whether these requirements have been met, they often rely on recognised checklists of accessibility criteria such as WCAG 2.0 or Section 508. These checklists are a useful way of indicating whether a site complies with the required criteria. However, they don’t usually provide much additional information when a site does not comply – such as the likely impact this may have for web users with disabilities.

I don’t propose discussing the merits of using conformance criteria and/or user-testing when determining the accessibility of websites, as I canvassed this issue in an earlier post, “Measuring Accessibility”. Except to say that I know I’m not alone in feeling frustrated (annoyed) when I see sites which are generally pretty accessible being condemned as “inaccessible” just because of a couple of minor failures to fully comply with one or two WCAG 2 Success Criteria. Likewise, seeing sites boldly proclaimed as fully “accessible” based solely on the experience of one person using one screen reader.

Many website regulators, be they government or commercial agencies, often want a simple declaration about the accessibility of a website: Is it accessible or not? Does it comply or not with the accessibility guidelines that they are required to meet?  This is the reality that faces most web accessibility professionals, as is the awareness that it is virtually impossible to make a modern website that will be totally accessible to everyone. Compliance with a set of predetermined guidelines, no matter how comprehensive they might be, is no guarantee of accessibility, a fact well recognised by the W3C in the introduction to WCAG 2.0:

“Accessibility involves a wide range of disabilities, including visual, auditory, physical, speech, cognitive, language, learning, and neurological disabilities. Although these guidelines cover a wide range of issues, they are not able to address the needs of people with all types, degrees, and combinations of disability.”

I know many web accessibility professionals move beyond the ‘comply’ or ‘not-comply’ paradigm by providing an indication of the likely impact particular accessibility issues might have for different users. However this is not always the case. In addition, organisations appear to be increasingly asking people with limited expertise in the area of accessibility to determine if a site is accessible or not. This determination is often only based on whether or not the site complies, for example with the WCAG 2.0 Success Criteria, after testing with an automated tool.

The aim of this article is to contribute to the discussion about how to measure the accessibility of websites, for I know I am not alone in feeling concern about the current situation. To this end, I put forward a suggested scoring system, with the extremely unimaginative title of “Accessibility Barrier Score”. This is just a suggestion that will, I hope, be discussed, and not some form of prescribed approach. I am also mindful of the slightly amusing irony of suggesting a checklist to help overcome an obsession with using checklists to determine accessibility, but I hope you will bear with me and continue reading.

At the outset, I would like to make it very clear that this is not intended to be a criticism of WCAG 2.0, for in fact I am a strong supporter. Rather what I am suggesting is a system of identifying potential accessibility barriers and their likely severity. I would like to acknowledge the work of Giorgio Brajnik from Universita di Udine in Italy, and the information and inspiration I have drawn from it, in particular his article “Barrier Walkthrough”. I would also like to thank Sarah Bourne, Russ Weakley, Andrew Downie, Steve Faulkner and Janet Parker for their suggestions, criticisms and advice in preparing this article, but any blame for stupidity or inaccuracy should be directed at me and not them.

Access Barrier Scores (ABS) system

The suggested Access Barrier Scores (ABS) system assumes the person using the system has some knowledge of website accessibility and how assistive (adaptive) technologies are used by people with disabilities to access website content.

Needless to say, the process for determining a barrier score is subjective (more on this later) and it is envisaged the ABS will be used in conjunction with a recognised list of guidelines or recommendations relating to web content accessibility such as WCAG 2.0. It is also anticipated the reviewer will probably use a range of accessibility evaluation tools (e.g. aViewer, Colour Contrast Analyser etc) and some assistive technologies such as a screen reader.

The overall aim of the ABS system is to provide a measure of how often barriers to accessibility occur in reviewed web page(s) and the likely seriousness of those barriers. To achieve this, a range of common accessibility barriers is considered and the incidence (or frequency) and severity of each barrier is scored. These scores can then be used by the owners and developers of sites to identify and prioritize those issues that need to be remediated.

ABS components

The ABS is a checklist with six columns:

1. Barrier Description: Describes the potential access barrier. A suggested list of barriers appears later in this article.

2. Reference: Accessibility guidelines or criteria relating to the barrier. In this example, WCAG 2.0 Success Criteria.

3. Incidence: A measure of how frequently the use of a site component fails to meet the relevant accessibility requirements. NOTE: This is not a raw count of how often an accessibility guideline such as WCAG 1.1.1 Non-text content is not complied with, but rather an estimate of the percentage of times on a page (or in a site) that a particular requirement is not met. The result is presented in a five-point scoring system:

0 – There is no incidence or occurrence of a failure to make the component accessible.
1 – The use of the page component or element causes access problems up to 25% of the time.
2 – The use of the page component or element causes access problems between 25% and 50% of the time.
3 – The use of the page component or element causes access problems between 50% and 75% of the time.
4 – The use of the page component or element causes access problems more than 75% of the time.

Two examples: First, if there are 10 images and 4 have no alt text, the lack of a text alternative could cause an accessibility problem 40% of the time images are used, so the Incidence score would be 2.

Second, if a site has just one CAPTCHA and it is inaccessible, then 100% of the uses of CAPTCHA could cause a problem, so the Incidence score would be 4.

4. Severity: Rates the likely impact that the barrier might present for someone with a disability. NOTE: This refers to the likely impact for those people that will be affected by the barrier. The impact is rated with a score of 1 to 5, where 1 is a minor inconvenience, and 5 indicates someone would be totally prevented from accessing the site content or functionality. Allocation of the severity rating will of course be subjective, and this issue is discussed later in the article.

5. Remediation priority: This is derived from the Incidence and Severity scores. It aims to prioritize the accessibility barriers so that those which are likely to have the greatest impact can be identified and addressed first. Each potential barrier is given one of the following six ratings (see the attached ABS Excel file):

Critical: Any barrier that has a severity score of 5 (regardless of the incidence score).
Very High: Any barrier where the severity score is 4 (regardless of the incidence score), and any barrier where the product of the incidence and severity scores is 9 or greater.
High: Any barrier where the product of the incidence and severity scores is 6 or greater, and less than 9 (excluding any barrier with a severity of 4 or 5).
Medium: Any barrier where the product of the incidence and severity scores is 3 or greater, and less than 6 (excluding any barrier with a severity of 4 or 5).
Low: Any barrier where the product of the incidence and severity scores is less than 3.
None: Any barrier that has an incidence score of 0 (regardless of the severity score).

6. Comments: Section for comments by the accessibility reviewer.

The hope is that these six columns combined will provide those who are responsible for ensuring the accessibility of a website with a useful tool that will allow them to easily determine how often a particular barrier to accessibility occurs, how serious the barrier is, and which barriers should be given the highest priority for remediation.

Proposed barriers

Deciding on the number and nature of issues to include in the list of potential accessibility barriers is a juggling act. It requires balancing the need for a list that comprehensively addresses every possible barrier with the desire to have a list that is not so long that it becomes off-putting and in a sense a barrier to its very use.

I initially wanted to suggest a list that contained no more than 20 items, but this turned out to be just not possible. After some deliberation I ended up with the following 26 suggested common barriers to accessibility, but these are just my opinions and it would be great to get the opinions of others.

IMAGES & COLOUR

  1. Images without appropriate text alternatives (alt text).
  2. Complex images or graphs without equivalent text alternative.
  3. Use of background (CSS) images for informative/functional content without an accessible text alternative.
  4. Use of CAPTCHA without describing the purpose and/or providing alternatives for different disabilities.
  5. Use of colour as the only way of conveying information or functionality.
  6. Insufficient colour contrast between foreground (text) and background.

STRUCTURE & NAVIGATION

  1. Failure to use appropriate mark-up (e.g. h1-h6) for headings and sub-headings to convey the structure of the document.
  2. Poor use of layout tables.
  3. Text size cannot be increased, or resizing text causes a loss of content or functionality.
  4. Unable to access and/or operate all page content with the keyboard.
  5. Purpose/destination of links is not clear.
  6. Unable to visually identify when a page component receives focus via a keyboard action.

VIDEO & AUDIO

  1. Pre-recorded audio-only or video-only material without an accessible alternative that presents equivalent information.
  2. Pre-recorded synchronised media (visual and audio content) without captions for the audio content.
  3. Pre-recorded synchronised media (visual and audio content) without an accessible alternative for the video content.
  4. Pre-recorded synchronised media (visual and audio content) without sign language interpretation for the audio content.
  5. Unable to stop or control audio content that plays automatically.

FORMS

  1. Unable to programmatically identify form inputs (e.g. through use of explicitly associated labels or title attributes).
  2. Mandatory form fields are not easily identified.
  3. Insufficient time to complete a form and failure to notify time limits.
  4. When an error is made completing a form, users are not able to easily identify and locate the error, and adequate suggestions for correcting it are not provided.

DATA TABLES

  1. Difficult to identify data table aim or purpose (e.g. fails to use caption and/or summary).
  2. Unable to programmatically associate data cells with relevant row and column headers (e.g. fails to use TH and/or id and headers).

UNDERSTANDABLE

  1. Page headings, sub-headings, form labels and instructions are not clear and are difficult to understand.
  2. No explanation or definition is provided for unusual words and abbreviations.
  3. Failure to use language that is appropriate for the reading-level of the intended audience.

The attached Access Barriers Scores Excel file contains an ABS checklist with six columns. The checklist is provided as an Excel file so that it will be easy for others to add and remove barriers as they wish. The references used in this example are WCAG 2.0 Success Criteria, but they could be replaced with another standard.

The ABS Excel file should automatically generate the results for the Remediation Priority column based on what is entered into the Incidence and Severity columns.
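For those who prefer to see the rules as code rather than as a spreadsheet, this is a sketch of the Remediation Priority logic as I have described it above; the Excel file remains the working version, and I have assumed that an Incidence of 0 (“None”) takes precedence over a Severity of 5:

    <script>
      function remediationPriority(incidence, severity) {
        if (incidence === 0) { return "None"; }      // no occurrences at all
        if (severity >= 5)   { return "Critical"; }  // severity 5, any incidence
        if (severity >= 4)   { return "Very High"; } // severity 4, any incidence
        var product = incidence * severity;
        if (product >= 9) { return "Very High"; }
        if (product >= 6) { return "High"; }
        if (product >= 3) { return "Medium"; }
        return "Low";
      }
      // From the worked example later in this article:
      // remediationPriority(4, 5)   -> "Critical"  (CAPTCHA)
      // remediationPriority(3, 2.5) -> "High"      (colour contrast)
    </script>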

Questions of subjectivity

Ideally, any process which aims to determine whether a guideline or criterion has been complied with should be as objective and repeatable as possible, and this is even more important when the outcome of a court case may rest on the results. However, in spite of the best efforts of the W3C Web Accessibility Initiative, it is often not possible to obtain completely objective and repeatable results when it comes to determining whether something is accessible, or whether a WCAG Success Criterion has been complied with. Many times, evaluators need to make subjective (human) decisions: for example, should an image have a null alt or a text alternative, or is the text alternative a satisfactory equivalent for the image?

Clearly, the ABS system I have outlined raises questions of subjectivity. At the most basic level, deciding on which accessibility barriers to include is subjective. When it comes to using the checklist, deciding the incidence score is also likely to be subjective to some extent, notwithstanding the suggested percentage of occurrences for allocating the score as outlined earlier.

The greatest area of subjectivity, however, is probably associated with allocating a severity score. Ultimately, determining the likely severity of any particular barrier will be a human judgement, and as such is always liable to be influenced by the abilities, experiences, knowledge and foibles of the person making the decision. For example, if we take just three potential barriers that all relate to vision (text alternatives for images, colour contrast, and focus visible), the severity score given to each may vary greatly depending on your starting point. If you are solely concerned with the ability of screen reader users to use the web, the failure to include text alternatives is a major potential barrier, whereas contrast ratio and focus visible are not barriers at all. On the other hand, if your concern relates primarily to diminished colour vision, contrast ratio and focus visible will be more important than text alternatives. And for all web users, apart from those who are unable to perceive content visually, a failure to make focus visible is likely to be a significant barrier.

The subjective nature of determining the severity of an accessibility barrier is one of the reasons why I believe it is important for anyone using the suggested ABS system (or any other process of accessibility evaluation) to have some knowledge of accessibility and assistive technologies. I provide the following as a general indication of how I would allocate severity scores, while recognising some issues that I might describe as ‘very minor’ could potentially prevent someone from accessing or using a page. As mentioned, these are subjective judgements and I know others may not agree, and in some cases strongly disagree, so I would very much like to hear what you think.

Severity score examples:

1. Very minor inconvenience: Not likely to prevent anyone from accessing content and is not likely to reduce the ability of people to use a page. For example:

  • Failure to identify sub-sub-sub headings with H4 (but all other headings are appropriate).
  • Images that should be ignored by screen readers have an alt that is not a null alt (e.g. alt=”line” or alt=”line.jpg”).

2. Minor inconvenience: Not likely to prevent anyone from accessing content, but could affect the ability of some people to use a page. For example:

  • Failure to identify sub-headings with H# (but main heading(s) use H1).
  • Colour contrast ratio for normal-size incidental text (i.e. not important for understanding or functionality) is between 4.0:1 and the recommended minimum ratio of 4.5:1.
  • Colour contrast ratio for large-scale text is between 2.7:1 and 3.0:1.
  • Link text alone is not meaningful (but the destination can be determined from context).
  • Decorative and other images, which could be ignored by screen readers, have no alt attributes.
  • Non-essential form inputs (without title attributes) which use the label element but not the ‘for’ attribute.
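Since several of these examples turn on exactly how a form input is labelled, a quick sketch of the distinctions (field name invented):

    <!-- Explicit association: the most reliable option -->
    <label for="postcode">Postcode</label>
    <input type="text" id="postcode" name="postcode">

    <!-- Label element present, but no 'for' attribute: support
         by assistive technologies is less dependable -->
    <label>Postcode</label>
    <input type="text" name="postcode">

    <!-- No label at all: a title attribute is a fallback -->
    <input type="text" name="postcode" title="Postcode">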

3. Average inconvenience: Not likely to prevent anyone from accessing content, but will reduce the ability of people to use a page. For example:

  • Complete failure to use header elements.
  • All images have alt attributes, but text alternatives for content images (not functional images) are inconsistent.
  • Non-essential forms with descriptive labels, but the label element is not used and there are no input title attributes.
  • Link text which is not meaningful (e.g. more) and where it is not possible to programmatically determine the meaning from the context.

4. Major inconvenience: May prevent some people from accessing or using page content. For example:

  • Important form inputs without title attributes or explicitly associated labels.
  • Content images (which should be presented by screen readers) without alt attributes and adequate text alternatives.
  • Colour contrast ratio of normal-size page text is between 2.5:1 and 3.2:1.
  • Colour contrast ratio of large-scale text is between 1.8:1 and 2.2:1.

5. Extreme inconvenience: Will prevent access to sections of the site or the ability to perform required functions. For example:

  • CAPTCHA without any alternative.
  • Functional images (e.g. navigation items, buttons) without text alternatives.
  • Significant functional components that are mouse-dependent.
  • Login form inputs that cannot be programmatically identified.
  • Data table mark-up that does not allow data cells to be programmatically associated with required column and/or row headings.
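For the last of these, the difference between data table mark-up that can and cannot be programmatically associated looks roughly like this (invented data):

    <table>
      <caption>Rent paid by quarter</caption>
      <tr>
        <td></td>
        <th id="q1">Q1</th>
        <th id="q2">Q2</th>
      </tr>
      <tr>
        <th id="unit1">Unit 1</th>
        <!-- headers attributes tie each cell to its row and column headers -->
        <td headers="unit1 q1">$2,400</td>
        <td headers="unit1 q2">$2,460</td>
      </tr>
    </table>

A layout that presents the same figures with td elements only gives a screen reader nothing to associate the numbers with.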

While deciding the individual scores for each barrier will involve some subjective decisions, I hope that using two scores (Incidence and Severity) in the ABS system will help iron out some of the subjective differences between evaluators.

ABS Process

As previously indicated, the aim of this article is to suggest a system such as the proposed ABS, which could help experienced accessibility evaluators indicate the relative severity of accessibility issues in web content. The idea is that the ABS would be used in conjunction with established processes for determining the level of compliance with required accessibility guidelines or criteria. It is not intended to be a replacement for a comprehensive program of user-testing.

A typical checklist-style evaluation requires the accessibility evaluator to consider the content of a web page(s) with the aim of determining the extent of compliance with required guidelines or criteria. The ABS process suggests that, while undertaking the compliance evaluation, the evaluator approaches the content with an awareness of the likely problems people with different needs and limitations may experience. In this regard, the evaluator “walks through” the content, replicating as much as possible the behaviour of people with different limitations: for example, using the keyboard instead of the mouse, turning off images, and increasing the size of text on the page. They also use a variety of tools to highlight accessibility-related page components and APIs, and use the page with a screen reader such as JAWS or NVDA.

When a potential barrier is identified (for example, images without text alternatives), the evaluator estimates the percentage of times that page component is used in a way that will cause an accessibility problem (i.e. what percentage of images have missing alts) and the likely impact the barrier will have for those susceptible to it (e.g. are the missing alts essential for a screen reader user to understand or use the site content).

When the Incidence and Severity scores are entered into the attached Excel worksheet, a Remediation Priority rating is generated from the entered scores. The Remediation Priority rating aims to provide an indication of how significant a potential accessibility barrier, and by association the related failures to comply with designated accessibility guidelines, might be. Combined, the Incidence, Severity and Remediation Priority results for each identified access barrier could help those responsible for the accessibility of a website to target their efforts more effectively.

Example

Imagine a simple page that includes the following content:

  • A CAPTCHA without any alternative modality at all, which is essential for progressing to the next page (the only CAPTCHA used (100%) does not provide an alternative so the Incidence score is 4).
  • Five images: three content images have good alts; one content image, which does not relate to navigation or functionality, has no alt; and for the final (decorative) image alt=”line.jpg” (five images, two (40%) with accessibility issues so the Incidence score is 2).
  • The ‘creative’ use of panels of background colours behind sections of the page means that about 60-70% of the content text has contrast ratios of between 3.5:1 and 4.5:1 (Incidence score is 3, but a judgement was made to allocate a Severity score of 2.5).
  • The page contains 5 links. With 4 links it is possible to determine the link purpose from either the link text or an adjacent sentence, but one link says “details” and it is not possible to determine the meaning from the context (five links, but with one (20%) it is not possible to determine the purpose so the Incidence score is 1).
  • A main page heading with H1, three sub-headings with H2, but there is also a sub-sub heading that is not contained in a header element (i.e. no H3) (five headings, but one (20%) heading fails to use H3 so the Incidence score is 1).
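Purely for illustration, the problem areas of such a page might look something like this (all names and content invented):

    <h1>Acme Housing</h1>                    <!-- main heading: fine -->
    <h2>Our services</h2>                    <!-- sub-heading: fine -->
    <p><strong>Emergency help</strong></p>   <!-- sub-sub heading, no h3 -->

    <img src="office.jpg" alt="Our Smith Street office">  <!-- good alt -->
    <img src="staff.jpg">                                 <!-- content image, no alt -->
    <img src="line.jpg" alt="line.jpg">                   <!-- decorative, should be alt="" -->

    <a href="fees.html">details</a>          <!-- purpose unclear out of context -->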

I envisage the proposed ABS being used to rate these issues in the following way:

Barrier Description | Reference (WCAG 2.0) | Incidence (range 0-4) | Severity (range 1-5) | Remediation priority
Use of CAPTCHA without describing the purpose and/or providing alternatives for different disabilities | 1.1.1 | 4 | 5 | Critical
Images without appropriate text alternatives | 1.1.1 | 2 | 4 | Very High
Insufficient colour contrast between foreground (text) and background | 1.4.3 | 3 | 2.5 | High
Purpose or destination of link is not clear | 2.4.4, 2.4.9 | 1 | 3 | Medium
Failure to use mark-up for headings that conveys the document structure | 1.3.1 | 1 | 2 | Low

In this very simple example, I feel the final Remediation Priority scores provide a reasonable indication of the likely impact of these failures to comply with specific WCAG 2.0 Success Criteria. Clearly the failure to provide an alternative for the CAPTCHA is the most serious issue and is likely to pose the greatest barrier, even though there is only one CAPTCHA. At the other end of the scale, the failure to use a header element for just one sub-sub heading, while a non-compliance issue, is not likely to pose a significant barrier to anyone.

The scores for the remaining three items, image alternatives, links and colour contrast are interesting, in part because failure to provide non-text content and link purpose (in context) are Level A issues, whereas insufficient colour contrast is a Level AA issue.

The failure to provide a text alternative for one content image that is not required for navigation or function within the site, and the failure to use a null alt for a decorative image, while not complying with Success Criterion 1.1.1, are not likely to pose a significant barrier to many users who are unable to perceive images. Similarly, an inability to programmatically determine the purpose of one link out of five, while an irritant for some users, is not likely to prevent anyone from using the page/site.

On the other hand, even though the contrast ratio for paragraph text is not a lot lower than what is required by Success Criterion 1.4.3, the fact that it affects 60-70% of the text on the page means it could be a persistent problem for some users, and it is likely to present a greater overall barrier than the failures to comply with either 1.1.1 or 2.4.4.
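As an aside for anyone wanting to check figures like these: WCAG 2.0 defines the contrast ratio between two colours as

    (L1 + 0.05) / (L2 + 0.05)

where L1 and L2 are the relative luminances of the lighter and darker colour respectively, giving values from 1:1 (identical colours) to 21:1 (black on white). Grey (#767676) text on a white background, for example, comes out at about 4.5:1, right on the Success Criterion 1.4.3 minimum for normal-size text. In practice, tools like the Colour Contrast Analyser mentioned earlier do this arithmetic for you.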

Conclusion

If you got this far, many thanks for persevering, and apologies for the length: this article turned out to be much longer than I expected when I started. The Access Barrier Score system I’ve outlined is a suggested technique for helping to address what I believe is a worrying tendency to judge the accessibility of web content solely on the basis of whether or not (yes or no) it complies with a set of guidelines or criteria such as WCAG 2.0.

No doubt, the suggested ABS system has some rough edges. My hope is that something like this could be used by experienced accessibility evaluators, in conjunction with recognised accessibility guidelines like WCAG 2.0, to help the owners and developers of sites identify and prioritize accessibility issues. I also believe the ABS remediation results could help those responsible for a suite of sites (e.g. government agencies, educational institutions, large corporations) to set accessibility targets and provide a standardised method of monitoring the progress of those sites as they move towards meeting those targets.

PS: I had a few problems entering this into WordPress, so I hope it has remained reasonably accessible.

Measuring accessibility

There has been much discussion, and some argument, about how to determine the accessibility of websites. Unfortunately, this is often polarised around two simplistic choices: a compliance/conformance-based approach that usually involves a checklist of criteria; or some form of user testing by people who have different disabilities and/or who rely on different assistive technologies. Both approaches have their strengths and limitations, and neither on its own can provide a reliable declaration about the accessibility of a site.

For the individual web user, the accessibility of a site depends on many factors and how they interrelate. The obvious starting point is the personal barriers to web access that the user faces; these might, for example, be technological or environmental limitations, a physical disability that necessitates the use of an assistive device, or cognitive, learning or language problems that make the content of a page hard to understand.

Next, we need to consider the actual quality of the website code, as well as the ability of the user-agents (such as browsers and assistive technologies) to present the content of the page in a way that the user can perceive. And finally, how skilful the user is in using the browser and/or assistive technology they rely on to access the web. With regard to this last point, it is often erroneously assumed that most people know how to use the accessibility features of a browser or computer operating system, and that all assistive technology users are expert users of their technology.

Guidelines

Over the years, the W3C Web Accessibility Initiative (WAI) has developed sets of guidelines to help codify what is required to produce and render accessible web content, including:

  • Web Content Accessibility Guidelines (WCAG)
  • Authoring Tool Accessibility Guidelines (ATAG)
  • User Agent Accessibility Guidelines (UAAG)

Many web developers are aware of WCAG and some strive to produce content that complies with these guidelines. However, few are aware of ATAG and, more importantly, UAAG. Since conformance with UAAG is largely beyond the control of developers, even well-meaning and very dedicated developers cannot guarantee the content they produce will be fully accessible to all.

User-agents like screen readers rely on accessibility APIs, for example Microsoft Active Accessibility (MSAA), to expose objects, roles and states within the content: for example, to identify a checkbox and whether or not it has been checked. However, this only works when the accessibility API is recognised by the user-agent(s), and this is not always the case.
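To make the role and state idea concrete, compare a native checkbox with the kind of scripted control that depends on WAI-ARIA to expose the same information (a sketch, not production code):

    <!-- A native checkbox: role and checked state are exposed
         to the accessibility API automatically -->
    <input type="checkbox" id="subscribe" checked>
    <label for="subscribe">Subscribe</label>

    <!-- A custom scripted control: role and state must be supplied
         via WAI-ARIA (script to toggle aria-checked not shown) -->
    <div role="checkbox" aria-checked="true" tabindex="0">Subscribe</div>

If the browser does not map that ARIA mark-up through to MSAA (or another accessibility API), or the screen reader does not act on it, the user is none the wiser.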

This problem has been compounded by the rapid advances in web content technologies and techniques over the last few years, and the relative slowness of user-agents in keeping up with these advances. Consider for a moment the increasing adoption of WAI-ARIA (Accessible Rich Internet Applications) and HTML5, both of which offer some exciting accessibility features. The dedicated, and some might say valiant, work by Steve Faulkner of the Paciello Group over the years has highlighted the variable support by browsers and screen readers for some of these advanced features. For example:

  • ARIA Role Support (March 2009) outlines how different browsers expose ARIA roles via MSAA.
  • HTML5 Browser Support (July 2011) contains a regularly updated table that provides a good indication of how well new HTML5 features are accessibility supported by browsers.
  • JAWS Support for ARIA (October 2010) documents how JAWS (10+) supports ARIA.

Evaluation tools

There is a wide range of free and not-so-free tools that can help determine compliance with accessibility guidelines for individual web pages or collections of pages. For example, and in no particular order of preference:

  • Web Accessibility Toolbar (WAT) contains a wide range of tools to aid manual examination of web pages for a variety of aspects of accessibility.
  • WebAIM WAVE is an online tool for use with single pages. It is quick to use and provides results which are easy to understand.
  • HiSoftware Compliance Sheriff contains an Accessibility Module that enables automated monitoring for site-wide compliance.
  • Deque Worldspace FireEyes is for accessibility compliance testing of static and dynamic web content. An Enterprise version is also available.
  • Total Validator is an online tool for validating pages against accessibility guidelines; a Pro version is available for site-wide testing.

Although all these tools are useful and I use some of them regularly, in my opinion none can be relied upon alone to reliably indicate either the degree of compliance with a specific set of guidelines or the overall accessibility of web page(s). The results obtained by automated testing tools like these need to be interpreted and confirmed by human evaluators.

Testing times

The risk of litigation, combined with political and moral pressure, has focussed increasing attention on the importance of ensuring websites are accessible. As a result, site owners, designers and developers now face the task of deciding the most efficient and reliable way of evaluating the accessibility of their sites.

As mentioned earlier, there are two general approaches for determining website accessibility: conformance review and user-testing. Ideally, any thorough accessibility evaluation should involve both approaches, but the constraints of budgets and time often mean that this is not possible. One feature common to both approaches however, is the importance of using an experienced accessibility evaluator who has an understanding of the potential barriers that people with disabilities might face and how these can be addressed. I now want to briefly consider the pros and cons of the two approaches.

Conformance review

Conformance reviews are the most common way of assessing the accessibility of websites. In general, this involves someone with expert knowledge checking whether the site as a whole, or more commonly a selection of pages, comply with a predetermined checklist of criteria such as WCAG 2.0. The assessment process is sometimes also referred to as a ‘manual inspection’ or ‘expert review’.

The selection of pages to evaluate is very important. Giorgio Brajnik and others from Universita di Udine, Italy, reported in the paper “Effects of Sampling Methods on Web Accessibility Evaluations” (PDF) on a study showing that the use of predefined pages (e.g. home, contact, site map etc.) may result in an inaccurate compliance result for 38% of checkpoints.

PROS

  • Able to identify a large and diverse range of non-compliance issues that might cause problems for a variety of potential end users and/or technologies.
  • The checklist items often provide a clear indication of what is required to rectify non-compliance.
  • Easy to incorporate into the different phases of the site development process, which can be particularly useful in an agile or iterative development environment.
  • Relatively quick and easy to implement when compared to user testing.

CONS

  • Totally dependent on the quality of the checklists or guidelines used. In my opinion, however, WCAG 2 is pretty good in this regard.
  • Tendency to view accessibility just from the perspective of whether or not a site passes or fails a number of checklist items, which may fail to adequately consider how easily or effectively people with disabilities might be able to use the site.
  • Particular difficulty with issues that blur the boundary between usability and accessibility, for example site structure (e.g. is it shallow or deep), which can be particularly relevant to older web users or those with cognitive disorders.
  • Does not involve real users doing real tasks in real time.

User testing

User testing usually involves a group of users with different disabilities, and different levels of skill in using the internet and their required assistive technology, undertaking a series of typical website tasks. The actions of the test participants are observed (and recorded) by the evaluator with the aim of identifying the accessibility barriers that may be encountered.

Although task-based user testing for usability and accessibility share some common techniques, there are significant differences. For example, context is likely to be more important when evaluating accessibility, since web content often has to go through transformation processes in order to be accessible. The output of these transformations (e.g. text alternatives for images, text to speech via a screen reader, speech to text as captions, magnification, extraction of links and headings on the page etc.) has to be rendered in a way that retains the meaning and integrity of the original content while also meeting the needs of a diverse user base. In the article “Beyond Conformance” (PDF), Giorgio Brajnik from Universita di Udine, Italy, explores the differences between usability and accessibility and why context is crucial when evaluating accessibility:

“Context is more crucial for accessibility than it is for usability. Besides being dependent on users’ experiences, goals and physical environment, accessibility of a website depends also on the platform that’s being used. It is the engine of a transformation process that is not under the control of the developer. In fact, accessibility of digital media requires a number of transformations to occur automatically, concerning the expression of the content of the website”

PROS

  • Evaluator is able to observe people encountering (and hopefully overcoming) real usability/accessibility problems in real time.
  • Able to accurately identify problems that actually prevent specific groups of people from accessing web content.
  • Test participants are able to rate the severity of the problems they encounter and identify those that are likely to be catastrophic.
  • Likely to generate results that are highly valid for other users who have the same disabilities as the test participants.

CONS

  • Difficult to recruit test participants with different disabilities.
  • Hard to obtain a test cohort that is large enough to canvass the range of assistive technologies and participant skill levels in using those technologies.
  • Depends greatly on developing test scenarios (scripts) that are appropriate for test participants with different requirements.
  • Difficult to correlate and prioritise the problems encountered by a diverse group of people with different requirements who use different technologies.
  • The testing process is expensive and time consuming.

How inaccessible is it?

Many governments and organisations now require websites to be accessible. In most cases, compliance with this requirement is determined by conformance, either formally or informally, with predetermined guidelines/rules such as WCAG 2.0 or the US Section 508. In Australia, for example, the Australian Government Information Management Office (AGIMO) requires all government agencies to comply with WCAG 2 at Level AA by the end of 2014, and the Australian Human Rights Commission has indicated that it will use WCAG 2 when considering the validity of a complaint made under the Commonwealth Disability Discrimination Act. I looked in more detail at the adoption of WCAG 2 by various countries in an earlier post, “Adopting WCAG 2” (June 2009).

WCAG 2 provides for three levels of conformance (Level A, Level AA and Level AAA), with the recommendation not to require Level AAA conformance “as a general policy for entire sites because it is not possible to satisfy all Level AAA Success Criteria for some content”. As a result, most jurisdictions that use WCAG 2 require websites to conform at either Level A or Level AA.

Many regulators appear to adopt a pass or fail approach to the use of Guidelines, Checkpoints or Success Criteria, without factoring in the potential severity of non-compliance with an individual criterion or comparing the likely impact of non-compliance with different criteria. To take an extreme example: a site that fails to provide text alternatives for any of its images fails the Level A Success Criterion 1.1.1, as does a site with just one or two missing image alts on, say, one page.
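
To see why incidence matters, consider the following minimal Python sketch (standard library only; the URLs are placeholders, and this is an illustration of the idea rather than any regulator's actual assessment method). It counts how many images on a page lack an alt attribute: under a binary reading of Success Criterion 1.1.1, a page with one missing alt and a page with fifty both simply "fail", even though the incidence figures tell very different stories.

```python
# Minimal sketch: measure the *incidence* of missing text alternatives,
# rather than just recording a binary pass/fail for SC 1.1.1.
# Standard library only; the page URLs below are placeholders.
from html.parser import HTMLParser
import urllib.request

class MissingAltCounter(HTMLParser):
    """Counts <img> elements with and without an alt attribute."""
    def __init__(self):
        super().__init__()
        self.total_images = 0
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total_images += 1
            # alt="" is a valid alternative for decorative images;
            # only a completely absent alt attribute is a failure.
            if "alt" not in dict(attrs):
                self.missing_alt += 1

def alt_incidence(url):
    """Return (missing, total) counts of img alt attributes for one page."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    counter = MissingAltCounter()
    counter.feed(html)
    return counter.missing_alt, counter.total_images

# Both pages may "fail" SC 1.1.1, but the severity can be very different.
for page in ["https://example.com/", "https://example.com/contact"]:
    missing, total = alt_incidence(page)
    print(f"{page}: {missing} of {total} images missing alt text")
```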

Is a site that fails to comply with two Success Criteria any less inaccessible than one that fails to comply with five? And what about a site that badly fails the Level AA Success Criterion 1.4.3 (e.g. a contrast ratio of just 1.5:1 for navigation items): should we consider this to be more or less accessible than another site that contains minor infringements of just a couple of Level A Success Criteria?

In a future article, I plan to look at how both the incidence and likely impact (severity) of accessibility barriers might be incorporated into the accessibility conformance review process.

‘Fluro’ Colours

My attention was recently drawn by Jenny Bruce to the relatively large number of sites that use bright ‘fluro’ background colours for navigation menu items and buttons. The combination of these ‘fluro’ background colours and white text often fails to meet the minimum colour contrast requirements of the Web Content Accessibility Guidelines 2.0, whereas when the text colour is black, the contrast ratio is acceptable.

Jenny also made the observation that it can be very difficult to convince people not to use white text against ‘fluro’ backgrounds, particularly since, “many ‘regular’ users – i.e. those without known vision colour contrast problems – say they find white text against these colours to have better contrast and/or to be more aesthetically pleasing than black.”

This all got me thinking and so I decided to look a little more closely at these colour combinations under different conditions. I started by preparing a swatch which combines white (#ffffff) or black (#000000) text with the following background ‘fluro’ colours:
  • Orange: #FF6600
  • Green: #6E9800
  • Pink: #FF0084
  • Blue: #529FD6
  • Purple: #9966FF

As we know, WCAG 1.0 and WCAG 2.0 have different colour contrast requirements and use different methods for determining compliance with those requirements. As a general rule, however, designers appear to find the WCAG 2.0 requirements less constraining:

WCAG 1.0 requirement

Checkpoint 2.2: “Ensure that foreground and background colour combinations provide sufficient contrast when viewed by someone having colour deficits or when viewed on a black and white screen. [Priority 2 for images, Priority 3 for text].”

Two formulas are provided to help determine whether the difference in colour brightness, and the colour difference, between the foreground (text) colour and the background colour are sufficient. When using these formulas with WCAG 1.0 Checkpoint 2.2, the W3C recommends that the difference in colour brightness be greater than 125 and the colour difference be greater than 500. The result is often presented in the format brightness difference/colour difference, for example 135/507.
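
For readers who want to experiment, here is a short Python sketch of those two formulas (as published in the W3C's suggested evaluation techniques); it reproduces the 119/408 result reported for white text on the ‘fluro’ orange background later in this post.

```python
# Sketch of the two formulas suggested for WCAG 1.0 Checkpoint 2.2:
# colour brightness difference (should exceed 125) and colour
# difference (should exceed 500). Colours are hex strings.

def rgb(hex_colour):
    """Convert '#RRGGBB' to an (R, G, B) tuple of 0-255 integers."""
    h = hex_colour.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def brightness(colour):
    """Perceived colour brightness: ((R*299) + (G*587) + (B*114)) / 1000."""
    r, g, b = rgb(colour)
    return (r * 299 + g * 587 + b * 114) / 1000

def colour_difference(c1, c2):
    """Sum of the per-channel differences between two colours."""
    return sum(abs(a - b) for a, b in zip(rgb(c1), rgb(c2)))

# White text on the 'fluro' orange background:
fg, bg = "#FFFFFF", "#FF6600"
b_diff = abs(brightness(fg) - brightness(bg))  # ~119 (fails the >125 test)
c_diff = colour_difference(fg, bg)             # 408 (fails the >500 test)
print(f"{round(b_diff)}/{c_diff}")             # prints "119/408"
```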

WCAG 2.0 requirement

WCAG 2.0 differs from WCAG 1.0 in that there are two Success Criteria relating to colour contrast, and several exemptions are identified. The following Success Criterion applies in most situations:

1.4.3 Contrast (Minimum): The visual presentation of text and images of text has a contrast ratio of at least 4.5:1, except for the following: (Level AA)

  • Large Text: Large-scale text and images of large-scale text have a contrast ratio of at least 3:1;
  • Incidental: Text or images of text that are part of an inactive user interface component, that are pure decoration, that are not visible to anyone, or that are part of a picture that contains significant other visual content, have no contrast requirement.
  • Logotypes: Text that is part of a logo or brand name has no minimum contrast requirement.

The WCAG 2.0 contrast ratios are determined with a relatively complicated algorithm based on the relative luminance of the foreground (text) and background colours. For most text the minimum contrast ratio is 4.5:1; for large-scale text (at least 18 point, or 14 point bold) the minimum ratio is 3.0:1.
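
The calculation is easier to follow in code. The following Python sketch implements the relative luminance and contrast ratio definitions from WCAG 2.0; running it over the ‘fluro’ palette reproduces the WAT results reported below (for example 2.9:1 for white on orange and 7.2:1 for black on orange).

```python
# Sketch of the WCAG 2.0 contrast ratio calculation, following the
# relative luminance definition in the guidelines.

def rgb(hex_colour):
    """Convert '#RRGGBB' to (R, G, B) fractions in the range 0-1."""
    h = hex_colour.lstrip("#")
    return tuple(int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))

def linearise(c):
    """Undo sRGB gamma, as specified in the WCAG 2.0 definition."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(colour):
    r, g, b = (linearise(c) for c in rgb(colour))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(c1, c2):
    """(L1 + 0.05) / (L2 + 0.05), with the lighter colour on top."""
    lighter, darker = sorted(
        (relative_luminance(c1), relative_luminance(c2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

fluro = {"orange": "#FF6600", "green": "#6E9800", "pink": "#FF0084",
         "blue": "#529FD6", "purple": "#9966FF"}
for name, bg in fluro.items():
    print(f"{name}: white {contrast_ratio('#FFFFFF', bg):.1f}:1, "
          f"black {contrast_ratio('#000000', bg):.1f}:1")
# orange: white 2.9:1, black 7.2:1 (and so on, matching the table below)
```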

Colour contrast tests

I tested the colour combinations with the WAT Colour Contrast Analyser (which can be downloaded from the Paciello Group site), using both the WCAG 1.0 and WCAG 2.0 algorithms. The results were:

Fluro colours with white and black text

Background   White text (WCAG 1 | WCAG 2)   Black text (WCAG 1 | WCAG 2)
Orange       119/408 | 2.9:1                136/357 | 7.2:1
Green        133/503 | 3.4:1                122/262 | 6.2:1
Pink         164/378 | 3.8:1                91/387 | 5.6:1
Blue         113/310 | 2.9:1                142/455 | 7.3:1
Purple       121/255 | 3.7:1                134/510 | 5.7:1

I feel several interesting observations can be made about these results:

  1. While all of the examples using white text fail to meet the WCAG 2.0 minimum contrast ratio of 4.5:1, all examples using black text exceed this requirement with the lowest score being 5.6:1.
  2. With WCAG 1.0 the results are less clear cut: only one white text combination (green, 133/503) and one black text combination (purple, 134/510) meet the minimum requirement of 125/500, with the other four black text examples failing it.
  3. The difference between the WCAG 1.0 and WCAG 2.0 results for the green and blue backgrounds is particularly interesting: white on green passes WCAG 1.0 but fails WCAG 2.0, while black on green and black on blue comfortably exceed the WCAG 2.0 requirement yet fail WCAG 1.0.

Not everyone has perfect colour vision

I was also interested to see how these colour combinations might be perceived under other conditions. The following tables give the results for these colour combinations when viewed in greyscale (converted with the WAT colour tool), and as they might be perceived by someone with deuteranopia (a form of red/green colour deficit). The deuteranope simulation was done using Vischeck.
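
For those who want to reproduce the greyscale step, the exact conversion the WAT colour tool applies is an assumption on my part, but a standard luma-weighted conversion (using the same 299/587/114 weights as the WCAG 1.0 brightness formula) gives values consistent with the greyscale figures in the table below:

```python
# Hedged sketch of a luma-weighted greyscale conversion. The WAT tool's
# exact method is assumed here, but these standard 299/587/114 weights
# give values consistent with the greyscale results reported below.

def rgb(hex_colour):
    h = hex_colour.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def to_greyscale(hex_colour):
    """Replace a colour with a neutral grey of the same perceived brightness."""
    r, g, b = rgb(hex_colour)
    grey = round((r * 299 + g * 587 + b * 114) / 1000)
    return f"#{grey:02X}{grey:02X}{grey:02X}"

print(to_greyscale("#FF6600"))  # "#888888" (i.e. 136,136,136) for 'fluro' orange
```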

Fluro – greyscale with white and black text

Background   White text (WCAG 1 | WCAG 2)   Black text (WCAG 1 | WCAG 2)
Orange       119/357 | 3.5:1                136/408 | 5.9:1
Green        133/399 | 4.3:1                122/366 | 4.9:1
Pink         164/492 | 6.8:1                91/273 | 3.1:1
Blue         113/339 | 3.3:1                142/426 | 6.4:1
Purple       120/360 | 3.6:1                134/405 | 5.8:1

Greyscale comments

  1. With WCAG 2, when the colours are converted to greyscale nearly all of the white text contrast ratios are higher than they were with the actual background colours, and in the case of white on pink the minimum requirement is greatly exceeded. With black text, however, the ratios are generally lower than they were with the actual background colours, and in the case of the pink background the required ratio is no longer met.
  2. With WCAG 1, none of the white or black text combinations meets the minimum requirement when converted to greyscale, although white on pink and black on blue are close.

Fluro – deuteranope simulation with white and black text

Background   White text (WCAG 1 | WCAG 2)   Black text (WCAG 1 | WCAG 2)
Orange       100/407 | 2.4:1                155/358 | 8.7:1
Green        131/460 | 3.8:1                124/305 | 5.5:1
Pink         103/323 | 2.8:1                152/442 | 7.4:1
Blue         110/284 | 3.1:1                145/481 | 6.8:1
Purple       119/284 | 3.1:1                136/481 | 6.5:1

Deuteranopia comments

  1. With WCAG 2, when deuteranopia is simulated none of the white text and background colour combinations comes close to meeting the required contrast ratio of 4.5:1; the greatest problems appear to be with the orange (2.4:1) and pink (2.8:1) backgrounds. With black text, however, the required contrast ratio is greatly exceeded for all the background colours. Interestingly, with black text the highest ratios are for the orange (8.7:1) and pink (7.4:1) backgrounds.
  2. With WCAG 1, the results for the deuteranopia simulation are similar to those obtained with the actual background colours, except that with both white and black text none of the results meets the minimum requirement, although white on green, black on blue and black on purple are close.

The attached PowerPoint slides contain screenshots of the colour swatches and how they appear in greyscale, and when deuteranopia and protanopia, the most common forms of impaired colour vision, are simulated.

In conclusion, the results obtained when testing these colour combinations with the WCAG 2 algorithm appear to be more consistent than those obtained with the WCAG 1 formulas, particularly when the colours are presented under different conditions. Also, people with deuteranopia are likely to experience significantly greater problems than the rest of the population when white text is combined with these ‘fluro’ colours. When black text is used, however, their ability to perceive the difference between the foreground text and these ‘fluro’ background colours is likely to be improved.