In the previous post in this series, I ended by arguing that current automated accessibility testing practices “take place at the wrong place and wrong time and are done by the wrong people,” but really this applies to all accessibility testing. Of course every organization is different, but my experience substantiates the statement quite well. The “…by the wrong people” part is especially true. The wrong people are QA staff.

While QA practices vary, one nearly universal trait among QA staff is that they lack any training in accessibility. They also often lack the technical skill needed to decipher the reports generated by automated tools. When you combine that inexperience in both accessibility and development, you’re left with significant growing pains when you thrust an automated testing tool at them. As I’ve said in previous posts, these users will trust the automated tool’s results implicitly. Regardless of the quality of the tool, this increases the opportunity for mistakes: there are always limits to what can be found definitively, some results will need interpretation, and some things are too subjective or too complex for an automated tool to catch at all.

Irrespective of tool quality, truly getting the most out of an automated web accessibility tool requires three things:

  • Technical knowledge of what is being tested
  • Knowledge and understanding of the tool itself
  • Knowledge of accessibility and how people with disabilities use the web

The first two points above apply to any tool of any kind. Merely owning a bunch of nice tools certainly hasn’t made me an expert woodworker. Instead, my significant investment in tools has allowed me to make the most of what little woodworking knowledge and skill I have. But if I had more knowledge and skill, those tools would be of even more benefit. Even the fact that I have been a do-it-yourselfer since I was a child, helping my dad around the house, only helps marginally when it comes to a specialized domain like fine woodworking.

The similar lack of knowledge on the part of QA staff is the primary reason why they’re the wrong users for automated testing tools – at least until they gain sufficient domain knowledge in development and accessibility. Unfortunately, learning-by-doing is probably a bad strategy in this case, due to the disruptive nature of the erroneous issue reports that will be generated along the way.

So who should be doing the testing? That depends on the type of testing being performed. Ultimately, everyone who contributes to the final user interface and content should take part:

  • Designers who create mockups should test their work before giving it to developers to implement
  • Developers should test their work before it is submitted to version control
  • Content authors should test their work before publishing
  • QA staff should run acceptance tests using assistive technologies
  • UX staff should do usability tests with people with disabilities

Every step is an opportunity to discover issues that had not been found before, but the closer the code gets to being experienced by a user, the less likely it is that the issues found will be fixed. Among the test opportunities listed above, developers’ testing of their own work is the most critical piece. QA staff should never have functional acceptance tests that fail due to an automatically-detectable accessibility issue. Usability test participants should never fail a task because of an automatically-detectable accessibility issue. It is entirely appropriate for developers to take on such testing of their own work.
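To make that concrete, here is a minimal sketch of the kind of check a developer could run before committing. It assumes pa11y, an open-source Node-based checker, and a local dev server on port 8000; the script name, URL, and options are illustrative, and Tenon or any comparable tool could sit in the same slot.

```js
// check-a11y.js -- an illustrative developer-run accessibility check.
// Assumes the open-source pa11y checker and a local dev server on port 8000.
const pa11y = require('pa11y');

pa11y('http://localhost:8000/', { standard: 'WCAG2AA' })
  .then((results) => {
    const errors = results.issues.filter((issue) => issue.type === 'error');
    errors.forEach((issue) => {
      console.error(`${issue.code}: ${issue.message} [${issue.selector}]`);
    });
    // A non-zero exit code lets a pre-commit hook or npm script stop the
    // commit before the issue ever reaches QA or a usability test.
    process.exit(errors.length > 0 ? 1 : 0);
  })
  .catch((err) => {
    console.error(err.message);
    process.exit(1);
  });
```

Wired into a pre-commit hook or an npm script, a check like this fails fast, in the developer’s own workflow, long before acceptance or usability testing.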

Furthering the accessibility of the Web requires a revolution in how accessibility testing is done

Right now we’re experiencing a revolution in the workflow of the modern web developer. More developers are beginning to automate some or all of their development processes, whether that means dotfiles, Sass/LESS preprocessing, or automated task runners like Grunt and Gulp. Automated task management isn’t the exception on the web; it is the rule, and it stems from the improvements in efficiency and quality I discussed in the first post in this series.

Of the top 24 trending projects on GitHub as of this writing:

  • 21 of them include automated unit testing
  • 18 of them use Grunt or Gulp for automated task management
  • 16 of them use jshint as part of their automated task management
  • 15 of them use Bower for package management
  • 15 of them use Travis (or at least provide Travis files)
  • 2 of them use Yeoman

The extent to which these automated toolsets are used varies pretty significantly. On smaller projects you tend to see file concatenation and minification, but the sky is the limit, as evidenced by this Gruntfile from Angular.js. The extensive amount of automated unit testing Angular does is pretty impressive as well.
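For readers who haven’t worked with a task runner, here is roughly what that looks like on a small project. This is a generic sketch, not taken from any of the repositories surveyed above; the paths and plugin choices are placeholders.

```js
// Gruntfile.js -- an illustrative sketch of the lint / concatenate / minify
// pattern common on smaller projects (paths and plugins are placeholders).
module.exports = function (grunt) {
  grunt.initConfig({
    jshint: {
      all: ['Gruntfile.js', 'src/**/*.js']
    },
    concat: {
      dist: { src: ['src/**/*.js'], dest: 'dist/app.js' }
    },
    uglify: {
      dist: { src: 'dist/app.js', dest: 'dist/app.min.js' }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // One command, the same checks every time: the efficiency and quality
  // gains that make automation the rule rather than the exception.
  grunt.registerTask('default', ['jshint', 'concat', 'uglify']);
};
```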

As I and others have often contended, part of the problem with accessibility on the web is that it is seen as a process distinctly separate from everything else in development. Every task that contributes to the final product impacts people’s ability to use the system. Accessibility is usability for persons with disabilities. It is an aspect of the overall quality of the system, and a very large part of what directly impacts accessibility is purely technical in nature. The apparent promise made by automated accessibility testing tool vendors is that they can find these technical failings. Historically, however, they’ve harmed their own credibility by being prone to the false positives I discussed in the second post in this series. Finding technical problems is one thing. Flagging things that aren’t problems is another.

Automated accessibility testing can be done effectively, efficiently, and accurately, with high benefit to the organization. Doing so requires two things:

  • It must be performed by the right people at the right time; that is, by developers as part of their normal automated processes (a sketch of what this could look like follows this list)
  • The tools must stop generating inaccurate results. Yes, this may mean reducing the overall number of things we test for
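Continuing the generic Gruntfile sketch from earlier, an accessibility check can be registered as just another task in the build. The pa11y call and local URL below are assumptions made for illustration; the same slot could call an HTTP service such as Tenon or any other checker the team trusts.

```js
// Added inside the Gruntfile sketched earlier (i.e. within
// module.exports = function (grunt) { ... }). pa11y and the local URL are
// illustrative stand-ins for any automated checker.
grunt.registerTask('a11y', 'Automated accessibility check', function () {
  const done = this.async();
  const pa11y = require('pa11y');

  pa11y('http://localhost:8000/', { standard: 'WCAG2AA' })
    .then((results) => {
      const errors = results.issues.filter((issue) => issue.type === 'error');
      errors.forEach((issue) => grunt.log.error(`${issue.code}: ${issue.message}`));
      // Failing the task fails the build, so the developer who introduced
      // the issue is the one who fixes it, right when it was introduced.
      done(errors.length === 0);
    })
    .catch(done);
});

// ...and run it with everything else on every build:
grunt.registerTask('default', ['jshint', 'concat', 'uglify', 'a11y']);
```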

It may seem counterintuitive to say that we should do less testing with automated tools. The thing is, the state of web accessibility in general is rather abysmal. As I get ready for the official release of Tenon, I’ve been testing the homepages of the most popular sites listed in Alexa. As of this writing, Tenon has tested 84,956 pages and logged 1,855,271 issues. Among the most interesting findings:

  • 27% of issues relate to the use of deprecated, presentational elements or attributes
  • 19% of issues are missing alt attributes for images
  • 10% of issues are data tables with no headers
  • 5% of issues relate to binding events to non-focusable elements
  • 2% of issues relate to blank link text (likely through the use of CSS sprites for the link)

85,000 tested pages is a statistically significant sample, with a narrow margin of error on the figures above. In fact, it is more than enough.

There are an average of 54 definitively testable issues per page on the web. These are all development-related issues that could be caught by developers if they tested their work prior to deployment. Developers need a toolset that lets them avoid these high-impact issues up front. This is the promise of Tenon.
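To make “definitively testable” concrete: whether an image has an alt attribute, one of the most common issues reported above, is a yes-or-no question that needs no human judgment. A tiny illustrative check, runnable in a browser console or any DOM environment such as jsdom, might look like this:

```js
// An <img> either has an alt attribute or it does not -- a binary,
// machine-decidable check. (Whether the alt text is *good* still needs a human.)
document.querySelectorAll('img:not([alt])').forEach((img) => {
  console.warn('Missing alt attribute:', img.src || img.outerHTML);
});
```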

In Part 4 I’ll talk about our need to move away from standalone, monolithic toolsets and toward integrating more closely with developers’ workflows.

My company, AFixt, offers full accessibility services, including testing, training, consulting, and remediation. If you need help, get in touch with me now!