Ask any ten accessibility people how they test for accessibility and you’re bound to get a different answer from each one. Some people test with JAWS or other assistive technologies and, if they can use the site, they “pass” it. Some people subject the site to a series of ad hoc tests for things they deem important. Some people use a checklist. Others use complicated methodologies to ensure complete and thorough coverage. On some level, each method has its merits. In the grand scheme, so long as we’re making progress toward a more accessible Web, I think we’re doing well. I do, however, have very strong feelings that how people test could be improved in ways that make the testing more efficient, less intrusive on projects, and more impactful for end users. One thing we should rethink is the role and effective use of automated testing.
Old & Busted Opinions on Automated Web Accessibility Testing
Typically, approaches to automated accessibility testing fall under three categories:
- “Automated testing sucks and I won’t do it”
- “Automated testing rules and that’s all I need to do”
- “Automated testing is a valuable component to my all-inclusive audit methodology”
Those who feel automated testing sucks are partly correct. In the early days of web accessibility, automated tools were dumb. They were very prone to false positives and, as the web has evolved, such first-generation tools have proven incapable of handling complicated workflows and interfaces that make extensive use of client-side scripting. I’ll discuss this more in a future blog post, but the Cliff’s Notes version is that if your tool isn’t testing the DOM, your tool isn’t testing the right thing and you should find a new one.
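To make the DOM point concrete, here’s a minimal sketch that uses the open-source axe-core engine as a stand-in for any DOM-aware tool (the engine choice and the script-injected image are my own illustration, not an endorsement of a particular product). Because the check runs against the live document, it sees markup that client-side scripts added after load – exactly the markup a scanner that only reads the source HTML will never see.

```ts
import axe from 'axe-core';

// Imagine a widget that exists only after client-side scripting runs:
// this image never appears in the raw HTML source.
const img = document.createElement('img');
img.src = '/chart.png'; // no alt attribute -- an error visible only in the DOM
document.body.appendChild(img);

// A DOM-based check evaluates the page as it actually exists right now.
axe.run(document).then(results => {
  for (const violation of results.violations) {
    console.log(violation.id, violation.nodes.map(node => node.target));
  }
});
```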
Automated testing definitely has a place in any organization that does web development. The efficiency afforded by the use of an automated accessibility testing tool cannot be matched. Automated testing becomes more vital the more front-end development you’re doing and the more content you’re creating. There comes a point at which the accuracy of human effort cannot overcome the efficiency of a quality testing tool. But that doesn’t mean you can stop at automated testing. Because so much about accessibility is subjective, no automated testing tool in the world can provide you with enough data to claim your system is accessible. Machine testing is valuable but not enough.
The sensible response to the above seems to be to combine automated testing with other testing methods such as manual code review, assistive technology testing, use case testing, or even usability testing. In fact, this has been my own suggestion for a very long time. As an accessibility consultant, it is hard to resist the urge to deliver a client a large, impressive report filled with extensive findings about how messed up their system is. A former boss of mine called this “plop factor”: how impressive the “plop” sound is when the hard copy report lands on the client’s desk.
The New Hotness: Do Automatic Testing First
I’d like to make an alternative suggestion. Skip the manual testing. Skip the use case testing. Skip the usability testing. Do automatic testing first.
When auditing a site, do a first round with automatic testing only
This is possibly the most radical departure from the conventional line of thinking most people have when it comes to automatic testing. As I said, some people don’t do it at all and others use it as part of a much larger effort. I believe that instead, we should do automatic testing as a first round of testing. Most of my colleagues and peers are probably hyperventilating right now – especially those envisioning the dollar signs disappearing from their huge, impressive, high-plop-factor reports. But here’s the thing: this definitely shouldn’t be the end of the engagement with the client. Instead, the first round of testing should be automatic testing only. Your report should contain detailed guidance on how to remediate the problems found. The client’s development team should fix all of those problems, and then you should do a regression audit that includes both automated testing and manual testing. I would also save the use case testing and/or usability testing for a final iteration. These are all important types of testing and should be done, but here’s why I suggest the automatic-testing-first approach:
- You should never pay a human to find errors that can be found through automated testing.
- Manual testing will close the gaps on what automated testing couldn’t find.
- You should never uncover errors in use case or usability testing that couldn’t be found by automated and manual testing.
This is good customer service. By taking this iterative approach, you’re delivering value to your customers and helping them become compliant faster and cheaper.
Agile teams: Build automatic accessibility tests in your Definition of Done
In a proper Scrum environment, developers test their own work. In some teams, the tests and the code are written at the same time. Accessibility is a bit of a different situation, primarily because the conformance criteria are often so subjective. There is, however, a large and important subset of accessibility best practices which can be tested for automatically. Developers in Agile environments should subject their code to these tests before calling a task complete. QA engineers in Agile shops should never find an automatically-testable error, because the developers should have taken care of that stuff first. If they do, the User Story isn’t complete.
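Here’s what that can look like in practice, as a minimal sketch: a unit test that fails whenever a rendered component contains an automatically detectable error. I’m assuming a Jest setup with the jest-axe helper; the inline form markup is a stand-in for whatever your components actually render.

```ts
import { axe, toHaveNoViolations } from 'jest-axe';

expect.extend(toHaveNoViolations);

test('signup form has no automatically detectable accessibility errors', async () => {
  // Stand-in for rendering one of your own components into the DOM.
  const container = document.createElement('div');
  container.innerHTML = `
    <form>
      <label for="email">Email address</label>
      <input id="email" type="email" />
      <button type="submit">Sign up</button>
    </form>`;

  // jest-axe runs the axe-core engine against the rendered markup, and the
  // matcher fails the test if any violations come back.
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```

With a test like this in the Definition of Done, an automatically-testable error simply can’t make it past the developer’s own desk.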
Test nightly builds for accessibility
Few people realize this, but some of the enterprise-class automated web accessibility testing tools can be used as a web service. Different tools do different things (which, for ethical reasons, I can’t comment on in too much depth. Sorry), but one way you can take advantage of these web services is to submit requests to the service and get back results. A really compelling use for this is to automatically test nightly builds so that all code submitted to version control gets tested.
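A nightly job might look something like the sketch below. The service URL, request payload, and response fields are all hypothetical placeholders, since every vendor’s API is different; the shape of the idea is what matters: deploy the nightly build to staging, submit the key pages to the service, and fail the build if errors come back.

```ts
// Runs in Node 18+ (built-in fetch). Everything about the service here is
// a hypothetical placeholder -- substitute your own tool's actual API.
const STAGING_PAGES = [
  'https://staging.example.com/',
  'https://staging.example.com/checkout',
];

async function auditPage(url: string): Promise<number> {
  const response = await fetch('https://a11y-service.example.com/api/test', { // hypothetical endpoint
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.A11Y_API_KEY}`, // hypothetical credential
    },
    body: JSON.stringify({ url }),
  });
  const report = (await response.json()) as { errorCount: number }; // hypothetical response shape
  return report.errorCount;
}

(async () => {
  let totalErrors = 0;
  for (const page of STAGING_PAGES) {
    totalErrors += await auditPage(page);
  }
  if (totalErrors > 0) {
    console.error(`Nightly accessibility audit found ${totalErrors} errors.`);
    process.exit(1); // mark the nightly build as failed
  }
})();
```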
Content teams: Test for accessibility before publishing
Content creators are typically not skilled in web development and often have only enough technical knowledge to do their job, which is to get content up on a site. They are not developers and are therefore often the web team members who know the least about web accessibility. As a consequence of this ignorance and the amount of content they create, content creators can be the source of a significant volume of accessibility errors on a site. The workflow of the content creators should include automatic testing to ensure no errors reside in the new content they’re about to publish. The tests should be limited to the content itself.
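A pre-publish check along these lines might look like the following sketch, assuming axe-core is available on the authoring or preview page. The '#article-body' selector is a hypothetical placeholder for wherever your CMS renders the new content; the important detail is that the scan is scoped to that container rather than the whole page.

```ts
import axe from 'axe-core';

// Returns true only when the new content itself passes the automatic checks.
export async function contentIsClean(containerSelector: string): Promise<boolean> {
  const results = await axe.run(containerSelector, {
    // Limit the run to WCAG A/AA rules (an assumption about which rule
    // sets a content team would start with).
    runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa'] },
  });
  if (results.violations.length > 0) {
    console.warn('Fix these before publishing:', results.violations);
    return false;
  }
  return true;
}

// Example usage from a (hypothetical) publish workflow:
//   if (await contentIsClean('#article-body')) { proceedWithPublish(); }
```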
Do definitive accessibility tests only
In the above scenarios, you should configure your automatic testing tool so that the only things it tests for are those which can be definitively determined to be pass/fail. In any given tool, some of the results will be flagged as “Warnings” or “Manual Verification”. Figure out how to turn those tests off. If your tool doesn’t offer this degree of flexibility, find one that does. The reason for this recommendation is that you need to focus your efforts on doing things efficiently. These “warning” level findings are often incorrect or require too much subjective interpretation to be an efficient use of time.
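With axe-core as the example again, “definitive only” roughly translates to acting on the violations the engine is certain about and ignoring the items it marks as needing review. Other tools expose this differently; the rule and tag choices below are illustrative assumptions, not a recommended configuration.

```ts
import axe from 'axe-core';

export async function definitiveErrors() {
  const results = await axe.run(document, {
    // Restrict the run to rule sets your team has agreed are pass/fail.
    runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa'] },
    // Switch off a specific check treated as too subjective for this pass
    // (the rule id here is purely illustrative).
    rules: { 'color-contrast': { enabled: false } },
  });

  // results.violations holds definite failures; results.incomplete holds the
  // "needs manual verification" items, which this first pass deliberately ignores.
  return results.violations;
}
```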
You’re Not Done
The thing to keep in mind when doing automatic testing is that you are not done. If you’re getting clean reports from whatever automatic tool you use, great. Pat yourselves on the back, because you’re doing better than the vast majority of websites out there. Regardless, you’re still not done. As I’ve said above, even the best automatic testing tool provides incomplete coverage. Anyone who gives you the impression that this is not the case should be treated with serious suspicion. Instead, you should operate under the understanding that more work remains before you can really claim your site is accessible. Specifically, you need to build manual code review, assistive technology testing, and use case testing into various stages of the development process.
Iterate and expand scope
One of the biggest barriers to adoption of accessibility, in my experience, has been the impression that accessibility is nebulous and intrusive. Using the approaches I’ve outlined above, you can build processes into your SDLC and publishing workflows that allow accessibility to have minimal impact on your business. By initially testing for a subset of high-impact issues, you can get quick wins that help minimize the pain experienced when an organization is new to accessibility. Then you can build on those successes by adding a few of the more subjective checks and/or some manual testing. Increasing the scope gradually and deliberately will help minimize the perceived impact.
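If your tool lets you select rules by group, “expand the scope gradually” can be as simple as widening that selection over time. The staged groupings below are a sketch using axe-core’s rule tags; the stage names are my own, so map them to whatever your tool and rollout plan actually support.

```ts
import axe from 'axe-core';

// Hypothetical rollout stages, each widening the set of rules that must pass.
const STAGES = {
  quickWins: ['wcag2a'],                              // start with Level A only
  standard:  ['wcag2a', 'wcag2aa'],                   // later, add Level AA
  mature:    ['wcag2a', 'wcag2aa', 'best-practice'],  // finally, best practices too
} as const;

export function runStage(stage: keyof typeof STAGES) {
  return axe.run(document, {
    runOnly: { type: 'tag', values: [...STAGES[stage]] },
  });
}
```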