Some thoughts on automated web accessibility testing
My feelings about automated accessibility testing have vacillated throughout my career. My introduction to accessibility came through automated testing: as a new web developer, I began applying for jobs with US government contractors shortly after Section 508's grace period ended, and I was rejected several times because my work failed a test by Bobby, the most popular accessibility tool at the time. Later, as I came to understand more about accessibility and the amount of subjectivity in determining what is and is not accessible, I came to believe that only a human can perform this function. The pragmatic reality is somewhere in between.
We live in an exciting time in which the technologies of the Web are evolving more rapidly than I've ever seen. The many new types of web-enabled devices, the many uses to which web-based technologies are put, and the many new and emerging input and consumption modalities present numerous challenges for developers and organizations. Luckily, plenty of enthusiastic and creative people are attempting to address those challenges by creating toolsets, frameworks, and processes that make development quicker, easier, and more robust. One particularly interesting area is automated testing and automated build systems. Some systems, such as Jenkins, can handle both automated testing and building. Some companies have built their own systems from the numerous open source projects that exist for such testing, making it easy for any development team to take advantage of automation.
At the core of this is the DRY principle: Don't Repeat Yourself. In programming, DRY is all about not duplicating functionality, because duplication creates maintenance burdens, logic conflicts, refactoring problems and, when left unchecked, a great deal of entropy. But DRY also applies to human effort, processes, and overall efficiency. We see demonstrations of the DRY principle throughout history, going back to the invention of metal casting around 3500 BC. Creating a process once and having a way to repeat it reliably just makes sense. So does adopting a process or tool that reaches the same outcome by entirely different means. To this day, workers on the line at General Motors auto plants who come up with a newer, faster, better, or safer way of building a car get a hefty bonus, depending on how much they improve the process, and this has resulted in numerous innovations in safety and efficiency. This is why automated web accessibility testing makes so much sense to me.
Those who are quick to criticize automated web accessibility testing may not understand this idea of not repeating effort. Or maybe they disagree with the idea that automated testing is effective. I believe these feelings are well-founded: the toolsets currently at their disposal have made them feel that way. If the point of automation is to replace human effort, but the tool actually adds effort or doesn't do its job properly, then the natural conclusion is that automation is a bad idea.
In the manufacturing of physical products, finished goods are tested to ensure a specific level of performance against pre-defined criteria. In some cases, the criteria are internally developed. In other cases, such as electronic products, the standards are external. Depending on the type of product, the standards often cover things like safety, reliability, accuracy, longevity, robustness, and sustainability. Naturally, the testing and measuring equipment itself is tested for things like accuracy and sensitivity.
From a business perspective, the manufacturing processes themselves are scrutinized as well, to ensure that each process is as safe, accurate, reliable, and efficient as possible. In terms of efficiency, the business wants the process to be cheap and quick. Delivering a good product that's cheap to make and sells for a lot is every manufacturer's dream.
What do we make of the above few paragraphs? Whatever process we employ, it needs to be fast, reliable, and accurate, and the end result must deliver high value. But what do we see when it comes to automated web accessibility testing tools? Often the opposite. The tools frequently add a layer of complexity and a disruption in process that makes using them more trouble than they are worth, and their reports are littered with false positives of little relevance or importance. I've been discussing problems with automated website accessibility testing tools for years, even issuing challenges to vendors to improve the usefulness and usability of their products. The reality is that the present model for automated testing suites like Worldspace, Compliance Sheriff, AMP, etc. is fundamentally broken. They will never eliminate the ways in which they disrupt modern web development and QA workflows.
This does not, however, mean that automated web accessibility testing itself is bad. On the contrary: I believe strongly in it, and my belief gets stronger every time I think about it. At last year's CSUN, I presented a session titled Choosing an Automated Accessibility Testing Tool: 13 Questions You Should Ask. I now think that there are really only three things that warrant consideration:
- Flexibility and Freedom. These tools exist for one reason only: to work for you. Every single second you spend learning the UI, figuring out how to use it, or sifting through results is a second of development time that could go into actually improving the system. The tool must be flexible enough to offer the freedom to use it in any environment by any team of any size in any location.
- Accurate. Developers of accessibility tools want to find every single issue they possibly can. Unfortunately, this often includes findings that are, in context, irrelevant or of low impact, and that require additional human effort to verify. This is a disservice. Tool vendors need to understand that their clients may not have the time, budget, or knowledge to sift through false positives. The user should be able to immediately eliminate or ignore anything that isn't an undeniable issue.
- Understandable. The user of the tool needs to know exactly what the problem is, how much impact it has, where it is, and how to fix it. Overly general, overly concise, or esoteric result descriptions are unhelpful. If a tool can find a problem, then it can make intelligently informed suggestions for remediation.
Those who are anti-automated testing probably lack the perspective needed to understand how useful it can be. The primary activity I perform at my job is accessibility audits. Even today, 14 years after WCAG 1.0 became a full W3C Recommendation, I still find easy-to-discover issues like missing alt attributes on images, missing form field labels, and missing header cells in tables. These are things that automation can find quickly and accurately. In fact, just today I tested a page that had 1814 automatically detected errors with zero false positives! The detractors of automated testing are very right about one thing: there's only so much you can do with automated testing. Many things are too subjective or too complex for machine testing. But this doesn't mean that automated accessibility testing has no value. It means we need to use it wisely: apply the tool at the right time and for the right reasons, understand its limitations and capabilities, and exploit those capabilities as efficiently as possible. At that point, automation will be viewed as the vital part of the developer's toolset that it should be.
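To illustrate just how mechanically detectable those three issues are, here is a minimal sketch of a checker built on nothing but Python's standard-library HTMLParser. The AccessibilityChecker class, its rule set, and its message strings are my own invention for this example, not any real tool's API, and a production checker would of course need far more nuance (ARIA labeling, implicit label wrapping, scope attributes, and so on):

```python
from html.parser import HTMLParser

class AccessibilityChecker(HTMLParser):
    """Flags three machine-detectable issues: images without alt text,
    form fields without an associated <label for=...>, and tables
    without any header cells."""

    def __init__(self):
        super().__init__()
        self.issues = []        # messages for problems found so far
        self.labeled_ids = set()  # ids referenced by <label for="...">
        self.field_ids = []     # id (or None) of each form field seen
        self.in_table = False
        self.table_has_th = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append("img missing alt attribute")
        elif tag == "label" and "for" in attrs:
            self.labeled_ids.add(attrs["for"])
        elif tag in ("input", "select", "textarea"):
            # hidden fields and buttons don't need visible labels
            if attrs.get("type") not in ("hidden", "submit", "button"):
                self.field_ids.append(attrs.get("id"))
        elif tag == "table":
            self.in_table = True
            self.table_has_th = False
        elif tag == "th" and self.in_table:
            self.table_has_th = True

    def handle_endtag(self, tag):
        if tag == "table":
            if not self.table_has_th:
                self.issues.append("table missing header cells")
            self.in_table = False

    def report(self):
        # a field is unlabeled if it has no id, or no label points at it
        for fid in self.field_ids:
            if fid is None or fid not in self.labeled_ids:
                self.issues.append("form field missing label")
        return self.issues

checker = AccessibilityChecker()
checker.feed('<img src="logo.png"><input type="text">'
             '<table><tr><td>1</td></tr></table>')
print(checker.report())
```

Crude as it is, this sketch shows why such checks are so cheap to automate: each rule is a purely structural test with no judgment call involved, which is exactly the category of issue where automation shines.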