Frequently on mailing lists, in blog posts, and on Twitter, I read accessibility advocates decrying the sins of what they call “checklist” accessibility. The argument, essentially, is that “checklist” accessibility is not good enough, either because the checklists themselves are flawed or because a checklist takes the disabled user out of the equation and relegates their challenges to a series of check items.
The first item, that checklists themselves are flawed, is in itself a flawed argument. In short, the mere fact that some checklists or tools are flawed does not mean that they all are.
On the second point, that it takes the user out of the picture, I’d argue that a well-documented process which includes checklist-based evaluations is better at ensuring that all users’ needs are met, not just some.
Why the Typical Accessibility Test Falls Short
In opposition to the “checklist” accessibility approach, what I normally see is a person opening a website in a browser, taking a poke at it with some automated testing tools and toolbars, possibly manipulating browser settings, and maybe even using the site with some assistive technologies. In the grand scheme, this approach leads to a good high-level understanding of the system’s accessibility and in most cases will surface a lot of issues. That makes the testing seem effective. Unfortunately, it is not, especially when the goal is to actually make the system accessible.
Why “Checklist” Accessibility Excels
Before continuing further, it’s important that we understand what we’re trying to do with an accessibility test: Fix… The… System. While I’m sure there are plenty of people who get joy from creating (and reading) big reports full of accessibility issues, I tend to get more joy in helping my clients improve the accessibility of their systems. Reports which do not facilitate easy & quick remediation are not worth the paper they’re printed on.
The first thing to take note of when considering a checklist-style approach to testing is that it affords you the ability to do a complete audit of the system. Whereas traditional test techniques often have a make-it-up-as-you-go approach, a checklist puts all of your conformance criteria at your disposal at all times. Instead of having to remember all of the best practices, a checklist is persistent. To be frank, my memory is not that reliable.
How many tests do you perform when testing for WCAG 2.0 Success Criterion 1.1.1? I do approximately three dozen tests that dive into the minutiae of what it means to conform to that single criterion. Remembering nearly 400 best practices for all of WCAG 2.0 is far too difficult, so working without a checklist is a near guarantee that your audit will be incomplete.
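To give a flavor of what those individual tests look like, here is a minimal sketch of just two of the many checks that fall under Success Criterion 1.1.1, written in Python with the standard library’s HTML parser. The check wording and issue messages are my own illustrations, not an official test suite:

```python
from html.parser import HTMLParser

# A sketch of two checks under WCAG 2.0 SC 1.1.1 (not a complete audit):
#  1. every <img> must carry an alt attribute
#  2. an actionable (linked) image must not have empty alt text,
#     because its alt text should describe the action performed
class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False   # are we currently inside an <a> element?
        self.issues = []       # human-readable findings

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self.in_link = True
        elif tag == "img":
            if "alt" not in attrs:
                self.issues.append("img missing alt attribute")
            elif self.in_link and not attrs["alt"].strip():
                self.issues.append("linked img with empty alt")

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

checker = AltTextChecker()
checker.feed('<a href="/search"><img src="go.png" alt=""></a>'
             '<img src="chart.png">')
print(checker.issues)
# → ['linked img with empty alt', 'img missing alt attribute']
```

Multiply this by a few dozen checks per success criterion and the case for writing them down becomes obvious.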
The issue of completeness becomes even more pressing when testing responsibilities are shared by more than one person. Exceptions could perhaps be made if the organization is small and happens to employ a single “guru” (though I’d still argue the guru had better have an amazing memory), but if you’re a large organization with multiple developers, development teams, and/or multiple staff members who handle accessibility testing, then you definitely need a checklist to test with.
If you’re in a large organization, test this yourself: ask all of your testers to list everything they’d do to validate conformance with WCAG 2.0 Success Criterion 1.1.1. You will get a different answer from each person. This lack of consistency is a huge problem and is, in my experience, one of the biggest headaches faced by developers.
One of the better side effects of having a checklist is that a violation of a check item yields accurate data. For instance, if we have a check item which states “Ensure alt text for actionable graphics describes the action performed” and we find the system in violation, we have data to share with developers which is far more precise than merely stating “You’ve violated WCAG 1.1.1!” Breaking down the conformance criteria this way allows the tester to record accurate results and provides the developer with actionable information they can use to remediate the problems found.
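As a sketch of the difference, here is what a finding recorded against a specific check item might look like as a structured record. The field names and check ID are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

# A hypothetical finding record: one violation of one check item,
# carrying everything a developer needs to locate and fix the problem.
@dataclass
class Finding:
    check_id: str      # hypothetical internal check identifier
    check_text: str    # the check item that was violated
    location: str      # where in the system the violation occurs
    remediation: str   # concrete guidance for the developer

finding = Finding(
    check_id="1.1.1-actionable-alt",
    check_text="Ensure alt text for actionable graphics "
               "describes the action performed",
    location="header search button (img#search-icon)",
    remediation='Change alt="" to alt="Search" on the submit image',
)
print(finding.location)
```

Compare that with a report line that says only “violates WCAG 1.1.1” and the developer’s remediation path is clear in one case and a guessing game in the other.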
Implicitly, a checklist that gives us accurate data also means the data will be reliable regardless of which of our testers performed the test. Again, unless you’re lucky enough to have a “guru” on staff, you’re probably faced with a testing staff whose knowledge and experience with accessibility vary widely. I have done a number of skills assessments with some of my clients and find that you can predict the following:
- Some staff members will have zero knowledge of accessibility
- Some staff members will have a broad understanding of accessibility but lack detailed knowledge
- Some staff members will think they know accessibility but really won’t
- Some staff members will have good knowledge of basic accessibility but not advanced topics like accessible AJAX
As a consequence of this wide array of knowledge and experience, testing results will be unreliable because each person has a different understanding. This even happens with highly knowledgeable people. Each person has their own interpretation of the standards and therefore will generate different results even when testing the exact same system. This harms the reliability of the testing and can damage the staff’s willingness to cooperate with the accessibility efforts.
In the same spirit of reliability, we also see that a checklist can aid in repeatability as well. Keeping in mind that our ultimate goal is to end up with an accessible system, we need to ensure that the tests we perform can also be repeated at the end of remediation. This allows us to measure success. We should be able to perform the same tests with the same test criteria and get results which will clearly demonstrate whether we have fixed the issues found in the first test.
Finally, our checklist provides us with the ability to defend our approach. When a business owner, executive, compliance staff, or legal staff comes to us and says “How accessible are we?”, we need to be able to provide them with an informative response that indicates where we are now, where we were, and where we are headed. The only way to do that is to have a clearly defined list of criteria against which our system has been tested. Anything short of that is equivalent, in my opinion, to saying “I dunno”.
Caveat: Checklist Quality is Critical
Each of the above benefits is achievable only if the “checklist” is of sufficient quality. In fact, I would consider that absolutely critical to the success of the checklist-based approach. If the checklist doesn’t include enough distinct checks at a sufficient level of detail, all of the above advantages are diminished considerably.
In its simplest form, the checklist should consist of:
- The organization’s selected industry standard(s) and conformance level(s) (e.g. WCAG 2.0 Level AA)
- The individual guidelines of the chosen standard
- For each guideline: the relevant success criteria
- For each success criterion: a list of best practices meant to address it
- For each best practice: clear instructions on how to validate conformance with the best practice.
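The nesting described above can be sketched as a simple data structure. Here is one possible shape in Python, showing a single illustrative branch with hypothetical titles and validation text; a real checklist would cover every guideline and hundreds of best practices:

```python
# A minimal sketch of the checklist hierarchy described above:
# standard → guidelines → success criteria → best practices → how to validate.
checklist = {
    "standard": "WCAG 2.0",
    "conformance_level": "AA",
    "guidelines": [
        {
            "id": "1.1",
            "title": "Text Alternatives",
            "success_criteria": [
                {
                    "id": "1.1.1",
                    "title": "Non-text Content",
                    "best_practices": [
                        {
                            "text": "Alt text for actionable graphics "
                                    "describes the action performed",
                            "how_to_validate": "Inspect each linked or "
                                "clickable image and confirm its alt text "
                                "names the action, not the file name or "
                                "the image's appearance.",
                        },
                    ],
                },
            ],
        },
    ],
}

# Walking the structure top-down mirrors how a tester works the list.
for g in checklist["guidelines"]:
    for sc in g["success_criteria"]:
        for bp in sc["best_practices"]:
            print(f'{sc["id"]}: {bp["text"]}')
```

Whether this lives in a spreadsheet, a database, or a test-management tool matters less than the fact that every level, down to the validation instructions, is written out.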
If, after developing your checklist, it contains the above traits, you’re well on your way to having a tool which will enable you to provide complete, accurate, reliable, repeatable, and defensible data.
But wait! What about the people! You forgot the people!!!11
True, this entire post seems to be just about getting good data into a report. But, as I’ve said, that’s not the point. The point is for the data to steer remediation so that the system can be fixed. This isn’t about relegating accessibility to a checklist; it’s about giving developers what they need. One thing I know about (most) developers is that they want to do good work. Not all of them buy into accessibility, but they typically take pride in what they do. If, in the process of an audit, they’re shown that improvements are needed, most will willingly dive in to make the repairs. The checklist is the tool we, as testers, use to gather the data that steers the repair process, and in the end our users get a better experience.