As I sit here today, running through unit tests on the automated testing capabilities of AQUA (which later became Tenon), I’m left with the feeling that I owe it to anyone who uses such tools to tell them that nothing can replace the eyes of a skilled professional. The issue, in short, is that so much of determining whether a website is accessible is subjective, and automated testing therefore cannot deliver a definitive judgment on whether or not disabled visitors can access and use the site. First, let’s discuss why automated tools are good.

The Good

Ability to Scan Through Lots of Code

A good website accessibility audit involves an in-depth, manual look at the generated source code. Doing so is the best way to tell not only whether problems exist, but where they are, why they matter, and what it takes to fix them. As someone who has assessed tens of thousands of web pages, my experience has shown me where to look and what to look for. Still, there comes a point where there are too many pages on the site, or too much code on each page, to look at everything. The larger the site or the more complex the layout, the greater the chance that either time just won’t afford an accurate look or human review simply can’t notice everything.

Automated website accessibility testing tools are a good way to generate a list of things that deserve closer attention in the audit. When you’re staring at a page that is nearly 3900 lines of JavaScript, CSS, and HTML (like the Yahoo home page), this can be a considerable help. On big sites, you really have no choice but to find some automated means of finding errors; even on a site that uses essentially the same template wrapper around each page’s unique content, there’s too much to look for on a large and complex website. Make no mistake: the larger the site, the more you’ll need to rely on some automated tool – one that doesn’t just do one page at a time, but crawls the entire site – to assist in the assessment.
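
To make that concrete, here is a minimal sketch of the crawl-and-check idea, assuming Node.js 18+ and its built-in fetch. The starting URL, the page limit, and the single regex-based check are purely illustrative; a real tool would parse the DOM properly and run far more tests on every page it visits.

    // Minimal crawl-and-check sketch (Node.js 18+, global fetch).
    // Illustrative only: a real tool parses the DOM and runs many checks.
    const seen = new Set();

    async function crawl(url, limit = 50) {
      if (seen.size >= limit || seen.has(url)) return;
      seen.add(url);

      const html = await (await fetch(url)).text();

      // Example check: <img> tags with no alt attribute at all.
      const missingAlt = (html.match(/<img(?![^>]*\balt=)[^>]*>/gi) || []).length;
      if (missingAlt > 0) console.log(`${url}: ${missingAlt} img tag(s) missing alt`);

      // Follow same-origin links found in the raw HTML.
      const origin = new URL(url).origin;
      for (const match of html.matchAll(/href="([^"#]+)"/gi)) {
        const next = new URL(match[1], url);
        if (next.origin === origin) await crawl(next.href, limit);
      }
    }

    crawl('https://example.com/').catch(console.error);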

Some Errors Don’t Need Manual Review

Somewhat related to the above, some errors can be caught by an automated tool without needing any human inspection to notice them. One of the best examples of this is missing alternate text for images. In cases where the images have no alternate text at all, it is an inefficient use of the reviewer’s time to document each instance one by one when an automatic means of finding and reporting this information is available. Instead, the reviewer should be able to dedicate their time to investigating potential issues that cannot be accurately detected through automated means.
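
As a rough illustration, here is the sort of check that needs no human judgment at all: finding images that have no alt attribute whatsoever. This is only a sketch meant to be run in a browser console; the reporting format is made up for the example.

    // Find every <img> with no alt attribute at all (run in a browser console).
    const imagesMissingAlt = document.querySelectorAll('img:not([alt])');
    imagesMissingAlt.forEach(img => {
      console.log('Missing alt attribute:', img.src, img.outerHTML.slice(0, 120));
    });
    console.log(`${imagesMissingAlt.length} image(s) have no alt attribute`);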

Automated Accessibility Testing Tools Provide a Good Starting Point for Manual Review

One of the most appropriate uses of an automated accessibility testing tool isn’t to generate some sort of definitive pass/fail judgment of the site being tested, but rather to give the reviewer an idea of what sorts of things need a closer look. A good automated tool will provide the reviewer with a detailed list of “errors” and the location(s) where they occur, including line numbers and a short snippet of the offending code. With this information, the reviewer can determine whether each entry really is a problem and why, and can tell the client what needs to be done to fix it. Even more than that, it can uncover patterns of errors.

So much of what I do during an accessibility audit is aimed at determining what sort of “patterns” exist in the practices of the production team that it often doesn’t take an in-depth review of every page to uncover all of the accessibility errors. For example, HTML forms are one of the first places where I can see not only whether the site has accessible forms but also how much the development team knows or cares about accessibility. In most cases, I’ll notice after only a few forms that none of them have been created in an accessible manner, and after observing a handful of forms there’s really little need to continue assessing every form on the site. Here again is an area where an automated tool can help. An automated tool could, for example, report that forms were missing explicit relationships between the form elements and their labels. The reviewer could then confirm this manually, identify the pattern, and provide the needed remediation guidance.
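
As a sketch of what such a report might be based on, the snippet below looks for fields that lack any explicit label relationship, checking for label[for], a wrapping label, aria-label, and aria-labelledby. The selector and the reporting are illustrative assumptions, not how any particular product works.

    // Flag form fields with no explicit label relationship (browser console sketch).
    const fields = document.querySelectorAll(
      'input:not([type=hidden]):not([type=submit]):not([type=button]), select, textarea'
    );
    fields.forEach(field => {
      const hasFor = field.id && document.querySelector(`label[for="${field.id}"]`);
      const isWrapped = field.closest('label');
      const hasAria = field.hasAttribute('aria-label') || field.hasAttribute('aria-labelledby');
      if (!hasFor && !isWrapped && !hasAria) {
        console.warn('No explicit label relationship:', field.outerHTML.slice(0, 120));
      }
    });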

Automated Tools Can Find Exceptions In An Otherwise Accessible Site

Again, keeping in mind the idea that a good automated tool can scour a large amount of code across a large number of pages quickly, an automated tool can be useful for catching simple oversights. I found this out first hand when testing one of my own web sites. I had assumed that my site was free from error when, unbeknownst to me, the TinyMCE editor I had incorporated into my CMS was stripping the alt attributes from my images. In this instance, the automated tool was able to alert me to needed repairs that I would never have thought to investigate, since I knew I had entered accessible markup. But there it was, plain as day.

The Bad

Most of the “bad” things about automated testing tools revolve around the subjective nature of determining what is accessible. Additionally, there are numerous ways to make a site’s content more usable for disabled users that can’t be accurately detected by an automated tool; automated tools can return false positives and false negatives; and automated tools often cannot check dynamic features.

Incomplete Coverage

Based on my research, only about 25% of accessibility best practices can be tested for definitively with automated means alone. Another 35% can be tested in a way that provides guidance on potential issues which then require manual verification. That leaves 40% of accessibility best practices that can only be assessed through manual testing of some kind.

Alternate text

Among the things which most frequently get sites in trouble when it comes to accessibility, alternate text for images is the most common error I see. This is not only because alternate text is left missing but also because, when it is supplied, it is so often inaccurate, incomplete, or uninformative. I’ve seen alt text that merely consisted of the word “image”, alt text that consisted of the image’s file name, and images which were attractive numbered bullets that had alt text of completely different numbering. While some automated tools can detect some of the rather obvious and frequent errors (such as alt text like “image”, “icon”, or the file name), there’s no way for any automated tool on the market to know whether the alternate text supplied is a sufficiently informative and accurate alternative. A tool may be able to “flag” an image’s alt text as suspicious, but the final determination of whether the alt text is good or not requires human review.
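
By way of illustration, a tool’s “suspicious alt text” heuristic might look something like the sketch below. The word list and the file-name comparison are assumptions made for the example; whether flagged (or unflagged) alt text is actually an adequate equivalent remains a human call.

    // Heuristic "suspicious alt text" sketch (browser console); illustrative only.
    document.querySelectorAll('img[alt]').forEach(img => {
      const alt = img.getAttribute('alt').trim();
      if (alt === '') return; // empty alt may be a legitimate decorative image
      const fileName = (img.src.split('/').pop() || '').split('.')[0];
      if (/^(image|img|icon|graphic|photo|picture)$/i.test(alt) ||
          alt.toLowerCase() === fileName.toLowerCase()) {
        console.warn('Suspicious alt text:', JSON.stringify(alt), img.src);
      }
    });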

Forms

Forms are another area where accessible production is critical and where checking accessibility is largely subjective. There are the rather obvious issues (detectable by automated tools) regarding whether form elements have labels or not, but there is a lot more to consider with forms and accessibility.

Labeling of Form Elements

There are two issues to consider when it comes to labeling form elements. First, there’s the matter of whether a label exists for each element, which is something that can be detected by an automated tool. Then there’s the issue of whether the label text is clear and informative regarding what sort of input is expected. This, like image alt text, is a subjective matter which cannot be determined by an automated tool.

Forms’ Error Handling

Another subjective measure regarding the accessibility of forms is determining the accessibility of error messages – both in their content and in their presentation. Determining whether error messages are accessible is actually far more complicated (and subjective) than it may seem. Not only must the content of the messages themselves be clear and easy to understand, but their location and the manner in which they’re displayed must be accessible as well. Because many automated tools do their work by spidering pages on the site, and therefore do not interact with the forms, this is something they cannot check. Even in the case of tools which can be configured to automatically perform tasks in accordance with a use case, this is still far too subjective for any tool to measure. Even where a tool can be automated to fill out forms, it is still never able to assess the error messages – particularly those returned in response to a POST request after the form is submitted.

Color Issues

Color is another area which presents challenges for automated testing tools. Tools have become sophisticated enough to calculate the luminosity contrast ratio between foreground and background colors, provided the tool is a browser plugin/toolbar or runs a headless browser. However, WCAG 2.0 Success Criterion 1.4.3 says, “The visual presentation of text and images of text has a contrast ratio of at least 4.5:1” (emphasis mine, on “images of text”). It is currently somewhat trivial to programmatically determine the contrast ratio between two colors. What is not at all trivial is determining whether an image is an image of text, or whether an image contains important text information.
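
For the trivial half of the problem, the calculation comes straight from the WCAG 2.0 definitions of relative luminance and contrast ratio. The sketch below shows it in plain JavaScript; the example colors are arbitrary.

    // WCAG 2.0 relative luminance and contrast ratio for two sRGB colors.
    function relativeLuminance([r, g, b]) {
      const [R, G, B] = [r, g, b].map(c => {
        c /= 255;
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
      });
      return 0.2126 * R + 0.7152 * G + 0.0722 * B;
    }

    function contrastRatio(fg, bg) {
      const [l1, l2] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
      return (l1 + 0.05) / (l2 + 0.05);
    }

    // e.g. mid-grey text (#767676) on a white background:
    console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2)); // ~4.54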

Client-side scripting, AJAX, Java, and other “Rich Media”

Because no automated tool (that I’m aware of) can assess the accessibility of anything other than HTML and CSS, thorough and accurate testing of JavaScript, AJAX, Java, and other “Rich Media” is impossible with automated tools. With “traditional” JavaScript, automated tools can do things like detect the use of device-dependent event handlers. However, whether or not their use causes accessibility problems is still a subjective matter. For instance, onMouseOver is device-dependent, but this event handler is often used to trigger some sort of eye-candy effect like highlighting navigation options. In such cases most automated tools will flag the use of the device-dependent event handler even if it doesn’t cause any problems. On the other end of the spectrum is DOM scripting, which does not place event handlers directly within the HTML source code. Because of the way DOM scripting listens for events, automated tools often can’t detect the use of device-dependent event handlers at all.
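
To illustrate the difference, the sketch below wires up the same hypothetical eye-candy effect two ways: inline in the markup, where a spider can see it, and via addEventListener, where it only exists once the script has run in a real browser. The element ID and the highlight() function are invented for the example.

    // Statically detectable in the markup a spider downloads:
    //   <a href="/products" onmouseover="highlight(this)" onfocus="highlight(this)">Products</a>

    // Invisible to source-level inspection; the listener only exists after the
    // script runs in a real browser:
    function highlight(el) { el.classList.add('is-highlighted'); } // illustrative effect
    document.getElementById('nav-products')
      .addEventListener('mouseover', event => highlight(event.currentTarget));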

Why use automated testing at all?

Although it might seem like I’ve spent most of this post bashing automated testing, the truth is I firmly believe that automated testing has an important place in a tester’s toolset. The use of a good automated tool can mean significant improvements in productivity and in the accuracy of results. Users of these tools, however, must understand that an automated tool is just that – a tool – and must not rely on it as the final determining factor in whether their site is accessible. They must be well trained in how to use the tool and how to interpret its results, and they must have a good knowledge of accessibility. Only then will they be able to get the maximum benefit from whatever tool they use.

My company, AFixt, exists to do one thing: fix accessibility issues in websites, apps, and software. If you need help, get in touch with me now!