I gave a presentation on this topic at this year’s CSUN Conference on Disabilities. Due to the popularity of that session, I figured I’d share the salient points in a blog post. The information below is mostly relevant for organizations that are in the market for an enterprise-level accessibility testing tool along the lines of Compliance Sheriff, AMP, Worldspace, and so on. I do not market an accessibility testing tool and have no pecuniary interest in any company that does. I make no recommendations in this post, as that isn’t the point. The goal is for the reader to come away informed about the factors that matter most when choosing such a product. Here are 13 questions to ask yourself when choosing between competing products.

1. Is the tool user-friendly?

In many cases the accessibility testing tool is designed to function as a distinct, standalone system that contains all configuration, testing, and reporting capabilities. As such, it will be separate from all other QA and development systems in the organization. Often, the staff who actually use the tool will not be development or QA staff but rather an internal resource tasked specifically with overseeing accessibility. My experience with accessibility testing tools has shown that tools that are not user-friendly become shelfware – and the less user-friendly the tool is, the quicker this tends to happen. If the intended users of the system resist using it because it is too difficult to use, then purchasing the tool will have been money wasted.

2. Does the tool provide high quality, reliable results?

Testers who lack real expertise in accessibility will rely entirely on the feedback they’re given by their testing tool. Inaccurate, misleading, incomplete, or unreliable results from a tool are harmful to the organization, risky for project budgets & timelines, and potentially harmful for accessibility. This issue becomes particularly acute when tools perform overzealous or overly conservative testing. For instance, I’ve seen tools that throw an “error” any time the first heading in a page’s source isn’t an H1. Such overly conservative interpretations of WCAG lead to a lot of time wasted arguing internally – time that could be spent on more impactful issues. (By the way: the WAI website is a perfect example of properly using headings for structure.)

3. Is the tool capable of testing the DOM?

Possibly the first question you must ask a tool vendor is: does your tool test the DOM? Be careful here, because some vendors will say that they test the DOM, but it is the browser DOM that matters. There’s a difference between “making” a DOM and actually using a headless browser to test the browser DOM. Nearly all modern programming languages can construct a DOM (best suited for processing XML files): Java has JDOM, PHP has a handful of DOM classes in its standard library, and there is similar built-in functionality in JSP, C++, and others. These are not the same as the browser DOM. The DOM that must be tested is the browser DOM – the final, rendered document after scripts have run. If a tool does not test that actual, final browser DOM, it does not test what users are experiencing, and the more JavaScript a site uses to present or modify page content, the less reliable that tool’s results will be. Essentially this means the tool either resides in the browser as a plug-in or has its own headless browser.

Note: The best way to verify this is to point the tool at the Mother Effing Tool Confuser. If the tool finds errors on that page, it is not testing the DOM.
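To make the distinction concrete, below is a minimal sketch of what DOM-based testing looks like under the hood, assuming the open-source Puppeteer headless browser and the axe-core engine (via the @axe-core/puppeteer package); any tool that genuinely tests the browser DOM does something equivalent, letting client-side JavaScript run before evaluating the page.

```typescript
// Minimal sketch: audit the rendered browser DOM, not the raw server HTML.
// Assumes the "puppeteer" and "@axe-core/puppeteer" packages.
import puppeteer from 'puppeteer';
import { AxePuppeteer } from '@axe-core/puppeteer';

async function auditRenderedDom(url: string): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Wait for client-side JavaScript to finish so we test what users actually get.
  await page.goto(url, { waitUntil: 'networkidle0' });

  const results = await new AxePuppeteer(page).analyze();
  for (const violation of results.violations) {
    console.log(`${violation.id}: ${violation.help} (${violation.nodes.length} nodes)`);
  }

  await browser.close();
}

auditRenderedDom('https://example.com').catch(console.error);
```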

4. Does it offer the ability to spider?

Since the primary benefit offered by an automated tool is efficiency in testing, it makes little sense to have a tool that only tests a single page at a time. That is often the case with in-browser testing tools, which are best suited for developers testing their work in progress or for QA testers testing specific features. Large-scale testing, particularly of post-deployment web pages, requires the ability to spider. For tools that can spider, the speed of the spidering (and, of course, of testing the spidered pages) is critical to the usefulness of the tool. Comprehensive spidering and testing of tens of thousands of web pages can take some tools a considerable amount of time. If your websites are large, look for a tool that performs this task quickly yet accurately. As a consumer, ask the vendor to prove that the tool can perform in a real-world test of your site. The larger your site and the more complicated your interactions and URL structure, the more important this will be.
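For a sense of what spidering involves – and why speed matters at scale – here is a simplified, single-threaded sketch that crawls same-origin links breadth-first and audits each rendered page. It reuses the Puppeteer and axe-core assumptions from the previous sketch; real enterprise spiders add parallelism, politeness delays, URL normalization, and limits on depth and page count.

```typescript
// Simplified sketch of a spider: breadth-first crawl of same-origin links,
// auditing each rendered page as it is discovered.
import puppeteer from 'puppeteer';
import { AxePuppeteer } from '@axe-core/puppeteer';

async function spiderAndAudit(startUrl: string, maxPages = 100): Promise<void> {
  const origin = new URL(startUrl).origin;
  const queue: string[] = [startUrl];
  const visited = new Set<string>();

  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  while (queue.length > 0 && visited.size < maxPages) {
    const url = queue.shift()!;
    if (visited.has(url)) continue;
    visited.add(url);

    await page.goto(url, { waitUntil: 'networkidle0' });
    const results = await new AxePuppeteer(page).analyze();
    console.log(`${url}: ${results.violations.length} violation types`);

    // Collect same-origin links from the rendered DOM for the next round.
    const links = await page.$$eval('a[href]', (anchors) =>
      anchors.map((a) => (a as HTMLAnchorElement).href)
    );
    for (const link of links) {
      if (link.startsWith(origin) && !visited.has(link)) queue.push(link);
    }
  }

  await browser.close();
}
```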

5. Does it offer the ability to test uploaded files and/or source code entered directly?

Like any other QA issue, the best way to resolve accessibility errors is to avoid them in the first place. Unfortunately, my experience has shown that most accessibility testing is done after a system is already in production. This almost always means that the issues found will be more difficult and time-consuming to fix than they would have been had they been discovered earlier in the process. For this reason, developers should have access to the testing tool so that they can test their work before it is deployed. Content creators working in a Content Management System (CMS) and designers creating functional mockups should have access to the tool as well. Everyone involved in creating pages – or components of pages – should have access to the tool, and the tool should support testing code-in-development without requiring that code to be uploaded to a publicly accessible URL. Specifically, users should be able to upload files directly, enter source code directly to be tested, or, even better, both. There’s no such thing as too much flexibility.
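As an illustration of what testing code that is not yet at a public URL can look like, here is a sketch that audits an HTML string directly – whether pasted in, uploaded as a file, or pulled from a CMS preview. It again assumes Puppeteer and @axe-core/puppeteer; the file name is purely hypothetical.

```typescript
// Sketch: audit source code that has not been deployed to a public URL.
import { readFile } from 'node:fs/promises';
import puppeteer from 'puppeteer';
import { AxePuppeteer } from '@axe-core/puppeteer';

async function auditHtmlSource(html: string) {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.setContent(html, { waitUntil: 'networkidle0' });
  const results = await new AxePuppeteer(page).analyze();
  await browser.close();
  return results.violations;
}

// Usage: test an uploaded (hypothetical) file before it ever reaches production.
readFile('./work-in-progress.html', 'utf8')
  .then(auditHtmlSource)
  .then((violations) => console.log(violations))
  .catch(console.error);
```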

6. Does it offer the ability to perform continuous monitoring?

In some environments, new content is added to the organization’s websites daily – sometimes even several times a day. The more often new content is added to a site, and the more numerous the ways that content gets added, the more difficult it is to closely manage accessibility by manual means. This increases the importance of having an automated tool that can perform scheduled monitoring, where the tool regularly tests the site at times you define. Look for tools that can also limit the location(s) being tested during these regular intervals and that offer a high degree of flexibility in when these scheduled test sessions occur.
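To show how simple the scheduling concept itself is (and therefore how much flexibility a good tool should offer on top of it), here is a sketch of nightly monitoring using the node-cron package; the target URLs and the runScan() helper are hypothetical stand-ins for a spider-and-audit routine like the one sketched earlier.

```typescript
// Sketch: scheduled nightly monitoring of a limited set of site sections.
// Assumes the "node-cron" package; runScan() is a hypothetical helper that
// spiders and audits a URL (see the spidering sketch above).
import cron from 'node-cron';

declare function runScan(url: string): Promise<void>; // assumed to exist elsewhere

// Hypothetical scan targets; a real tool would let you scope these per schedule.
const targets = ['https://www.example.com/news/', 'https://www.example.com/products/'];

// Run every night at 02:00 so content published during the day is caught quickly.
cron.schedule('0 2 * * *', async () => {
  for (const url of targets) {
    await runScan(url);
  }
});
```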

7. Is the tool configurable so that it does not re-test pages which have not changed?

One feature I’ve found to be quite useful in automated testing tools – especially when used for scheduled monitoring – is the ability to avoid retesting pages that have not changed; more specifically, to avoid creating duplicate records of issues that were already found. A good tool will be able to ignore web pages it has already seen unless the page itself has changed since the last time it was encountered.
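One straightforward way a tool can implement this is to fingerprint each page it renders and compare that fingerprint against the previous scan, as in the sketch below; the in-memory previousHashes store is a hypothetical stand-in for whatever a real tool would persist between scans.

```typescript
// Sketch: skip pages whose rendered markup is unchanged since the last scan.
import { createHash } from 'node:crypto';

const previousHashes = new Map<string, string>(); // url -> hash from the last scan (hypothetical store)

function shouldRetest(url: string, renderedHtml: string): boolean {
  const hash = createHash('sha256').update(renderedHtml).digest('hex');
  if (previousHashes.get(url) === hash) {
    return false; // unchanged: skip the page and avoid creating duplicate issues
  }
  previousHashes.set(url, hash);
  return true;
}
```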

8. Does it provide clear, easy-to-understand manual test guidance?

Keeping in mind that tools can’t test everything definitively, the tool should provide its users with explicit guidance on how to test everything else. While many tools provide coverage of a number of non-definitive criteria – and provide some guidance on verifying such results – few tools provide effective, user-friendly guidance for tests that must be performed entirely manually. The guidance should be clear and easy to follow for the layperson, because the primary users of these tools are laypersons, at least when it comes to accessibility.

9. Can the tool be configured to support the specific testing needs of the organization?

Effectively testing a website and returning valid, relevant, and reliable results is surprisingly non-trivial, especially given the complexity of today’s websites and the management thereof. Some websites, for example, may contain tens of thousands or even hundreds of thousands of pages, while development of the site is the responsibility of organizationally (and often geographically) disparate teams of people. There may be several – even hundreds of – subdomains. There may be teams of content producers. There may be special URL parameters used during A/B testing of different marketing & branding campaigns. These things and many more can make it challenging to configure the tool so that it returns a distinct and manageable set of pages you’ve targeted for testing. Your organization may have distinct and important constraints that dictate what gets tested and how. All of this requires a tool that is flexible enough to meet your specific requirements.

10. Is the tool capable of reporting results in a way that separates common issues from unique ones?

Testing an entire page is often inefficient; testing thousands of complete pages is even more so. Modern production techniques rely on templates and content management systems, so essentially the same code is used every time an interface component of a given type appears – the nearly ubiquitous site headers and footers are two examples. Because that template code is reused on every page of the site, any error in a template shows up on every page, even though only one or two files are actually in error. A good accessibility testing tool will be able to report such issues in a manner that reduces repetitive reporting and provides clearer, more accurate results.
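One common approach – and the kind of behavior worth asking vendors about – is to fingerprint each issue by rule and location, so that the same template defect found on thousands of pages collapses into a single reported item. The sketch below uses a hypothetical Issue shape and a rule-plus-selector fingerprint.

```typescript
// Sketch: collapse repeated template issues into one reported item per fingerprint.
interface Issue {
  ruleId: string;    // e.g. "image-alt"
  selector: string;  // CSS path to the failing node
  pageUrl: string;
}

function groupByFingerprint(issues: Issue[]): Map<string, Issue[]> {
  const groups = new Map<string, Issue[]>();
  for (const issue of issues) {
    const fingerprint = `${issue.ruleId}::${issue.selector}`;
    const bucket = groups.get(fingerprint) ?? [];
    bucket.push(issue);
    groups.set(fingerprint, bucket);
  }
  return groups;
}

// A group spanning hundreds of pages usually points at one template fix,
// not hundreds of individual page fixes.
```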

11. Can it integrate with existing QA tools and processes?

Development teams and content teams often prefer a tool that works seamlessly within existing toolsets and workflows. For instance, rather than having your staff use a completely separate system to test and manage issues, it would be better if the accessibility testing system could be queried by, and report its results to, an existing system. At the moment, I’m unaware of any product which offers full integration with other QA systems, mostly due to the wide variation in the QA systems employed. Nevertheless, some amount of integration is likely possible in many enterprise tools, so this is a characteristic to look for while researching different products. The more closely the tool can integrate with your existing toolsets and processes, the better. A wholly separate accessibility process and toolset is often met with resistance by QA and development staff. Some tools can push their results into major issue tracking systems, but you should look for one (if such a thing exists) that fully integrates into your process – ideally into continuous integration systems.

Prediction: whether they exist now or not, accessibility tools that work seamlessly with CI systems like Jenkins will be the next wave of accessibility tools. The days of the standalone system are coming to an end within the next half-decade.
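Even without full product integration, the basic mechanics of a CI gate are simple. The sketch below fails a build when automated checks find violations; it assumes a hypothetical auditUrl() helper like the earlier Puppeteer/axe-core sketch and relies only on the process exit code, which Jenkins or any other CI system can act on.

```typescript
// Sketch: fail a CI build when automated accessibility checks find violations.
import process from 'node:process';

declare function auditUrl(url: string): Promise<unknown[]>; // hypothetical helper (see earlier sketch)

async function ciGate(stagingUrl: string): Promise<void> {
  const violations = await auditUrl(stagingUrl);
  if (violations.length > 0) {
    console.error(`${violations.length} accessibility violation types found; failing the build.`);
    process.exit(1); // a non-zero exit code fails the CI stage
  }
  console.log('No automated accessibility violations found.');
}

ciGate('https://staging.example.com').catch((err) => {
  console.error(err);
  process.exit(1);
});
```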

12. Does it support the accessibility standards your organization has committed to supporting?

These days, all major accessibility standards are harmonized (or in the process of becoming so) with the Web Content Accessibility Guidelines (WCAG) 2.0. That said, there may be additional standards – including custom internal standards – that your organization wants to test against as well. Whether or not that is currently the case, it may make sense to ensure that the tool you choose can easily accommodate the creation and maintenance of custom standards and custom rules. This will ensure the tool remains useful as your organization matures in its accessibility efforts.

13. Does it offer direct access to modify or extend test logic?

Related to the above: can you customize how the product does its testing? This goes beyond the ability to turn tests on or off. Can you add new tests from scratch? Can you add completely new rule sets or standards? Can you use it to test for more than just accessibility? What about regulatory compliance? Privacy? SEO? General quality? Usability? If it doesn’t do these things out of the box, can you extend the tests to do so? It may be worth exploring, especially because it may allow you to split the budget for the product across multiple business units. And if the test logic can be modified, how easy is it to do so?
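As one example of what direct access to test logic can look like, the open-source axe-core engine exposes a configure() API for registering custom checks and rules; commercial tools that allow extension offer broadly similar hooks. The sketch below adds a hypothetical internal rule flagging links that open new windows without saying so – the exact option shapes vary by engine and version, so treat this as an illustration rather than a recipe.

```typescript
// Sketch: registering a custom organizational rule via axe-core's extension API.
// The rule and check ids are hypothetical; exact shapes vary by engine/version.
import axe from 'axe-core';

axe.configure({
  checks: [
    {
      id: 'new-window-warning',
      // Runs against each matched node; returns true (pass) or false (fail).
      evaluate: (node: Element) =>
        node.getAttribute('target') !== '_blank' ||
        (node.textContent ?? '').toLowerCase().includes('opens in a new window'),
    },
  ],
  rules: [
    {
      id: 'org-new-window-links',
      selector: 'a[href]',
      any: ['new-window-warning'],
      tags: ['custom', 'internal-standard'],
      metadata: {
        description: 'Links that open a new window must say so.',
        help: 'Warn users before opening a new window.',
      },
    },
  ],
});

// axe.run() will now evaluate the custom rule alongside the built-in ones.
```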

Putting it all together and making a choice

Enterprise Accessibility Testing Tools are expensive. For large organizations, the starting price can easily be around $100,000, and sometimes much more, depending on your specific needs. Though expensive, the purchase of a tool can be money well spent, especially if the tool is configured properly and its use is well integrated into your development, QA, and compliance processes. Purchasing the wrong tool, however, can also be a costly step backwards in your compliance efforts. What persuasion scientists such as Robert Cialdini have shown is that, absent any real expertise in a subject, people often make buying decisions based on emotion rather than reason. To avoid mistakes, follow the steps below before making a decision:

  1. Determine whether you need a tool in the first place and whether a tool will actually solve your problems. If your organization is new to accessibility, it is basically a given that your web site has accessibility problems. A tool will help you find those problems, but then what? Bugs (and accessibility issues are bugs) are caused by a lack of knowledge, experience, and proper process. A tool can help with, but certainly cannot solve, those underlying issues. Perhaps training is the answer. Perhaps the answer is a combination of training and a tool. Perhaps a tool after sufficient training. Evaluate your actual problems before determining what action to take.
  2. Stay open minded throughout the procurement process. There are only a handful of enterprise tools, so fully researching each of them is a worthwhile use of time. Remain open to the possibility that any and all of the candidate products may be most suitable for your needs.
  3. Ask each vendor for a fully functioning trial of their software. Treat with suspicion any vendor who won’t let you take their product for a fully functioning test drive. As I’ve said, these tools aren’t cheap. If the tool vendors want your money, they should be expected to prove the product’s value and quality in real-world use.
  4. Have everyone in your development & QA teams use the tool. A one-hour demo by a salesperson is nowhere near enough time to see whether a specific product fits in with your processes. Have your staff actually use each tool for some period of time and provide feedback as to whether they find it useful, reliable, and user-friendly enough to make a purchase.
  5. Take your time. Having the ability to buy an enterprise tool at all is likely the result of fighting to have budget allocated to such a purchase. Making a hasty decision you regret is bad enough. Having to fight all over again for budget next year to replace a tool you’re unhappy with is a hard, uphill battle. Take your time and consider your choice carefully.
  6. Do not make your purchase decision based on price. This is especially true if the prices are competitive. Naturally a tool which is vastly more expensive than its competition will need to prove why it is so much more expensive. After all, a tool that is 3x as expensive as everything else had better be 3x as good. Nevertheless, resign yourself to an understanding that these tools aren’t likely to be cheap, but that buying the wrong tool is too expensive no matter what you’ve paid for it.
  7. Get training as part of your purchase. Your goal, as the customer, isn’t to generate reports but to enhance the effectiveness and efficiency of your accessibility efforts. You shouldn’t be left to your own devices to figure out how to get the most out of your new tool; the vendor should show you, and doing so should be part of the deal.
  8. Demand satisfaction. Automated accessibility testing tools can be expensive and their integration into large development environments can be disruptive. In any case where you’re dissatisfied with the results, quality, or usability of the product you purchased, demand that the vendor successfully rectify the problems. In some cases, you may need additional training. In others, you may need assistance in configuring the tool’s options. For the price you’re likely to pay for such a tool, you deserve to demand satisfaction and there’s nothing wrong with being assertive in this situation. Accept nothing less than total satisfaction.

If you’re in the market for an enterprise accessibility testing tool, you have a handful of closely competitive choices. By taking your time and making an informed decision you can find the one that is best for you.

My company, AFixt, exists to do one thing: fix accessibility issues in websites, apps, and software. If you need help, get in touch with me now!