My first experience with accessibility, and therefore accessibility testing, came from Bobby.

In 1995, CAST launched Bobby as a free public service to make the burgeoning World Wide Web more accessible to individuals with disabilities. Over the next decade, Bobby helped novice and professional Web designers analyze and make improvements to millions of Web pages. This work won CAST numerous awards and international recognition.

CAST no longer supports the Bobby accessibility testing software. Bobby was sold to Watchfire in 2004, which was, in turn, acquired by IBM in 2007.

Although Bobby is no longer available as a free service or standalone product, it is one of the tests included within the IBM Rational Policy Tester Accessibility Edition software, the comprehensive enterprise application for testing websites. (http://www.cast.org/learningtools/Bobby/index.html)

Bobby was so popular that the above URL remains the #4 result in Google for the word “Bobby”. My first experience with Bobby came in the form of a rejection email for a job application. In the early 2000s, I was attempting to get a job as a web developer in the Washington, DC area. At the time, Section 508 was still new-ish, and government contractors such as Northrop Grumman, Raytheon, Lockheed Martin, and the like were very focused on hiring web developers whose work was “508-compliant”. On one occasion, I got a response to my job application asking me to send over some work samples. I responded with a series of publicly available URLs showing off my work and, within a few hours, received an email saying that they would be unable to hire me because my work had failed a test by Bobby. Bobby, whoever or whatever it was, had become the thing interfering with my ability to put food on the table. Unacceptable.

For my part, I did as I always do: I became obsessed with accessibility. Today, when people ask how I got interested in accessibility, I tell them the above story and admit I have no “legitimate” reason for my interest. In other words, I don’t have a disability myself, nor do any of my family members or friends. I don’t have any interesting backstory like Ted Henter, creator of JAWS. Instead, I’ve come to view accessibility as a quality-of-work issue. As a developer, the quality of my work directly impacts users’ ability to consume and interact with the content I create. To me, persons with disabilities are no different from anyone else using my site. All of the human rights stuff surrounding accessibility is purely ancillary. If users have a hard time, I’ve done a bad job. My interest in accessibility is as simple as that.

Perhaps it’s because I was so new to accessibility at the time, but I look back on that period fondly as one of incredible opportunities to learn. Among the educational resources I discovered, the best by far was the WebAIM.org Discussion List. The resources on the WebAIM website itself were immensely useful, but the active and friendly atmosphere of the discussion list was, and remains, the best community for those new to web accessibility. Its roster of active participants reads like a who’s who of accessibility. It didn’t take long before I noticed that many of the community’s most notable contributors held a high level of disdain for automated testing tools. That disdain wasn’t altogether unfounded, as Jukka Korpela documented in 2002. In the long term, however, it has created roadblocks to the adoption and use of tools for accessibility testing and, in my opinion, has delayed the development of newer and better tools for the task. The end result has been tools and procedures that test the wrong thing at the wrong time, and a generalized atmosphere of resistance to tooling.

Resistance to tools in general is somewhat justified

In the accessibility community, resistance to automated accessibility testing tools comes in two flavors: the claim that tools cannot provide complete coverage of all accessibility issues, and the claim that such tools take the focus off the user and put it on the system. Both objections are born of perspectives that don’t fully understand the purpose of automated testing. Further, they fail to consider that even though both claims are true, that doesn’t actually negate the value of automated testing.

Evolutionary biologists and anthropologists cite two major reasons for mankind’s evolution into the dominant species on Earth: the use of tools and the taming of fire. Beginning with rudimentary stone implements, man’s first tools enabled easier access to food. We could hunt for, butcher, and prepare food more easily through the use of tools. The evolution of tools and technology isn’t unlike the biological evolution of a species. Our opposable thumbs, control of fire, and larger brains form the trunk, with broad categories of tools and technologies forming the limbs of the tree and more specific tools and technologies forming the twigs. As in biological evolution, certain types of tools and technologies die off along the way, losing favor and being replaced by better ones.

Forge welding came about in the Bronze and Iron Ages and remained the dominant form of welding for thousands of years. In the Middle Ages, forge welding techniques saw many improvements, which remained in use for hundreds of years. In the early 1800s, however, the discovery of the electrical arc revolutionized welding and brought about advances that would later evolve into shielded metal arc welding (SMAW), commonly referred to as “stick welding”. Since the late 1800s, newer, better, and safer methods of welding have been continually developed. Today these include such high-tech methods as laser beam welding and electron beam welding.

In all cases, the tools and technologies we employ are aimed at one primary goal: accomplishing a task more easily, more efficiently, and with higher quality. In the earliest stages of tool development, tools were aimed at doing things we were already doing, but doing them better, such as using rocks to smash nuts or sharp flints to butcher meat. By the Bronze and Iron Ages, our goals were more ambitious: doing things we could never accomplish without the tool. As tools and technologies continue to evolve, mankind’s goals remain the same: make things easier, faster, and better, and make the previously impossible possible.

At a fraction of the size and a fraction of the cost, the smartphone in your pocket holds more than 13,000 times as much data as the first hard disk, the IBM 350 RAMAC. Created in 1956, it weighed over a ton and cost $10,000 (in 1956 money). Literally everything around us in the modern world is the result of technological evolution including, in all likelihood, the grass on your lawn. This fact is, quite frankly, why I’m so baffled by resistance to automated accessibility testing. In any case where a capability exists that can replace or reduce human effort, it only makes sense to use it. In any case where we can avoid repetitious effort, we should seek to do so. Of course, this isn’t always possible. Even in the earlier example of welding, some jobs can’t sustain the expense of robotic welding. Perhaps the job is unique, or the production run is too small to justify it. But that doesn’t prevent the creation of a Welding Procedure Specification and the use of a specific process to create the end product. Automated accessibility testing is no different.

While there are a number of seemingly insurmountable challenges relating to the accuracy or completeness of automated accessibility testing, that doesn’t mean such testing has no value. Automated testing can and should be utilized in a way that makes the tool’s user more efficient and effective at their job – namely, finding accessibility issues. For example, it is trivial to program an automated tool to find instances of images without alt attributes, and therefore no human should ever have to spend time looking for those issues. However, machines are wholly incapable of subjectively interpreting the meaning of an image in context, and therefore judging the quality of the text in an alt attribute is a task that does require a skilled human. This will probably always be the case, at least as long as the Web as we know it exists.
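To make the distinction concrete, here’s a minimal sketch of the automatable half of that example, written in TypeScript against the standard DOM API. It’s illustrative only and assumes a browser-style document; it isn’t taken from Bobby or any shipping tool:

```typescript
// Minimal sketch: flag every <img> that has no alt attribute at all.
// A missing alt attribute is an unambiguous, machine-detectable failure.
interface AltIssue {
  element: HTMLImageElement;
  src: string;
}

function findImagesMissingAlt(root: ParentNode = document): AltIssue[] {
  // img:not([alt]) matches images with no alt attribute whatsoever.
  // Note what this deliberately does NOT attempt: judging whether an
  // existing alt attribute's text is meaningful in context -- that
  // subjective call still needs a skilled human.
  const images = root.querySelectorAll<HTMLImageElement>('img:not([alt])');
  return Array.from(images).map((element) => ({ element, src: element.src }));
}

// Report each failure; a real tool would also record location, severity, etc.
for (const { src } of findImagesMissingAlt()) {
  console.warn(`Image missing alt attribute: ${src}`);
}
```

A few lines of code catch every instance of this failure on a page, every time, which is exactly why no human should be spending time looking for it.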

In other words, an automated tool can tell you when things are bad, but it cannot tell you when things are good. Additionally, the array of possible “bad” things a tool can reliably and accurately find is somewhat small. My own and others’ estimates suggest that only about 25% of accessibility best practices can be tested definitively using automated testing. It is this lack of complete and reliable coverage that detractors of automated testing point to as evidence against its usefulness on the whole. The somewhat epidemic issue of existing automated tools returning overly conservative, irrelevant, or erroneous results only serves to strengthen these claims. But this makes the case against specific tools, not against automated testing itself. The fact that automated accessibility testing tools have historically been prone to delivering bad results doesn’t mean good results are impossible.

Stay tuned for post #2 in this series where I discuss the challenges and proper use of tools for testing web accessibility.

My company, AFixt, exists to do one thing: fix accessibility issues in websites, apps, and software. If you need help, get in touch with me now!