Many academic papers on web accessibility rely on automated tools as part of their research. For instance, one recently published paper performed a comparative analysis of the Forbes 250. Unfortunately, the tool used did not test the DOM of the pages it analyzed. You could argue that at least all of the sites were tested against identical criteria, which I concede is an important consideration. I would still argue that the data is invalid, because the measurements were taken with a flawed tool. It’s like conducting a study of temperatures with a broken thermometer: you’ll never get an accurate measurement with the wrong instrument.
Consequently, I created the “Mother Effing Tool Confuser” (a name shamelessly borrowed from Paul Irish’s Mother Effing Text Shadow and Mother Effing HSL). The first criterion against which any automated testing tool must be measured is this: does the tool test the DOM? Be careful here, because some vendors will say they test the DOM, but it is the browser DOM that matters. There’s a difference between constructing a DOM and actually using a headless browser to test the browser DOM. Nearly all modern programming languages can build a DOM, primarily suited for processing XML: Java has JDOM, PHP has a handful of DOM classes in its standard library, and there are similar built-in facilities in JSP, C++, and others. None of these is the same as the browser DOM, which reflects the page after scripts have run. The true DOM that must be tested is the browser DOM. Essentially, this means the tool must either reside in the browser as a plug-in or ship with its own headless browser.
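To make the distinction concrete, here is a minimal sketch (not any vendor’s actual tool) of what goes wrong when a checker parses the static HTML source instead of the browser DOM. The page markup, the `AltChecker` class, and the script that injects the `alt` attribute are all hypothetical examples I made up for illustration: a static parser never executes the script, so it reports a failure that a real browser DOM would not show.

```python
from html.parser import HTMLParser

# Hypothetical page: the image's alt text is added by JavaScript at runtime.
# A non-browser DOM parser sees only the source markup below; a headless
# browser would execute the script and expose the corrected browser DOM.
PAGE = """
<html><body>
<img id="logo" src="logo.png">
<script>
  document.getElementById('logo').setAttribute('alt', 'Company logo');
</script>
</body></html>
"""

class AltChecker(HTMLParser):
    """Flags <img> tags lacking an alt attribute in the *static* source."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing_alt.append(attributes.get("src"))

checker = AltChecker()
checker.feed(PAGE)
# The static parse flags the image, even though the rendered browser DOM
# would have the alt attribute by the time a user (or screen reader) sees it.
print(checker.missing_alt)  # → ['logo.png']
```

The false positive here is the tell: any tool that reports this image as missing alt text is parsing markup, not testing the browser DOM.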
Go to the Mother Effing Tool Confuser with your favorite tool to see me holding a puppy. Then come back here and comment: how did your favorite tool do?