…how many bigger issues have we missed wasting our time fixing this kind of crap? @thebillygregory

Literally every single audit report I’ve ever done includes issues relating to the following:

  • Missing alt attributes for images
  • Missing explicit relationships between form fields and their labels
  • Tables without headers or without explicit relationships between header cells and data cells

I also frequently find these other issues:

  • Use of deprecated, presentational elements and attributes
  • Events bound to items that are not discoverable via keyboard
  • Poor color contrast
  • Blank link text
  • Missing, inaccurate, or incomplete name, state, role, and value information on custom UI widgets

The sheer volume of these types of errors is, to put it lightly, frustrating. In fact, the title of my presentation, “What is this thing and what does it do?”, is actually born from an inside joke. During one audit, where the system I was testing was particularly bad, I joked to some coworkers that analyzing the code was a bit like a game of figuring out, “what is this thing and what does it do?” I only later decided to put a positive spin on it.

As I mentioned in the previous post in this series, the average page on the Internet contains 54 automatically detectable errors. The thing about automated testing is that, even though it is somewhat limited in the scope of what it can find, some of the errors it does find are pretty high impact for the user. Think about it: missing alt text for images and missing labels for form fields have a huge impact on users. While the number of accessibility best practices that are definitively testable by automated means is small, they tend to have a huge impact on whether people with disabilities can use the system.

Automatically detectable issues should never see the light of day

The reason why some people are against automated testing is that, for a long time, we in the accessibility world haven’t really understood where such testing belongs. People have long regarded automated accessibility testing as a QA process and, even worse, it often exists as the only accessibility-related QA testing that occurs. If your approach to accessibility testing begins and ends with the use of an automated tool, you’re doing it wrong. This automated-tool-or-nothing mentality seems at times to be cooperatively perpetuated both by other tool vendors and by accessibility advocates who decry automated testing as ineffective. We must turn our back – immediately and permanently – on this either-or mentality. We must adopt a new understanding: automated testing has an ideal time and place where it is most effective.

Automated accessibility testing belongs in the hands of the developer. It must be part of normal development practices and must be regarded as part of the workflow of checking one’s own work. All developers do basic checking of their work along the way, be it basic HTML & CSS validation or checking that the site displays correctly across browsers. Good developers take this a step further by using code inspection tools like JSLint, JSHint, PHP Mess Detector, PHP_CodeSniffer, and the like. In fact, IDEs like WebStorm, NetBeans, Aptana, and Eclipse have plugins that enable developers to do static code analysis. Excellent developers perform automated unit testing on their code and do not deploy code that doesn’t pass. What prevents accessibility from being part of this? Existing toolsets.
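To make the comparison concrete: JSHint, for instance, reads its options from a .jshintrc file in the project root, so once it is set up, checking code quality costs the developer nothing. A minimal configuration might look like this (the specific options here are my own illustration, not a recommendation):

{
  "curly": true,    // require braces around all blocks
  "eqeqeq": true,   // require === and !== instead of == and !=
  "undef": true,    // warn on use of undeclared variables
  "unused": true,   // warn on variables that are never used
  "browser": true,  // assume browser globals like window and document
  "devel": true     // allow console, alert, etc. during development
}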

The revolution in workflow that will change accessibility

Last week I created a new WordPress theme for this site. I’m not the world’s best designer, but I hope it looks better than before. I created it from scratch using my Day One theme as a base. It also includes Font Awesome and Bootstrap. I used Grunt to manage a series of tasks while I built and modified the template’s design (a condensed sketch of the Gruntfile appears after this list):

  • I use grunt-contrib-sass to compile 11 different SASS files to CSS
  • I use grunt-contrib-concat to combine my JS files into one JS file and my CSS files into one CSS file
  • I use grunt-contrib-uglify to minify the JS file and grunt-contrib-cssmin to minify the CSS file
  • I use grunt-uncss to eliminate unused CSS declarations from my CSS file.
  • I use grunt-contrib-clean to clear out certain folders during the above processes to ensure any cruft left behind is wiped and that the generated files are always the latest & greatest
  • I use grunt-contrib-jshint to validate the quality of my JS work – even on the Gruntfile itself.
  • I use grunt-contrib-watch to watch my SASS files and compile them as I go so I can view my changes live on my local development server.
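
For those who haven’t seen Grunt in action, here is a condensed sketch of what such a Gruntfile might look like. The file paths are placeholders, and I’ve left out the clean and uncss tasks for brevity:

module.exports = function (grunt) {
  grunt.initConfig({
    // compile the SASS sources into a single stylesheet
    sass: {
      dist: { files: { 'css/style.css': 'sass/style.scss' } }
    },
    // combine the individual JS files into one
    concat: {
      js: { src: ['js/src/*.js'], dest: 'js/site.js' }
    },
    // minify the combined JS and CSS
    uglify: {
      dist: { files: { 'js/site.min.js': ['js/site.js'] } }
    },
    cssmin: {
      dist: { files: { 'css/style.min.css': ['css/style.css'] } }
    },
    // check the quality of the JS work, including this Gruntfile
    jshint: {
      all: ['Gruntfile.js', 'js/src/*.js']
    },
    // recompile SASS on every save for live preview
    watch: {
      styles: { files: ['sass/**/*.scss'], tasks: ['sass'] }
    }
  });

  ['grunt-contrib-sass', 'grunt-contrib-concat', 'grunt-contrib-uglify',
   'grunt-contrib-cssmin', 'grunt-contrib-jshint', 'grunt-contrib-watch'
  ].forEach(function (plugin) { grunt.loadNpmTasks(plugin); });

  grunt.registerTask('default', ['jshint', 'sass', 'concat', 'uglify', 'cssmin']);
};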

All of my projects use Grunt, even the small Ajax Chat demo I’m giving at CSUN. Some of the projects do more interesting things. For instance, the Ajax Chat pulls down an external repo. Tenon automatically performs unit testing on its own code. When something goes wrong, Grunt stops and yells at you. You can even tie Grunt to pre-commit hooks. In such a workflow, nothing goes live without all your Grunt tasks running successfully.

Imagine an enterprise-wide tool that can be used in each phase, one that works directly as part of your existing workflows and toolsets. Imagine tying such a tool to everything from the very lowest-level tasks all the way through to the build and release cycles and publication of content. That’s why I created Tenon.

While Tenon has a web GUI, the web GUI is actually a client application of the real Tenon product. In fact, internally Asa and I refer to and manage Tenon as a series of different things: Tenon Admin, Tenon UI, and Tenon (the API). The real deal, the guts, the muscle of the whole thing, is the Tenon API, which allows direct command-line access to testing your code. This is fundamental to what we believe makes a good developer tool. When used from the command line, Tenon plays happily with any *nix-based system. So a developer can open a command prompt and run:

$ tenon http://www.example.com

and get results straight away.

By using Tenon as a low-level command, it becomes possible to integrate your accessibility testing into virtually any build system: Make, Bash, ANT, Maven, etc. As I mentioned above, one possibility is to tie Tenon to a git pre-commit hook, which would prevent developers from committing code that does not pass Tenon’s tests. As with JSHint, you can customize the options to match your local development environment and the level of strictness to apply to such a pre-commit hook.
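
Here’s a minimal sketch of what such a hook could look like, written as a Node script saved to .git/hooks/pre-commit. It assumes the tenon command exits with a non-zero status when it finds errors – the convention lint-style tools follow – and the URL is a placeholder for your own development server:

#!/usr/bin/env node
// .git/hooks/pre-commit (must be executable)
// Sketch only: assumes `tenon` exits non-zero when it finds errors.
var exec = require('child_process').exec;

exec('tenon http://localhost:8000/', function (err, stdout, stderr) {
  process.stdout.write(stdout);
  if (err) {
    console.error('Commit aborted: Tenon reported accessibility errors.');
    process.exit(1); // a non-zero exit status cancels the commit
  }
});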

A typical workflow with Tenon might look a bit more relaxed for, say, a front-end developer working on a CMS and using Grunt to compile SASS to CSS and minify JS. Because Tenon is a Node.js module, we will be introducing a Grunt plugin. Once grunt-tenon is introduced into your Gruntfile.js, you can add grunt-contrib-watch to watch your work. Every time you save, Grunt will perform your normal tasks and test the page you’re working on for accessibility.
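
Since grunt-tenon hasn’t shipped yet, the following is only a sketch of how I imagine such a configuration looking; the task and option names are illustrative, not final:

// In Gruntfile.js -- hypothetical grunt-tenon configuration
grunt.initConfig({
  tenon: {
    dev: {
      options: { timeout: 3000 },       // mirrors the options in the run below
      urls: ['http://www.example.com']  // pages to test on every save
    }
  },
  watch: {
    pages: {
      files: ['sass/**/*.scss', '**/*.html'],
      tasks: ['sass', 'tenon']          // recompile, then re-test for accessibility
    }
  }
});

On each save, the console shows something like this: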

Processing: http://www.example.com
Options: {"timeout":3000,"settings":{},"cookies":[],"headers":{},"useColors":true,"info":true}
Injecting scripts:

>>  /var/folders/mm/zd8plqb15m38j4dzf3yf9pjw0000gn/T/1394733486320.647
>>  client/assets.js
>>  client/utils.js

-----------------
RESULTS SNIPPED FOR BLOG POST
-----------------
Errors: 10
Issues: 10
Warnings: 0
Total run time: 3.27 sec

The same Gruntfile can also be run on your Jenkins, Travis CI, or Bamboo build server. Let’s say we’re using Jira for bug tracking and have it connected to our Bamboo build server. A developer on our team makes an accessibility mistake and commits that mistake with a Jira key, ISSUE-1234, into our repo. As part of the Bamboo build, Tenon will return the test results in JUnit format. The Bamboo build will fail, and we can see in Jira that the commit against ISSUE-1234 was the cause of the red build. It will link directly to the source code in which the error originated. Because we’re using a CI build system, from our developers’ standpoint all of this can happen many times a day without requiring anything more than a simple commit!

Proper management of accessibility necessitates getting ahead of accessibility problems as soon as possible, and there is no earlier point than before the code is committed. As a pre-commit hook or, at the very least, as a Grunt task run before committing, accessibility problems are caught before they ever enter the codebase. Automated testing is not the end, but the beginning, of a robust accessibility testing methodology.

The next post is the last post in this series, where we’ll put it all together.

My company, AFixt, exists to do one thing: fix accessibility issues in websites, apps, and software. If you need help, get in touch with me now!