I’m really terrible about responding to email. I get (and send) a lot of it, and much of it just sits for embarrassingly long stretches. After CSUN I got a great email from Vincent François with some questions about things I said during my CSUN 2016 presentation, “Extreme Accessibility.”

Vincent had a couple of good follow-up questions about the material I presented, and I wanted to share my answers:


When bugs are found, tests are written. Programmers have a failed test to focus their efforts and know when the problem is fixed

Vincent included a picture of the above slide and said:

Are you saying that we should create a new test each time we find a bug? In order to later focus on the part of the product in which we encountered and created the bug?

My answer:

One of the things about bugs is that once you’ve verified one, you actually have two important pieces of information: why it is a problem and what should happen instead. With this information in hand, you can write a test. The test should be an assertion that the code does what it is supposed to do.

Taking accessibility out of the picture for a minute, imagine we are on Amazon. We have found a bug: when we click the “Add to wish list” button, it actually adds the product to the cart. The first step toward fixing the bug is writing a test that verifies the following:

  1. Given that I am on a product page
  2. When I click the “Add to wish list” button
  3. Then the number of items in the cart is not increased
  4. And the number of items in the wish list increases by 1
  5. And the specific product is found in the wish list

These are the criteria we will use to verify that the bug is fixed. Now we modify our code and test it against them. We don’t ship the bug fix until the tests pass.
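To make that concrete, here’s a minimal sketch of what such a test could look like in TypeScript, using a browser-automation tool like Playwright. The URL, selectors, and element IDs are all hypothetical stand-ins for whatever the real page uses:

```typescript
import { test, expect, Page } from '@playwright/test';

// Hypothetical product page URL; substitute your real one.
const PRODUCT_URL = 'https://example.com/product/123';

// Reads the number displayed in a cart or wish-list badge.
// The selectors passed in are illustrative, not real Amazon markup.
async function badgeCount(page: Page, selector: string): Promise<number> {
  return parseInt((await page.textContent(selector)) ?? '0', 10);
}

test('"Add to wish list" adds to the wish list, not the cart', async ({ page }) => {
  // Given that I am on a product page
  await page.goto(PRODUCT_URL);
  const cartBefore = await badgeCount(page, '#cart-count');
  const wishListBefore = await badgeCount(page, '#wishlist-count');

  // When I click the "Add to wish list" button
  await page.click('#add-to-wish-list');

  // Then the number of items in the cart is not increased
  expect(await badgeCount(page, '#cart-count')).toBe(cartBefore);

  // And the number of items in the wish list increases by 1
  expect(await badgeCount(page, '#wishlist-count')).toBe(wishListBefore + 1);

  // And the specific product is found in the wish list
  await page.goto('https://example.com/wishlist');
  await expect(page.locator('a[href*="/product/123"]')).toBeVisible();
});
```

Each assertion maps to one line of the criteria above, so when the test fails it points straight at which expectation was violated.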

The automated test(s) also ensure that we don’t end up “unfixing” the bug later down the road.

The latter point above is super important. One of the biggest pieces of tech debt I run into is cases where an untested fix gets undone somehow, or worse, has another side effect elsewhere. This is why, if someone reports a buggy test in Tenon, I ask for the URL of the page they’re testing and use that URL to verify whether I’ve fixed the test bug.

Vincent’s next question was:

With a11y-specific tests, is there a risk of separating a11y from overall quality and, in some cases, of choosing to postpone them?

This is an important consideration. One of the things I really harp on during Tenon sales demos is the ability to plug Tenon into your existing toolset. This is important not just for convenience or efficiency but also because (I hope) it keeps accessibility from being seen as a separate thing.
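To illustrate, here’s a hypothetical sketch of what that integration could look like: an ordinary Jest test that asks Tenon’s HTTP API to scan a page. The endpoint and the `key`/`url` parameters reflect Tenon’s public API as I understand it, and the response shape here is an assumption; check the current API docs before relying on any of it:

```typescript
import { test, expect } from '@jest/globals';

// Tenon's public API endpoint (assumed; verify against the docs).
const TENON_API = 'https://tenon.io/api/';

// Asks Tenon to test a page and returns the number of issues found.
// The `resultSet` response field is an assumption about the payload.
async function tenonIssueCount(pageUrl: string): Promise<number> {
  const response = await fetch(TENON_API, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      key: process.env.TENON_API_KEY ?? '',
      url: pageUrl,
    }),
  });
  const report = await response.json();
  return (report.resultSet ?? []).length;
}

test('home page has no accessibility issues', async () => {
  expect(await tenonIssueCount('https://example.com/')).toBe(0);
});
```

Because it’s just another test in the suite, it runs in the same CI pipeline as everything else, which is exactly the point.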

Here’s my answer:

The a11y-specific tests I advocate for are only those tests which are directly aimed at verifying an accessible experience. I think in the presentation I used the example of a modal dialog. In a normal development scenario you might have a case where the developer writes a unit test around whether the dialog opens and closes. But there are accessibility implications with modal dialogs, including things like keyboard accessibility and focus management, and these require their own tests. Thankfully, these patterns have been provided for us by the fine folks at the W3C. We can take those patterns and turn them into test cases, and those test cases can be turned into automated tests as well. The best part is that we now have accessibility testing baked into our normal process, not really separate from it.
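As an example, here’s a minimal sketch of what two of those dialog tests might look like, again in TypeScript with Playwright. The expectations come straight from the W3C’s dialog pattern (focus moves into the dialog when it opens; Escape closes it and returns focus to the trigger); the URL and selectors are hypothetical:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical demo page with a button that opens a modal dialog.
const PAGE_URL = 'https://example.com/dialog-demo';

test('opening the dialog moves focus inside it', async ({ page }) => {
  await page.goto(PAGE_URL);
  await page.click('#open-dialog');

  // Per the W3C dialog pattern, focus must land on an element
  // inside the dialog when it opens.
  const focusInDialog = await page.evaluate(() => {
    const dialog = document.querySelector('[role="dialog"]');
    return dialog !== null && dialog.contains(document.activeElement);
  });
  expect(focusInDialog).toBe(true);
});

test('Escape closes the dialog and restores focus to the trigger', async ({ page }) => {
  await page.goto(PAGE_URL);
  await page.click('#open-dialog');

  // Keyboard users must be able to dismiss the dialog...
  await page.keyboard.press('Escape');
  await expect(page.locator('[role="dialog"]')).toBeHidden();

  // ...and focus must return to the element that opened it.
  await expect(page.locator('#open-dialog')).toBeFocused();
});
```

Notice that these sit right alongside the open/close unit test the developer was going to write anyway; nothing about them lives in a separate “accessibility” process.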

My company, AFixt, offers full accessibility services: testing, training, consulting, and remediation. If you need help, get in touch with me now!