Earlier this week, I began working on a new series of blog posts titled ‘Selling Accessibility’. Those who know me, have read my previous posts, or have seen me speak know I’m very much in favor of taking a pragmatic approach to accessibility. By that I mean we must understand that there’s a distinct and important difference between our desire for a perfectly accessible world and the realities of budget, time, and resource constraints that keep us from reaching that ideal. Part of the pragmatic approach, as I see it, is to never lose sight of the ideal. I don’t ever want to suggest that pragmatism means giving people a “free pass” for their systems to be inaccessible. It’s just that we should weigh our desire for universally usable technology against the many other needs that may exist.

Somewhat related to the need for a more pragmatic approach is the need to communicate that pragmatism as well. In advance of creating my ‘Selling Accessibility’ presentation, I interviewed a number of people to get their perspectives. One comment that stuck with me came from John Foliot, who said that when approaching people outside of accessibility, we should take the approach of a fireman rather than a policeman. What he means is that firemen actually spend very little of their time fighting fires; most of their work goes into preventing them. The police, on the other hand, spend their days investigating crimes and keeping a watchful eye out for crimes in progress. The mere presence of a police officer can stress people out. Think about the last time you were driving down the road and saw a cop on the shoulder. The immediate, almost universal, reaction is to jam on the brakes (or at least check the speedometer), regardless of how fast you’re actually traveling. That is not who we want to be. The last thing we want is to be seen as the policeman on the side of the road waiting for people to slip up. That is not the way to gain the cooperation of others, and it is not the way to gain further adoption of accessibility, as a concept, throughout the rest of the organization. In fact, the more you look like a roadblock, the more people will want to avoid you.

From my perspective, it seems that much of the accessibility community doesn’t quite get this, and I feel strongly that chasing perfection (and being vocal about it) can do more harm than good. We still live in an environment where a lot of people are resistant to even the idea of accessibility. I’ve seen it so many times in the past decade. Sometimes the resistance is political. Sometimes it is misguided. Sometimes accessibility seems too hard or too expensive. Sometimes it seems to conflict with other needs. But at the root of it all is ignorance. We won’t overcome that ignorance if the way we communicate merely reinforces the roadblocks that others keep putting up. If we’re seen as running around in hysterics over minor issues, we’re only going to keep getting ignored.

It is time for some perspective

Yesterday, someone tweeted a message essentially calling out the US Paralympics site for being inaccessible and posted a link to a WAVE report showing 11 errors. Two months ago, this critique of the Disability.gov website was making the rounds on Twitter. In both cases, my reaction has been the same: “Meh”.

In the case of the US Paralympics site, I ran my own independent (automated) test and found 44 errors, though my test settings are probably too strict; eyeballing the results shows more than a dozen findings I’d probably eliminate from a final report. There are probably several more errors, undiscovered by either tool, relating to color contrast and JavaScript-driven interaction, and there’s only so much that automated testing can do. But one thing is almost certain: there is a strong correlation between what an automated testing tool can find and what you’ll find when doing a manual test. In other words, if a site does poorly in automated testing, it is also going to do poorly in manual testing.

The Common Page

Recently I’ve been doing some automated testing of random URLs to generate a profile of the “average” web page. Here’s what I’ve found:

  • Total Pages: 46,033
  • Total Errors Logged: 7,844,810
  • Average Issues Per Page: 170
  • Average Error Density: 38%
  • Lowest Error Density: 0%
  • Highest Error Density: 18,400%

Note: Density is a percentage representing the number of errors per line of source. It provides a good reference point for how one page compares against the next. A page with 100 errors in 100 lines of code is much worse (10x, in fact) than a page with 100 errors in 1,000 lines of code. And yes, the highest error density of 18,400% is legitimate; I found a page that really is that bad.
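To put a formula on it, the calculation is simply:

Error Density = (Total Errors ÷ Total Lines of Source) × 100

So a page with 38 errors in 100 lines of source has a density of 38%, while the same 38 errors spread across 1,000 lines would come out to only 3.8%.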

So what conclusion can we draw from the above? Specifically, we can say that the US Paralympics site, with its 44 errors (by my count), is doing much better than the 170-error average. Even more telling, the US Paralympics home page had an error density of 6%, a full 32 points below the 38% average.

Some of their errors are just plain stupid, like this gem:
<label for="search" style="display: none;">Search</label>
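What makes it such a head-scratcher is that display: none typically hides content from screen readers as well as from sighted users (at best, its exposure as a label is unreliable), so the very people who need that label most never get it. Sketched below, under the assumption that they want the label hidden visually, is the common off-screen/clipping approach; the .visually-hidden class name and the paired input are my own illustration, and simply leaving the label visible works too:

<style>
  /* Hides the text visually but keeps it available to assistive technology */
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    margin: -1px;
    padding: 0;
    overflow: hidden;
    clip: rect(0, 0, 0, 0);
    border: 0;
  }
</style>

<label for="search" class="visually-hidden">Search</label>
<input type="text" id="search" name="search">

Either way, the input keeps its accessible name; with display: none, it may not.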

They also have a lot of weird, deprecated presentational attributes mixed in with inline CSS, they open a lot of new windows, and they have missing alt attributes on images. Overall, though, very few of the issues found are likely to form real barriers to an end user’s successful use of the site. More importantly, compared to the “average” page, these developers deserve a pat on the back more than most. Instead of trying to make an example of them as a bad case, we should hold them up as an example of people who’ve done a reasonably good job and have some room for easily achievable improvement.

In the case of Disability.gov, I cannot comment directly on the state of the site two months ago because it may have been different from what it is today. Today, however, I found 13 errors and an error density of 4.87%. Some of those errors were irrelevant, which would improve the density further. And some of the “Suggestions” provided by WebAxe are not relevant to accessibility or would, at best, be considered WCAG Level AAA items.

In any case where we submit a critique of a system, be it paid or unsolicited, we need to weigh our results appropriately. While it may be a common best practice to use good heading structure on a page, ask yourself: “Exactly who is being harmed by this page not having an H1 and only having H2s?” If you cannot point to a single specific use case where someone will have a less-than-equivalent experience, then it really isn’t an issue. If I were writing a report on this page, I’d probably mention the headings, but I’d also tell them that it is such a low priority, compared to some of the other issues, that they should fix it last.
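For illustration (the heading text below is made up, not taken from the actual page), the complaint amounts to markup like this:

<h2>Latest News</h2>
<h2>Find Resources</h2>

versus the textbook version, which simply adds a page-level heading first:

<h1>Disability.gov</h1>
<h2>Latest News</h2>
<h2>Find Resources</h2>

A screen reader user navigating by headings can still reach every section either way; the missing H1 arguably costs a little orientation at most, which is exactly why I’d rank it near the bottom of the list.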

As a community, I think it is important that we remember to consider the way we deliver our message. The single best trait, universal among the people I’ve encountered in this community, is our shared passion for making things better for others; it is the closest thing to true altruism I’ve ever experienced. We must remember, however, that we’re running a marathon, not a sprint. Going onto social media and making examples of people who are making an effort is, in my opinion, counter-productive and only bolsters outsiders’ resistance to cooperation. If posting unsolicited critiques is your mode of advocacy, I say go for it. At the same time, I would much rather see people concentrate on sites that are truly horrible.

My company, AFixt, exists to do one thing: fix accessibility issues in websites, apps, and software. If you need help, get in touch with me now!