Contents
- Introduction
- Why we test
- When to test
- In-team testing responsibilities
- The aim of testing
- How to test
- Wrap up
Introduction
An important part of making sure a site or component is accessible is testing. Just like any other aspect of coding, mistakes can be made and regressions introduced, but accessibility issues are often overlooked. Whilst some accessibility issues are obvious or can be caught by automated tools, many need manual testing and an understanding of the potential issues to surface them.
For these reasons it is important that accessibility is built into the overall testing strategy, so that it is not overlooked or insufficiently tested.
It is the responsibility of the whole team to test their own work. Whilst the test team has an important role, leaving accessibility to just one or two people on the team is a guaranteed way to end up with a flawed end product which will cost time and money to remediate.
Why we test
Just as with general functional testing, we need to verify that our users can complete tasks with the product we are building. Accessibility testing can be seen as a more intensive version of the testing you are probably already doing.
Typically a test process may look like something similar to this:
- user interviews
- prototypes
- user testing
- build with unit and automated end-to-end testing
- browser and device testing
- manual or exploratory testing
- further user testing
But at each of those points, unless we make a conscious effort, we will end up testing the product only with people in mind who, from an accessibility point of view, are just like the team which built the product.
We are not our user
There is a large group of users who do not interact with the online world in the same way as the people on our teams. Users who benefit from accessibility testing are not a minority; they may form a significant part of our user base, so we need to take this aspect of testing seriously.
It is all too easy to forget the diversity of our user base when designing and building, resulting in a product which bakes in our biases and unintentionally leaves these users out. Poor accessibility can mean a component or page is difficult to use or understand at best, and at worst is completely broken or can potentially even make the user ill.
By creating testing processes and strategies which ensure we consider and include all types of users we can start to approach a product built for all, instead of for a narrow section of the population.
The added complexity of assistive technology
With assistive technology in particular, there are a lot of variables at work. This means even coding to a published specification might introduce issues.
If we take screen-readers as an example, here are some of the factors which can influence the viability of a solution:
- different screen-readers (and versions within the same screen-reader) might support different aspects of the specification, or not support it at all
- different browsers might interact with the same screen-reader in different ways
- even with the same screen-reader on the same browser, there are different navigation methods a user may employ which can introduce variations in support
- regression issues from browsers or screen-readers themselves
- different screen-readers may add their own heuristics to make interactions more usable, which adds further variation in how code may be interpreted
Added to this complexity is the fact that accessibility is always a balancing act between the needs of different users. Enhancing accessibility for one group may adversely impact the accessibility of another, so testing helps expose this too.
This might seem like a lot to consider, but by putting some robust testing policies in place we can be confident the products we build are as accessible as possible.
When to test
Testing should be a continuous procedure and something which is handled by the team doing the design and build work. Accessibility testing needs to be considered as soon as an idea for a component or feature is raised.
Whenever possible testing should include real users with a variety of disabilities and conditions. However realistically most of the testing will be done by team members with occasional valuable insight from users.
Why not leave it to a pre-launch audit?
A “strategy” I have often seen is for a product team to only start looking at accessibility a few weeks before launch, normally because there is a requirement for some sort of sign-off of the product for accessibility legislation or company policy. This can mean the actual accessibility testing work ends up with the team doing the accessibility review rather than with the product team. This approach will always lead to either missed deadlines or products launching with multiple accessibility barriers, which will most likely then languish in the backlog.
This approach also means the product team will never upskill in accessibility as it will always be deferred to another team. The product team will likely make fixes in a rush to meet the launch deadline and so not truly understand what they are doing and why it matters. This is doing the team a disservice.
We should see accessibility issues in a similar way to technical debt: the issues increase in quantity the longer the project goes on. Even with the best intentions a team will generate accessibility issues in the code they produce. As time goes on these issues begin to overlap, perhaps becoming more complex, and are copied into new code, deepening the accessibility debt. What might have been a simple fix if found at the time of writing may result in a product-wide bug hunt when left.
The increased time it takes to retrospectively fix accessibility issues means that a fix can cost up to 30 times as much as catching the issue at the point it was created.
But these issues may not be limited to the code. By far the most difficult issues to fix are ones which are fundamental to how the wider page or journey is designed. Finding out close to product launch that whole pages, components or even whole journeys need to be redesigned can be devastating to a deadline and to team morale. Typically the product gets released with major accessibility flaws which then persist as new features take priority. This is often related to the MVP paradox.
A pre-launch review can only be used as part of a wider accessibility testing strategy. By itself it is unlikely to achieve the desired outcome as it sits too close to the launch date and is rarely able to prevent a launch if it finds issues.
Design system and coding libraries
Ideally a project will be using a pre-existing design system which has already been robustly tested. A design system should document what accessibility testing has been undertaken, how, and which assistive technology was used (including version numbers of that technology and the browsers). Without this information it is unwise to assume the testing has been done and is being kept up-to-date, so we need to include additional testing in our build.
If evidence of testing is present, then it is the responsibility of each team both to keep their project up-to-date with the latest releases and to have a maintenance plan for this going forward. If any issues with library or pattern library components are surfaced during testing, these should be raised with the appropriate team to assist them in keeping the libraries as accessible as possible.

Discovery stage
Testing of assumptions and hypotheses is normally done with users in the discovery phase, but this too should involve a diverse range of users with disabilities and different conditions. This can be especially valuable if there is an existing system being replaced, or if similar services already exist, which these users can give insight into.
Design and prototype stage
A prototype, whether that is in something like Figma or actually working in HTML, is the best place to get detailed feedback on potential accessibility pitfalls in a design.
Any kind of high fidelity prototype will allow you to spot issues with content, layout, colours, and page and journey flow. Components which have different states (such as focus styles on buttons) can also be included and checked.
Catching issues in components, page layout or journeys at this point will greatly reduce cost and time spent versus ironing these out after they have made it into code.
If using flat prototypes, go the extra distance and annotate the mockup with the different states, how focus moves through the page, and the intended semantic markup. This will save time later as developers will not have to raise these questions when they come to build it.
When building an HTML prototype, you can use browser tools (such as axe DevTools, WAVE or Lighthouse) to help you check for simple accessibility issues.
An HTML prototype can also be easily tested with screen-magnification users. A well-coded HTML prototype will additionally show you how a design scales and even how assistive technology works with it.
However, you want to be sure the prototype has been well tested before putting it in front of assistive technology users. This is because a lot of minor accessibility issues can prevent the user from providing true insights into more complex problems. Some testing and remediation in advance of the user testing can remove these obstacles, making the session less frustrating for the user and more valuable to the team. You need to know how this will be coded for production, so the effort at this point is worth it.
Don't leave testing with disabled users until the production version launches. At that point it becomes more difficult to argue for accessibility changes, and it might be that the changes needed to make it accessible require wider rework than stakeholders will be comfortable with.
Build
The build stage is where the most intensive testing work will take place in terms of code and getting it production-ready. Team and user testing should continue throughout the build phase.
Even if using a component library of previously tested code the output should be tested. This is because support landscapes shift as browsers and assistive technology release new versions, but also the context and parameters for how those components are used are often unique to the current build. As mentioned, component libraries also depend on a community effort in feeding back improvements and both in-team testing and user-testing can greatly contribute to this.
During build there are several other testing mechanisms which can be added to help support writing accessible code.
- using linting in your IDE to catch issues as you code
- adding accessibility checks into unit/component tests (a sample check is sketched after this list)
- adding automated library checks (such as axe) to build pipelines
- adding specific scripted checks into end-to-end testing
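As an illustration of the unit/component item above, here is a minimal sketch of an automated accessibility check inside a component test. It assumes a React project using Testing Library and jest-axe; SignUpForm is a hypothetical component standing in for whatever you are building.

```tsx
// a11y.test.tsx — a sketch only: assumes React, @testing-library/react and jest-axe
import React from 'react';
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import { SignUpForm } from './SignUpForm'; // hypothetical component under test

expect.extend(toHaveNoViolations);

test('SignUpForm has no detectable accessibility violations', async () => {
  const { container } = render(<SignUpForm />);
  // axe scans the rendered DOM and reports rule failures (e.g. missing labels)
  const results = await axe(container);
  expect(results).toHaveNoViolations();
});
```

Remember that a passing check like this only covers what automated rules can detect; manual and assistive technology testing is still needed.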
See more on automated testing.
Maintenance
After launch, testing should not stop. Sites rarely stay in their launch configuration as more features, improvements and bug-fixes are added.
Whenever the interface code is changed the accessibility testing should be re-run, both for the new code but also taking care to include any potential side-effects elsewhere in the site.
Even in periods of stagnation it is advisable to factor in regular accessibility sweeps due to the release timelines of browsers and assistive technology (especially screen-readers).
In-team testing responsibilities
As we have already explored, testing should not be the sole responsibility of one or two people in the team.
Let's examine the responsibilities for ensuring a new feature is accessible:
- discovery phase: user researcher, designer, content design
- prototype phase: designer, content design, developer
- build phase: designer, content design, developer, QA
- maintenance phase: QA, developer, user researcher
You can see design, content and developer roles are heavily involved. This is because good accessibility is often an ongoing conversation between team members, as these roles shape what the end user directly interacts with. When one group makes all the decisions it can lead to an inferior product.
Even the project manager and business analyst need to be involved to ensure accessibility is treated as a priority and the team is given enough time to produce a quality and effective result.
See more about team processes and responsibilities
The aim of testing
The aim of testing is to not add issues to the backlog.
Whilst user testing will invariably result in tickets being generated, this should ideally be limited to insights from lived experience which the team could not be expected to have themselves.
Testing is there for three reasons:
- to help designers provide the development team with an accessible solution
- to assist the development team in creating accessible code
- to provide regression assurance
With user testing and interviews in advance of code being written, the aim is to have a design which is generally accessible before any of it is coded.
Complex components which have the potential for accessibility issues should be tested in isolation using HTML prototypes before being signed off for development.
Code should be robustly tested as it is developed, so that inaccessible code is not merged and certainly not released.
The combination of
- an accessible design solution which has been tested with users
- test-driven development (using tests which enforce accessibility best practices)
- code linting (a sample configuration is sketched after this list)
- developers running checks (including manual assistive technology tests) before committing code
- pull request reviews which look at accessibility
should result in fairly robust code and minimal need for rework.
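As one possible starting point for the linting item above, here is a minimal sketch of an accessibility-focused lint configuration. It assumes a React codebase and the eslint-plugin-jsx-a11y package; other frameworks have equivalent plugins.

```js
// .eslintrc.js — a sketch only: assumes a React codebase using eslint-plugin-jsx-a11y
module.exports = {
  plugins: ['jsx-a11y'],
  extends: ['plugin:jsx-a11y/recommended'],
  rules: {
    // individual rules can be tightened beyond the recommended set, for example:
    'jsx-a11y/alt-text': 'error',
    'jsx-a11y/no-autofocus': 'error',
  },
};
```

Running this in the IDE and in the pipeline catches many template-level mistakes (missing alt text, unlabelled controls) before they ever reach a reviewer.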
Testing before merging code reduces the number of resulting bugs which then need to be prioritised and fixed. By fixing issues before they are merged we avoid accessibility being de-prioritised on the backlog.
Finally pipeline testing provides assurance that we haven't missed any side effects.
How to test
Testing for accessibility can be broken down into three main areas:
- automated tests
- non-assistive technology tests
- assistive technology tests
Automated testing
Automated testing is done most often in the browser on either a local branch or on the live site, but can also be done as part of a build process.
Note that whilst automated testing is easy to do, it will only find 30-40% of accessibility issues out-of-the-box. Even custom test scripts written specifically to target accessibility cannot account for everything, so manual testing is still essential.
When done in the browser, automated testing will use one of the many available browser plugins. This will provide you with a list of potential issues on the current page only, and only in the state the page was in when the test was run. Different plugins run different tests, so it can be useful to run a couple of different ones from time to time, but find a plugin which you like working with and use that.
The advantage of doing it locally is that you have a clearer view of what is happening on the page, and that you can test code before committing it.
When done as part of a build process it will normally use an API from one of the browser plugins. The advantage of this is that it can piggyback on your end-to-end journey tests and check each page in turn, providing a report at the end, as sketched below.
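Here is a minimal sketch of that approach, assuming Playwright with the @axe-core/playwright package; the journey paths are hypothetical placeholders and a baseURL is assumed to be set in the Playwright config.

```ts
// journey.a11y.spec.ts — a sketch only: assumes Playwright and @axe-core/playwright
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('key journey pages have no detectable accessibility violations', async ({ page }) => {
  // hypothetical journey steps; a real test would drive the journey via the UI
  for (const path of ['/basket', '/delivery', '/payment']) {
    await page.goto(path);
    const results = await new AxeBuilder({ page }).analyze();
    // fail the pipeline if axe reports any violations on this page
    expect(results.violations).toEqual([]);
  }
});
```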
See more about automated testing
Non-assistive technology testing
We can do a lot of accessibility testing even without learning how to use a screen-reader or other assistive technology.
You can check for a lot of accessibility problems by just using your standard keyboard. If you also learn how to use your browser developer tools and a couple of (non-automated) browser plugins, you can then cover a lot more potential issues too.
See more about keyboard testing.
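A manual pass with a real keyboard is the core of this testing, but parts of it can also be scripted as part of the end-to-end checks mentioned earlier. The following is a rough sketch only, assuming Playwright and a baseURL in the Playwright config; it is a heuristic, not a substitute for judging focus visibility and focus order by hand.

```ts
// keyboard.spec.ts — a rough sketch only: assumes Playwright
import { test, expect } from '@playwright/test';

test('tabbing moves focus to interactive elements', async ({ page }) => {
  await page.goto('/');

  // Press Tab a bounded number of times and check what receives focus each time.
  // This is a heuristic: custom widgets using tabindex and elements such as <summary>
  // are also legitimately focusable, so adjust the allowed list for your own pages.
  for (let i = 0; i < 25; i++) {
    await page.keyboard.press('Tab');
    const tagName = await page.evaluate(
      () => document.activeElement?.tagName.toLowerCase() ?? 'body'
    );
    expect(['a', 'button', 'input', 'select', 'textarea', 'body']).toContain(tagName);
  }
});
```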
Assistive technology testing
This is the part where a lot of people stop because they think using a screen-reader is really difficult and it costs money to buy any assistive technology.
But screen-readers are genuinely simple to use, as you only need a few commands to get started. Speech-recognition is even easier, and again only a handful of verbal commands will get you by.
The only tricky part might be understanding why there is an issue when you come across one and how to fix it.
Each OS also has a screen-reader and speech-recognition tool which ships with it or can be installed for free, so chances are you already have the tools to get started.
Learn how to use screen-readers.
What a test strategy should include
A robust testing strategy should include the following:
- automated testing
- keyboard testing on mobile and desktop
- visual checks - including reflow, zoom, font-size, high contrast, reduced motion
- screen-readers
- speech-recognition
- content
You could expand this to include:
- text-to-speech (which reads the visible text only)
- user style overrides (how easy or difficult it is for a user to customise the look of your product to their needs)
- performance (how the product works with lower-powered devices on slower networks as disabled users are less likely to have disposable income for top-of-the-line or dedicated internet devices)
Accessibility audits
Accessibility testing should be done all through the development of a site, following the steps outlined above.
But it might be that you will need to perform a wider accessibility review. This could be because you have inherited a project and want to gauge how accessible it is before you start working with it, or you might be doing a full review of a site you have just built as an assurance step. This kind of in-depth review is often referred to as an accessibility audit. These are often carried out by specialists but there is no reason for a development team to not do this themselves.
Deciding what to review
How an audit is run depends on the type of site:
- a large freely-explorable website might benefit from a sampling technique, perhaps concentrating on high-traffic areas
- if components can be identified, these can be targeted along with some more general full-page sample testing
- a transactional site or one which has well-defined user journeys can have the journeys mapped out and each step tested as full pages
It is important that whatever technique is used is recorded so it can be repeated. This is essential both for the development team to be able to replicate the issue and for the tester to be able to verify any fixes (which may be some time after the original report).
Whilst the product itself may be the focus of the review, it is worthwhile looking at the wider landscape too. Ask the following questions:
- how do users find the product - if social media is used are the posts accessible?
- what are their entry points - are people signposted from other parts of the company digital estate?
- how do they get in touch if they have a question or need help - are there options for all users?
- does the product generate artifacts such as emails or letters - are these accessible?
Testing for compliance vs testing for user impact
It can be tempting to take a list of the WCAG criteria and review each page or component for failures against each item. However WCAG has well-known areas where it does not meet the needs of all users, especially in areas such as cognitive function. By only reviewing a site against WCAG issues, many impactful problems may be left unreported. Remember WCAG guidelines are just a way to ensure you have done the bare minimum to make a site accessible - we should be going beyond this.
Whilst a WCAG checklist can be a useful tool, it should not be the only method of assessment used as it can be easy to miss issues.
Here are some sample checklists - note that none of these have been updated to the current version of WCAG.
How to report an issue
First, make sure that you have actually found an issue. Especially if it concerns assistive technology (like a screen-reader), make sure you haven't made an error in testing (like using the wrong key combo or being in a different navigation mode).
Then check to see if there is a bug which is causing the issue. It might be that making a reduced example of the component causing an issue might help work out if it is an issue with the browser or assistive technology rather than the component. If it turns out to be a bug with the browser or assistive technology, look to add to or create a bug report on the relevant issue tracker so it can be fixed.
It is important that as much information as possible about the issue is recorded, so that someone with no prior context of the location or issue can understand and replicate the finding.
If you have determined it is a problem with the component then you will need to raise an issue for the team to look at it. This is my accessibility audit issue reporting template which helps produce a detailed issue.
Severity vs priority
Severity and priority might not always coincide. The severity of an issue is how much of a barrier it causes for the user.
However how this is then prioritised depends on what this barrier is preventing - for example is it a core journey step or a minor feature?
It can also be very tempting to find an issue and then try to make one of the WCAG Success Criteria fit in some convoluted way to ensure it gets fixed. This is often a result of stakeholders refusing to fix anything not identified as a WCAG failure. Don't do this.
You should always be able to defend your assignment of an issue against a WCAG criterion, citing the guidance when asked to. Always assume the person you are reporting issues to has a good grasp of the WCAG guidelines and will call you out on any misrepresentation. To do otherwise risks all of your issue reporting being brought into question.
If you find an issue which is a borderline WCAG issue, but you can’t make it stick, then report it as a high-priority ‘other’ issue and flag your concerns in the issue itself. A good prioritisation process will ensure issues are reviewed in accordance with user impact.
Recurrence is also a factor in prioritising. A lower-severity issue which happens on every single page can form more of a barrier than a single more severe issue which only happens in one place. This is because a repeated barrier is exhausting to bypass on every page, and the resulting compound effect can be higher than that of the single issue.
Read more about prioritising accessibility.
Wrap up
Testing for accessibility is a complex thing. It covers a lot of things which cannot be automated, which can itself be a barrier to implementing a robust testing strategy. But with a series of testing processes in place across all team roles, issues can be addressed long before they become problems for users.