Accessibility Testing Overview
Testing – Verifying Accessibility
Accessibility cannot be left to chance. It must be specifically tested – with suitable methods and tools. This chapter shows how accessibility is tested in practice and why the interplay of automated and manual procedures is crucial.
Overview
A comprehensive accessibility test considers different types of testing:
- Automated tests quickly reveal many common problems (e.g., missing alternative texts or contrast errors). They are efficient but do not replace a complete assessment.
- Manual tests are essential to ensure actual usability and conformity with WCAG. This includes keyboard tests, screen reader testing, and semantic analysis.
- User-based tests – ideally by people with disabilities – provide valuable additional insights into actual barriers in real usage situations.
- Test documentation is required to systematically record results and – especially with regard to legal requirements such as the BFSG – to be able to provide evidence.
Chapter Structure
The following subpages go into detail on the individual types of testing and aspects of testing practice:
Automated Testing
Automated tests are fast and scalable. They are particularly well-suited for uncovering fundamental errors and making barriers visible early in the development process.
Contents of this page:
- Advantages and limitations of automated tools
- Overview of common tools such as:
  - WAVE
  - Axe DevTools
  - Lighthouse
  - Microsoft Accessibility Insights
  - BAAT (BITV test tool)
- Types of errors that can be detected automatically (e.g., missing alt texts, contrast problems, missing labels)
- Role of automated tests in the overall process
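To make the idea concrete, here is a minimal sketch of the kind of check tools like WAVE or Axe DevTools run instantly: scanning markup for `img` elements without an `alt` attribute. It uses only Python's standard-library HTML parser; the class name and sample markup are illustrative:

```python
# Sketch: flag <img> tags that lack an alt attribute entirely.
# Note: alt="" counts as present - it validly marks decorative images.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing: list[int] = []  # line numbers of offending <img> tags

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            # getpos() reports (line, column) of the current tag
            self.missing.append(self.getpos()[0])

sample = """<main>
<img src="logo.png" alt="Company logo">
<img src="decor.png">
</main>"""

checker = MissingAltChecker()
checker.feed(sample)
print(checker.missing)  # → [3]
```

A check like this tells you *that* an alt attribute is missing, but not whether an existing alternative text is actually meaningful; judging the text itself remains a manual task, which is the boundary between the two testing types described in this chapter.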
Manual Testing
Many WCAG success criteria can only be assessed through human testing. This applies in particular to areas such as operability, semantic correctness, and screen reader compatibility.
Contents of this page:
- Conducting manual tests:
  - Keyboard operability
  - Focus guidance and logical sequence
  - Screen reader testing (e.g., with NVDA, VoiceOver)
  - Testing semantic structure (heading hierarchy, landmarks, roles)
- WCAG criteria that can only be tested manually
- Importance of user feedback from people with disabilities
- Use in audits and quality assurance
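Checking semantic structure usually starts from the page's heading outline. A small helper can extract that outline and flag skipped levels (e.g., an `h2` followed directly by an `h4`) so a reviewer can then judge the hierarchy by eye; this sketch uses the standard-library parser, and its class name is illustrative:

```python
# Sketch: collect heading levels and flag skips in the hierarchy,
# as a starting point for a manual review of the document outline.
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    def __init__(self):
        super().__init__()
        self.levels: list[int] = []               # outline, in document order
        self.skips: list[tuple[int, int]] = []    # (previous, skipped-to) pairs

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            level = int(tag[1])
            if self.levels and level > self.levels[-1] + 1:
                self.skips.append((self.levels[-1], level))
            self.levels.append(level)

outline = HeadingOutline()
outline.feed("<h1>Title</h1><h2>Intro</h2><h4>Details</h4>")
print(outline.skips)  # → [(2, 4)]
```

Whether a flagged skip is actually a problem, and whether the heading texts describe their sections well, is precisely the judgment only a human tester can make.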
Documentation & Reports
Systematic documentation of test results is crucial to make progress traceable, support internal quality assurance, and – if necessary – comply with legal proof requirements.
Contents of this page:
- Why test documentation is important
- Contents of complete documentation:
  - Date, context, and scope of testing
  - Tools and methods used
  - WCAG criteria tested
  - Results and recommended actions
- Structured recording: tables, tools, templates
- Options for report creation for audits, clients, or authorities
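One way to record the fields listed above in a structured, exportable form is a simple record type serialized to CSV. The field names and sample values below are purely illustrative, not a prescribed reporting format:

```python
# Sketch: a structured test record covering the documentation fields
# above, exported as CSV for reports to clients or auditors.
import csv
import io
from dataclasses import asdict, dataclass, fields

@dataclass
class TestRecord:
    date: str
    scope: str            # pages or components tested
    method: str           # tool or manual procedure used
    wcag_criterion: str   # e.g. "1.1.1 Non-text Content"
    result: str           # "pass" / "fail"
    recommendation: str

records = [
    TestRecord("2024-05-14", "checkout form", "keyboard test",
               "2.1.1 Keyboard", "fail", "make date picker focusable"),
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=[f.name for f in fields(TestRecord)])
writer.writeheader()
writer.writerows(asdict(r) for r in records)
print(buf.getvalue())
```

Keeping the record structure identical across test runs is what makes progress traceable over time and allows the same data to feed internal quality assurance and external evidence requirements alike.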
Why is Manual Testing Essential?
Automated tests are an important first step but are not sufficient to ensure WCAG conformity. Many success criteria – such as keyboard operability, the comprehensibility of forms, or semantic structure – can only be reliably assessed through manual human testing.
Manual tests are essential, especially for conformance level AA (and beyond). Additionally, for extensive or critical applications, it is also recommended to involve users with disabilities to identify real barriers early on.
The aim of this chapter is to provide a practical understanding of the different testing methods – and to show how their results can be systematically documented.