Role Definition
| Field | Value |
|---|---|
| Job Title | Accessibility Tester |
| Seniority Level | Mid-Level (3-6 years experience) |
| Primary Function | Tests websites and applications for WCAG 2.1/2.2 and ADA compliance. Conducts manual testing with screen readers (JAWS, NVDA, VoiceOver), keyboard-only navigation, and colour contrast tools. Runs automated scanners (axe, WAVE, Lighthouse). Documents findings per WCAG success criteria, writes remediation guidance, and collaborates with developers to resolve issues. |
| What This Role Is NOT | NOT a QA Manual Tester (general functional testing). NOT a Web Developer (building accessible code). NOT a UX Designer (designing accessible interfaces). NOT a Compliance Officer (managing legal/policy frameworks). NOT a QA Automation Engineer (building general test frameworks). |
| Typical Experience | 3-6 years. May hold IAAP CPACC or WAS certification. Background in front-end development, UX, or QA testing with accessibility specialisation. |
Seniority note: Junior accessibility testers (0-2 years) would score in low Yellow or Red; they run automated scans and follow checklists without interpreting edge cases. Senior/Lead accessibility engineers (7+ years) who define organisational accessibility strategy and architecture would score in high Yellow or Green.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital. All testing done via browser, screen readers, and assistive technology software. No physical presence required. |
| Deep Interpersonal Connection | 1 | Collaborates with developers on remediation, coordinates with assistive tech users for validation testing. Relationships are functional rather than therapeutic, but empathy for disabled users informs testing judgment. |
| Goal-Setting & Moral Judgment | 2 | Interprets WCAG success criteria against real-world usage. Makes judgment calls on "sufficient" vs "best practice" accessibility. Prioritises issues by user impact, not just technical severity. Decides when automated scan results are false positives. Requires understanding of diverse disability experiences to evaluate subjective criteria (e.g., SC 1.3.1 Info and Relationships, SC 2.4.6 Headings and Labels). |
| Protective Total | 3/9 | |
| AI Growth Correlation | 0 | AI adoption drives more web content and interfaces that need accessibility testing. But AI also produces the scanning tools that automate portions of that testing. Net neutral — demand grows proportionally with automation capability. |
Quick screen result: Protective 3 AND Correlation neutral → Likely Yellow Zone. Proceed to confirm.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Manual keyboard-only and screen reader testing (JAWS, NVDA, VoiceOver) | 25% | 2 | 0.50 | AUGMENTATION | Q1: No. AI cannot replicate the full experience of navigating complex web apps with screen readers. Focus management, dynamic content announcements, ARIA live region behaviour, and modal trap detection require a human operating real assistive technology in real browsers. AI assists with test case generation but human drives execution and interpretation. |
| Remediation guidance and developer collaboration | 15% | 2 | 0.30 | AUGMENTATION | Q1: No. Explaining accessibility issues to developers requires contextual judgment — understanding their tech stack, suggesting practical fixes within their framework constraints, negotiating trade-offs. AI generates generic remediation text but the tester tailors guidance to the team's specific codebase and skill level. |
| Run automated accessibility scans (axe, WAVE, Lighthouse) | 15% | 4 | 0.60 | DISPLACEMENT | Q1: Yes. Automated scanners run independently in CI/CD pipelines. AI-enhanced tools now auto-triage results, suppress known false positives, and prioritise findings. Human reviews output but AI drives execution. Scanners still catch only ~30-40% of WCAG issues (GDS, Deque research). |
| WCAG audit reporting and issue documentation | 15% | 4 | 0.60 | DISPLACEMENT | Q1: Yes for structured reporting. AI generates VPAT/ACR documents, writes issue descriptions per WCAG success criteria, and formats audit reports. Accessibility Tracker (2026) automates progress reports and portfolio insights. Human validates accuracy and adds contextual narrative. |
| Colour contrast and visual compliance checks | 10% | 5 | 0.50 | DISPLACEMENT | Q1: Yes. Colour contrast checking is algorithmic (WCAG AA 4.5:1, AAA 7:1 ratios). Tools like axe, Colour Contrast Analyser, and browser DevTools compute ratios instantly. AI now handles dynamic states and hover/focus colour changes. Fully automatable; see the calculation sketch below this table. |
| Regression testing and CI/CD accessibility integration | 10% | 4 | 0.40 | DISPLACEMENT | Q1: Yes. Automated accessibility checks in CI/CD (axe-core, pa11y, Playwright accessibility snapshots) run without human intervention. AI flags regressions and blocks deploys on failure. Human configures rules and reviews edge cases but pipeline runs autonomously; see the CI sketch after this section. |
| User testing coordination with assistive tech users | 10% | 1 | 0.10 | NOT INVOLVED | Recruiting, scheduling, and facilitating testing sessions with real disabled users. Requires interpersonal coordination, empathy, and trust-building with participants. AI cannot substitute for the human-to-human validation of real assistive technology workflows. |
| Total | 100% | | 3.00 | | |
Task Resistance Score: 6.00 - 3.00 (weighted automatability total) = 3.00/5.0. Higher weighted task scores mean more AI involvement, so resistance is the inverse of the weighted total.
Displacement/Augmentation split: 50% displacement (automated scans, reporting, colour contrast, regression testing), 40% augmentation (manual AT testing, remediation guidance), 10% not involved (user testing coordination).
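To make the colour contrast row concrete, below is a minimal sketch of the WCAG 2.x contrast check. The relative luminance and contrast ratio formulas are the ones WCAG defines; the function names are illustrative, not taken from any particular tool.

```typescript
// Minimal WCAG 2.x contrast-ratio check. Function names are illustrative;
// the formulas are the WCAG definitions of relative luminance and contrast ratio.

/** Linearise one sRGB channel (0-255) per the WCAG definition. */
function linearise(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

/** Relative luminance of an [r, g, b] colour. */
function luminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b);
}

/** Contrast ratio between two colours, from 1:1 up to 21:1. */
function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white: 21:1. Passes AA (4.5:1) and AAA (7:1) for normal text.
const ratio = contrastRatio([0, 0, 0], [255, 255, 255]);
console.log(ratio.toFixed(2), ratio >= 4.5 ? "AA pass" : "AA fail");
```

Judgment enters only at the margins (text over gradients or images, hover and focus states), which is why the table still leaves a sliver of human review on this task.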
Reinstatement check (Acemoglu): New tasks emerging: "AI scan output validator," "accessibility AI tool configuration specialist," "WCAG 2.2/3.0 interpretation for AI-generated content." As AI generates more web content, the need to test that content's accessibility creates new work. The role shifts from finding issues to validating AI-found issues and testing AI-generated interfaces.
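To make the scan and regression rows concrete as well, here is roughly what the CI/CD integration looks like using @axe-core/playwright. The target URL and the contents of the suppression list are illustrative assumptions; the AxeBuilder calls are the library's actual API.

```typescript
// Minimal axe-core scan wired into a Playwright test suite, of the kind
// that runs unattended in CI. The URL and the known-issue suppression
// list are illustrative assumptions.
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

const KNOWN_ISSUES = new Set<string>([
  // Rule IDs a human tester has triaged as false positives, e.g.
  // "color-contrast" firing on a decorative watermark.
]);

test("home page has no new WCAG 2.1 A/AA violations", async ({ page }) => {
  await page.goto("https://example.com/");

  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa"])
    .analyze();

  const newViolations = results.violations.filter(
    (v) => !KNOWN_ISSUES.has(v.id)
  );
  // Fail the pipeline on anything a human has not already triaged.
  expect(newViolations).toEqual([]);
});
```

The human contribution in this loop is the triaged suppression list and the judgment behind it, which is exactly the "AI scan output validator" work the reinstatement check describes.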
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 0 | 17,079 ADA accessibility testing jobs on Indeed (2026). Demand stable, driven by legal mandates: DOJ ADA Title II rule (public sector web/mobile compliance deadlines 2026-2028) and the EU European Accessibility Act (June 2025). BLS groups the role with Software Quality Assurance Analysts and Testers (SOC 15-1253), which shows 25% growth for 2024-2034. Accessibility is a niche within QA, but legal tailwinds sustain demand. Not booming, not declining. |
| Company Actions | 0 | 82% of accessibility teams incorporate AI tools (Level Access State of Digital Accessibility 2025-2026). But adoption is additive — companies adding AI scanning alongside human testers, not replacing them. 86% prioritise AI in vendor purchases. No major layoffs of accessibility specialists reported; if anything, legal compliance pressure is driving new hires. |
| Wage Trends | 0 | Accessibility tester salaries stable at $75-110K mid-level (Glassdoor, Indeed 2026). Specialist IAAP-certified testers command premiums. Wages tracking inflation without notable growth or decline. Niche skill keeps supply constrained. |
| AI Tool Maturity | -1 | Automated scanners (axe, WAVE, Lighthouse) catch ~30-40% of WCAG issues (GDS research, Deque). AI-enhanced platforms (Accessibility Tracker, Deque Axe Auditor) improve triage and reporting. But W3C acknowledges no tool can confirm full accessibility alone. Screen reader testing, keyboard navigation, cognitive accessibility, and dynamic content remain beyond automation. Tools are production-ready for the easy stuff; the hard stuff still needs humans. |
| Expert Consensus | 0 | Universal agreement: AI augments accessibility testing but cannot replace manual AT testing. accessibility.com (2026): "no tool can confirm accessibility on its own." Level Access: human review, privacy controls, and leadership support essential. Deque: "automated tools find issues; people find experiences." Consensus is augmentation, not displacement. |
| Total | -1 | |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 1 | No personal licensing required, but ADA/Section 508/EU EAA compliance audits increasingly require documented human testing evidence. Courts and regulators expect manual assistive technology testing — automated scan results alone do not satisfy legal compliance requirements in accessibility lawsuits (10K+ annual ADA digital lawsuits in the US). This creates structural demand for human testers. |
| Physical Presence | 0 | Fully remote. All testing done digitally. |
| Union/Collective Bargaining | 0 | No union presence in accessibility testing roles. |
| Liability/Accountability | 1 | Organisations face significant legal liability for inaccessible digital products (ADA lawsuits, EU EAA fines). A human tester's professional judgment on compliance creates an accountability chain that pure AI scanning cannot provide. VPAT/ACR documents require human attestation. Not as strong as medical/legal liability, but meaningful in litigation-heavy US market. |
| Cultural/Ethical | 0 | No cultural resistance to AI in accessibility testing. Industry embraces AI tools as complementary. Some disability advocacy groups emphasise the importance of human testers with lived disability experience, but this is not a structural barrier to AI adoption. |
| Total | 2/10 | |
AI Growth Correlation Check
Confirmed at 0 (neutral). AI adoption creates more digital interfaces, more AI-generated content, and more complex web applications — all requiring accessibility testing. But AI also produces better automated scanning tools. The relationship is balanced: more demand, more automation capability. Not positive enough for Green acceleration (accessibility testing doesn't recursively benefit from AI the way AI safety engineering does). Not negative (legal mandates ensure demand persists regardless of automation advances).
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.00/5.0 |
| Evidence Modifier | 1.0 + (-1 x 0.04) = 0.96 |
| Barrier Modifier | 1.0 + (2 x 0.02) = 1.04 |
| Growth Modifier | 1.0 + (0 x 0.05) = 1.00 |
Raw: 3.00 x 0.96 x 1.04 x 1.00 = 2.9952
JobZone Score: (2.9952 - 0.54) / 7.93 x 100 = 31.0/100
Zone: YELLOW (Green >=48, Yellow 25-47, Red <25)
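For readers checking the arithmetic, the composite reduces to a few lines. A minimal sketch assuming the modifier weights (0.04, 0.02, 0.05) and normalisation constants (0.54, 7.93) exactly as published in the table above; the function name is invented for illustration.

```typescript
// AIJRI composite, reproducing the arithmetic in the table above.
// Weights and normalisation constants are taken from this section;
// the function name itself is illustrative.
function jobZoneScore(
  taskResistance: number,    // 1-5, from the task decomposition
  evidenceTotal: number,     // sum of the five dimension scores (-2..2 each)
  barrierTotal: number,      // 0-10, from the barrier assessment
  growthCorrelation: number  // e.g. -1, 0, or 1
): number {
  const evidenceMod = 1.0 + evidenceTotal * 0.04;
  const barrierMod = 1.0 + barrierTotal * 0.02;
  const growthMod = 1.0 + growthCorrelation * 0.05;
  const raw = taskResistance * evidenceMod * barrierMod * growthMod;
  return ((raw - 0.54) / 7.93) * 100;
}

// Accessibility Tester: 3.00 x 0.96 x 1.04 x 1.00 = 2.9952 -> 31.0
console.log(jobZoneScore(3.0, -1, 2, 0).toFixed(1)); // "31.0"
```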
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 50% |
| AI Growth Correlation | 0 |
| Sub-label | Yellow — Moderate transformation |
Assessor override: None — formula score accepted.
Assessor Commentary
Score vs Reality Check
The 31.0 Yellow score reflects a role in genuine transformation. The 3.00 Task Resistance Score sits exactly at the midpoint: half the role's task time is automatable (scanning, reporting, contrast checks, regression testing) and the other half resists automation (manual AT testing, remediation guidance, user testing coordination). Evidence at -1 is mild, reflecting stable demand driven by legal compliance but constrained by improving AI tools. The score also sits sensibly below the domain research benchmark of 47.6 for Web Accessibility Engineer (senior, architecture-level): the mid-level tester scores lower because more of their time goes to automatable scanning and reporting rather than strategic accessibility architecture.
What the Numbers Don't Capture
- Legal tailwinds are accelerating. DOJ ADA Title II deadlines (2026-2028), EU European Accessibility Act (June 2025), and 10K+ annual US digital accessibility lawsuits create structural demand that pure market forces would not. These mandates specifically require evidence of human testing, not just automated scans.
- The "30-40% ceiling" for automation. GDS and Deque research consistently shows automated tools catch only 30-40% of WCAG issues. The remaining 60-70% require manual assistive technology testing. This ceiling has not meaningfully moved in 5+ years despite significant AI investment. If AI cracks dynamic content and screen reader interaction testing, this role shifts dramatically — but that breakthrough has not happened.
- Lived experience premium. Testers with disabilities who use assistive technology daily bring irreplaceable insight. This segment of the workforce has a natural moat that no AI can replicate, and disability advocacy groups increasingly push for their inclusion in testing teams.
- WCAG 3.0 complexity. The forthcoming WCAG 3.0 (W3C Silver) introduces more subjective, outcome-based success criteria that require human judgment to evaluate, potentially increasing the human component of accessibility testing.
Who Should Worry (and Who Shouldn't)
Should worry: Testers who primarily run automated scans and generate reports from tool output. If your value proposition is "I click the axe button and document what it finds," AI is already doing that in CI/CD pipelines. Testers who cannot use screen readers or perform genuine manual AT testing are the most vulnerable — their automated-scan-only workflow is the first to be absorbed.
Shouldn't worry (as much): Testers deeply skilled in manual assistive technology testing — navigating complex SPAs with JAWS, testing dynamic ARIA patterns with NVDA, validating iOS VoiceOver flows. Testers who coordinate real user testing with disabled participants. Those who combine accessibility expertise with front-end development knowledge to provide actionable, code-level remediation guidance. The strongest position: IAAP-certified testers who combine manual AT expertise with the ability to configure and validate AI scanning tools.
What This Means
The role in 2028: The mid-level accessibility tester becomes an "accessibility validation specialist" — spending less time running scans (automated in CI/CD) and more time on manual AT testing, AI output validation, and complex WCAG interpretation for dynamic interfaces. Teams that had 3 testers doing scan-report-remediate will have 1-2 doing validate-interpret-coordinate. The scan-and-report function is absorbed by pipelines; the test-and-judge function persists.
Survival strategy:
- Master assistive technology testing — become expert-level with JAWS, NVDA, VoiceOver, and TalkBack. This is the hardest skill for AI to replicate and the most valuable in legal compliance.
- Learn front-end development — understanding React/Angular/Vue component patterns, ARIA implementation, and DOM structure makes your remediation guidance 10x more actionable than generic WCAG citations (see the remediation sketch after this list).
- Get IAAP certified (CPACC + WAS) — professional certification creates differentiation as AI lowers the floor for basic accessibility testing.
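To illustrate the kind of code-level remediation guidance the second bullet describes, here is a typical fix a front-end-literate tester might hand a React team. A sketch only: the component is invented for illustration, though the WCAG citations are the standard ones for this pattern.

```tsx
// Typical remediation: a clickable div rewritten as a semantic button.
// The component and props are invented for illustration.
import React from "react";

// Before: <div className="btn" onClick={openMenu}>Menu</div>
// Fails WCAG 2.1.1 (Keyboard) and 4.1.2 (Name, Role, Value): no focus,
// no Enter/Space activation, no role exposed to assistive technology.

// After: a native button provides focusability, keyboard activation,
// and the correct role for free; aria-expanded exposes the open state.
export function MenuButton({ open, onToggle }: { open: boolean; onToggle: () => void }) {
  return (
    <button type="button" aria-expanded={open} onClick={onToggle}>
      Menu
    </button>
  );
}
```

Generic WCAG citations tell a team what failed; a diff like this tells them what to ship.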
Where to look next. If you're considering a career shift, these Green Zone roles share transferable skills with this role:
- Senior Software Engineer (AIJRI 55.4) — Front-end development knowledge, WCAG expertise, and testing methodology translate to building accessible software from the ground up
- Cybersecurity Consultant (AIJRI 58.7) — Compliance auditing, technical documentation, and structured assessment frameworks map directly to security compliance consulting
- Solutions Architect (AIJRI 66.4) — Requirements analysis, standards interpretation, and cross-team collaboration skills transfer to architecture advisory roles
Browse all scored roles at jobzonerisk.com to find the right fit for your skills and interests.
Timeline: 3-5 years. Legal mandates sustain demand while AI improves scanning capabilities. The transformation is gradual — not a cliff. Testers who upskill in manual AT testing and front-end development will thrive; those relying solely on automated scan workflows will be displaced.