April 5, 2026 · 7 min read
How to Add WCAG Testing to GitHub Actions in 5 Minutes
Most teams treat accessibility as a one-time audit. Here's how to make it a continuous check that catches regressions before they ship.
I've set up accessibility testing in CI/CD pipelines at three different companies. Each time, the process looked like this: install Playwright, install axe-core, write a test harness, figure out how to parse the JSON output, wire up assertions, configure thresholds, write the GitHub Actions YAML, debug why headless Chrome won't launch on the runner. Minimum 10 hours. Usually closer to 20.
I built the AccessPulse GitHub Action so that setup takes 5 minutes instead. This post walks through three setups: the quick version, the Vercel preview version, and the DIY axe-core version if you don't want to use our tool.
Option 1: The 5-minute setup
Create `.github/workflows/accessibility.yml` in your repo:
```yaml
name: Accessibility
on:
  pull_request:
  push:
    branches: [main]
jobs:
  wcag:
    runs-on: ubuntu-latest
    steps:
      - name: WCAG 2.2 scan
        uses: accesspulse/scan@v1
        with:
          url: https://your-site.com
          threshold: 80
```

That's the entire file. Here's what happens:
- On every PR and push to `main`, the action sends your URL to the AccessPulse API
- A headless Chromium browser loads the page and waits for JavaScript to render
- axe-core 4.10 runs all WCAG 2.2 AA rules against the live DOM
- If the accessibility score is below 80, the step fails and the PR is blocked
- A Markdown summary table appears in the Actions UI showing every violation by severity
No API key required for free scans (25/month per repo). Zero dependencies — the action uses only Node.js built-ins. Nothing gets installed on your runner.
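One optional tweak: if most commits never touch the UI, a paths filter (standard GitHub Actions syntax; the globs below are placeholders for your own layout) restricts scans to changes in front-end files, which stretches the free-scan quota further:

```yaml
# Replaces the `on:` block above; scans only run when UI files change.
on:
  pull_request:
    paths:
      - 'src/**'      # placeholder -- your component/source directory
      - 'public/**'   # placeholder -- static assets
  push:
    branches: [main]
```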
Option 2: Scan your Vercel preview URL
If you deploy to Vercel, you can scan the preview deployment that Vercel creates for each PR. This tests the actual code in the PR, not production.
```yaml
name: Accessibility
on: [pull_request]
jobs:
  wcag:
    runs-on: ubuntu-latest
    steps:
      - name: Wait for Vercel preview
        uses: patrickedqvist/wait-for-vercel-preview@v1.3.1
        id: vercel
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
          max_timeout: 120
      - name: WCAG 2.2 scan
        uses: accesspulse/scan@v1
        with:
          url: ${{ steps.vercel.outputs.url }}
          threshold: 80
          api-key: ${{ secrets.ACCESSPULSE_API_KEY }}
```

The wait-for-vercel-preview action polls until the preview deployment is ready, then passes its URL to the AccessPulse scan. This catches regressions in the PR before merge.
You can do the same pattern with Netlify, Cloudflare Pages, or any platform that creates preview URLs. Replace the wait step with whatever surfaces the preview URL.
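If your platform has no ready-made wait action, a plain polling step works as a generic substitute. Here's a sketch — the preview URL is a placeholder; in practice you'd read it from your platform's API or deployment event:

```yaml
- name: Wait for preview
  env:
    PREVIEW_URL: https://preview.example.com   # placeholder -- supply your real preview URL
  run: |
    # Poll up to 5 minutes (30 tries x 10 s) for the preview to return 200.
    for i in $(seq 1 30); do
      code=$(curl -s -o /dev/null -w '%{http_code}' "$PREVIEW_URL")
      if [ "$code" = "200" ]; then exit 0; fi
      sleep 10
    done
    echo "Preview never became ready" >&2
    exit 1
```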
What the output looks like
Every scan writes a job summary to the Actions UI. It looks like this:
**AccessPulse WCAG 2.2 Scan**

Result: **FAIL** · Score: 62 · Violations: 8 · Threshold: 80
| Severity | Rule | WCAG | Count |
|---|---|---|---|
| critical | image-alt | 1.1.1 | 3 |
| serious | color-contrast | 1.4.3 | 5 |
| moderate | heading-order | 1.3.1 | 2 |
When the step fails, the PR shows a red X on the accessibility check. Reviewers see exactly which violations were introduced.
Choosing the right threshold
The threshold is the minimum score (0–100) your site must hit for the check to pass. Here's how I think about it:
- **90+** — Strict. Good for sites that already pass most checks. This is where you want to end up.
- **70–89** — Moderate. Catches critical and serious violations while allowing minor issues. Good starting point for most teams.
- **50–69** — Permissive. Use this temporarily if your site currently scores low and you want to ratchet up over time without blocking every PR.
The scoring formula weights violations by severity: critical issues (like missing `alt` attributes — WCAG 1.1.1) count 10×, serious issues (like insufficient contrast — WCAG 1.4.3) count 5×. A single unlabeled form field (WCAG 4.1.2) tanks your score more than five minor heading-order warnings.
Pro tip: Start with a threshold 5 points below your current score, then ratchet up by 5 every sprint. This prevents regressions without blocking all existing work.
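The weighting above can be sketched as a toy formula. To be clear, this is illustrative, not AccessPulse's actual implementation — only the critical (10×) and serious (5×) weights come from the text; the moderate/minor weights and the 100-minus-penalty normalization are my assumptions:

```python
# Illustrative scoring sketch. Known from the post: critical = 10x, serious = 5x.
# Assumed: moderate = 2x, minor = 1x, and a simple 100 - penalty normalization.
WEIGHTS = {"critical": 10, "serious": 5, "moderate": 2, "minor": 1}

def weighted_score(violations):
    """violations: iterable of (severity, count) pairs from a scan."""
    penalty = sum(WEIGHTS[severity] * count for severity, count in violations)
    return max(0, 100 - penalty)  # clamp so a bad page can't go negative

print(weighted_score([("critical", 1)]))  # one missing label costs 10 points -> 90
print(weighted_score([("minor", 5)]))     # five minor warnings cost 5 points -> 95
```

Note how the single critical violation scores lower than five minor ones — that's the asymmetry the formula is designed for.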
Option 3: DIY with axe-core (no AccessPulse)
If you don't want to use our action, you can wire up axe-core directly. I'm including this because I think you should know what's involved — it's the setup I did manually for years before building AccessPulse.
```yaml
name: Accessibility
on: [pull_request]
jobs:
  wcag:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install test dependencies
        run: npm install @axe-core/cli
      - name: Run axe scan
        run: npx @axe-core/cli https://your-site.com --exit
```

This works for a basic pass/fail check. `--exit` makes the process return a non-zero exit code if violations are found, which fails the GitHub Actions step.
But you'll quickly hit limitations:
- **No score or threshold.** It's all-or-nothing. Any violation fails the build, even minor ones. Most real sites have some minor violations you don't want to block deploys on.
- **No severity weighting.** A missing `alt` attribute (critical, WCAG 1.1.1) is treated the same as a skipped heading level (moderate, WCAG 1.3.1).
- **No history.** You can't answer “did this PR introduce new violations?” vs “were these pre-existing?” Every run starts from zero context.
- **No summary table.** You get raw JSON or a text dump. Parsing it into something reviewable requires custom scripting.
- **Headless browser on the runner.** The `@axe-core/cli` package bundles Chromium. That's ~400 MB downloaded on every CI run unless you cache it. It adds 30–60 seconds to your pipeline.
You can solve all of these by writing custom code: parse the JSON, weight violations, compare against previous runs, generate Markdown summaries, cache the browser binary. That's the 10–20 hours I mentioned. It's what I did before building AccessPulse into a one-line action.
Scanning multiple pages
Most sites have more than one page worth testing. Here's how to scan multiple URLs in a matrix:
```yaml
name: Accessibility
on: [pull_request]
jobs:
  wcag:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        page:
          - https://your-site.com
          - https://your-site.com/pricing
          - https://your-site.com/docs
          - https://your-site.com/login
    steps:
      - name: WCAG scan — ${{ matrix.page }}
        uses: accesspulse/scan@v1
        with:
          url: ${{ matrix.page }}
          threshold: 80
          api-key: ${{ secrets.ACCESSPULSE_API_KEY }}
```

This runs four parallel jobs, one per page. Each gets its own pass/fail result. If your login page drops to 65 but your landing page is at 92, you see exactly which page failed.
What this catches (and what it doesn't)
Automated WCAG testing with axe-core catches approximately 57% of WCAG issues (per Deque's own research). That's meaningful, but it leaves a lot on the table.
What CI catches well:
- Missing `alt` attributes on images (WCAG 1.1.1)
- Insufficient color contrast (WCAG 1.4.3)
- Unlabeled form inputs (WCAG 4.1.2)
- Missing page language attribute (WCAG 3.1.1)
- Links and buttons without accessible names (WCAG 4.1.2)
- Duplicate IDs (WCAG 4.1.1)
What CI can't catch:
- Whether an `alt` attribute is accurate (WCAG 1.1.1 — automated tools confirm presence, not quality)
- Keyboard trap patterns (WCAG 2.1.2 — requires interactive testing)
- Logical reading order (WCAG 1.3.2 — DOM order vs visual order)
- Video captions and audio descriptions (WCAG 1.2.x)
- Cognitive accessibility (WCAG 3.x — clear language, consistent navigation)
CI/CD testing is the floor, not the ceiling. Pair it with manual accessibility audits quarterly. Have someone navigate your site with a screen reader. Both matter.
Getting started
Here's the tl;dr:
- Copy the YAML from Option 1 into `.github/workflows/accessibility.yml`
- Replace `https://your-site.com` with your URL
- Push the commit
- Check the Actions tab — you'll see your score in 30 seconds
25 free scans per month, no signup, no API key. If you need more, paid plans start at $29/month for 500 scans.
The point isn't perfection. It's ratcheting. Get a baseline score, set a threshold 5 points below it, and stop things from getting worse. Then push the threshold up over time. That simple mechanism prevents more accessibility regressions than any quarterly audit.
Related reading
- What axe-core actually tests (and misses) — the 57% coverage breakdown by WCAG principle
- I scanned 87 SaaS landing pages — what these violations look like in practice across real sites
- Overlays got fined $1M — why CI/CD testing is the alternative to runtime overlay widgets
Frequently asked questions
How do I add accessibility testing to GitHub Actions?
Create `.github/workflows/accessibility.yml` with a step using `accesspulse/scan@v1`. Set the `url` input to your site and `threshold` to your minimum passing score (e.g., 80). The action runs axe-core against the live DOM and fails the step if the score is below threshold. No API key required for 25 free scans per month.
What WCAG score threshold should I set for CI/CD?
Start with a threshold 5 points below your current score, then ratchet up by 5 every sprint. 90+ is strict (for sites that already pass most checks). 70–89 is moderate (catches critical and serious violations). 50–69 is permissive (use temporarily if your site currently scores low).
Can I scan Vercel preview deployments for WCAG violations?
Yes. Use the wait-for-vercel-preview GitHub Action to wait for the preview URL, then pass it to the AccessPulse scan step. This tests the actual code in each PR before merge. The same pattern works with Netlify, Cloudflare Pages, or any platform that creates preview deployment URLs.
Run a free scan to see your current score before adding CI/CD. Or go straight to the GitHub Action if you already know where you stand.