Role Definition
| Field | Value |
|---|---|
| Job Title | Security Tester |
| Seniority Level | Mid-Level (3-5 years) |
| Primary Function | Performs QA-side security testing within the SDLC. Configures and runs SAST/DAST/SCA scanning tools in CI/CD pipelines, triages security findings, executes security regression tests, writes security test cases, verifies vulnerability fixes, and produces compliance evidence for audit. Works within the QA organisation — not the offensive security or red team function. |
| What This Role Is NOT | NOT a Penetration Tester (who exploits vulnerabilities offensively — scored 35.6 Yellow). NOT an Application Security Engineer (who performs threat modelling, architecture review, and developer enablement — scored 57.1 Green). NOT a Vulnerability Tester/Scanner Operator (entry-level scanner operator with no QA integration — scored 2.7 Red). NOT a DevSecOps Engineer (who builds pipeline security infrastructure — scored 58.2 Green). This role OPERATES security scanning tools within a QA workflow; AppSec ARCHITECTS the security programme. |
| Typical Experience | 3-5 years. Background in QA testing or software development with security specialisation. Familiar with OWASP Top 10, SAST/DAST tools (SonarQube, Checkmarx, Snyk, OWASP ZAP, Burp Suite). Certs: ISTQB Security Tester, CompTIA Security+, CEH. |
Seniority note: A junior security tester (0-2 years) who only runs pre-configured scans and forwards reports would score deeper Red. A senior security QA lead who defines organisational security test strategy, selects tooling, and bridges QA and AppSec would score low Yellow — closer to QA Automation Engineer territory.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital, desk-based. All work in IDEs, CI/CD dashboards, and scanning tool consoles. |
| Deep Interpersonal Connection | 1 | Some cross-team interaction — explaining findings to developers, collaborating with QA leads and security architects. But transactional — value comes from scan results and test coverage, not the relationship. |
| Goal-Setting & Moral Judgment | 0 | Follows security test plans and scanning policies defined by AppSec engineers or security architects. Makes tactical decisions (scan scope, finding triage priority) but does not define what security means for the organisation. |
| Protective Total | 1/9 | |
| AI Growth Correlation | -1 | AI-powered SAST/DAST tools (Snyk, Checkmarx AI, SonarQube AI CodeFix, ZeroPath) increasingly self-configure, self-triage, and auto-remediate — directly replacing the QA-side security tester's workflow. More AI adoption = better scanning platforms = fewer human operators needed. Weaker negative than Vulnerability Tester (-2) because QA integration and cross-team communication provide a partial buffer. |
Quick screen result: Protective 0-2 AND Correlation negative — likely Red Zone. Proceed to quantify.
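The screen applied above can be sketched in a few lines. The function name and return strings are illustrative only, not part of any published JobZone API; the thresholds (Protective 0-2 of 9, negative correlation) are taken from the screen as stated.

```python
def quick_screen(protective_total: int, growth_correlation: int) -> str:
    """Preliminary zone flag from the two screening inputs.

    A low protective score (0-2 of 9) combined with a negative AI growth
    correlation flags a likely Red Zone, per the quick screen above.
    """
    if protective_total <= 2 and growth_correlation < 0:
        return "likely Red - proceed to quantify"
    return "inconclusive - full scoring required"

# This role: Protective 1/9, Correlation -1
print(quick_screen(1, -1))
```

For this role the screen fires immediately, which is why the assessment proceeds to full task decomposition rather than stopping at the principles table.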
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Configure & manage SAST/DAST/SCA scanning tools | 20% | 4 | 0.80 | DISP | Q1: YES. Snyk, Checkmarx One, SonarQube auto-configure scanning profiles in CI/CD. ZeroPath requires zero configuration. AI-powered tools generate pipeline YAML, set severity thresholds, and adjust scan scope automatically. Human still tunes for complex stacks but standard setups are fully automated. |
| Triage & prioritise security findings from scans | 20% | 3 | 0.60 | AUG | Q1: NO. Q2: YES. Tenable ExposureAI and Snyk AI auto-prioritise by reachability and exploitability. 98% of SAST findings are unexploitable at runtime — AI filters noise effectively. But determining business-context exploitability and deduplication across repositories still needs human judgment. AI does 70% of triage; human handles the ambiguous 30%. |
| Security regression testing in CI/CD pipeline | 15% | 4 | 0.60 | DISP | Q1: YES. Regression security testing is template-driven and repeatable — the exact profile AI excels at. CI/CD platforms auto-trigger security scans on every PR. Self-healing test frameworks handle environment drift. Human involvement reduces to exception review. |
| Write & maintain security test cases/scripts | 10% | 4 | 0.40 | DISP | Q1: YES. AI generates security test cases from OWASP checklists, API specs, and threat models. Copilot and Testim produce working security test scripts from natural language. Human reviews but AI writes the bulk. |
| Vulnerability verification & false positive analysis | 10% | 3 | 0.30 | AUG | Q1: NO. Q2: YES. Verifying whether a finding is truly exploitable requires contextual understanding of the application's architecture and deployment. AI assists with reachability analysis but complex business logic flaws need human verification. |
| Security requirements review & threat assessment | 10% | 2 | 0.20 | AUG | Q1: NO. Q2: YES. Reviewing user stories and feature specs for security implications requires understanding business context, trust boundaries, and threat actors. AI can enumerate STRIDE categories but cannot assess which threats matter for this specific product. |
| Cross-team collaboration (dev, QA, security) | 10% | 2 | 0.20 | NOT | Q1: NO. Q2: NO. Explaining security findings to developers, negotiating remediation timelines with PMs, coordinating with AppSec architects — human-to-human interaction that AI does not participate in. |
| Security compliance & audit evidence | 5% | 3 | 0.15 | AUG | Q1: NO. Q2: YES. AI generates compliance reports and maps findings to frameworks (PCI DSS, SOC 2). But interpreting audit requirements for specific organisational contexts and presenting evidence to auditors involves human judgment. |
| Total | 100% | 3.25 | | | |
Task Resistance Score: 6.00 - 3.25 = 2.75/5.0
Displacement/Augmentation split: 45% displacement, 45% augmentation, 10% not involved.
Reinstatement check (Acemoglu): Limited reinstatement. New tasks AI creates in security testing — "validate AI-generated security scan results," "configure AI scanning tool fleets" — overlap heavily with the existing role's displacement trajectory. Unlike AppSec Engineering where threat modelling of AI systems creates genuinely new work, the QA-side security tester's reinstatement tasks are thin: the new work is configuring the very tools that replace the old work. Weak reinstatement.
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | -1 | No dedicated BLS category for "Security Tester" in the QA sense. The function splits between SOC 15-1253 (Software QA Analysts — double-digit projected growth) and SOC 15-1212 (Information Security Analysts — 33% growth). But "Security Tester" as a standalone QA title is declining on job boards — the function is being absorbed into broader AppSec Engineer, DevSecOps, or SDET roles. Indeed and LinkedIn show security-focused QA postings flat to declining while pure AppSec postings surge. |
| Company Actions | 0 | Mixed signals. Companies are embedding security testing into CI/CD via tooling rather than hiring dedicated security testers. Shift-left security means developers run their own SAST scans via IDE plugins (Snyk, SonarLint). But some organisations — especially in regulated industries — still maintain dedicated QA security roles for compliance. Net neutral. |
| Wage Trends | 0 | ZipRecruiter average $116K (March 2026), range $88K-$163K. Glassdoor $76K average skews lower due to title confusion with physical security. Mid-level QA security testers earn $90K-$130K — comparable to QA Automation Engineers, no premium for the security specialism. Stable but not growing. |
| AI Tool Maturity | -1 | SAST/DAST/SCA tools are production-ready and heavily AI-enhanced. Anthropic's own research (March 2026) identifies Software QA Analysts at 52% task exposure and Information Security Analysts at 49% — this role sits at the intersection of both. ZeroPath requires zero configuration. Snyk auto-generates fix PRs. Checkmarx AI triages findings automatically. The tools this role operates are designed to operate themselves. |
| Expert Consensus | 0 | Mixed. Anthropic finds "limited evidence that AI has affected employment to date" but flags QA and security as highly exposed. CBS/Medium: "70% of QA roles will disappear." But ASTQB argues software testing has "the best job security of any profession." The QA-security intersection lacks specific expert commentary — it falls between QA automation (transforming) and vulnerability scanning (displaced). |
| Total | -2 | |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 0 | No licensing required. ISTQB Security Tester certification is voluntary. Compliance frameworks mandate security testing but do not require a human to run the scans — the platform output satisfies the audit requirement. |
| Physical Presence | 0 | Fully remote-capable. All work is digital. |
| Union/Collective Bargaining | 0 | Tech sector, at-will employment. No union protections. |
| Liability/Accountability | 1 | In regulated industries (finance, healthcare, aviation), someone must sign off on security test results before release. If a vulnerability reaches production, accountability sits with the team. But liability is at the organisational level, not the individual security tester — and automated tooling output is increasingly accepted as evidence. |
| Cultural/Ethical | 0 | No cultural resistance to automated security scanning. The industry actively celebrates it — conference keynotes promote "shift-left security" and automated scanning as best practice. Companies prefer 24/7 automated scanning over periodic human testing. |
| Total | 1/10 | |
AI Growth Correlation Check
Confirmed -1 (Weak Negative). AI adoption improves SAST/DAST/SCA tools, which directly reduces the need for human security testers to configure, run, and triage scans. Every improvement in Snyk, Checkmarx, or ZeroPath makes this role less necessary. The correlation is not as strongly negative as Vulnerability Tester (-2) because the QA integration aspects — cross-team communication, security requirements review, and compliance evidence preparation — provide a partial buffer that pure scanner operators lack. But the trajectory is clear: the tools are eating the role.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 2.75/5.0 |
| Evidence Modifier | 1.0 + (-2 x 0.04) = 0.92 |
| Barrier Modifier | 1.0 + (1 x 0.02) = 1.02 |
| Growth Modifier | 1.0 + (-1 x 0.05) = 0.95 |
Raw: 2.75 x 0.92 x 1.02 x 0.95 = 2.4516
JobZone Score: (2.4516 - 0.54) / 7.93 x 100 = 24.1/100
Zone: RED (Green >=48, Yellow 25-47, Red <25)
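The modifier chain and normalisation can be checked in a few lines. The modifier coefficients (0.04, 0.02, 0.05), the 0.54 offset, and the 7.93 divisor are taken from the formula above; the zone cut-offs come from the line following it. Function names are illustrative.

```python
def jobzone_score(resistance, evidence, barrier, growth):
    """AIJRI composite: task resistance scaled by three modifiers,
    then normalised to a 0-100 scale using the constants above."""
    raw = (resistance
           * (1 + evidence * 0.04)
           * (1 + barrier * 0.02)
           * (1 + growth * 0.05))
    return (raw - 0.54) / 7.93 * 100

def zone(score):
    """Zone bands: Green >=48, Yellow 25-47, Red <25."""
    if score >= 48:
        return "Green"
    if score >= 25:
        return "Yellow"
    return "Red"

s = jobzone_score(2.75, evidence=-2, barrier=1, growth=-1)
print(round(s, 1), zone(s))  # 24.1 Red
```

The 0.9-point shortfall from the Yellow boundary discussed below falls directly out of this arithmetic: 25 minus 24.1.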
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 80% |
| AI Growth Correlation | -1 |
| Sub-label | Red (Terminal) — Score <25, 80% of task time at 3+ automation exposure |
Assessor override: None — formula score accepted. The 24.1 lands just 0.9 points below the Yellow boundary, which accurately reflects this role's position: meaningfully above Vulnerability Tester (2.7) due to QA integration and cross-team work, but too tool-dependent to survive as a standalone function. The 0.9-point gap from Yellow is honest — if this role added genuine threat modelling or architecture review it would cross the line, but as defined (QA-side security scanning) it does not.
Assessor Commentary
Score vs Reality Check
The 24.1 score — barely below the Yellow boundary — tells a precise story. This role sits in a no-man's-land between two better-defined functions: the Application Security Engineer (57.1, Green) who architects security programmes, and the Vulnerability Tester (2.7, Red) who operates scanners. The Security Tester adds QA process integration and cross-team communication on top of scanner operation, which lifts it substantially above the pure operator. But 45% of its task time faces direct displacement (SAST/DAST configuration, regression scanning, test script writing), and the augmented portions (triage, verification) are eroding as AI tools improve their contextual analysis. The score is at the knife's edge because the role IS at the knife's edge.
What the Numbers Don't Capture
- Title absorption in progress. "Security Tester" as a standalone QA title is disappearing into "AppSec Engineer," "Security QA Lead," or simply "QA Automation Engineer with security focus." The function fragments — scanning goes to CI/CD automation, triage goes to AI, and the human-judgment pieces get absorbed into AppSec. The role does not die cleanly; it dissolves.
- The shift-left squeeze. As developers use Snyk IDE plugins and SonarLint to scan their own code in real time, the QA-stage security test becomes a redundant checkpoint. Why run a separate DAST scan at QA when SAST caught the issue at commit? The entire QA-security testing phase is being compressed out of the pipeline.
- Anthropic's exposure data. Anthropic's March 2026 research identifies Software QA Analysts at 52% AI exposure and Information Security Analysts at 49%. This role sits at the intersection — a QA analyst doing security work. The compound exposure is significant.
- Compliance as temporary buffer. Regulated industries (PCI DSS, SOC 2, HIPAA) still require documented security testing evidence. This provides a temporary buffer — but scanning platforms now generate compliance reports natively. The buffer is eroding.
Who Should Worry (and Who Shouldn't)
Security Testers whose daily work is configuring DAST scans, running OWASP ZAP against staging environments, triaging SonarQube findings, and generating security test reports should be most concerned. This is exactly the workflow that Snyk, Checkmarx One, and ZeroPath automate end-to-end — from scan configuration to finding prioritisation to fix-PR generation. The timeline is 12-24 months for standard stacks.
Security Testers who also review security requirements before code is written, assess threat models with architects, and mentor developers on secure coding — those practitioners have already evolved beyond this role definition and should benchmark themselves against the Application Security Engineer assessment (57.1, Green) instead.
The single biggest factor: whether you configure and run tools, or whether you make judgment calls about what to test and why findings matter. The former is a tool operator being replaced by better tools. The latter is an AppSec engineer in all but title.
What This Means
The role in 2028: The standalone "Security Tester" title within QA organisations will largely cease to exist. Scanning tools run autonomously in CI/CD. Finding triage is AI-powered. Regression security testing is a pipeline feature, not a human activity. The human-judgment work — threat assessment, security requirements review, cross-team enablement — persists but under AppSec Engineer or DevSecOps titles, not as a QA security function.
Survival strategy:
- Transition to Application Security Engineering now. The threat modelling, architecture review, and developer enablement skills that distinguish AppSec (57.1, Green) from Security Testing (24.1, Red) are the exact skills to develop. Get CSSLP or OSWE. Learn STRIDE/PASTA. Move from running scans to defining what gets scanned and why.
- Master AI-powered scanning tool orchestration. Become the person who deploys and manages the fleet of AI security tools — selecting between Snyk, Checkmarx, ZeroPath, and Semgrep, configuring policies, tuning thresholds, and integrating results into developer workflows. This is the DevSecOps trajectory.
- Specialise in a domain AI struggles with. AI/ML system security testing (prompt injection, training data poisoning, model evasion), API security testing for complex business logic, and security testing of IoT/embedded systems all resist automation and command premium salaries.
Where to look next. If you are considering a career shift, these Green Zone roles share transferable skills with Security Testing:
- Application Security Engineer (AIJRI 57.1) — Security scanning knowledge, OWASP expertise, and SDLC integration experience transfer directly to the broader AppSec function
- DevSecOps Engineer (AIJRI 58.2) — CI/CD security pipeline experience and scanning tool expertise map directly to security-integrated delivery pipelines
- AI Security Engineer (AIJRI 79.3) — Security testing methodology and vulnerability knowledge provide a foundation for the fastest-growing specialism in cybersecurity
Browse all scored roles at jobzonerisk.com to find the right fit for your skills and interests.
Timeline: 12-24 months for standard stacks. Regulated industries retain dedicated security testing roles 6-12 months longer due to compliance inertia. Security Testers who evolve toward AppSec Engineering have a longer personal runway — the skills transfer is direct and the destination role is Green.