Role Definition
| Field | Value |
|---|---|
| Job Title | Cyber Security Researcher |
| Seniority Level | Mid-Senior |
| Primary Function | Discovers novel vulnerabilities in software, hardware, and protocols through manual analysis, custom fuzzing, and creative exploitation. Develops proof-of-concept exploits, builds security tools and methodologies, publishes research through advisories and conferences, and manages responsible disclosure with vendors. Operates at the frontier where no playbook exists — every zero-day is a first. |
| What This Role Is NOT | Not a vulnerability tester running automated scanners (Red Imminent, 2.7). Not a penetration tester executing engagement-based assessments. Not a malware analyst reverse engineering existing threats. Not a SOC analyst triaging alerts. This is the researcher who finds what nobody knew to look for. |
| Typical Experience | 5-10+ years. OSCP, OSEE, GXPN, or deep domain expertise. Deep knowledge of memory corruption, binary exploitation, protocol analysis, or hardware security. Often published CVEs. |
Seniority note: A junior vulnerability tester (0-2 years) running automated scanners scores Red Imminent (2.7). A senior/principal researcher (10+ years) leading zero-day programmes, managing disclosure relationships, and directing research strategy would score deeper Green (~4.0+).
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital, desk-based. Some hardware security research requires physical access to devices, but this is niche. |
| Deep Interpersonal Connection | 1 | Collaboration with product security teams, vendor disclosure relationships, conference presentations. But the core value is technical discovery, not the relationship. |
| Goal-Setting & Moral Judgment | 3 | Every vulnerability hunt is an open-ended creative problem. The researcher decides what to investigate, when to pivot, how deep to go, and makes critical ethical judgments on responsible disclosure — when to publish, how much to reveal, whether a vulnerability is too dangerous to disclose. No playbook exists for novel zero-days. |
| Protective Total | 4/9 | |
| AI Growth Correlation | 1 | More AI systems = more attack surface requiring security research. AI infrastructure (NVIDIA, MLflow, MCP tooling) is itself a target — Trend Micro's AESIR found 21 CVEs in AI infrastructure alone. AI security research is a growing sub-domain. Weak Positive — demand grows with AI adoption but AI tools also automate routine portions of the pipeline. |
Quick screen result: Protective 4 + Correlation 1 — likely Yellow-Green boundary. Proceed to quantify.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Novel vulnerability discovery & zero-day research | 25% | 2 | 0.50 | AUGMENTATION | The core of the role — finding previously unknown vulnerabilities through manual code audit, creative fuzzing, protocol analysis, and hardware probing. AI tools (Aardvark, AESIR, Big Sleep) are finding real CVEs in open-source codebases, but novel research in complex proprietary systems, hardware, and protocols remains deeply human. AI assists with pattern recognition; the human leads the creative hunt. |
| Proof-of-concept exploit development | 15% | 2 | 0.30 | AUGMENTATION | Writing working exploits from discovered vulnerabilities — memory corruption chains, logic flaw exploitation, bypass development. Requires deep understanding of target architecture. AI assists with code generation but the human designs the exploit strategy and handles edge cases that make PoCs reliable. |
| Security tool & methodology development | 15% | 2 | 0.30 | AUGMENTATION | Building custom fuzzers, analysis frameworks, testing methodologies, and automation pipelines. Novel engineering work where the researcher defines what to build and why. AI accelerates implementation but the human provides the security insight. |
| Automated vulnerability scanning & triage | 10% | 5 | 0.50 | DISPLACEMENT | Running automated scanners, managing fuzzing campaigns at scale, processing results. AI agents handle this end-to-end — OSS-Fuzz, FENRIR, and CodeQL-style tools execute scanning workflows without human involvement. |
| Research publication & knowledge sharing | 10% | 2 | 0.20 | AUGMENTATION | Writing CVE advisories, conference papers, blog posts, and responsible disclosure reports. AI drafts text and formats reports, but the novel research insight and the judgment on what to disclose are irreducibly human. |
| Stakeholder briefing & cross-team collaboration | 10% | 1 | 0.10 | NOT INVOLVED | Presenting findings to product security teams, coordinating with vendors on disclosure timelines, briefing engineering leadership. Human trust and communication IS the value. A vendor does not accept a zero-day disclosure from an AI agent. |
| Literature review & threat landscape monitoring | 10% | 4 | 0.40 | DISPLACEMENT | Tracking new CVEs, reading advisories, monitoring research publications, staying current on vulnerability classes. AI agents aggregate, classify, and summarise this information at scale. Human only needed for interpreting novel research directions. |
| Mentoring & capacity building | 5% | 1 | 0.05 | NOT INVOLVED | Training junior researchers, knowledge transfer, building research team capability. Human interaction IS the value. |
| Total | 100% | | 2.35 | | |
Task Resistance Score: 6.00 - 2.35 = 3.65/5.0
Displacement/Augmentation split: 20% displacement, 65% augmentation, 15% not involved.
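To make this arithmetic reproducible, here is a minimal Python sketch of the weighted-sum computation. The list literal simply restates the table rows; the names are illustrative and not part of any published AIJRI tooling.

```python
# (time_fraction, score 1-5, label) rows restated from the table above.
tasks = [
    (0.25, 2, "AUG"),   # novel vulnerability discovery & zero-day research
    (0.15, 2, "AUG"),   # proof-of-concept exploit development
    (0.15, 2, "AUG"),   # security tool & methodology development
    (0.10, 5, "DISP"),  # automated vulnerability scanning & triage
    (0.10, 2, "AUG"),   # research publication & knowledge sharing
    (0.10, 1, "NONE"),  # stakeholder briefing & cross-team collaboration
    (0.10, 4, "DISP"),  # literature review & threat landscape monitoring
    (0.05, 1, "NONE"),  # mentoring & capacity building
]

weighted = sum(t * s for t, s, _ in tasks)   # 2.35
resistance = 6.00 - weighted                 # 3.65 on the 5-point scale
split = {label: round(sum(t for t, _, l in tasks if l == label), 2)
         for label in ("DISP", "AUG", "NONE")}

print(f"weighted={weighted:.2f}, resistance={resistance:.2f}, split={split}")
# weighted=2.35, resistance=3.65, split={'DISP': 0.2, 'AUG': 0.65, 'NONE': 0.15}
```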
Reinstatement check (Acemoglu): Yes — AI creates significant new tasks: "research AI infrastructure vulnerabilities" (NVIDIA, MLflow, MCP tooling), "validate AI-discovered vulnerabilities" (triaging Aardvark/AESIR outputs for false positives), "develop AI-resistant security architectures" (designing systems that resist AI-powered attacks), "audit AI fuzzing pipelines" (ensuring automated discovery tools work correctly). The role is expanding into AI security research, not contracting.
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 1 | Indeed lists 12,962 vulnerability researcher jobs in the US. LinkedIn shows 1,000+ vulnerability research roles. Cybersecurity postings up 12% YoY to 514K+ openings. Vulnerability/Threat Management Analyst postings grew 14.9%. Strong demand within a broader talent shortage. |
| Company Actions | 1 | Major employers actively hiring: Johns Hopkins APL, Booz Allen, GDIT, OpenAI (Aardvark team itself recruits human researchers). Microsoft Zero Day Quest 2025 paid $1.6M for 600+ vulnerability submissions — investing in human discovery. Trend Micro built AESIR but explicitly employs human researchers to direct research, validate findings, and manage disclosure. No major layoffs citing AI. |
| Wage Trends | 1 | Glassdoor: $203K average. ZipRecruiter: $107K-$195K range. 6figr: $155K-$406K. Senior roles command $200K+. Premium compensation above general cybersecurity, growing with market. Cybersecurity salaries broadly predicted to rise 20-30% by late 2026. |
| AI Tool Maturity | 0 | Production AI tools ARE finding real zero-days: Aardvark (10 CVEs, OpenSSH), AESIR (21 CVEs, NVIDIA/MLflow), Big Sleep (SQLite), AISLE (12/12 OpenSSL zero-days). These are genuine achievements. But they target well-known open-source codebases with available source code. Novel research in proprietary systems, hardware, protocols, and complex exploit chains remains beyond current AI capabilities. Impact on mid-senior headcount: unclear. |
| Expert Consensus | 1 | 87% of cybersecurity professionals expect AI to enhance their roles rather than replace them. SC Media: "researchers will transition from repetitive discovery to guiding AI agents." Trend Micro: "human experts direct research, validate findings, manage disclosure." Microsoft investing $1.6M in human vulnerability submissions. Counterpoint: LessWrong reports AI found all 12 OpenSSL zero-days, and autonomous systems are climbing bug bounty leaderboards. Consensus: transformation, not elimination — at this seniority level. |
| Total | 4 | |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 1 | No strict licensing, but responsible disclosure operates within legal frameworks (CFAA, Wassenaar Arrangement, EU Cyber Resilience Act). Government/defence research roles require security clearances. Export controls govern vulnerability and exploit sharing. AI cannot hold a clearance or bear legal responsibility for disclosure decisions. |
| Physical Presence | 0 | Fully remote capable. Hardware security research occasionally requires physical device access, but this is a niche subset. |
| Union/Collective Bargaining | 0 | Tech/security sector, at-will employment. No union protection. |
| Liability/Accountability | 1 | Irresponsible disclosure causes real harm — premature publication enables exploitation, incorrect severity assessment misallocates resources, mishandled vendor coordination damages relationships and trust. A human must own these decisions. When a researcher decides to disclose a critical zero-day in critical infrastructure, someone is accountable for that judgment. |
| Cultural/Ethical | 1 | The security research community has strong norms around responsible disclosure, ethical research, and coordinated vulnerability handling. Bug bounty platforms (HackerOne, Bugcrowd), vendor security teams, and ISACs all assume human researchers. There is significant cultural resistance to AI autonomously discovering and disclosing vulnerabilities without human oversight — the community demands accountability. |
| Total | 3/10 | |
AI Growth Correlation Check
Confirmed at 1 (Weak Positive). AI adoption expands the attack surface requiring security research: AI infrastructure itself becomes a target (Trend Micro AESIR found 21 CVEs in AI platforms), AI-powered systems introduce novel vulnerability classes (prompt injection, model poisoning, adversarial ML), and AI-generated code at scale creates new codebases requiring security audit. Not Accelerated Green (2) because the role predates AI and would persist without it — threat actors exploited vulnerabilities long before LLMs existed. The correlation is real but indirect.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.65/5.0 |
| Evidence Modifier | 1.0 + (4 × 0.04) = 1.16 |
| Barrier Modifier | 1.0 + (3 × 0.02) = 1.06 |
| Growth Modifier | 1.0 + (1 × 0.05) = 1.05 |
Raw: 3.65 × 1.16 × 1.06 × 1.05 = 4.7124
JobZone Score: (4.7124 - 0.54) / 7.93 × 100 = 52.6/100
Zone: GREEN (Green ≥48, Yellow 25-47, Red <25)
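The composite is a chain of small multipliers over the four inputs above. A minimal sketch, treating the coefficients (0.04, 0.02, 0.05) and the normalisation constants (0.54 offset, 7.93 divisor) as given by the methodology; the function names are illustrative.

```python
def jobzone_score(resistance: float, evidence: int, barriers: int,
                  growth: int) -> float:
    """Compose the AIJRI score from the four inputs tabulated above."""
    evidence_mod = 1.0 + evidence * 0.04
    barrier_mod = 1.0 + barriers * 0.02
    growth_mod = 1.0 + growth * 0.05
    raw = resistance * evidence_mod * barrier_mod * growth_mod  # 4.7124
    return (raw - 0.54) / 7.93 * 100

def zone(score: float) -> str:
    # Thresholds as stated above: Green >= 48, Yellow 25-47, Red < 25.
    if score >= 48:
        return "GREEN"
    return "YELLOW" if score >= 25 else "RED"

score = jobzone_score(3.65, evidence=4, barriers=3, growth=1)
print(f"{score:.1f} -> {zone(score)}")  # 52.6 -> GREEN
```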
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 20% |
| AI Growth Correlation | 1 |
| Sub-label | Green (Transforming) — ≥20% of task time scores 3+ |
Assessor override: None — formula score accepted.
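The sub-label rule reduces to a single threshold over the task table. A minimal sketch, returning a plain boolean since this report only names the Transforming branch:

```python
def is_transforming(tasks) -> bool:
    """Sub-label test: Transforming when >= 20% of task time scores 3+."""
    return sum(t for t, s in tasks if s >= 3) >= 0.20

# (time_fraction, score) pairs from the decomposition table.
tasks = [(0.25, 2), (0.15, 2), (0.15, 2), (0.10, 5),
         (0.10, 2), (0.10, 1), (0.10, 4), (0.05, 1)]
print(is_transforming(tasks))  # True: scanning (10%) + monitoring (10%) = 20%
```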
Assessor Commentary
Score vs Reality Check
The 52.6 score places the role 4.6 points above the Green threshold — a comfortable but not commanding margin. The score aligns well with adjacent research roles: Malware Analyst (54.4), Cryptographer (53.8), and Incident Response Specialist (52.6) form a tight cluster of Green Transforming cybersecurity research positions. The AI tool maturity dimension scored 0 (neutral) despite production AI tools finding real zero-days — this is the most forward-looking risk. If AI vulnerability discovery tools make a capability leap from "open-source pattern matching" to "novel zero-day research in proprietary systems," the evidence score would shift negative and the Green classification would be threatened. Today, that leap has not occurred.
What the Numbers Don't Capture
- Rate of AI capability improvement. AI vulnerability discovery advanced from experimental to production CVE discovery in under two years (2024-2026). Aardvark found its first CVE in June 2025; by December it had 60+. AISLE found all 12 OpenSSL zero-days. This trajectory is steeper than any other domain assessed in this project. The 0 score on AI Tool Maturity is a snapshot — the rate of change matters more than the current state.
- Bimodal distribution. Routine scanning/triage (10%, score 5) and literature monitoring (10%, score 4) are already displaced. Novel discovery (25%), exploit development (15%), and tool building (15%) all score 2 and are deeply human. The single 3.65 resistance score masks two distinct sub-roles within the same title — one being automated, one being amplified.
- Supply shortage confound. The 4.8M global cybersecurity workforce gap inflates evidence signals. Vulnerability researcher demand may partly reflect talent scarcity rather than genuine demand expansion. If the talent pipeline matures or AI tools reduce the need for human researchers at volume, these evidence signals weaken.
Who Should Worry (and Who Shouldn't)
If you run automated scanners, process fuzzing outputs, and write reports on known vulnerability classes — you are functionally Yellow or worse regardless of the Green label. These are the tasks AI handles end-to-end today, and they are the first to be compressed as research teams restructure around AI-augmented workflows.
If you find novel zero-days in complex systems, develop working exploits for previously unknown vulnerability classes, and manage responsible disclosure relationships with major vendors — you are safer than Green (Transforming) suggests. This is the creative frontier where human intuition, adversarial thinking, and ethical judgment outperform AI, and the expansion of AI-powered systems is creating entirely new research targets.
The single biggest separator: whether you detect vulnerabilities or discover them. The researcher who runs tools and reports findings is being automated. The researcher who invents new attack techniques and finds what nobody knew to look for is being amplified.
What This Means
The role in 2028: The mid-senior security researcher uses AI as a force multiplier — AI-powered fuzzers handle volume scanning, LLMs assist with code review and pattern detection, and automated triage processes routine findings at scale. But the researcher's core value — discovering novel vulnerability classes, developing creative exploitation techniques, and managing the ethical complexity of responsible disclosure — remains firmly human. AI infrastructure itself becomes a primary research target, expanding the domain.
Survival strategy:
- Master AI-augmented research workflows — integrate Aardvark/Codex Security, AI-powered fuzzers, and LLM-assisted code review into your discovery pipeline. Be the researcher who finds 5x more vulnerabilities, not the one still doing everything manually.
- Specialise in AI security research — AI infrastructure vulnerabilities (NVIDIA, MLflow, MCP tooling), adversarial ML, prompt injection, and model poisoning are the growth frontier with highest demand and lowest AI competition.
- Build your disclosure network — vendor relationships, bug bounty reputation, conference presence, and published CVEs create a professional moat that no AI can replicate. The researcher with 50 published CVEs and a HackerOne track record is irreplaceable.
Timeline: 5-7+ years of strong human demand at this seniority level. The creative and ethical core of the role has structural protection. AI raises the floor (routine scanning automated) while raising the ceiling (harder targets, novel AI attack surfaces).