Role Definition
| Field | Value |
|---|---|
| Job Title | SOC Analyst — Tier 2 (L2) / Incident Investigator |
| Seniority Level | Mid-Level |
| Primary Function | Performs deep investigation of escalated incidents (from T1 or AI triage), conducts forensic analysis of compromised systems, develops and tunes detection rules across SIEM/EDR/SOAR, performs hypothesis-driven threat hunting, writes and refines incident playbooks, and coordinates with incident response teams on complex multi-stage attacks. Acts as the human judgment layer between automated triage and strategic security leadership. |
| What This Role Is NOT | NOT a Tier 1 analyst (alert triage, playbook following). NOT a Tier 3 / dedicated threat hunter (full-time proactive, strategic). NOT a SOC manager (people/budget). NOT a security architect or CISO (strategy/governance). T1 scores Red (Imminent, 1.55). CISO scores Green (Accelerated, 4.25). This role sits squarely between them. |
| Typical Experience | 2-5 years. CySA+, GCIH, or equivalent. Prior L1 experience typical. Hands-on with at least one SIEM (Splunk, Sentinel) and EDR platform. |
Seniority note: Tier 1 (entry-level) scores Red (Imminent) at 1.55 — AI already handles 90-100% of that work. Tier 3 / SOC Architect would score Green (Transforming) as their work is novel, strategic, and judgment-heavy. Same job family, three different zones.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital, desk-based. Remote-capable. No physical interaction with systems. |
| Deep Interpersonal Connection | 1 | Some collaboration during complex incidents — coordinating with IR teams, mentoring junior analysts, briefing management during active breaches. Valuable but not the core of the role. |
| Goal-Setting & Moral Judgment | 2 | Decides investigation direction on escalated incidents. Determines what constitutes suspicious behaviour in context. Writes and refines playbooks (defining how future incidents should be handled). Tunes detection rules based on judgment about acceptable false positive rates. Does not set organisational security strategy — that sits with the CISO. |
| Protective Total | 3/9 | |
| AI Growth Correlation | -1 | AI SOC tools are expanding from L1 triage into L2-type investigation work. Dropzone and Prophet Security already perform timeline reconstruction, IOC extraction, and kill chain mapping autonomously. More AI adoption reduces the volume of incidents requiring human investigation — weak negative, not as direct as L1's -2. |
Quick screen result: Protective 3/9 + Correlation -1 = Likely Yellow Zone.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Deep investigation of escalated incidents | 30% | 3 | 0.90 | AUGMENTATION | AI agents build incident timelines, correlate data across SIEM/EDR/identity, and enrich IOCs automatically. Human still leads — interpreting attacker intent, deciding next investigative steps, applying business context to determine real impact. Prophet Security cuts investigation time by 90% but keeps analyst in the loop for judgment calls. |
| Forensic analysis of compromised systems | 20% | 3 | 0.60 | AUGMENTATION | AI extracts artifacts, maps kill chains, and reconstructs attack sequences. Human interprets novel techniques, validates findings against environment-specific context, and determines whether the AI's reconstruction is complete. Novel malware and living-off-the-land techniques still require human pattern recognition. |
| Develop and tune detection rules | 15% | 3 | 0.45 | AUGMENTATION | AI suggests detection logic based on threat intelligence and identifies coverage gaps. Human validates against the specific environment, tests false positive rates, and tunes for business context that AI lacks. Simbian's AI Threat Hunt Agent already generates detection hypotheses autonomously. |
| Threat hunting (proactive) | 15% | 2 | 0.30 | AUGMENTATION | Hypothesis-driven, requires creative adversarial thinking about what attackers MIGHT do. AI assists with data queries and pattern scanning across months of logs. Human formulates hypotheses and interprets ambiguous signals. This is the most judgment-heavy L2 task and the hardest for AI to lead. |
| Write and refine playbooks | 10% | 3 | 0.30 | AUGMENTATION | AI drafts playbooks from incident data and best practices. Human validates logic, incorporates lessons learned from real incidents, and adapts to organisational context. Playbook creation is higher-judgment than playbook following (which is L1 work scoring 5). |
| Mentor analysts / validate AI output | 10% | 1 | 0.10 | NOT INVOLVED | Training junior analysts, reviewing AI triage decisions, and serving as the human quality check on automated investigation. This is fundamentally interpersonal and judgment-based. Emerging as a larger portion of L2 work as AI handles more triage. |
| Total | 100% | | 2.65 | | |
Task Resistance Score: 6.00 - 2.65 = 3.35/5.0
Displacement/Augmentation split: 0% displacement, 90% augmentation, 10% not involved.
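The task-table arithmetic above can be sketched in a few lines. This is an illustrative reconstruction, not an official AIJRI implementation: the weights and scores come from the table, and the 6.00 offset is assumed to invert the 1-5 automatability scale (score 1 becomes resistance 5).

```python
# Task decomposition from the table: (name, time share, score 1-5, involvement).
tasks = [
    ("Deep investigation of escalated incidents", 0.30, 3, "AUGMENTATION"),
    ("Forensic analysis of compromised systems",  0.20, 3, "AUGMENTATION"),
    ("Develop and tune detection rules",          0.15, 3, "AUGMENTATION"),
    ("Threat hunting (proactive)",                0.15, 2, "AUGMENTATION"),
    ("Write and refine playbooks",                0.10, 3, "AUGMENTATION"),
    ("Mentor analysts / validate AI output",      0.10, 1, "NOT INVOLVED"),
]

# Weighted automatability, then inverted to a resistance score (assumed formula).
weighted_total = sum(share * score for _, share, score, _ in tasks)
task_resistance = 6.00 - weighted_total

# Involvement split and the sub-label input (% of task time scoring 3+).
augmentation = sum(share for _, share, _, inv in tasks if inv == "AUGMENTATION")
high_risk = sum(share for _, share, score, _ in tasks if score >= 3)

print(f"Weighted total:       {weighted_total:.2f}")   # 2.65
print(f"Task resistance:      {task_resistance:.2f}")  # 3.35
print(f"Augmentation share:   {augmentation:.0%}")     # 90%
print(f"Task time scoring 3+: {high_risk:.0%}")        # 75%
```

The 75% figure is what later drives the Yellow (Urgent) sub-label, since it clears the ≥40% threshold.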
Reinstatement check (Acemoglu): Yes — AI creates new tasks for L2 specifically. "AI output validation" (reviewing automated investigation decisions), "AI workflow tuning" (configuring and optimising AI SOC platforms), and "AI-escalation triage" (handling the cases AI flags as beyond its confidence threshold) are net-new tasks absorbing work from the eliminated L1 tier. The L2 role is transforming into a human-AI partnership role, not disappearing.
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 0 | Aggregate cybersecurity demand remains strong — ISC2 2025 reports 4.8M unfilled positions globally, BLS projects 33% growth for information security analysts through 2034. However, this is aggregate data that does not disaggregate by tier. L2-specific postings (SOC Analyst II, Incident Investigator) are stable but not surging. ZipRecruiter shows average L2 SOC salary of $99,157, indicating active market. Glassdoor reports SOC Analyst II average at $107,900. Demand exists but is not accelerating for this specific tier. |
| Company Actions | -1 | CrowdStrike cut 500 jobs (5% of its workforce, May 2025) citing AI efficiencies — cuts were not limited to L1. Torq's Field CISO stated that traditional T1 and T2 SOC roles are "dissolving" into outcome-based models. Some companies are restructuring SOCs so that L2s absorb former L1 work while AI handles routine investigation, compressing the tier structure. Not yet widespread elimination, but the reorganisation is underway. |
| Wage Trends | 1 | L2 SOC analysts earn $99K-$108K average (ZipRecruiter, Glassdoor Feb 2026), a meaningful premium over L1's $55K-$75K. Wages growing with market — not stagnating like L1, but not commanding the 8-15% YoY premiums seen in senior/specialised security roles. The pay band reflects a role that still requires humans but faces increasing pressure from AI productivity gains. |
| AI Tool Maturity | -1 | AI tools are actively expanding from L1 triage into L2 investigation territory. Dropzone AI performs autonomous investigation in under 3 minutes. Prophet Security reconstructs timelines, extracts IOCs, and maps kill chains. Simbian's AI Threat Hunt Agent queries security data using natural language hypotheses. Gartner predicts AI in threat detection and incident response will rise from 5% to 70% by 2028 — primarily augmenting, not replacing, but the augmentation is deep and accelerating. AI SOC agents outperformed 95% of human participants in the Simbian AI SOC Championship (2025). |
| Expert Consensus | 0 | Mixed. Gartner treats AI SOC agents as augmentation, not replacement, with systems meant to help analysts investigate with more speed and consistency. Prophet Security explicitly positions as augmenting L2, not replacing. But Swimlane predicts the traditional tier model dissolves entirely. Intezer argues the "AI SOC agent" narrative misses the point — the future is about outcomes, not workflows. The L2 role persists in all expert models, but its shape is changing significantly. |
| Total | -1 | |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 0 | No licensing required for L2 SOC work. No regulation mandates human investigation of security incidents. EU AI Act focuses on high-risk decisions (healthcare, criminal justice) — security incident investigation is not a regulated activity. |
| Physical Presence | 0 | Fully remote-capable. SOCs increasingly operate distributed post-pandemic. |
| Union/Collective Bargaining | 0 | Tech/cybersecurity sector is overwhelmingly non-unionised. No collective bargaining protections. |
| Liability/Accountability | 1 | If a compromised system is missed during investigation, there are organisational consequences. L2 analysts bear more accountability than L1 for investigation quality — but personal liability sits with SOC management and the CISO, not the individual investigator. Moderate barrier. |
| Cultural/Ethical | 1 | Some resistance to fully automated investigation of complex incidents. Organisations still expect a human to validate that an AI's investigation is complete before closing a significant incident. Gartner explicitly cautions that "over-automation introduces risk if agents act on flawed assumptions." This is weaker than the cultural barriers protecting healthcare or legal roles but real enough to slow full displacement. |
| Total | 2/10 |
AI Growth Correlation Check
Confirmed at -1. AI growth weakly reduces demand for L2 analysts. As AI SOC platforms mature from L1 triage into investigation, each L2 analyst can handle more incidents with AI assistance — meaning organisations need fewer L2s per unit of alert volume. However, this is not the direct -2 displacement seen at L1. The investigation judgment, threat hunting creativity, and AI output validation tasks persist and even grow. The net effect is a mild headcount compression, not elimination. No recursive dependency — the L2 role does not exist BECAUSE of AI in the way AI Security Engineer does.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.35/5.0 |
| Evidence Modifier | 1.0 + (-1 × 0.04) = 0.96 |
| Barrier Modifier | 1.0 + (2 × 0.02) = 1.04 |
| Growth Modifier | 1.0 + (-1 × 0.05) = 0.95 |
Raw: 3.35 × 0.96 × 1.04 × 0.95 = 3.1774
JobZone Score: (3.1774 - 0.54) / 7.93 × 100 = 33.3/100
Zone: YELLOW (Green ≥48, Yellow 25-47, Red <25)
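The composite computation above can be expressed as a short function. This is a sketch reconstructed from the stated modifiers and normalisation step — the 0.54 offset and 7.93 divisor are taken from the formula shown, assumed to map the raw product onto a 0-100 scale:

```python
def aijri(task_resistance, evidence, barriers, growth):
    """Compute the JobZone (AIJRI) composite from the stated modifiers."""
    evidence_mod = 1.0 + evidence * 0.04  # Evidence Modifier
    barrier_mod  = 1.0 + barriers * 0.02  # Barrier Modifier
    growth_mod   = 1.0 + growth * 0.05    # Growth Modifier
    raw = task_resistance * evidence_mod * barrier_mod * growth_mod
    return (raw - 0.54) / 7.93 * 100      # normalise to 0-100

# Inputs for this role: resistance 3.35, evidence -1, barriers 2, growth -1.
score = aijri(task_resistance=3.35, evidence=-1, barriers=2, growth=-1)
zone = "GREEN" if score >= 48 else "YELLOW" if score >= 25 else "RED"
print(f"{score:.1f} -> {zone}")  # 33.3 -> YELLOW
```

Plugging in the role's inputs reproduces the 33.3/100 Yellow result above, confirming the modifier arithmetic is internally consistent.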
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 75% |
| AI Growth Correlation | -1 |
| Sub-label | Yellow (Urgent) — ≥40% task time scores 3+ |
Assessor override: None — formula score accepted.
Assessor Commentary
Score vs Reality Check
The Yellow (Urgent) label is accurate. The 3.35 Task Resistance Score is borderline — 0.15 below Green — which correctly reflects a role where the human remains in the loop but AI is doing progressively more of the actual investigation work. The key tension: all five evidence dimensions cluster near zero (mixed), not at the extremes seen for L1 (-8) or CISO (+9). This is a genuinely uncertain role, which is exactly what Yellow captures. No override was applied; the mechanical result matches the qualitative picture.
What the Numbers Don't Capture
- The tier compression effect. Companies are flattening SOC tier structures. As AI eliminates L1, L2 analysts absorb both "validate AI triage output" (former L1 work) and "advanced investigation" (current L2 work). The job title may survive while the actual work becomes a hybrid of L1 validation and L3 hunting — a fundamentally different role wearing the same name.
- Aggregate cybersecurity demand masks L2-specific trends. ISC2's 4.8M unfilled positions and BLS's 33% growth are aggregate numbers. They do not disaggregate by tier. The L2 tier specifically may be compressing even as overall cyber hiring grows.
- Rate of AI capability improvement. AI SOC agents outperformed 95% of human participants in Simbian's 2025 championship. Prophet Security and Dropzone are advancing from triage into deep investigation. The tools are improving faster in this domain than in most — which compresses the Yellow timeline.
- The pipeline paradox. If L1 disappears, how do L2s develop? Today's L2s built their skills through 2-3 years of L1 triage. The next generation will need a different entry path — possibly AI-assisted apprenticeship or direct L2 hiring with lab-based training. This creates short-term demand for existing L2s (scarce experienced investigators) but long-term uncertainty about the pipeline.
Who Should Worry (and Who Shouldn't)
If you are an L2 analyst who primarily handles routine escalations using established investigation procedures — you are closest to the L1 displacement pattern. AI agents are already performing this type of structured investigation autonomously. Your 2-3 year window is real.
If you are an L2 analyst who actively threat hunts, writes detection rules, and leads complex multi-stage incident investigations — you are operating at the L2/L3 boundary where AI augments but cannot lead. Your risk is lower than the Yellow label suggests, and upskilling toward dedicated threat hunting or detection engineering positions you in Green territory.
The single biggest factor: whether you investigate WHAT the AI tells you to investigate, or whether you formulate your own hypotheses about what attackers are doing. Hypothesis-driven analysts survive. Escalation-following analysts do not.
What This Means
The role in 2028: The "Tier 2 SOC Analyst" title persists but describes a fundamentally different job. L2s become AI-augmented investigators who validate AI findings on complex cases, lead proactive threat hunts the AI cannot initiate, and tune the AI detection/investigation pipeline. Routine investigation — the current majority of L2 work — will be handled autonomously by AI agents with human spot-checks. The surviving L2 is closer to today's L3.
Survival strategy:
- Invest in threat hunting and detection engineering. Hypothesis-driven work is the hardest for AI to lead. SANS SEC504, GCTI, and hands-on threat hunting experience differentiate you from the investigation AI is absorbing. Become the analyst who finds what the AI misses.
- Master AI SOC platforms as a power user. Learn to tune, validate, and optimise Dropzone, Prophet Security, Simbian, or equivalent. The L2 of 2028 is defined by their ability to direct AI investigation, not perform it manually. 64% of cyber job listings now require AI/ML skills.
- Build toward specialisation. Digital forensics (3.75, Green Transforming), malware analysis (3.45, Green Transforming), or cloud security engineering (3.10, Green Transforming) all score higher because they require deeper technical judgment AI cannot yet replicate.
Where to look next. If you're considering a career shift, these Green Zone roles share transferable skills with this role:
- SOC Manager (AIJRI 61.8) — Incident escalation experience and mentoring junior analysts are the foundation for SOC management
- Digital Forensics Analyst (AIJRI 61.1) — Advanced investigation skills and evidence handling transfer directly to digital forensics
- Malware Analyst / Reverse Engineer (AIJRI 54.4) — Malware triage and behavioural analysis experience provides a foundation for dedicated reverse engineering
Browse all scored roles at jobzonerisk.com to find the right fit for your skills and interests.
Timeline: 2-3 years. AI SOC agents are advancing from L1 triage into L2 investigation territory now — Gartner projects 70% AI adoption in threat detection and incident response by 2028. The window to upskill is open but closing. Organisations that have already automated L1 are turning their AI investment toward L2 next.