Role Definition
| Field | Value |
|---|---|
| Job Title | Purple Team Operator |
| Seniority Level | Senior (5-10+ years) |
| Primary Function | Plans and executes adversary emulation campaigns mapped to MITRE ATT&CK, collaborating with blue team defenders in real-time during exercises. Tests detection coverage, identifies gaps, creates detailed attack narratives, and delivers detection engineering recommendations. Develops custom tooling, evades EDR in controlled settings, and trains SOC analysts on adversary TTPs. |
| What This Role Is NOT | NOT a Red Team Operator (red teams operate in isolation with stealth; purple teams collaborate with defenders in real-time). NOT a Penetration Tester (pen testers find vulnerabilities; purple teamers validate detection and response). NOT a BAS Operator (BAS runs scripted simulations; purple teamers adapt live). NOT a Detection Engineer (detection engineers write rules; purple teamers test whether those rules catch real adversary behaviour). |
| Typical Experience | 5-10+ years. Certifications: OSCP, CRTO, GXPN, GDAT (SANS SEC599). Background in pen testing, red teaming, or SOC operations. Deep MITRE ATT&CK knowledge required. |
Seniority note: A mid-level purple team operator (3-5 years) who follows prescribed emulation plans without designing campaigns or mentoring defenders would score Yellow (~42-45) — closer to mid-level Red Team Operator (47.5). The senior premium comes from campaign design, real-time coaching, and strategic detection engineering.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital. Purple team exercises run in production environments via remote access. No physical component. |
| Deep Interpersonal Connection | 3 | The defining feature. Purple teaming IS real-time collaboration — working side-by-side with SOC analysts during attacks, coaching defenders on what to look for, building trust between offensive and defensive teams. The human relationship IS the deliverable. |
| Goal-Setting & Moral Judgment | 2 | Decides which TTPs to emulate, adapts attacks in real-time based on defender response, determines when to push harder or pull back, and makes judgment calls about risk to production systems during live exercises. |
| Protective Total | 5/9 | |
| AI Growth Correlation | 1 | AI adoption drives demand for purple teaming — organisations need to validate AI-powered defences work against real adversary behaviour. BAS tools handle scripted scenarios but cannot replicate adaptive collaboration. Weak positive: more demand but BAS absorbs baseline testing. |
Quick screen result: Protective 5/9 AND Correlation 1 — Likely Green Zone. Proceed to quantify.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Adversary emulation campaign planning & execution | 20% | 2 | 0.40 | AUG | Designs and executes ATT&CK-mapped campaigns targeting specific threat actor TTPs. AI assists with payload generation and technique selection. Human drives creative attack paths, adapts to defender responses in real-time, and makes judgment calls about stealth vs detection testing. |
| Real-time defender collaboration & coaching | 20% | 1 | 0.20 | NOT | Works side-by-side with SOC analysts during exercises — explaining what's happening, why detections fired or missed, and how to improve. This is interpersonal teaching and trust-building. AI has no role. |
| Detection engineering recommendations | 15% | 3 | 0.45 | AUG | Analyses detection gaps and recommends Sigma/YARA/KQL rules. AI generates draft detection logic from ATT&CK mappings. Human validates against real attack behaviour, tunes for false positive rates, and prioritises based on organisational risk. |
| Custom tooling & EDR evasion development | 10% | 2 | 0.20 | AUG | Builds custom tools and payloads to test specific EDR configurations. AI generates code scaffolding and obfuscation. Human understands the target environment's specific defences and crafts bypasses that automated tools miss. |
| Attack narrative creation & reporting | 10% | 4 | 0.40 | DISP | Documents attack chains, timelines, and findings. AI generates 70%+ of report content — TTP mappings, evidence screenshots, timeline reconstruction. Human adds strategic context and defender-facing recommendations. |
| SOC analyst training on adversary TTPs | 10% | 1 | 0.10 | NOT | Trains defenders on adversary behaviour — what real attacks look like, how to recognise them, what indicators to hunt for. This is teaching, mentoring, and knowledge transfer. Irreducibly human. |
| MITRE ATT&CK gap analysis & coverage mapping | 5% | 4 | 0.20 | DISP | Maps detection coverage against ATT&CK matrix. AI agents automate coverage analysis, identify gaps, and generate heatmaps. Human interprets organisational context. |
| Reconnaissance & threat intelligence integration | 5% | 5 | 0.25 | DISP | Gathers threat intelligence on relevant adversaries, maps TTPs for emulation. Fully automatable by AI agents. |
| Strategic program development & methodology | 5% | 2 | 0.10 | AUG | Develops purple team methodology, defines exercise frameworks, aligns program with business risk. AI assists with framework documentation. Human drives strategic direction. |
| Total | 100% | | 2.30 | | |
Task Resistance Score: 6.00 - 2.30 = 3.70/5.0
Displacement/Augmentation split: 20% displacement, 50% augmentation, 30% not involved.
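The task-decomposition arithmetic above can be reproduced with a short script. The weights, scores, and categories are transcribed directly from the table rows; only the variable names are ours:

```python
# Reproduce the task-decomposition arithmetic from the table above.
# Each tuple is (time fraction, AI score 1-5, category), one per table row.
tasks = [
    (0.20, 2, "AUG"),   # campaign planning & execution
    (0.20, 1, "NOT"),   # real-time defender collaboration & coaching
    (0.15, 3, "AUG"),   # detection engineering recommendations
    (0.10, 2, "AUG"),   # custom tooling & EDR evasion
    (0.10, 4, "DISP"),  # attack narrative creation & reporting
    (0.10, 1, "NOT"),   # SOC analyst training
    (0.05, 4, "DISP"),  # ATT&CK gap analysis & coverage mapping
    (0.05, 5, "DISP"),  # reconnaissance & threat intel integration
    (0.05, 2, "AUG"),   # strategic program development
]

weighted_total = sum(pct * score for pct, score, _ in tasks)   # 2.30
resistance = 6.00 - weighted_total                             # 3.70
split = {cat: sum(pct for pct, _, c in tasks if c == cat)
         for cat in ("DISP", "AUG", "NOT")}                    # 20/50/30
pct_3plus = sum(pct for pct, score, _ in tasks if score >= 3)  # 0.35

print(f"weighted={weighted_total:.2f} resistance={resistance:.2f}")
print(split, f"time at 3+ = {pct_3plus:.0%}")
```

The 0.35 figure here is the same "% of task time scoring 3+" used later in the sub-label determination.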
Reinstatement check (Acemoglu): Yes. AI creates new tasks: validating AI-driven SOC detections against real adversary behaviour, testing AI-powered EDR/XDR effectiveness, emulating AI-assisted threat actors, and coaching defenders on AI-generated alert interpretation. The purple team operator becomes the quality assurance layer for AI defences.
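The ATT&CK gap-analysis task scored above as largely automatable reduces to a coverage check like the following minimal sketch. The technique IDs are real ATT&CK identifiers, but the emulation plan and detection inventory are invented for illustration:

```python
# Minimal ATT&CK coverage map: which emulated techniques have a detection?
# Emulation plan and detection inventory below are illustrative only.
emulated = {
    "T1059.001": "PowerShell",
    "T1003.001": "LSASS Memory",
    "T1021.002": "SMB/Admin Shares",
    "T1055":     "Process Injection",
}
detections = {"T1059.001", "T1021.002"}  # techniques with at least one rule

covered = sorted(t for t in emulated if t in detections)
gaps = sorted(t for t in emulated if t not in detections)
coverage = len(covered) / len(emulated)

print(f"coverage={coverage:.0%} gaps={gaps}")
```

This is the mechanical half AI agents absorb; interpreting which gaps matter for a given organisation remains the human half.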
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 1 | Purple team roles growing as enterprises mature beyond pen testing. Indeed and ZipRecruiter show increasing "purple team" and "adversary emulation" postings. MITRE ATT&CK adoption by enterprises drives structured purple teaming demand. Niche but expanding — large enterprises and MSSPs increasingly build dedicated purple team capability. |
| Company Actions | 1 | Major financial institutions, tech companies, and defence contractors building purple team programs. CISA published a purple teaming guide. CBEST/TIBER frameworks increasingly include purple team exercises alongside red team operations. Companies investing in BAS AND human purple teams — BAS validates coverage, humans validate detection quality. |
| Wage Trends | 1 | Senior purple team operators command $150K-$225K+ (US). Growing steadily. Premium over mid-level pen testers reflects scarcity of professionals with both offensive skills and collaborative detection engineering capability. Glassdoor and ZipRecruiter show upward trajectory. |
| AI Tool Maturity | 0 | BAS platforms (SafeBreach, AttackIQ, Caldera) automate baseline adversary simulation. These handle "does our EDR detect known TTPs?" — work that overlaps with purple team baseline testing. But BAS cannot collaborate with defenders in real-time, adapt attacks based on live feedback, or coach analysts. AI tools augment the role rather than replace it. Anthropic observed exposure for Information Security Analysts: 48.6% — mixed automated/augmented, confirming a 0 score. |
| Expert Consensus | 1 | MITRE, CISA, and SANS position purple teaming as a maturation beyond pen testing and red teaming. Industry consensus: purple teaming's collaborative model is the most effective approach to improving detection. No expert predicts AI replacing the real-time collaborative element. |
| Total | 4 | |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 1 | CBEST, TIBER-EU, and DORA frameworks increasingly require collaborative red/purple team exercises with accountable human operators. Government and defence purple teaming requires security clearances. CREST accreditation applies. |
| Physical Presence | 0 | Fully remote-capable. Purple team exercises run over secure network access. |
| Union/Collective Bargaining | 0 | Tech sector, at-will employment. |
| Liability/Accountability | 2 | Purple team operators execute attacks against production systems under Rules of Engagement. When an exercise causes unintended disruption, a human bears accountability. The real-time nature — adapting attacks while coordinating with defenders — means an autonomous AI could cause cascading production impacts without the human judgment needed to stop them. |
| Cultural/Ethical | 1 | Organisations trust human purple teamers to attack their systems because of the collaborative relationship. Defenders share detection gaps openly because they trust the purple team won't exploit that trust. An AI agent attacking production systems without a human collaborator would face resistance from both security leadership and SOC teams. |
| Total | 4/10 | |
AI Growth Correlation Check
Confirmed at +1 (Weak Positive). AI adoption drives purple teaming demand: organisations deploying AI-powered EDR/XDR need human operators to validate those AI defences work against real adversary behaviour. The feedback loop — attack, detect, improve, re-test — is the core value proposition, and it intensifies as AI defences become more complex. Weak rather than strong because BAS platforms absorb some baseline validation work that purple teams previously performed manually.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.70/5.0 |
| Evidence Modifier | 1.0 + (4 x 0.04) = 1.16 |
| Barrier Modifier | 1.0 + (4 x 0.02) = 1.08 |
| Growth Modifier | 1.0 + (1 x 0.05) = 1.05 |
Raw: 3.70 x 1.16 x 1.08 x 1.05 = 4.8671
JobZone Score: (4.8671 - 0.54) / 7.93 x 100 = 54.6/100
Zone: GREEN (Green >=48, Yellow 25-47, Red <25)
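The composite calculation can be checked in a few lines. The modifier coefficients and the normalisation constants 0.54 and 7.93 are taken from the formulas as given above:

```python
# AIJRI composite, reproducing the inputs and formula stated above.
resistance = 3.70
evidence_mod = 1.0 + 4 * 0.04   # 1.16
barrier_mod  = 1.0 + 4 * 0.02   # 1.08
growth_mod   = 1.0 + 1 * 0.05   # 1.05

raw = resistance * evidence_mod * barrier_mod * growth_mod
score = (raw - 0.54) / 7.93 * 100
zone = "GREEN" if score >= 48 else ("YELLOW" if score >= 25 else "RED")

print(f"raw={raw:.4f} score={score:.1f} zone={zone}")
```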
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 35% |
| AI Growth Correlation | 1 |
| Sub-label | Green (Transforming) — >=20% task time scores 3+, Growth != 2 |
Assessor override: None — formula score accepted. 54.6 sits comfortably within Green, 7.1 points above Red Team Operator (47.5) and 2.5 points below Red Team Leader (57.1). The premium over red team operator reflects the 30% of task time in irreducible human collaboration (score 1) vs 10% for red team. Well-calibrated.
Assessor Commentary
Score vs Reality Check
The 54.6 score places Purple Team Operator between Red Team Operator (47.5) and Red Team Leader (57.1), which accurately reflects the role's unique position: it has the offensive technical depth of a red team operator but adds a substantial interpersonal collaboration layer (30% score-1 time) that the mid-level red team operator lacks. The 3.70 task resistance is higher than the red team operator's 3.45 because purple teaming's defining characteristic — real-time collaboration with defenders — is an irreducibly human activity that anchors 30% of task time at score 1. No override needed.
What the Numbers Don't Capture
- BAS platform convergence. BAS tools are moving toward "continuous purple teaming" — automated emulation with dashboard feedback to SOC teams. If BAS achieves genuine adaptive collaboration (currently not possible), it would compress the purple team operator's value proposition. The 5-7 year timeline depends on BAS remaining unable to replicate real-time human coaching.
- Role title instability. "Purple Team Operator" is not yet a standardised job title in the way "Penetration Tester" or "SOC Analyst" is. Many professionals doing purple team work have titles like "Adversary Emulation Specialist," "Senior Red Team Engineer (Purple)," or "Offensive Security Engineer." The function is growing faster than the title.
- Talent pipeline scarcity. Purple teaming requires both offensive security expertise AND collaborative/teaching skills — a combination that is rare. Most offensive security professionals self-select for autonomy, not collaboration. This creates a structural supply constraint that supports demand independently of AI dynamics.
Who Should Worry (and Who Shouldn't)
Safe: The senior operator who designs adversary emulation campaigns, coaches SOC analysts in real-time during exercises, and delivers detection engineering recommendations that measurably improve an organisation's security posture. Your blend of offensive skill and interpersonal collaboration is the most AI-resistant combination in offensive security.
At risk: The operator who runs scripted adversary simulations without meaningful defender collaboration — essentially operating a BAS platform with extra steps. If your purple team exercises could be replicated by configuring AttackIQ and sending the SOC team a report, you're competing with automation.
The single biggest separator: real-time collaboration quality. The operator who can sit with a SOC analyst, explain why their detection missed a specific technique, help them write a better detection rule on the spot, and then re-run the attack to validate — that is irreducibly human work. The operator who runs attacks in isolation and delivers findings by email is a red team operator without the stealth.
What This Means
The role in 2028: Purple team operators use AI to accelerate payload generation, automate coverage mapping, and draft reports — freeing time for higher-value collaborative work. Exercises run more frequently as AI handles preparation. The core value — real-time adversary-defender collaboration — intensifies as AI defences become more complex and require human validation.
Survival strategy:
- Deepen real-time collaboration skills. The ability to coach defenders during live exercises is the most AI-resistant skill in the role. Invest in communication, teaching, and interpersonal skills alongside technical offensive capabilities.
- Build detection engineering expertise. Writing and validating detection rules (Sigma, KQL, YARA) based on adversary emulation findings makes you indispensable to the feedback loop. Detection engineering is where purple team value becomes permanent.
- Develop AI defence validation capability. Testing AI-powered EDR/XDR/SOAR effectiveness against real adversary behaviour is the growth vector. The purple team operator who validates AI defences becomes the quality assurance layer for the entire security stack.
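The "write a better detection rule on the spot, then re-run the attack to validate" loop can be sketched in miniature. This is a toy Sigma-style match check, not any real engine's API; the rule, field names, and sample event are invented for illustration:

```python
# Toy detection check: does a Sigma-style selection match a telemetry event?
# Rule logic and event schema are illustrative, not a real engine or format.
rule = {
    "title": "Suspicious encoded PowerShell",
    "selection": {
        "Image|endswith": "\\powershell.exe",
        "CommandLine|contains": "-enc",
    },
}

def matches(selection: dict, event: dict) -> bool:
    """AND together all selection conditions against a single event."""
    for key, needle in selection.items():
        field, _, op = key.partition("|")
        value = event.get(field, "")
        if op == "endswith" and not value.endswith(needle):
            return False
        if op == "contains" and needle not in value:
            return False
    return True

event = {
    "Image": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
    "CommandLine": "powershell.exe -enc SQBFAFgA",
}
print(matches(rule["selection"], event))  # True for this sample event
```

In a real exercise the operator re-runs the emulated technique against live telemetry to confirm the tuned rule fires, then checks it against benign activity for false positives.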
Timeline: 5-7+ years of stability. The real-time collaborative model is structurally protected by interpersonal trust requirements and the irreducible human element of teaching and coaching. Timeline compresses only if BAS platforms achieve genuine adaptive collaboration — not on the near-term horizon.