Role Definition
| Field | Value |
|---|---|
| Job Title | AI/ML Engineer — Cybersecurity |
| Seniority Level | Mid-level |
| Primary Function | Designs, builds, and deploys machine learning models specifically for cybersecurity applications — threat detection, anomaly detection, malware classification, user behaviour analytics, and automated security response. Operates ML pipelines ingesting security telemetry (logs, network flows, endpoint data) and produces models that defend production systems. Combines ML engineering depth with cybersecurity domain expertise to build defences that adapt to an adversarial, evolving threat landscape. |
| What This Role Is NOT | NOT a general ML/AI Engineer who builds models without security domain expertise (scored 68.2). NOT an AI Security Engineer who secures AI systems rather than building ML models for security (scored 79.3). NOT a SOC Analyst who consumes ML-generated alerts without building the models. NOT a Data Scientist applying standard classification — this role builds production ML systems against adversarial actors. |
| Typical Experience | 3-7 years. Typically 2-4 years in ML engineering or data science plus 1-3 years in cybersecurity domain. Python, PyTorch/TensorFlow, cloud ML platforms. Security knowledge: MITRE ATT&CK, network protocols, threat landscape. Common certs: AWS ML Specialty, Security+, CySA+. |
Seniority note: Junior (0-2 years) would score Yellow — executing established ML pipelines without designing novel detection models. Senior/Principal (8+ years) would score deeper Green with architectural authority over entire ML security platforms and strategic threat modelling.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital. All work in code editors, ML platforms, and security tooling. |
| Deep Interpersonal Connection | 0 | Primarily technical. Some collaboration with SOC and threat intel teams, but core value is ML engineering capability, not relationships. |
| Goal-Setting & Moral Judgment | 2 | Makes consequential decisions about what threats to model, acceptable false positive/negative rates, and how to architect ML defences against novel attack vectors. Does not set organisational security strategy (that's senior/CISO), but exercises significant domain-specific technical judgment. |
| Protective Total | 2/9 | |
| AI Growth Correlation | 2 | Recursive demand from two vectors: (1) more AI adoption → more AI-powered attacks → more ML defences needed, and (2) more AI deployments → more attack surfaces → more security ML models. Demand compounds from both the AI and cybersecurity growth curves simultaneously. |
Quick screen result: Protective 2 + Correlation 2 = Likely Green Zone (Accelerated). Proceed to confirm.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Design & build ML models for threat detection and anomaly detection | 25% | 2 | 0.50 | AUGMENTATION | Each security environment has unique telemetry, threat profiles, and baseline behaviour. Off-the-shelf models produce unacceptable false positive rates. The engineer designs custom architectures (graph neural networks for lateral movement, transformers for log sequence analysis) tuned to specific environments. AI assists with code generation and architecture suggestions but cannot independently understand a novel threat landscape and design appropriate detection models. An illustrative sketch of this kind of model appears after this section. |
| Develop adversarial ML defences and model robustness testing | 15% | 2 | 0.30 | AUGMENTATION | Attackers actively evade ML models — adversarial examples, concept drift exploitation, model poisoning. Building robust defences requires understanding both ML vulnerabilities and attacker TTPs. This is a cat-and-mouse game against human adversaries where creativity and domain knowledge define effectiveness. Tools assist with known robustness tests but cannot anticipate novel evasion techniques. |
| Build and operate ML pipelines for security data (MLOps/SecOps) | 20% | 3 | 0.60 | AUGMENTATION | MLOps platforms (SageMaker, Vertex AI, MLflow) automate significant deployment workflows. The engineer architects the pipeline, handles complex integration with SIEM/SOAR/EDR platforms, manages model drift monitoring specific to security data distributions, and debugs production issues. Human leads but AI handles substantial sub-workflows. |
| Research novel ML techniques for emerging threat landscape | 15% | 1 | 0.15 | NOT INVOLVED | Evaluating cutting-edge ML research (graph neural networks, foundation models for security, federated learning for threat sharing) and determining which techniques solve specific detection problems. Genuine novelty — the threat landscape evolves monthly and no automated system can independently identify which emerging ML technique addresses which emerging threat. |
| Automate security workflows using ML (SOAR integration, alert correlation) | 15% | 3 | 0.45 | AUGMENTATION | Building ML-powered automation for alert triage, incident prioritisation, and response orchestration. SOAR platforms handle structured workflows, but designing the ML layer that makes intelligent decisions about alert correlation and response priority requires human judgment about security context. Tools increasingly capable but the human designs the intelligence layer. |
| Cross-functional collaboration with SOC/IR/threat intel teams | 10% | 2 | 0.20 | NOT INVOLVED | Translating threat intelligence into ML model requirements. Understanding what SOC analysts need, what false positive rates are operationally acceptable, and how models integrate into human workflows. Requires security domain knowledge and communication that AI cannot replicate. |
| Total | 100% | | 2.20 | | |
Task Resistance Score: 6.00 - 2.20 = 3.80/5.0
Displacement/Augmentation split: 0% displacement, 75% augmentation, 25% not involved.
Reinstatement check (Acemoglu): Yes — AI creates substantial new tasks: LLM-powered threat hunting model development, AI agent security behaviour modelling, deepfake detection systems, AI-generated phishing detection, adversarial robustness testing for security models, foundation model adaptation for security telemetry, federated learning for cross-org threat sharing. The task portfolio expands with every new AI capability and every new attack vector.
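To make the first task in the table concrete, here is a minimal, hedged sketch of an unsupervised anomaly detector over network-flow features. It is illustrative only: the feature set, traffic distributions, and contamination rate are invented for the example, and production models in this role would typically be the custom architectures named in the table (graph neural networks, sequence transformers) rather than an off-the-shelf IsolationForest.

```python
# Illustrative only: a minimal unsupervised anomaly detector over
# invented network-flow features. Feature names, scales, and the
# contamination rate are hypothetical, not from any real environment.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in telemetry: rows = flows; columns = bytes sent, duration (s),
# distinct destination ports touched.
baseline = rng.normal(loc=[5e4, 30.0, 3.0], scale=[1e4, 10.0, 1.0],
                      size=(10_000, 3))

# Fit on a window of "normal" traffic. The contamination parameter is the
# assumed anomaly rate and directly trades false positives against misses.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(baseline)

new_flows = rng.normal(loc=[5e4, 30.0, 3.0], scale=[1e4, 10.0, 1.0],
                       size=(5, 3))
new_flows[0] = [9e5, 2.0, 250.0]  # exfiltration-like: huge, short, port-heavy

scores = model.decision_function(new_flows)  # lower = more anomalous
flags = model.predict(new_flows)             # -1 = anomaly, 1 = normal
for s, f in zip(scores, flags):
    print(f"score={s:+.3f}  {'ALERT' if f == -1 else 'ok'}")
```

The contamination and threshold choices expose the same false positive trade-off the rationale describes; the environment-specific judgment lies in choosing features, baselines, and thresholds that attackers cannot trivially blend into.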
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 2 | AI/ML postings surged 163% YoY to 49,200 in 2025 (Lightcast). Cybersecurity postings at 457,000+ US openings (CyberSeek 2025). The intersection — ML engineers with security expertise — is acutely scarce. LinkedIn ranked AI engineering the #1 fastest-growing job title for 2026. WEF projects ML specialist demand to rise 40% (1M jobs) over five years. |
| Company Actions | 2 | Every major security vendor building ML/AI teams: CrowdStrike (Charlotte AI), SentinelOne (Purple AI), Darktrace (autonomous response), Palo Alto (Cortex XSIAM), Microsoft (Copilot for Security). Startups raising hundreds of millions for AI-powered security (Abnormal Security, Vectra AI, Exabeam). 70% of firms report difficulty finding AI talent (Signify Technology). No evidence of role cuts — acute shortage. |
| Wage Trends | 2 | AI Security Engineer salary $143K-$225K+ (domain file). ML Engineer median $187,500 (Axial Search). This intersection role commands stacked premiums: 28% AI premium (HeroHunt) plus cybersecurity premium (4.7% YoY growth, Motion Recruitment). Mid-level salaries jumped 9.2% in 2025 alone (MRJ Recruitment). Surging well above inflation. |
| AI Tool Maturity | 1 | AutoML handles standard classification/regression but security-domain ML requires custom models trained on adversarial data distributions. Attackers actively evade detection models — off-the-shelf AutoML cannot adapt. Platforms (SageMaker, MLflow) automate pipeline operations but the engineer designs what to build and how to make it robust against evasion. Tools augment significantly but don't replace the adversarial judgment layer. |
| Expert Consensus | 2 | ISC2 2025: AI is top-5 cybersecurity skill, expected to become #1 in-demand. WEF: AI/ML specialists #1 fastest-growing through 2030. Gartner: 45% of cybersecurity tasks automatable by 2028 — but this creates demand for ML engineers who build the automation, not displacement of them. Universal consensus: the builders of AI security tools are in the strongest position. |
| Total | 9 | |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 1 | No formal licensing. EU AI Act mandates human oversight for high-risk AI systems (security tools monitoring critical infrastructure qualify). NIST AI RMF requires documented human-in-the-loop for AI risk management. Creates structural demand for qualified ML engineers who understand model behaviour in security contexts. |
| Physical Presence | 0 | Fully remote capable. |
| Union/Collective Bargaining | 0 | Tech sector, at-will employment. |
| Liability/Accountability | 1 | ML models that miss threats cause real harm — breaches, data loss, regulatory penalties. If a threat detection model fails to catch an intrusion, someone is accountable. EU AI Act assigns liability to providers of high-risk AI. Mid-level engineers share accountability with leadership but bear significant technical responsibility for model performance. |
| Cultural/Ethical | 1 | Growing trust requirements for ML models defending critical infrastructure. Organisations require human engineers to validate that security models are robust, unbiased, and not susceptible to adversarial manipulation before deployment. The stakes — missed breaches, false accusations — demand human oversight. |
| Total | 3/10 | |
AI Growth Correlation Check
Confirmed at 2. This role has dual recursive demand:
- AI growth drives attack growth: More AI deployments → more AI-powered attacks (82.6% of phishing now AI-generated, KnowBe4). ML engineers build the detection models that counter AI-powered threats.
- AI growth drives defence demand: More AI systems in production → more attack surfaces → more ML-powered security monitoring needed. Every AI deployment needs ML-based anomaly detection.
- The adversarial dimension adds uniqueness: Unlike general ML engineering, security ML operates against intelligent adversaries who actively evade models. This cat-and-mouse dynamic ensures continuous demand for human engineers who can adapt faster than attackers.
This qualifies as Green Zone (Accelerated): AI Growth Correlation = 2 AND AIJRI ≥ 48.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.80/5.0 |
| Evidence Modifier | 1.0 + (9 × 0.04) = 1.36 |
| Barrier Modifier | 1.0 + (3 × 0.02) = 1.06 |
| Growth Modifier | 1.0 + (2 × 0.05) = 1.10 |
Raw: 3.80 × 1.36 × 1.06 × 1.10 = 6.0259
JobZone Score: (6.0259 - 0.54) / 7.93 × 100 = 69.2/100
Zone: GREEN (Green ≥48, Yellow 25-47, Red <25)
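As a sanity check on the arithmetic above, a few lines of Python reproduce the composite. The 0.54 offset and 7.93 range are taken directly from the formula line as given; nothing here is new methodology.

```python
# Reproduces the AIJRI composite from the inputs above. The 0.54 offset and
# 7.93 range are the normalisation constants in the formula line as given.
task_resistance = 3.80           # 6.00 - 2.20 weighted task total
evidence_mod = 1.0 + 9 * 0.04    # 1.36
barrier_mod = 1.0 + 3 * 0.02     # 1.06
growth_mod = 1.0 + 2 * 0.05      # 1.10

raw = task_resistance * evidence_mod * barrier_mod * growth_mod
aijri = (raw - 0.54) / 7.93 * 100
print(f"raw={raw:.4f}  AIJRI={aijri:.1f}")  # raw=6.0259  AIJRI=69.2
```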
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 35% |
| AI Growth Correlation | 2 |
| Sub-label | Green (Accelerated) — Growth Correlation = 2 AND AIJRI ≥ 48 |
Assessor override: None — formula score accepted.
Assessor Commentary
Score vs Reality Check
The zone label is honest and well-calibrated. The 69.2 AIJRI sits just above ML/AI Engineer (68.2) — correct because cybersecurity domain expertise adds slightly higher task resistance (3.80 vs 3.75) through the adversarial dimension that general ML lacks. Below AI Security Engineer (79.3) because that role has broader security architecture responsibility and stronger barriers (5/10 vs 3/10). The 1-point gap from ML/AI Engineer is tight but accurate — the cybersecurity domain adds marginal protection through adversarial complexity, not a fundamental structural difference. No borderline risk (21 points above the Green threshold).
What the Numbers Don't Capture
- Supply shortage confound. The intersection of ML engineering and cybersecurity expertise is exceptionally rare. Surging demand and premium wages are partly driven by this scarcity — most ML engineers lack security domain knowledge, and most security professionals lack ML engineering depth. If cross-training programmes close the gap, wage premiums could compress. The role stays Green, but current compensation reflects scarcity as much as structural protection.
- Adversarial arms race dynamic. Unlike general ML where model performance improves monotonically, security ML operates against adversaries who actively adapt. This means models require continuous retraining and novel architecture development — the adversarial dimension creates perpetual demand for human engineers that static domains do not. A minimal drift-check sketch follows this list.
- Title rotation risk. "AI/ML Engineer — Cybersecurity" may not be the permanent title. As ML becomes standard in security platforms, this work could be absorbed into "Security Engineer" or "Detection Engineer" the way "cloud" was absorbed into general infrastructure roles. The WORK persists; the distinct title and premium may not.
- AutoML compression trajectory. Standard anomaly detection models are increasingly automatable. The role's protection depends on the continued prevalence of novel, adversarial, and domain-specific ML problems that AutoML cannot address. If security ML standardises (unlikely given the adversarial nature), task resistance would drop.
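To make the retraining trigger in the arms-race bullet concrete, here is a minimal sketch of one common drift check: comparing the detector's score distribution on live traffic against the training window with a two-sample Kolmogorov-Smirnov test. The distributions, window sizes, and alpha threshold are invented for illustration; real pipelines would monitor feature distributions and analyst feedback as well.

```python
# Illustrative drift check: compare the detector's score distribution on a
# live window against the training window with a two-sample KS test.
# Distributions, window sizes, and the alpha threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

train_scores = rng.beta(2, 8, size=20_000)  # model scores at training time
live_scores = rng.beta(2, 6, size=5_000)    # adversary-shifted live traffic

stat, p_value = ks_2samp(train_scores, live_scores)
if p_value < 0.01:
    print(f"drift detected (KS={stat:.3f}, p={p_value:.2e}): queue retraining")
else:
    print("score distribution stable")
```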
Who Should Worry (and Who Shouldn't)
If you're building custom ML models for novel threat detection — designing graph neural networks for lateral movement detection, developing adversarial-robust models, creating new detection architectures for emerging attack vectors — you're in an exceptionally strong position. Both the AI and cybersecurity growth curves feed your demand simultaneously, and the adversarial nature of the work ensures no off-the-shelf solution replaces you.
If you're primarily fine-tuning pre-trained anomaly detection models or maintaining existing ML pipelines in a SIEM/XDR platform without designing new detection approaches — your risk profile is closer to Yellow. Platform vendors (CrowdStrike, SentinelOne, Palo Alto) are building these capabilities into their products, reducing the need for in-house ML pipeline maintenance.
The single biggest factor: whether you design novel detection models or operate existing ones. The adversarial ML dimension — building models that resist active evasion by human attackers — is what separates the protected version of this role from the automatable version.
What This Means
The role in 2028: The AI/ML Engineer in cybersecurity will build detection systems for AI-powered attacks (deepfake social engineering, AI-generated malware, automated exploitation chains), design ML models for agentic AI behaviour monitoring, and develop adversarial robustness frameworks for the growing fleet of AI systems in production. Foundation models adapted for security telemetry will be standard tooling. The role becomes more specialised and more valuable as both AI complexity and attack sophistication increase.
Survival strategy:
- Master adversarial ML and model robustness. Adversarial examples, evasion attacks, model poisoning, concept drift in security contexts — this is the moat that AutoML cannot cross and the dimension that separates this role from general ML engineering. A minimal robustness-probe sketch follows this list.
- Build deep cybersecurity domain expertise. MITRE ATT&CK fluency, threat intelligence integration, understanding of attacker TTPs. The $200K+ roles go to engineers who understand both the models and the threats they're built to detect.
- Develop LLM and agentic AI security skills. AI agent behaviour monitoring, LLM-powered threat analysis, foundation model adaptation for security — these are the frontier applications where demand is accelerating fastest.
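As an illustration of the first bullet, here is a hedged sketch of the simplest white-box evasion probe: a single FGSM step against a toy detector. The model, epsilon, and features are invented; in real security settings many features are discrete or constrained, so gradient-based attacks need projection back onto feasible feature values, and robustness testing goes far beyond this one technique.

```python
# Illustrative FGSM-style evasion probe against a toy detector. In practice
# the probe targets a trained model; an untrained one still shows the
# mechanics. Real security features are often discrete or constrained, so
# gradient attacks need projection back onto feasible feature values.
import torch
import torch.nn as nn

torch.manual_seed(0)

detector = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 16)          # stand-in feature vectors
y = torch.randint(0, 2, (64,))   # 1 = malicious, 0 = benign

x_adv = x.clone().requires_grad_(True)
loss_fn(detector(x_adv), y).backward()

# One FGSM step: nudge every feature in the direction that increases loss.
epsilon = 0.1
x_evasion = (x_adv + epsilon * x_adv.grad.sign()).detach()

with torch.no_grad():
    clean_acc = (detector(x).argmax(dim=1) == y).float().mean().item()
    adv_acc = (detector(x_evasion).argmax(dim=1) == y).float().mean().item()
print(f"accuracy clean={clean_acc:.2f}  under FGSM={adv_acc:.2f}")
```

The sign of the gradient, not its magnitude, drives the perturbation, which is why small per-feature budgets can still flip classifications; adversarial training and feature-space constraints are the standard counters.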
Timeline: This role strengthens over the next 5-10+ years. The dual growth drivers — AI adoption and cybersecurity threat expansion — create compounding demand. The adversarial dimension ensures continuous need for human engineers who can adapt faster than attackers.