Role Definition
| Field | Value |
|---|---|
| Job Title | Product Security Engineer |
| Seniority Level | Mid-Level (3-7 years) |
| Primary Function | Ensures connected products (IoT, embedded, consumer electronics) meet security requirements throughout their lifecycle. Conducts threat modelling and secure design reviews, manages SAST/DAST/SCA pipelines, operates PSIRT functions (vulnerability intake, triage, coordinated disclosure, ENISA reporting), builds SBOM inventories, and drives EU Cyber Resilience Act compliance including CE marking evidence and post-market surveillance. Bridges product engineering and security. |
| What This Role Is NOT | NOT a general Application Security Engineer (code-focused, not product-lifecycle). NOT an OT/ICS Security Engineer (industrial control systems, scored 73.3 Green). NOT an AI Security Engineer (AI/ML model security, scored 79.3 Green). This role is product-centric — embedded firmware, connected devices, regulatory compliance for products placed on the EU market. |
| Typical Experience | 3-7 years. Background in embedded systems security, IoT security, or application security with product company experience. Certs: CSSLP, IEC 62443, ISO/SAE 21434. Familiarity with STRIDE threat modelling, OWASP IoT Top 10, FIRST PSIRT Services Framework. |
Seniority note: Junior (0-2 years) would score Yellow — primarily running scanning tools and filing tickets. Senior/Principal (8+ years) would score deeper Green (~62-67) — owns product security strategy across multiple product lines, makes risk acceptance decisions, leads PSIRT programmes.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital. Some hardware test bench work or firmware debugging, but core work is desk-based design review, compliance documentation, and tool pipeline management. |
| Deep Interpersonal Connection | 1 | Cross-functional collaboration with product managers, firmware engineers, and legal/compliance teams. Must influence engineering culture toward security-by-design. Transactional rather than trust-dependent. |
| Goal-Setting & Moral Judgment | 2 | Interprets CRA essential requirements for specific product categories, makes vulnerability severity decisions with product-specific context, determines coordinated disclosure timing, decides whether to recommend product recall vs patch. Guided by frameworks but applies significant judgment. |
| Protective Total | 3/9 | |
| AI Growth Correlation | 1 | EU CRA enforcement (reporting Sep 2026, full Dec 2027) and proliferation of connected products create sustained demand. Not recursive like AI security — demand driven by regulation and IoT expansion. |
Quick screen result: Protective 3 + Correlation 1 = Likely low Green. CRA regulatory mandate and PSIRT judgment push above Yellow.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Secure product design & threat modelling | 25% | 2 | 0.50 | AUG | Each connected product has unique architecture — embedded RTOS, cloud backend, mobile companion app, firmware update channels. AI can suggest STRIDE categories but cannot determine threat relevance for a specific product's hardware-software integration. The engineer decides which threats matter. |
| SAST/DAST/SCA tooling & pipeline management | 15% | 4 | 0.60 | DISP | AI-enhanced tools (Snyk, Semgrep, SonarQube) handle scanning at scale with reduced false positives. Agent-executable pipelines run automatically in CI/CD. The engineer configures once and reviews exceptions rather than operating the pipeline manually. |
| CRA compliance & CE marking documentation | 15% | 2 | 0.30 | AUG | AI can draft compliance templates and map controls to CRA Annex I. But interpreting essential requirements for a specific product category, determining conformity assessment routes (self-assessment vs third-party), and signing declarations of conformity require human judgment and legal accountability. CE marking demands human attestation. |
| PSIRT operations (vuln intake, triage, disclosure) | 15% | 2 | 0.30 | AUG | Vulnerability triage requires product-specific impact assessment — a CVE in an open-source library may be critical in one product and unexploitable in another depending on how the code is called. Coordinated disclosure timing involves legal, PR, and customer relationship judgment. ENISA reporting (mandatory Sep 2026) requires human decision on exploitability. |
| SBOM management & supply chain security | 10% | 4 | 0.40 | DISP | SBOM generation and component tracking are agent-executable. Tools like Snyk, FOSSA, and Sonatype generate SBOMs from build systems, track known vulnerabilities, and flag licence conflicts automatically. Human reviews exceptions. |
| Stakeholder engagement & developer training | 10% | 1 | 0.10 | NOT | Influencing engineering culture, delivering security training to firmware teams, convincing product managers to prioritise security fixes over feature deadlines, building security champion programmes. Human relationship and persuasion work. |
| Post-market surveillance & incident response | 10% | 2 | 0.20 | AUG | CRA mandates ongoing monitoring and vulnerability reporting to ENISA/CSIRTs. Post-market surveillance of deployed products requires understanding field conditions and customer environments. AI assists with log analysis but response decisions — recall, patch, advisory — require human judgment and accountability. |
| Total | 100% | — | 2.40 | — | — |
Task Resistance Score: 6.00 - 2.40 = 3.60/5.0
Displacement/Augmentation split: 25% displacement, 65% augmentation, 10% not involved.
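The weighted arithmetic above can be reproduced with a short sketch. Weights and scores are taken directly from the task table; the 6.00 minus the weighted sum follows the Task Resistance formula stated above.

```python
# Weighted task-resistance calculation for the task decomposition table.
# Each entry: (time share, agentic AI score 1-5, Aug/Disp/Not label).
tasks = [
    (0.25, 2, "AUG"),   # secure design & threat modelling
    (0.15, 4, "DISP"),  # SAST/DAST/SCA pipelines
    (0.15, 2, "AUG"),   # CRA compliance & CE marking
    (0.15, 2, "AUG"),   # PSIRT operations
    (0.10, 4, "DISP"),  # SBOM & supply chain
    (0.10, 1, "NOT"),   # stakeholder engagement & training
    (0.10, 2, "AUG"),   # post-market surveillance
]

weighted = sum(share * score for share, score, _ in tasks)
resistance = 6.00 - weighted  # invert: higher = more automation-resistant

# Displacement/augmentation split by time share.
split = {label: 0.0 for label in ("AUG", "DISP", "NOT")}
for share, _, label in tasks:
    split[label] += share

print(round(weighted, 2))    # 2.4
print(round(resistance, 2))  # 3.6
```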
Reinstatement check (Acemoglu): Yes — the EU CRA creates entirely new tasks that did not exist before: conformity assessment documentation, mandatory vulnerability reporting to ENISA via the Single Reporting Platform (Sep 2026), SBOM maintenance obligations, coordinated vulnerability disclosure mandates, and post-market surveillance. These are expanding the role's scope, not automating existing work.
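The product-specific triage judgment described in the PSIRT row — the same CVE can be critical in one product and unexploitable in another — can be sketched as a context-adjustment rule. The function name, the reachability/exposure inputs, and the thresholds are all illustrative assumptions, not CVSS policy or any vendor's API; in practice an engineer supplies these inputs per product.

```python
def triage_severity(cvss_base: float, reachable: bool, network_exposed: bool) -> str:
    """Context-adjusted triage sketch: downgrade findings whose vulnerable
    code path is never called in this product's build, and upgrade those
    sitting on a network-exposed surface. Thresholds are illustrative."""
    if not reachable:
        return "informational"  # library present, code path never invoked
    score = cvss_base + (1.0 if network_exposed else 0.0)
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

# Same CVE, two products: reachable in an exposed OTA handler vs. dead code.
print(triage_severity(8.1, reachable=True, network_exposed=True))    # critical
print(triage_severity(8.1, reachable=False, network_exposed=False))  # informational
```

The point of the sketch is what it leaves out: deciding whether the code path is actually reachable is the human judgment the task table scores as AUG, not DISP.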
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 1 | Product security postings growing 10-15% YoY as CRA enforcement approaches. Dedicated "Product Security Engineer" titles increasing on LinkedIn and Indeed. Growth is anticipatory — not yet the 20%+ surge expected once the Sep 2026 reporting deadline hits. |
| Company Actions | 1 | Major manufacturers (Siemens, Bosch, Philips, Samsung) building dedicated product security teams. Startups like Finite State and Cybellum raised funding for product security platforms. No companies cutting this role. The European Commission's release of implementation guidelines is accelerating organisational preparation. |
| Wage Trends | 1 | Mid-level salary $130K-$190K (Glassdoor avg $186K, ZipRecruiter avg $144K). Growing above inflation, tracking general cybersecurity salary growth with early CRA premium signals for candidates with conformity assessment experience. |
| AI Tool Maturity | 0 | AI-enhanced SAST/DAST/SCA tools are production-ready and automating scanning workflows. But core tasks — threat modelling for novel product architectures, CRA interpretation for specific product categories, PSIRT judgment — have no viable AI alternative. Anthropic observed exposure for Information Security Analysts: 48.6% — mixed automated/augmented, supporting neutral. |
| Expert Consensus | 1 | ISC2 2025: 87% expect AI to enhance, not replace, security roles. ENISA CRA guidance emphasises human expertise for conformity assessment. FIRST PSIRT framework requires human-led vulnerability coordination. Blaze InfoSec and Kusari both identify CRA as creating sustained demand for product security expertise. |
| Total | 4 |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 2 | EU CRA Article 13 places direct obligations on manufacturers — including designating responsible persons, signing declarations of conformity, and reporting exploited vulnerabilities to ENISA. CE marking requires human attestation. NIS2 creates additional accountability obligations. The regulatory framework mandates human ownership throughout. |
| Physical Presence | 0 | Fully remote-capable. Some firmware debugging requires lab access but is the exception. |
| Union/Collective Bargaining | 0 | Non-unionised professional role. |
| Liability/Accountability | 2 | CRA penalties for non-compliance include product recall and withdrawal from the EU market. If a connected product causes harm due to a known unpatched vulnerability, the manufacturer faces regulatory penalties and civil liability. Someone must own the decision to ship, patch, or recall. AI cannot bear legal responsibility for a flawed conformity assessment or delayed vulnerability disclosure. |
| Cultural/Ethical | 1 | Manufacturers and regulators expect human experts behind product security decisions. Enterprise B2B customers demand named security contacts and human PSIRT responders. Reinforced by the CRA's emphasis on human accountability, though AI-assisted workflows are gradually normalising. |
| Total | 5/10 |
AI Growth Correlation Check
Confirmed at 1 (Weak Positive). The proliferation of connected products (IoT, smart home, industrial IoT, connected medical devices, connected vehicles) expands the product security attack surface. EU CRA creates a regulatory mandate that did not exist before — every product with digital elements sold in the EU market requires cybersecurity conformity assessment, with reporting obligations starting September 2026 and full enforcement by December 2027. This is analogous to GDPR creating demand for Data Protection Officers. AI adoption also increases product complexity (AI-enabled features in products require security assessment). Not Accelerated (2) because the role does not exist specifically because of AI — it exists because of product connectivity and regulation.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.60/5.0 |
| Evidence Modifier | 1.0 + (4 x 0.04) = 1.16 |
| Barrier Modifier | 1.0 + (5 x 0.02) = 1.10 |
| Growth Modifier | 1.0 + (1 x 0.05) = 1.05 |
Raw: 3.60 x 1.16 x 1.10 x 1.05 = 4.8233
JobZone Score: (4.8233 - 0.54) / 7.93 x 100 = 54.0/100
Zone: GREEN (Green >=48, Yellow 25-47, Red <25)
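The modifier and normalisation arithmetic in the table reduces to a short function. The 0.54 offset and 7.93 divisor are the normalisation constants quoted in the score line above; the modifier coefficients (0.04, 0.02, 0.05) come from the three modifier rows.

```python
def aijri(resistance: float, evidence: int, barriers: int, growth: int) -> float:
    """JobZone composite: task resistance scaled by evidence, barrier, and
    growth modifiers, then normalised to a 0-100 scale."""
    evidence_mod = 1.0 + evidence * 0.04   # Evidence score -2..2 per dimension, summed
    barrier_mod = 1.0 + barriers * 0.02    # Barrier total 0-10
    growth_mod = 1.0 + growth * 0.05       # AI Growth Correlation
    raw = resistance * evidence_mod * barrier_mod * growth_mod
    return (raw - 0.54) / 7.93 * 100

score = aijri(3.60, evidence=4, barriers=5, growth=1)
print(round(score, 1))  # 54.0
```

Setting `barriers=0` in the same call reproduces the barrier-stripped counterfactual discussed in the commentary below the score.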
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 25% |
| AI Growth Correlation | 1 |
| Sub-label | Green (Transforming) — AIJRI >= 48 AND >= 20% of task time scores 3+ |
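The sub-label rule in the table reduces to a simple predicate over the composite score and the share of task time scoring 3+. Zone thresholds are taken from the zone line above; the "Green (Stable)" fallback is an assumed complementary label for completeness, not stated in this assessment.

```python
def sub_label(score: float, pct_time_3plus: float) -> str:
    """Zone plus sub-label. 'Transforming' flags Green roles where >= 20%
    of task time already scores 3+, i.e. the role survives but its task
    mix is shifting toward AI execution."""
    if score >= 48:
        return "Green (Transforming)" if pct_time_3plus >= 0.20 else "Green (Stable)"
    if score >= 25:
        return "Yellow"
    return "Red"

print(sub_label(54.0, 0.25))  # Green (Transforming)
```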
Assessor override: None — formula score accepted. The 54.0 sits comfortably in Green, reflecting a role with solid task resistance (3.60) boosted by positive evidence, meaningful barriers, and growth correlation. Compared to Security Engineer (44.6 Yellow) which lacks the CRA regulatory tailwind and PSIRT judgment, Application Security Engineer (57.1 Green) which has deeper code-level specialisation, and Cloud Security Engineer (49.9 Green) which operates in a different product context, the score calibrates correctly.
Assessor Commentary
Score vs Reality Check
The Green (Transforming) label at 54.0 is honest and no longer borderline — 6 points above the Green/Yellow threshold of 48. The upgrade from the previous 50.0 reflects strengthened evidence (CRA timeline now concrete with the Sep 2026 reporting deadline) and upgraded barriers (regulatory mandate raised to 2/2 as CRA implementation guidelines crystallise). The score is not barrier-dependent in isolation — stripping barriers entirely would yield ~48.5, still Green. The primary drivers are task resistance (3.60) and positive evidence/growth modifiers. CRA is the single strongest protective factor, but even without it the role's judgment-intensive nature (threat modelling, PSIRT, secure design) keeps it above Yellow.
What the Numbers Don't Capture
- Regulatory cliff (positive). CRA reporting obligations start September 2026 — just 6 months away. Full enforcement December 2027. Once penalties apply and market surveillance authorities begin enforcing, demand will surge. Evidence score could improve from 4 to 6-7, pushing the score toward ~60.
- Tool maturation rate. SAST/DAST/SCA tools are improving rapidly. The 25% of task time scoring 4 (displacement) could expand to 35-40% within 3-5 years as AI handles more scanning, fuzzing, and compliance documentation. This gradually compresses task resistance.
- Title rotation. "Product Security Engineer" consolidates fragmented titles — "Embedded Security Engineer," "IoT Security Analyst," "Device Security Engineer," "Connected Product Security Lead." Posting trend data understates actual demand because it misses these synonyms.
- Market growth vs headcount growth. The number of connected products is growing exponentially, but product security teams are not scaling proportionally. Organisations invest in tooling platforms (Finite State, Cybellum) rather than proportional headcount — one engineer covers more products with better tools.
Who Should Worry (and Who Shouldn't)
If you understand hardware-software integration, can conduct threat modelling for novel product architectures, run a PSIRT programme, and are building CRA compliance expertise — you are well-positioned. The regulatory mandate guarantees demand, and the judgment-intensive nature of your work resists automation.
If you primarily run SAST/DAST scans, file tickets from tool output, and generate boilerplate compliance documents without understanding the underlying product architecture — you are exposed. The scan-triage-report pipeline is exactly what AI tooling automates. The engineers who survive understand the product deeply enough to determine whether a vulnerability finding actually matters in context.
The single biggest factor: product architecture expertise. Knowing how to run Snyk is table stakes. Knowing why a buffer overflow in a firmware OTA update handler is critical while the same vulnerability in a logging module is low-risk — that is the judgment AI cannot replicate.
What This Means
The role in 2028: The Product Security Engineer of 2028 operates in a fully CRA-regulated environment where every connected product requires documented conformity assessment and mandatory ENISA vulnerability reporting. AI-powered scanning pipelines handle vulnerability detection at scale, freeing engineers to focus on secure architecture design, conformity assessment judgment, PSIRT coordination, and post-market surveillance strategy. The role is more regulatory and judgment-heavy, less tool-operation-heavy.
Survival strategy:
- Master EU CRA compliance. Understand the essential requirements (Annex I), conformity assessment procedures, ENISA reporting obligations, and the Single Reporting Platform. This is the regulatory moat — every manufacturer selling in the EU needs someone who can navigate CRA.
- Build PSIRT programme expertise. FIRST PSIRT Services Framework, coordinated vulnerability disclosure experience, and vendor-researcher communication skills. PSIRT operations are inherently human-led and growing in regulatory importance.
- Deepen product architecture knowledge. Embedded systems, firmware security, hardware root of trust, secure boot chains, OTA update security. The deeper your understanding of how the product actually works, the harder you are to replace with a scanning tool.
Timeline: 5-7+ years of stability, strengthening as CRA reporting obligations begin (Sep 2026) and full enforcement arrives (Dec 2027). The regulatory mandate provides a structural floor.