Role Definition
| Field | Value |
|---|---|
| Job Title | TLPT Manager |
| Seniority Level | Mid-Senior |
| Primary Function | Manages Threat-Led Penetration Testing programmes under TIBER-EU/DORA frameworks for financial institutions. Coordinates the white team, red team provider, threat intelligence provider, and regulatory authority (TLPT Cyber Team) through a multi-month engagement. Responsible for scoping critical functions, validating threat intelligence scenarios, monitoring red team execution for operational risk, reviewing deliverables, and securing attestation from the competent authority. |
| What This Role Is NOT | NOT a penetration tester or red team operator — does not execute attacks. NOT a SOC Manager — does not run defensive operations. NOT a generic project manager — requires deep offensive security knowledge and regulatory expertise in DORA/TIBER-EU. NOT a GRC analyst — this is programme leadership, not checklist compliance. |
| Typical Experience | 7-12 years. Background in offensive security, red teaming, or cybersecurity consulting. Knowledge of TIBER-EU/DORA RTS on TLPT frameworks. Often holds OSCP, CREST, or GIAC certifications plus programme management credentials. |
Seniority note: A junior coordinator handling logistics would score Yellow. A Head of TLPT/Cyber Resilience at a central bank or supervisory authority would score higher within the Green zone due to attestation authority and policy-setting responsibility.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully desk-based. Tests are conducted remotely against digital infrastructure. |
| Deep Interpersonal Connection | 2 | Trust and relationships are central. The TLPT Manager is the bridge between regulator, entity board, red team, TI provider, and white team. Navigating competing interests, managing confidentiality (blue team must not know), and building trust with all parties is core to the role. |
| Goal-Setting & Moral Judgment | 3 | Defines test scope against critical functions, makes judgment calls on operational risk during live red team operations, determines whether scenarios are realistic and proportionate, and ultimately recommends whether the test meets attestation standards. These are consequential decisions with regulatory and operational impact. |
| Protective Total | 5/9 | |
| AI Growth Correlation | 1 | DORA mandates TLPT for critical financial entities across the EU, creating new regulatory demand. AI-driven attacks increase the need for realistic adversarial testing. But AI tools may reduce the number of human test managers needed per engagement over time. |
Quick screen result: Protective 5 + Correlation 1 = Likely Green Zone (Transforming).
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Programme planning, scoping & regulatory alignment | 25% | 2 | 0.50 | AUG | Defining which critical functions to test, aligning scope with competent authority requirements, negotiating timelines with entity and providers. AI can draft scope documents and map regulatory requirements, but the judgment on what constitutes a critical function and proportionate scope requires deep institutional context. |
| Stakeholder coordination (white team, red team, TI provider, regulator) | 20% | 1 | 0.20 | NOT | Managing confidentiality boundaries (blue team unaware), mediating between competing interests, building trust with board-level stakeholders and regulatory contacts. This is irreducibly human — the coordination IS the value. |
| Threat intelligence phase oversight & validation | 15% | 3 | 0.45 | AUG | Reviewing TI provider deliverables — threat actor profiles, scenarios, TTPs. AI can assist with TI synthesis and scenario validation against MITRE ATT&CK. Human validates relevance to the specific entity's threat landscape and ensures scenarios are realistic, not generic. |
| Red team execution monitoring & risk management | 15% | 2 | 0.30 | AUG | Real-time oversight during live red team operations. Deciding whether to pause if operational risk escalates, ensuring rules of engagement are followed, managing deconfliction with the entity's SOC. AI can track progress and flag anomalies, but risk decisions during live operations require human judgment. |
| Deliverable review, attestation & remediation tracking | 15% | 3 | 0.45 | AUG | Reviewing red team reports, validating findings against threat scenarios, preparing the entity's remediation plan, and supporting the competent authority's attestation decision. AI can draft summaries and cross-reference findings, but the attestation recommendation requires professional judgment on test quality and completeness. |
| Post-test reporting, lessons learned & board communication | 10% | 2 | 0.20 | AUG | Presenting results to entity board and senior management, facilitating lessons learned, communicating to the regulator. The human IS the messenger — boards expect a trusted expert explaining the implications. AI drafts materials. |
| Total | 100% | | 2.10 | | |
Task Resistance Score: 6.00 - 2.10 = 3.90/5.0
Displacement/Augmentation split: 0% displacement, 80% augmentation, 20% not involved.
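The weighted total and resistance score above follow directly from the task table. A minimal sketch of that arithmetic, assuming the AIJRI convention that resistance is 6.00 minus the time-weighted automatability score:

```python
# Task shares and automatability scores (1-5) from the decomposition table.
tasks = [
    ("Programme planning & scoping",     0.25, 2),
    ("Stakeholder coordination",         0.20, 1),
    ("Threat intelligence oversight",    0.15, 3),
    ("Red team execution monitoring",    0.15, 2),
    ("Deliverable review & attestation", 0.15, 3),
    ("Post-test reporting",              0.10, 2),
]

# Time-weighted mean automatability across all tasks.
weighted_total = sum(share * score for _, share, score in tasks)  # 2.10

# Resistance is the inverse: higher means harder to automate.
task_resistance = 6.00 - weighted_total  # 3.90

print(f"Weighted automatability: {weighted_total:.2f}")
print(f"Task Resistance Score:   {task_resistance:.2f}/5.0")
```

Reproducing the table this way also makes it easy to sanity-check that the time shares sum to 100%.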
Reinstatement check (Acemoglu): Yes. DORA itself creates new tasks that did not exist before 2025 — TLPT programme design for financial entities, regulatory attestation coordination, and cross-border TLPT harmonisation. AI adoption also creates new testing requirements (AI system resilience testing, LLM-specific threat scenarios). The role is expanding, not contracting.
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 1 | DORA took effect January 2025, making TLPT mandatory for critical financial entities. The ECB is actively recruiting TLPT Team Leads. Demand is early-stage but structurally growing as EU member states designate entities for TLPT. Niche role — low posting volumes but clear upward trajectory driven by regulation. |
| Company Actions | 1 | Financial institutions across the EU are standing up TLPT programmes for the first time. Consulting firms (Northwave, Oneconsult, Bureau Veritas, Telefonica Tech) are building TLPT practices. Central banks are expanding TIBER Cyber Teams. No AI-driven cuts in this function. |
| Wage Trends | 1 | Comparable to senior cybersecurity consultant/programme manager ranges — $130K-$180K in the US, EUR 80K-130K in EU markets. Premium for TIBER/DORA expertise. Salaries tracking 4-5% above inflation consistent with broader cybersecurity market growth. |
| AI Tool Maturity | 0 | AI tools assist with threat intelligence synthesis (MITRE ATT&CK mapping, TI report drafting) and documentation. No production tool automates the TLPT programme management lifecycle — scoping, stakeholder coordination, attestation. The core work is coordination and judgment, not technical execution. Anthropic observed exposure for Information Security Analysts: 48.6%, but this role's management/coordination focus reduces AI applicability. |
| Expert Consensus | 1 | ISC2 (2025): 87% expect AI to enhance cybersecurity roles, 2% expect replacement. TIBER.info maturity model for TLPT test managers emphasises five knowledge domains requiring deep human expertise. DORA RTS explicitly requires human test managers from the TLPT Cyber Team. Regulatory frameworks mandate human oversight — consensus is augmentation. |
| Total | 4 | |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 1 | DORA RTS explicitly requires the TLPT Cyber Team (human test managers) to oversee and attest tests. No formal licensing exists, but CREST accreditation and TIBER-EU certification function as de facto gatekeepers. The regulatory framework is built around human oversight. |
| Physical Presence | 0 | Fully remote capable. Coordination meetings may occasionally be in-person but not structurally required. |
| Union/Collective Bargaining | 0 | Professional services / financial sector. No union protection. |
| Liability/Accountability | 2 | If a red team operation causes a production outage at a systemically important financial institution, the TLPT Manager bears accountability for risk management decisions made during the test. Attestation carries regulatory weight — incorrect attestation has legal consequences. AI cannot be the accountable party. |
| Cultural/Ethical | 1 | Financial regulators and bank boards expect a trusted human expert managing adversarial tests against critical infrastructure. The confidentiality requirements (blue team unaware) and the sensitivity of findings (potential systemic vulnerabilities) demand human judgment and discretion. |
| Total | 4/10 | |
AI Growth Correlation Check
Confirmed at 1 (Weak Positive). DORA creates new regulatory demand — financial entities that never performed TIBER tests must now undergo TLPT. AI adoption in financial services expands the attack surface (AI-driven trading, AI customer services, LLM integrations), creating new threat scenarios that TLPT must cover. However, as TLPT tooling matures and programme templates standardise, fewer test managers may be needed per engagement. The demand growth is real but not recursive — this role does not exist BECAUSE of AI, it exists because of regulation.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.90/5.0 |
| Evidence Modifier | 1.0 + (4 × 0.04) = 1.16 |
| Barrier Modifier | 1.0 + (4 × 0.02) = 1.08 |
| Growth Modifier | 1.0 + (1 × 0.05) = 1.05 |
Raw: 3.90 × 1.16 × 1.08 × 1.05 = 5.1302
JobZone Score: (5.1302 - 0.54) / 7.93 × 100 = 57.9/100
Zone: GREEN (Green ≥48, Yellow 25-47, Red <25)
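The composite calculation above can be sketched end to end. The modifier coefficients (0.04, 0.02, 0.05) and the normalisation constants (subtract 0.54, divide by 7.93) are taken from the formulas in this section:

```python
# Inputs from the preceding sections.
task_resistance = 3.90
evidence, barrier, growth = 4, 4, 1

# Modifiers, using the stated per-point coefficients.
evidence_mod = 1.0 + evidence * 0.04  # 1.16
barrier_mod  = 1.0 + barrier  * 0.02  # 1.08
growth_mod   = 1.0 + growth   * 0.05  # 1.05

# Raw composite, then normalised to a 0-100 JobZone score.
raw = task_resistance * evidence_mod * barrier_mod * growth_mod  # ~5.1302
jobzone = (raw - 0.54) / 7.93 * 100                              # ~57.9

# Zone thresholds: Green >= 48, Yellow 25-47, Red < 25.
zone = "GREEN" if jobzone >= 48 else "YELLOW" if jobzone >= 25 else "RED"
print(f"Raw: {raw:.4f}  JobZone: {jobzone:.1f}/100  Zone: {zone}")
```

Because the modifiers are multiplicative, a one-point swing in any evidence or barrier dimension moves the final score by only a few points, which is why the Green classification is robust to small scoring disagreements.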
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 30% |
| AI Growth Correlation | 1 |
| Sub-label | Green (Transforming) — AIJRI ≥ 48 AND ≥ 20% of task time scores 3+ |
Assessor override: None — formula score accepted.
Assessor Commentary
Score vs Reality Check
The 57.9 score places this role squarely in Green (Transforming), consistent with comparable cybersecurity management roles — Cybersecurity Manager (57.9), SOC Manager (61.8), and Compliance Manager (48.2). The score is honest. The role's protection comes from three reinforcing factors: regulatory mandate (DORA explicitly requires human test managers), accountability barriers (attestation carries legal weight), and stakeholder coordination complexity (managing regulator, entity, red team, and TI provider simultaneously). None of these are technology gaps that AI will close — they are structural features of how financial regulation works.
What the Numbers Don't Capture
- Regulatory demand is front-loaded. DORA took effect in January 2025. The initial wave of TLPT implementations across EU financial entities creates a surge in demand for test managers. Once the first cycle completes (2025-2028), demand may stabilise at a lower steady-state as institutions build internal capability and testing cadences normalise.
- Extremely niche talent pool. Fewer than a thousand professionals globally have genuine TIBER/TLPT test management experience. This scarcity inflates evidence signals — strong demand and rising wages may reflect a temporary supply constraint rather than permanent structural demand.
- Cross-border complexity is understated. Many financial entities operate across multiple EU jurisdictions, each with its own competent authority and national TIBER implementation. Coordinating cross-border TLPTs adds a layer of diplomatic complexity that no AI tool addresses.
Who Should Worry (and Who Shouldn't)
If you are a TLPT test manager at a competent authority (central bank, supervisory body) or leading TLPT programmes at a tier-1 consultancy — you are well-positioned. Your attestation authority and regulatory relationships are irreplaceable. This version of the role is safer than the score suggests.
If you are positioned as a TLPT coordinator handling logistics, scheduling, and document management without owning the regulatory relationship or attestation decision — you are closer to Yellow. The coordination mechanics (scheduling, document routing, status tracking) are the parts AI will absorb first.
The single biggest separator: whether you own the attestation recommendation or merely support it. The test manager who tells the regulator "this test meets the standard" holds a position AI cannot occupy. The coordinator who compiles the paperwork for that decision is more exposed.
What This Means
The role in 2028: The TLPT Manager is a regulatory programme leader who uses AI to synthesise threat intelligence, draft scope documents, and monitor red team progress — but owns the judgment calls on risk, scope proportionality, and attestation quality. As DORA matures and second-cycle TLPTs begin, the role becomes more standardised but no less human-dependent. Cross-border TLPT coordination and AI-specific threat scenarios (testing LLM resilience, AI supply chain integrity) become core competencies.
Survival strategy:
- Build deep DORA/TIBER-EU regulatory expertise. The test managers who understand the RTS, national implementation guides, and competent authority expectations are the ones who own attestation decisions — the irreducible core of the role.
- Develop cross-border TLPT coordination skills. Multi-jurisdictional TLPTs are the most complex engagements and the hardest to automate — the diplomatic and regulatory navigation required is a durable moat.
- Add AI-specific threat scenario expertise. As financial institutions deploy AI systems, TLPTs must test AI resilience (adversarial ML, prompt injection, data poisoning). The TLPT Manager who can scope and validate AI-specific threat scenarios adds a capability layer that compounds with AI growth.
Timeline: 5-8 years of strong demand driven by DORA implementation cycles. Regulatory frameworks move slowly — the human oversight mandate is structural, not a technology gap waiting to be closed.