Role Definition
| Field | Value |
|---|---|
| Job Title | Cybersecurity Risk Manager (Cyber Risk Manager / Information Security Risk Manager) |
| Seniority Level | Mid-Senior (5-10 years' experience) |
| Primary Function | Manages the organisation's cybersecurity risk management program — develops risk strategy, identifies and assesses cyber threats and vulnerabilities, recommends risk treatment options, monitors control effectiveness, maintains the risk register, and communicates risk posture to leadership. Ensures risks remain at acceptable levels by selecting appropriate mitigations aligned with organisational strategy. |
| What This Role Is NOT | NOT a CISO (sets enterprise strategy at executive/board level). NOT a Compliance Manager (owns regulatory compliance program and attestation). NOT a GRC Analyst (executes risk tasks vs directs the risk program). NOT a Security Auditor (independently tests controls vs manages risk). The Cybersecurity Risk Manager owns the risk assessment and treatment lifecycle — not executive governance, not compliance attestation, not hands-on control testing. |
| Typical Experience | 5-10 years in cybersecurity, risk management, or information assurance. Certifications: CRISC, CISM, CISSP, ISO 27005 Lead Risk Manager, ISO 31000. 70% hold a bachelor's degree, 20% graduate degree. |
Seniority note: A junior risk analyst (2-4 years) doing operational risk register maintenance without risk acceptance authority or strategic scope would score Yellow (~2.8-3.0). A Director/VP of Risk or Chief Risk Officer would score closer to the CISO (83.0) due to executive accountability and board-level reporting.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital, desk-based. All work in GRC platforms, risk registers, and stakeholder meetings. |
| Deep Interpersonal Connection | 1 | Coordinates with business unit leaders, IT teams, and executive stakeholders to communicate risk. Requires influence and persuasion but not deep trust-based relationships like therapy or patient care. |
| Goal-Setting & Moral Judgment | 2 | Defines acceptable risk levels, recommends risk treatment strategies, interprets ambiguous threat data. Decides what the organisation SHOULD do about risk — not just executes prescribed rules. Risk appetite decisions involve genuine judgment in novel situations. |
| Protective Total | 3/9 | |
| AI Growth Correlation | 1 | AI adoption creates new risk categories — model risk, adversarial AI, shadow AI, data poisoning — and new regulatory requirements (EU AI Act risk assessment, NIST AI RMF, ISO 42001). But AI also automates traditional risk scoring and monitoring. Net weak positive. |
Quick screen result: Protective 3 + Correlation 1 → Yellow-to-Green boundary. Proceed to quantify.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Risk strategy & framework development | 20% | 2 | 0.40 | AUGMENTATION | AI researches framework best practices, benchmarks peer organisations, drafts risk appetite statements. The manager designs the risk management strategy, selects frameworks (ISO 27005, NIST CSF, FAIR), and aligns risk approach with business objectives. Strategic, contextual work. |
| Risk assessment & analysis | 25% | 3 | 0.75 | AUGMENTATION | AI automates vulnerability scanning, threat intelligence correlation, risk scoring algorithms, and CRQ modelling (CyberSaint, Bitsight). The manager interprets results, validates assumptions, applies business context, and makes judgment calls on novel threat scenarios. Significant AI acceleration in data gathering; human leads the analysis and interpretation. |
| Stakeholder communication & risk reporting | 15% | 2 | 0.30 | AUGMENTATION | AI generates risk dashboards, compiles metrics, drafts executive summaries. The manager translates technical risk into business language, presents to leadership, negotiates risk treatment priorities with business unit owners. Communication and influence are core to effectiveness. |
| Risk monitoring & control effectiveness oversight | 15% | 4 | 0.60 | DISPLACEMENT | Reviewing risk dashboards, tracking control effectiveness metrics, monitoring KRIs, updating risk registers with new data. MetricStream, Archer, and ServiceNow automate continuous control monitoring and risk scoring. AI agents can execute this monitoring workflow end-to-end with human review of exceptions only. |
| Risk acceptance & treatment decisions | 10% | 1 | 0.10 | NOT INVOLVED | Recommending risk acceptance, transfer, mitigation, or avoidance. Signing risk treatment plans. NIS2 Article 20 requires management bodies to approve ICT risk management frameworks. AI has no authority to accept organisational risk. Structural barrier, not technical. |
| Team/vendor coordination & mentoring | 10% | 1 | 0.10 | NOT INVOLVED | Managing risk analysts, coordinating with third-party risk assessors, developing team capabilities. Hiring, coaching, performance evaluation. Irreducibly human. |
| Policy interpretation & regulatory mapping | 5% | 3 | 0.15 | AUGMENTATION | AI maps controls across frameworks, analyses regulatory text, identifies gaps. But novel regulatory interpretation (EU AI Act application to specific AI systems, DORA ICT risk requirements) requires the manager to lead the analysis and own the decision. |
| Total | 100% | | 2.40 | | |
Task Resistance Score: 6.00 - 2.40 = 3.60/5.0
Displacement/Augmentation split: 15% displacement, 65% augmentation, 20% not involved.
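The weighted-total and split arithmetic above can be reproduced with a short sketch. Task weights and scores are taken from the decomposition table; the `6.00 - weighted` inversion follows the Task Resistance formula below it. Variable and key names are illustrative, not part of the AIJRI methodology itself.

```python
# Task time shares, agentic-AI scores (1-5), and involvement labels,
# copied from the task decomposition table.
tasks = {
    "strategy_framework":   (0.20, 2, "AUG"),
    "risk_assessment":      (0.25, 3, "AUG"),
    "stakeholder_comms":    (0.15, 2, "AUG"),
    "monitoring_oversight": (0.15, 4, "DISP"),
    "acceptance_decisions": (0.10, 1, "NONE"),
    "team_vendor_coord":    (0.10, 1, "NONE"),
    "policy_regulatory":    (0.05, 3, "AUG"),
}

# Time-weighted mean AI score across tasks.
weighted = sum(share * score for share, score, _ in tasks.values())   # 2.40

# Task Resistance inverts the 1-5 scale: higher AI score -> lower resistance.
resistance = 6.00 - weighted                                          # 3.60

# Aggregate time share per involvement label.
split = {}
for share, _, label in tasks.values():
    split[label] = split.get(label, 0.0) + share
# split ≈ {"AUG": 0.65, "DISP": 0.15, "NONE": 0.20}
```

Running this confirms the table's totals: 65% augmentation, 15% displacement, 20% not involved, and a resistance score of 3.60/5.0.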
Reinstatement check (Acemoglu): AI creates significant new tasks — AI risk assessment (model risk, adversarial attack vectors, data poisoning), AI regulatory compliance (EU AI Act risk classification, NIST AI RMF), shadow AI discovery and governance, AI vendor risk evaluation, and validating AI-generated risk scores. The cybersecurity risk manager absorbing AI risk assessment is a genuine reinstatement mechanism — net new work that did not exist 3 years ago.
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 1 | 21,000 US job openings for cybersecurity risk manager titles (HAL local data). ISC2 reports 4.8M unfilled cybersecurity positions globally. BLS projects 29% growth for information security analysts 2024-2034. Robert Half: risk management/ERM is top legal hiring area (33%). City Security Magazine names GRC specialists among most in-demand for 2026. Growing, but specific "Cyber Risk Manager" postings fragment across multiple titles. |
| Company Actions | 1 | Companies investing in cybersecurity risk management. Robert Half: cybersecurity is #1 UK hiring priority (44%). No companies cutting risk management roles citing AI. NIS2, DORA, and SEC cybersecurity disclosure rules are creating new risk management positions. However, Gartner's middle management flattening prediction (20% of orgs by 2026) applies — risk management is exposed to consolidation. |
| Wage Trends | 1 | Glassdoor: $146K median total pay ($111K-$194K range). Information Security Risk Manager: $181K avg. Salary.com: Computing Platform Risk & Security Manager $146K-$210K. UK operational risk manager salaries up 11.1% YoY (Robert Half). Motion Recruitment: cybersecurity salaries expected to surge ~10% in 2026. Growing above inflation. |
| AI Tool Maturity | 0 | MetricStream, CyberSaint, Archer, ServiceNow, Bitsight, and SecurityScorecard all production-ready for risk scoring, monitoring, and evidence collection. 72% of companies using AI in GRC (Cyber Sierra). Tools automate monitoring and data collection (15% of task time) but do not replace risk judgment, strategy, or stakeholder engagement (65% of task time). Tools augment core work; displace monitoring. |
| Expert Consensus | 1 | Onspring: "The Future of GRC: AI Enabled, Human Led." Cyber Sierra: "AI will not replace GRC professionals but will augment them." IBM: GRC becoming real-time but human judgment essential. Qualys: 2026 is about risk-first security. Fortinet: AI transforming cybersecurity but skills gap persists. Consensus: transformation not displacement for risk management leadership. |
| Total | 4 | |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 1 | CRISC and CISM certifications expected. NIS2, DORA, and SEC cybersecurity disclosure rules require documented risk management with human oversight. ISO 27005 and ISO 31000 frameworks expect named human risk owners. Not strict licensing like medical/legal, but significant professional and regulatory expectations. |
| Physical Presence | 0 | Fully remote-capable. |
| Union/Collective Bargaining | 0 | No union representation typical. |
| Liability/Accountability | 2 | Risk register sign-off and risk treatment recommendations directly guide organisational decisions. NIS2 Article 20 requires management bodies to approve ICT risk management frameworks and bear personal responsibility. DORA mandates named accountability for ICT risk. When risk materialises after being assessed as "acceptable," regulatory scrutiny falls on the risk assessment process and the person who signed it. AI cannot bear this accountability. |
| Cultural/Ethical | 1 | Boards and audit committees expect a human presenting risk posture and recommending risk treatment. Risk acceptance decisions — "we accept this residual risk" — require human authority. Cultural resistance to AI making risk tolerance decisions for the organisation is real. |
| Total | 4/10 | |
AI Growth Correlation Check
Confirmed at 1 (Weak Positive). AI adoption creates genuinely new risk categories that the cybersecurity risk manager must assess — model risk, adversarial AI, data poisoning, shadow AI proliferation, AI supply chain risk. New regulations (EU AI Act risk classification, NIST AI RMF, ISO 42001) create new risk assessment requirements. However, AI simultaneously automates traditional risk monitoring and scoring, reducing effort per assessment. The risk manager specialising in AI risk assessment is in strong demand; the risk manager running traditional IT risk registers is being leveraged by platforms. Not Accelerated Green — the role predates AI and traditional risk management isn't growing BECAUSE of AI.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.60/5.0 |
| Evidence Modifier | 1.0 + (4 × 0.04) = 1.16 |
| Barrier Modifier | 1.0 + (4 × 0.02) = 1.08 |
| Growth Modifier | 1.0 + (1 × 0.05) = 1.05 |
Raw: 3.60 × 1.16 × 1.08 × 1.05 = 4.7356
JobZone Score: (4.7356 - 0.54) / 7.93 × 100 = 52.9/100
Zone: GREEN (Green ≥48, Yellow 25-47, Red <25)
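The modifier and normalisation arithmetic above can be checked with a minimal sketch. The modifier weights (0.04, 0.02, 0.05), normalisation constants (0.54, 7.93), and zone thresholds are taken from the tables; function names and signatures are illustrative.

```python
def jobzone_score(resistance, evidence, barriers, growth):
    """AIJRI composite: task resistance scaled by the evidence, barrier,
    and growth modifiers, then normalised to a 0-100 scale."""
    raw = resistance * (1 + 0.04 * evidence) * (1 + 0.02 * barriers) * (1 + 0.05 * growth)
    return (raw - 0.54) / 7.93 * 100

def zone(score):
    # Band thresholds from the Zone line: Green >= 48, Yellow 25-47, Red < 25.
    return "GREEN" if score >= 48 else "YELLOW" if score >= 25 else "RED"

score = jobzone_score(resistance=3.60, evidence=4, barriers=4, growth=1)
# score ≈ 52.9, zone(score) == "GREEN"

# Sensitivity check from the commentary below: zeroing the barrier
# modifier still leaves the role (just) above the Green threshold.
no_barriers = jobzone_score(3.60, 4, 0, 1)   # ≈ 48.5
```

The sketch also reproduces the barrier-sensitivity claim in the assessor commentary: with the barrier modifier removed, the score falls to roughly 48.5, still Green but borderline.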
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 45% |
| AI Growth Correlation | 1 |
| Sub-label | Green (Transforming) — ≥20% task time scores 3+ |
Assessor override: None — formula score accepted. Score sits 5.0 points below the Cybersecurity Manager (57.9), reflecting slightly lower task resistance (more analytical work amenable to AI acceleration) and slightly weaker evidence and barriers. 4.9 points above the Green threshold (48), consistent with a specialist risk management role with solid market demand and moderate structural protection.
Assessor Commentary
Score vs Reality Check
The 52.9 JobZone Score places the Cybersecurity Risk Manager solidly in Green, 4.9 points above the Green threshold (48). The 5.0-point gap below the Cybersecurity Manager (57.9) is driven by lower task resistance (3.60 vs 3.70) — the risk manager spends more time on analytical assessment work (25% at score 3) that AI can meaningfully accelerate. Evidence (+4) and barriers (4/10) are slightly below the Cybersecurity Manager's (+5 and 5/10), reflecting the risk manager's narrower scope and lighter team-management accountability. The score is not barrier-dependent — removing barriers entirely would drop it to ~48.5, still Green but borderline.
What the Numbers Don't Capture
- The CRQ automation wave. Cyber Risk Quantification tools (CyberSaint, Bitsight, SecurityScorecard) are automating the analytical core of risk assessment — scoring likelihood and impact, modelling financial exposure, benchmarking against peers. The risk manager who primarily runs risk scoring exercises faces greater compression than the score suggests. The risk manager who interprets, contextualises, and communicates those scores to leadership is protected.
- Title fragmentation dilutes market signal. "Cybersecurity Risk Manager" fragments across Cyber Risk Manager, Information Security Risk Manager, IT Risk Manager, GRC Manager, and Risk & Compliance Manager. Job posting data for any single title understates true demand for the function.
- Function-spending vs people-spending. Organisations investing in risk management platforms (ServiceNow IRM, Archer, MetricStream) are investing in the function, not necessarily in headcount. One risk manager plus AI-powered platforms may replace a risk team of three.
- Regulatory tailwind is time-limited. NIS2, DORA, and EU AI Act are creating a surge in risk management demand now. Once organisations build their risk frameworks and achieve initial compliance, the ongoing maintenance workload is smaller and more automatable than the initial build.
Who Should Worry (and Who Shouldn't)
If you are a Cybersecurity Risk Manager who owns the risk strategy, presents to leadership, makes risk treatment recommendations, and is the named risk owner in regulatory documentation — you are well-positioned. Your role is the human accountability layer that regulations demand and boards expect. AI compresses the analytical work but expands your strategic mandate.
If your primary value is "running risk assessments" — populating risk registers, scoring threats against matrices, generating risk reports from templates — that's the 40% AI is eating fastest. CRQ platforms and agentic AI workflows can execute structured risk assessment methodologies end-to-end. The risk manager whose day looks like a senior risk analyst's faces greater pressure than the label suggests.
The single biggest separator: whether you own the risk conversation with leadership or you feed data into it. The person who translates risk into business decisions is safe. The person who populates the risk register is being compressed by platforms.
What This Means
The role in 2028: The surviving cybersecurity risk manager is a strategic risk advisor — someone who interprets AI-generated risk scores, contextualises novel threats (AI-specific risks, supply chain attacks, regulatory changes), communicates risk appetite to leadership, and makes judgment calls on risk treatment. They spend less time on data collection and risk scoring (AI handles that) and more time on stakeholder engagement, regulatory interpretation, and emerging risk identification. New specialisations in AI risk assessment (EU AI Act classification, model risk, adversarial AI) define the highest-demand niche.
Survival strategy:
- Move from risk scorer to risk advisor. The value isn't in populating the risk register — it's in interpreting what the register means and influencing how the organisation responds. Build the stakeholder relationships and communication skills that AI cannot replicate.
- Specialise in AI risk assessment. EU AI Act risk classification, NIST AI RMF, model risk, shadow AI governance — this is net new work entering your domain. The risk manager who becomes the AI risk specialist occupies the highest-growth niche.
- Master CRQ platforms, don't compete with them. CyberSaint, Bitsight, ServiceNow IRM are force multipliers. One risk manager plus platforms replaces a risk team. Be the one who orchestrates the platforms and interprets the output, not the one whose scoring exercises they automate.
Timeline: 5-7+ years. Structural barriers (regulatory accountability, risk acceptance authority) provide durable protection. The compressed timeline (2-3 years) applies to junior risk analysts without strategic scope or named accountability.