Role Definition
| Field | Value |
|---|---|
| Job Title | Catastrophe Modeller |
| Seniority Level | Mid-Level (3-6 years experience) |
| Primary Function | Runs and interprets catastrophe models (RMS RiskLink/Intelligent Risk Platform, AIR Touchstone/Verisk Extreme Event Solutions) to quantify potential financial losses from natural and man-made perils. Prepares exposure data, validates vendor model outputs, performs scenario and sensitivity analysis, and communicates loss estimates to underwriting, pricing, and reinsurance teams. Supports portfolio optimisation and capital adequacy decisions. |
| What This Role Is NOT | NOT a credentialed actuary (FSA/FCAS) — cat modellers typically hold quantitative degrees but not actuarial fellowships. NOT a climate scientist — though they consume climate data. NOT a senior/lead cat modeller who defines methodology, sets assumptions, and bears sign-off accountability (would score higher Yellow or low Green). |
| Typical Experience | 3-6 years. Degree in mathematics, meteorology, geophysics, engineering, or related quantitative field. Proficiency in RMS and/or AIR platforms plus Python/R/SQL. No mandatory professional credential, though IFoA CERA or CAS designations add value. |
Seniority note: Junior/entry-level cat modellers (0-2 years) who primarily run vendor models and clean data would score deeper Yellow (~28-32). Senior/lead cat modellers (7+ years) who own methodology, validate models, and present to boards would score higher Yellow or low Green (~42-50) due to stronger judgment and accountability components.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital, desk-based. No physical component. |
| Deep Interpersonal Connection | 1 | Communicates loss estimates to underwriters, reinsurance teams, and occasionally clients. Professional/technical relationships — not deeply personal. |
| Goal-Setting & Moral Judgment | 1 | Interprets model outputs and recommends risk positions, but at mid-level does not set ultimate risk appetite or bear personal regulatory accountability. Follows methodology defined by senior modellers and chief actuaries. |
| Protective Total | 2/9 | |
| AI Growth Correlation | 1 | Weakly positive. Climate change drives expanding cat modelling demand — more perils, more granular models, more regulatory requirements (TCFD, IFRS S2). AI creates new tasks (validating AI-enhanced models, interpreting ML-driven loss estimates). But AI also automates the computational core. Net: weak positive. |
Quick screen result: Protective 2/9 AND Correlation +1 — likely Yellow Zone. Low protective principles but growing demand from climate risk. Proceed to full assessment.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Running vendor cat models (RMS/AIR/Verisk) — configuring model runs, batch processing, generating loss exceedance curves | 20% | 4 | 0.80 | DISPLACEMENT | AI agents can configure and execute model runs end-to-end with defined parameters. RMS Intelligent Risk Platform and Verisk's cloud-native tools increasingly automate batch execution, sensitivity sweeps, and output generation. Human reviews exceptions but is not in the loop for every run. |
| Data preparation, exposure validation & cleansing — ingesting policy/location data, geocoding, applying construction codes, resolving data quality issues | 15% | 4 | 0.60 | DISPLACEMENT | Structured data pipeline with defined rules. AI/ML tools already automate geocoding, occupancy classification, and data quality checks. Moody's RMS and Verisk both offer automated exposure management. Human handles edge cases only. |
| Loss analysis, scenario modelling & sensitivity testing — interpreting loss outputs, running what-if scenarios, comparing vendor vs internal models, tail risk analysis | 20% | 3 | 0.60 | AUGMENTATION | AI accelerates scenario generation and can run thousands of sensitivity permutations. But interpreting results, identifying anomalies in loss distributions, and contextualising tail risk for specific portfolios requires domain expertise. Human-led, AI-accelerated. |
| Model validation, vendor model evaluation & assumption review — assessing model changes (e.g., RMS v23 vs v21), back-testing against historical losses, challenging vendor assumptions | 15% | 2 | 0.30 | AUGMENTATION | Evaluating whether vendor model updates are appropriate for a specific book of business requires deep peril science knowledge and professional judgment. AI drafts comparison reports but the modeller must assess whether assumption changes are justified. Barrier-protected by expertise. |
| Stakeholder communication & underwriting advisory — presenting loss estimates to underwriters, pricing teams, reinsurance buyers; translating technical outputs into business decisions | 15% | 2 | 0.30 | AUGMENTATION | AI generates dashboards and summary reports. But explaining why a model produces a specific loss estimate, fielding questions from underwriters about peril assumptions, and advising on risk appetite requires human credibility and contextual judgment. |
| Climate risk assessment & emerging peril analysis — incorporating climate change projections, modelling new perils (wildfire, flood, cyber-nat), supporting TCFD/IFRS S2 disclosures | 10% | 2 | 0.20 | AUGMENTATION | Novel risk domain with limited historical data. Climate science integration, forward-looking scenario design, and regulatory interpretation require genuine expertise. AI provides data processing but cannot exercise judgment on unprecedented risk scenarios. |
| Custom/proprietary model development & coding — building internal models, supplementing vendor gaps, Python/R scripting for bespoke analyses | 5% | 3 | 0.15 | AUGMENTATION | AI coding assistants (Copilot, Cursor) accelerate development significantly, but designing model architecture, selecting statistical distributions, and validating outputs against peril science requires human expertise. Human-led with AI assistance. |
| Total | 100% | | 2.95 | | |
Task Resistance Score: 6.00 - 2.95 = 3.05/5.0
Displacement/Augmentation split: 35% displacement, 65% augmentation, 0% not involved.
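The weighted total, task-resistance score, and displacement split above are simple arithmetic over the task table. A minimal Python sketch (the list structure and variable names are illustrative, not part of the AIJRI methodology itself):

```python
# Task decomposition arithmetic for the catastrophe modeller assessment.
# Time shares, scores (1-5), and modes are copied from the table above;
# the 6.00 offset follows the report's task-resistance convention.
tasks = [
    # (time share, AI score, mode)
    (0.20, 4, "displacement"),  # running vendor cat models
    (0.15, 4, "displacement"),  # data preparation & exposure validation
    (0.20, 3, "augmentation"),  # loss analysis & scenario modelling
    (0.15, 2, "augmentation"),  # model validation & assumption review
    (0.15, 2, "augmentation"),  # stakeholder communication & advisory
    (0.10, 2, "augmentation"),  # climate risk & emerging peril analysis
    (0.05, 3, "augmentation"),  # custom model development & coding
]

weighted_total = sum(share * score for share, score, _ in tasks)        # 2.95
task_resistance = 6.00 - weighted_total                                 # 3.05
displacement_share = sum(s for s, _, m in tasks if m == "displacement") # 0.35
time_at_3_plus = sum(s for s, score, _ in tasks if score >= 3)          # 0.60
```

The last line reproduces the 60% figure used later in the sub-label determination.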
Reinstatement check (Acemoglu): Moderate reinstatement. AI creates new cat modelling tasks: validating AI-enhanced vendor models, interpreting ML-driven loss estimates, integrating climate AI projections into traditional cat models, and supporting emerging regulatory requirements (TCFD scenario analysis, IFRS S2 climate disclosures). These tasks require the cat modeller's peril science expertise combined with AI/ML understanding — the role shifts from "run the model" to "govern and interpret the model."
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | +1 | 150-270+ active cat modelling roles on major boards (Glassdoor, ZipRecruiter, LinkedIn) as of March 2026. Climate risk regulation (TCFD, IFRS S2) and increasing catastrophe losses ($100B+ insured losses in 2024-2025) sustain demand. Not surging but consistently healthy — no decline signal. |
| Company Actions | 0 | No major insurers or reinsurers have announced cat modelling team reductions citing AI. Moody's RMS and Verisk investing heavily in AI-enhanced platforms, but positioning these as tools for modellers rather than replacements. Some team restructuring toward "analytics" titles but no net headcount reduction. Neutral. |
| Wage Trends | +1 | Mid-level cat modellers earning $90K-$140K (US), with 5-8% annual growth above inflation. Premium for Python/ML skills. Senior modellers $140K-$200K+. Competitive with actuarial salaries at equivalent experience levels. Growing modestly above inflation. |
| AI Tool Maturity | -1 | RMS Intelligent Risk Platform, Verisk Extreme Event Solutions, and Moody's climate-enhanced models are production-ready and automate 50-70% of model execution and data processing tasks. AI handles batch runs, exposure validation, and basic scenario generation at scale. The computational core is substantially automated. |
| Expert Consensus | 0 | Mixed. Swiss Re, EY, and Deloitte agree cat modelling is transforming — from "model runner" to "risk analyst/data scientist." No consensus that the role disappears; rather it shifts. Actupool and industry analysts see climate risk creating new demand but acknowledge AI compresses the operational layer. Net neutral. |
| Total | +1 | |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 1 | No mandatory professional credential for cat modellers (unlike actuaries with FSA/FCAS). However, Solvency II internal model approval, NAIC requirements, and Lloyd's model validation standards require human oversight and sign-off on model outputs. Moderate regulatory friction — someone must certify the model, but it's typically the chief actuary, not the mid-level modeller. |
| Physical Presence | 0 | Fully remote-capable. No physical presence requirement. |
| Union/Collective Bargaining | 0 | Professional, at-will employment. No union protection in insurance/reinsurance. |
| Liability/Accountability | 1 | Model outputs directly influence pricing, reserving, and capital allocation decisions worth billions. Errors have significant financial consequences. However, at mid-level, personal liability is limited — accountability sits with senior modellers, chief actuaries, and the appointed actuary who signs off. Moderate, not strong. |
| Cultural/Ethical | 0 | Industry actively embracing AI in cat modelling. Vendors marketing AI-enhanced platforms. No cultural resistance to AI-driven model execution — the opposite. Reinsurers and insurers welcome faster, more granular models. |
| Total | 2/10 | |
AI Growth Correlation Check
Confirmed +1 (Weak Positive). Climate change is driving expanding demand for catastrophe modelling — more perils require modelling (wildfire, flood, convective storm), regulatory frameworks mandate climate scenario analysis (TCFD, IFRS S2, PRA stress tests), and increasing insured losses create urgency for better risk quantification. AI adoption in insurance creates new tasks for cat modellers (validating AI-enhanced models, interpreting ML outputs). However, AI also automates the computational core — net effect is weakly positive. More AI means different cat modellers, not necessarily more. This is NOT an Accelerated Green Zone role because the demand driver is climate risk, not AI adoption itself.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.05/5.0 |
| Evidence Modifier | 1.0 + (1 x 0.04) = 1.04 |
| Barrier Modifier | 1.0 + (2 x 0.02) = 1.04 |
| Growth Modifier | 1.0 + (1 x 0.05) = 1.05 |
Raw: 3.05 x 1.04 x 1.04 x 1.05 = 3.4638
JobZone Score: (3.4638 - 0.54) / 7.93 x 100 = 36.9/100
Zone: YELLOW (Green >=48, Yellow 25-47, Red <25)
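The composite calculation can be reproduced directly. The modifier coefficients (0.04, 0.02, 0.05), the 0.54 and 7.93 normalisation constants, and the zone thresholds are taken from the tables above; the variable names are illustrative:

```python
# AIJRI composite: task resistance scaled by evidence, barrier, and growth
# modifiers, then normalised to a 0-100 scale and bucketed into a zone.
task_resistance = 3.05
evidence_total = 1       # Evidence Score table
barrier_total = 2        # Barrier Assessment table
growth_correlation = 1   # AI Growth Correlation check

evidence_mod = 1.0 + evidence_total * 0.04    # 1.04
barrier_mod = 1.0 + barrier_total * 0.02      # 1.04
growth_mod = 1.0 + growth_correlation * 0.05  # 1.05

raw = task_resistance * evidence_mod * barrier_mod * growth_mod  # 3.4638
aijri = (raw - 0.54) / 7.93 * 100                                # 36.9

# Zone thresholds as stated in the report: Green >=48, Yellow 25-47, Red <25.
zone = "GREEN" if aijri >= 48 else ("YELLOW" if aijri >= 25 else "RED")
```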
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 60% |
| AI Growth Correlation | 1 |
| Sub-label | Yellow (Urgent) — AIJRI 25-47 AND >=40% of task time scores 3+ |
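The sub-label rule in the table reduces to a single condition. The sketch below is illustrative; the report names only the "Urgent" sub-label for this zone, so the fallback return value is an assumption, not part of the stated methodology:

```python
def yellow_sub_label(aijri: float, share_scoring_3_plus: float) -> str:
    """Apply the stated rule: Yellow (Urgent) when AIJRI is 25-47 AND
    at least 40% of task time scores 3+. The non-urgent label below is a
    placeholder; this report does not define it."""
    if not (25 <= aijri <= 47):
        raise ValueError("rule applies only to Yellow-zone scores")
    if share_scoring_3_plus >= 0.40:
        return "Yellow (Urgent)"
    return "Yellow"  # placeholder: sub-label unspecified in this report

# Catastrophe modeller inputs: AIJRI 36.9, 60% of task time at score 3+.
label = yellow_sub_label(36.9, 0.60)
```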
Assessor override: None — formula score accepted. Score of 36.9 sits comfortably in Yellow territory, 11.1 points below Green and 11.9 points above Red. The 60% of task time at score 3+ correctly triggers the Urgent sub-label — model runs (20%) and data preparation (15%) are fully displaceable, and loss analysis/custom coding (25%) is highly AI-accelerated. Compare to Actuary Mid-to-Senior (51.1 Green Transforming) — the actuary's stronger barriers (5/10 vs 2/10 from FSA/FCAS credentialing and appointed actuary sign-off), stronger evidence (+4 vs +1), and Anthropic observed exposure of only 5.39% justify the 14.2-point gap. The cat modeller lacks the credentialing moat that protects the actuary.
Assessor Commentary
Score vs Reality Check
The 36.9 AIJRI places the catastrophe modeller solidly in Yellow (Urgent), 11 points below Green. The classification is honest. The role's computational core — model runs, data preparation, batch processing — is substantially automated by vendor platforms (RMS, Verisk). What keeps it Yellow rather than Red is the 65% augmentation split: model validation, stakeholder advisory, and climate risk assessment require domain expertise that AI cannot reliably provide alone. The weak barriers (2/10) are the most concerning factor — unlike actuaries, cat modellers have no mandatory credential or personal regulatory accountability at mid-level.
What the Numbers Don't Capture
- Climate risk is an expanding moat — but only for those who embrace it. TCFD, IFRS S2, and PRA climate stress tests create new regulatory demand for cat modelling expertise. Modellers who develop climate science literacy and forward-looking scenario capabilities are positioning for the growing part of this role. Those who only run vendor models on historical perils are on the shrinking side.
- The vendor platform shift changes the skill profile faster than the job market reflects. RMS's migration to the Intelligent Risk Platform (cloud-native, API-driven) and Verisk's Extreme Event Solutions fundamentally change what "running a cat model" means. The click-and-run desktop operator is becoming obsolete; the Python/API-proficient analyst who can programmatically interact with these platforms is in demand. Job postings still say "cat modeller" but the actual work is diverging rapidly.
- Title rotation is already occurring. Some organisations are rebranding cat modelling teams as "risk analytics," "peril science," or "climate risk" — the work persists but the title may not. Tracking "catastrophe modeller" postings alone may understate actual demand for the underlying skills.
Who Should Worry (and Who Shouldn't)
Cat modellers who validate, interpret, and advise are safer than the label suggests. If you spend your time evaluating vendor model updates, challenging assumptions against peril science, presenting loss estimates to underwriters, and developing climate risk scenarios, you are closer to the Green boundary. Your value is judgment and interpretation, not computation.
Cat modellers who primarily run vendor models and clean data should be concerned. If 80% of your day is configuring RMS/AIR runs, fixing geocoding issues, and producing standard loss reports, AI platforms are doing this faster and more consistently. Your role compresses to oversight and exception handling.
The single biggest separator: whether you interpret models or operate models. The interpreter who understands why a model produces a given loss estimate, can challenge vendor assumptions, and translates complex outputs into business decisions remains valuable. The operator who knows which buttons to press in RMS is competing directly with automated pipelines.
What This Means
The role in 2028: The mid-level catastrophe modeller spends far less time on model execution and data preparation — these are handled by cloud-native vendor platforms and automated pipelines. The surviving version of this role focuses on model validation, climate risk scenario development, emerging peril analysis, and translating complex loss estimates into underwriting and capital allocation decisions. Python/ML proficiency is table stakes, not a differentiator.
Survival strategy:
- Develop climate risk expertise — TCFD scenario analysis, climate projection integration, and emerging peril modelling (wildfire, flood, convective storm) are the fastest-growing components of cat modelling. The modeller who can bridge climate science and insurance risk quantification is substantially more valuable than one running standard hurricane models.
- Master the new vendor platforms programmatically — learn the RMS Intelligent Risk Platform API, Verisk cloud tools, and Python/R integration. The GUI operator is being automated; the API-proficient analyst who builds custom workflows is not
- Position toward model validation and governance — as AI-enhanced models become standard, someone must validate their outputs, challenge assumptions, and ensure regulatory compliance (Solvency II internal model approval, Lloyd's model validation). This governance layer requires deep peril science understanding that AI cannot provide
Where to look next. If you're considering a career shift, these Green Zone roles share transferable skills with catastrophe modelling:
- Actuary (Mid-to-Senior) (AIJRI 51.1) — quantitative modelling, insurance domain expertise, and risk assessment transfer directly; requires FSA/FCAS exam commitment (5-7 years) but provides the credentialing moat cat modelling lacks
- Cybersecurity Risk Manager (AIJRI 60.3) — risk quantification, scenario analysis, and stakeholder communication transfer; growing demand and strong barriers
- Data Architect (Mid-to-Senior) (AIJRI 54.1) — data pipeline design, data quality expertise, and analytical skills transfer; cloud-native platform proficiency is directly applicable
Browse all scored roles at jobzonerisk.com to find the right fit for your skills and interests.
Timeline: 3-5 years for the operational/computational layer to be substantially automated by cloud-native vendor platforms. The interpretive and governance layers persist longer but require deliberate skill development. Cat modellers who have pivoted toward climate risk, model validation, and strategic advisory by 2029 will thrive. Those still primarily running desktop vendor model sessions will find their role absorbed into automated pipelines.