Frequently Asked Questions

Everything you need to know about JobZone Risk scores and how we assess AI displacement risk.

General

Understanding your score and what to do with it.

What is JobZone Risk?
JobZone Risk is a free tool that scores how resistant your job is to AI displacement. Every role gets a JobZone Score from 0 to 100 — higher means more resistant. Scores map to three zones: Green (resistant, 48–100), Yellow (transforming, 25–47), and Red (vulnerable, 0–24). The full scoring methodology is published openly.
Will AI take my job?
It depends on the specific role and seniority level. Most jobs won’t vanish overnight — they’ll transform. The dominant near-term impact of AI is role transformation, not outright elimination. Yellow is consistently the largest zone in our assessments. That said, some entry-level roles with highly automatable tasks are already seeing displacement. Search for your role to see its specific score and what the evidence says.
What do the Green, Yellow, and Red zones mean?
  • Green (48–100): The role is protected or growing. Safe for 5+ years based on current AI trajectories.
  • Yellow (25–47): The role is transforming. You have 2–7 years to adapt your skills. The job won’t disappear, but it will change significantly.
  • Red (0–24): The role is being actively displaced. Act now — upskill, pivot, or move into a more senior version of the role.

Each zone has sub-labels (e.g., Green Accelerated, Yellow Urgent, Red Imminent) that give more precise guidance. See the zone classification system for the full breakdown.
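For illustration, the zone mapping is a simple threshold lookup on the published ranges. A minimal sketch in Python; how fractional scores at the 47/48 boundary are rounded, and the sub-label logic, are not specified here:

```python
def zone_for(score: float) -> str:
    """Map a JobZone Score (0-100) to its zone, per the published ranges."""
    if score >= 48:
        return "Green"   # resistant or growing (48-100)
    if score >= 25:
        return "Yellow"  # transforming (25-47)
    return "Red"         # vulnerable (0-24)

# Roles cited elsewhere in this FAQ:
print(zone_for(55.4))  # Green (Senior Software Engineer)
print(zone_for(9.3))   # Red (Junior Software Developer)
```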

How do I find my job’s score?
Go to the homepage and search by job title. You can also filter by zone (Green, Yellow, Red) or browse by professional domain. Each role page includes the full JobZone Score, task breakdown, evidence summary, and practical recommendations.
Why does seniority matter so much?
Stanford research found that workers aged 22–25 in AI-exposed roles saw a 13% decline in employment since 2022, while older workers in the same occupations saw employment grow by 6–9%. Junior roles tend to involve repetitive, well-defined tasks that AI handles well. Senior roles involve judgment, stakeholder management, strategic thinking, and accountability: tasks AI cannot yet perform autonomously. For example, a Junior Software Developer scores 9.3 (Red) while a Senior Software Engineer scores 55.4 (Green). Same field, different zones.
What should I do if my job is in the Red zone?
Red doesn’t mean “fired tomorrow” — it means the displacement pressure is real and accelerating. Three practical moves:
  1. Move up: Senior versions of the same role often score much higher. Invest in experience and certifications.
  2. Move sideways: Use the Compare Roles tool to find adjacent roles in a safer zone.
  3. Become the operator: Learn to work with AI tools in your field. The people who deploy and manage AI systems in each domain will be the last to be displaced.
Are these scores perfect? Should I make career decisions based on them?
No — and we want to be upfront about that. JobZone Scores are not perfect. They are structured analytical assessments, not empirical predictions. The scoring involves expert judgment, and there can be errors, outdated evidence, or gaps in the data. Two reasonable assessors could produce different scores for the same role.

You should not make major career decisions based solely on a JobZone Score. Use it as one input alongside your own research, industry knowledge, conversations with people in the field, and professional career advice. A score can highlight risks you hadn’t considered or confirm what you already suspected — but it’s a starting point for investigation, not a final verdict.

We publish our full methodology and 15 known limitations precisely so you can judge the strengths and weaknesses for yourself.

How often are scores updated?
We target a 6-month refresh cycle for high-volatility roles (Red, Yellow Urgent) and 12 months for stable roles (Green). AI capability changes quarter by quarter, so published scores may become less accurate between refresh cycles. Each assessment shows its date of last review.

Methodology & Data

How the scoring works under the hood.

How is the JobZone Score calculated?
The score uses a multiplicative composite formula:
Raw = TRS × Emod × Bmod × Gmod

Four dimensions multiply together: Task Resistance Score (how hard tasks are to automate), Evidence modifier (what the labour market shows), Barrier modifier (regulatory, physical, and cultural protections), and Growth modifier (whether demand rises or falls with AI). The raw result is normalised to 0–100. The full formula and worked examples are in the Composite Scoring Model section.
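In code, the composite is simply a product of the four terms. A minimal sketch with illustrative placeholder values; the function name is ours, and the actual modifier ranges and normalisation constants are defined in the Composite Scoring Model section:

```python
def jobzone_raw(trs: float, e_mod: float, b_mod: float, g_mod: float) -> float:
    """Multiplicative composite: Raw = TRS x Emod x Bmod x Gmod.

    trs   -- Task Resistance Score (how hard tasks are to automate)
    e_mod -- Evidence modifier (what the labour market shows)
    b_mod -- Barrier modifier (regulatory, physical, cultural protections)
    g_mod -- Growth modifier (demand response to AI)
    """
    return trs * e_mod * b_mod * g_mod

# Illustrative values only, not taken from a real assessment:
raw = jobzone_raw(trs=60.0, e_mod=0.9, b_mod=1.2, g_mod=1.0)
# The raw result is then normalised to the 0-100 JobZone Score using
# the constants published in the methodology.
```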

Why is it multiplicative, not additive?
Because weakness in any dimension should drag the score down proportionally — not be hidden by strength elsewhere. A role with highly resistant tasks but collapsing market evidence should NOT score Green. The market has spoken. This non-compensatory property is borrowed from CVSS (vulnerability scoring) and the UN Human Development Index’s geometric mean approach, where poor health can’t be offset by high income.
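To see the non-compensatory effect in numbers: this is not the JobZone formula itself, but it shows the HDI-style geometric-mean intuition the answer refers to, with four dimensions put on a common 0–100 scale and figures invented for the example.

```python
import statistics

# Invented dimension scores: strong task resistance, collapsed market
# evidence, decent barriers, flat growth.
dims = [90, 10, 80, 70]

arithmetic = statistics.mean(dims)            # 62.5  -- the collapse is masked
geometric = statistics.geometric_mean(dims)   # ~47.4 -- the collapse drags it down
```

Under an additive scheme the role would land in Green despite collapsing market evidence; the multiplicative approach pulls it below the Green boundary.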
What data sources do you use?
Every assessment draws from multiple tiers: labour market data (Indeed, LinkedIn, Glassdoor, Google Jobs, USAJobs, Reed), company actions from 79 curated RSS feeds and earnings calls, AI tool maturity reviews, academic research via Semantic Scholar (220M+ papers), and regulatory/licensing databases. Evidence is cross-validated across independent search engines — when two engines return the same article, that counts as one source, not two. See the Data Sources section for the full breakdown.
How are individual tasks scored?
Each role is decomposed into 5–10 constituent tasks, weighted by percentage of total role time. Each task is scored 1–5 for agentic AI automation potential:
  • 1 — Irreducible Human: Protected by legal accountability, ethical judgment, trust
  • 2 — Barrier-Protected: Requires licensed professional judgment
  • 3 — Human-Led, AI-Accelerated: AI handles sub-workflows, human leads
  • 4 — Agent-Executable: AI agent can execute end-to-end
  • 5 — Fully Automatable: Deterministic, AI already performs at scale

The question isn’t “can AI assist?” but “can an AI agent execute this entire workflow without a human?”
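A minimal sketch of how a weighted task decomposition could roll up into a resistance score. The task list is invented, and the conversion from the 1–5 automation rubric to resistance (resistance = (5 − score) / 4) is our assumption; the published rubric defines the actual mapping.

```python
# Hypothetical task decomposition for an illustrative role:
# (task name, share of role time, automation score 1-5 per the rubric above)
tasks = [
    ("Stakeholder negotiation", 0.30, 1),  # Irreducible Human
    ("Regulated sign-off",      0.20, 2),  # Barrier-Protected
    ("Drafting reports",        0.30, 3),  # Human-Led, AI-Accelerated
    ("Data entry",              0.20, 5),  # Fully Automatable
]

assert abs(sum(w for _, w, _ in tasks) - 1.0) < 1e-9  # weights cover 100% of role time

# ASSUMPTION: invert the 1-5 automation score into 0-1 resistance, then
# express the weighted result on a 0-100 scale.
trs = 100 * sum(w * (5 - s) / 4 for _, w, s in tasks)
print(f"TRS ~ {trs:.1f}")  # 60.0 for these illustrative inputs
```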

What is the TruthSeeker protocol?
Every factual claim in an assessment passes through a 12-phase fact-checking protocol adapted from professional journalism and intelligence analysis. It applies methods from the IFCN (International Fact-Checking Network), the SIFT method by Mike Caulfield, and Analysis of Competing Hypotheses from intelligence tradecraft. Claims receive a confidence score (0–100) and a structured verdict. Claims that can’t be verified to a high threshold are excluded or explicitly flagged. See Section 3.4 for the full protocol.
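The protocol itself is procedural, but its per-claim output is structured data. A hypothetical sketch of the claim record and the exclusion rule; the field names and threshold value are our invention, and Section 3.4 defines the real protocol:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    confidence: int  # 0-100, produced by the 12-phase protocol
    verdict: str     # structured verdict, e.g. "supported"

# Hypothetical cut-off; the actual threshold is defined in Section 3.4.
CONFIDENCE_THRESHOLD = 80

def publishable(claim: Claim) -> bool:
    """Claims below the threshold are excluded or must be explicitly flagged."""
    return claim.confidence >= CONFIDENCE_THRESHOLD
```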
Can I trust these scores?
We designed the scoring to be maximally transparent precisely so you can judge for yourself. Every rubric, coefficient, and formula is published. Every assessment shows its task decomposition, evidence sources, and modifier calculations. We list 15 known limitations openly. That said, the task resistance score at the heart of each assessment is a structured expert judgment, not an empirical measurement. Two reasonable assessors could disagree by enough to shift a zone boundary. We describe the modifiers as “evidence-informed” rather than “evidence-based” because the underlying task scoring remains a judgment call.
What assumptions does the scoring make?
Every scoring framework rests on assumptions. We want to be explicit about ours:
  • Sub-AGI AI: The methodology assumes increasingly capable, agentic AI — not artificial general intelligence (AGI). If AGI arrives, the question shifts from “what can AI do?” to “what should AI be allowed to do?” and all scores would need rethinking.
  • 3–5 year horizon: Scores reflect near-to-medium term displacement risk based on current AI trajectories, not long-term predictions about 2035 or beyond.
  • Roles, not individuals: We assess occupations as categories. A uniquely skilled individual in a Red-zone role may have personal expertise that the aggregate score doesn’t capture. Your mileage may vary.
  • Western labour markets: Evidence sources are predominantly US/UK-centric (BLS, O*NET, Indeed, LinkedIn). Scores may not transfer to markets with different regulatory frameworks, union structures, or technology adoption rates.
  • Current regulations hold: We model existing barriers (licensing, regulation, liability) but don’t predict future AI legislation. Major new regulation (like the EU AI Act or potential US laws) could materially shift scores.
  • No robotics breakthrough: Physical trades score highly partly because humanoid robotics hasn’t achieved dexterity in unstructured environments. If that changes, trades scores would need revision.
  • The worker adapts (or doesn’t): Many Yellow-zone assessments carry an implicit assumption that the practitioner will adopt AI tools. If you refuse to adapt, a Yellow score may understate your risk. If you’re already ahead of the curve, it may overstate it.
  • Task decomposition is representative: Each role is broken into 5–10 weighted tasks. Different employers may structure the same role differently, shifting which tasks dominate.

These assumptions are stated formally in Section 1.1 of the methodology and revisited in the Limitations section. If any of them don’t hold for your situation, adjust your interpretation accordingly.

How many jobs have you assessed?
The corpus covers occupations across 25 domains and over 150 specialisms, assessed on a rolling basis targeting 90% US workforce coverage by employment volume. Browse the full corpus to see all assessed roles.
Is this just for US workers?
The primary evidence sources are US/UK-centric (BLS, O*NET, Indeed, LinkedIn). Scores may not fully transfer to labour markets with different regulatory frameworks, union structures, or technology adoption rates. That said, the core AI capability assessment (can AI do these tasks?) is largely geography-independent. The modifiers — especially barriers and evidence — reflect Western labour market conditions. We acknowledge this as a known limitation.
Who created this?
JobZone Risk was developed by Nathan House and HAL (an AI research system) at StationX Research. Nathan has 30 years of cybersecurity experience and has trained over 500,000 students globally. The methodology is published openly for peer review and independent replication. Learn more.
Is the methodology open for peer review?
Yes. The complete methodology, all scoring rubrics, coefficients, normalisation constants, and worked examples are published at /methodology. We actively invite scrutiny. The interactive calculator lets you test how different inputs produce different outputs. If our scores are wrong, tell us where and we will correct them.

Still have questions?

Read the full methodology or get in touch.