Frequently Asked Questions
Everything you need to know about JobZone Risk scores and how we assess AI displacement risk.
General
Understanding your score and what to do with it.
What is JobZone Risk?
Will AI take my job?
What do the Green, Yellow, and Red zones mean?
- Green (48–100): The role is protected or growing. Safe for 5+ years based on current AI trajectories.
- Yellow (25–47): The role is transforming. You have 2–7 years to adapt your skills. The job won’t disappear, but it will change significantly.
- Red (0–24): The role is being actively displaced. Act now — upskill, pivot, or move into a more senior version of the role.
Each zone has sub-labels (e.g., Green Accelerated, Yellow Urgent, Red Imminent) that give more precise guidance. See the zone classification system for the full breakdown.
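The zone cut-offs above can be expressed as a small lookup. This is an illustrative sketch only; `classify_zone` is a hypothetical helper, not part of the JobZone product, and it ignores the sub-labels.

```python
def classify_zone(score: float) -> str:
    """Map a 0-100 JobZone Score to its zone using the published cut-offs:
    Green 48-100, Yellow 25-47, Red 0-24."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 48:
        return "Green"
    if score >= 25:
        return "Yellow"
    return "Red"
```

Boundary values land in the higher-risk zone's upper neighbour only at the published thresholds, e.g. 47 is Yellow and 48 is Green.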
How do I find my job's score?
Why does seniority matter so much?
What should I do if my job is in the Red zone?
- Move up: Senior versions of the same role often score much higher. Invest in experience and certifications.
- Move sideways: Use the Compare Roles tool to find adjacent roles in a safer zone.
- Become the operator: Learn to work with AI tools in your field. The people who deploy and manage AI systems in each domain will be the last to be displaced.
Are these scores perfect? Should I make career decisions based on them?
You should not make major career decisions based solely on a JobZone Score. Use it as one input alongside your own research, industry knowledge, conversations with people in the field, and professional career advice. A score can highlight risks you hadn’t considered or confirm what you already suspected — but it’s a starting point for investigation, not a final verdict.
We publish our full methodology and 15 known limitations precisely so you can judge the strengths and weaknesses for yourself.
How often are scores updated?
Methodology & Data
How the scoring works under the hood.
How is the JobZone Score calculated?
Four dimensions multiply together: Task Resistance Score (how hard tasks are to automate), Evidence modifier (what the labour market shows), Barrier modifier (regulatory, physical, and cultural protections), and Growth modifier (whether demand rises or falls with AI). The raw result is normalised to 0–100. The full formula and worked examples are in the Composite Scoring Model section.
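As a rough sketch of the multiplicative structure: assume (hypothetically) that the Task Resistance Score lies in [0, 1] and that the three modifiers are multipliers centred around 1.0. The exact normalisation and parameter ranges are defined in the Composite Scoring Model section, not here.

```python
def jobzone_score(task_resistance: float,
                  evidence: float,
                  barrier: float,
                  growth: float) -> float:
    """Illustrative composite: the four dimensions multiply, and the raw
    product is rescaled to 0-100 and clamped. Assumed ranges: resistance
    in [0, 1], modifiers near 1.0. The published model may normalise
    differently."""
    raw = task_resistance * evidence * barrier * growth
    return max(0.0, min(100.0, raw * 100.0))
```

Because the dimensions multiply, a single weak dimension drags the whole score down: a role with high task resistance but a strongly negative labour-market signal still scores low.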
Why is it multiplicative, not additive?
What data sources do you use?
How are individual tasks scored?
Each task is rated on a five-level automatability scale:
- 1 — Irreducible Human: Protected by legal accountability, ethical judgment, trust
- 2 — Barrier-Protected: Requires licensed professional judgment
- 3 — Human-Led, AI-Accelerated: AI handles sub-workflows, human leads
- 4 — Agent-Executable: AI agent can execute end-to-end
- 5 — Fully Automatable: Deterministic, AI already performs at scale
The question isn’t “can AI assist?” but “can an AI agent execute this entire workflow without a human?”
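One plausible way to roll task-level ratings up into a role-level resistance figure, consistent with the 5–10 weighted tasks mentioned in the methodology, is a weighted average with levels inverted so that level 1 means maximal resistance. This is a hypothetical illustration; the published aggregation may differ.

```python
def task_resistance(tasks: list[tuple[int, float]]) -> float:
    """Illustrative aggregation of (level, weight) pairs, where level is
    the 1-5 automatability rating (1 = Irreducible Human, 5 = Fully
    Automatable). Level 1 maps to resistance 1.0, level 5 to 0.0, and
    the result is the weight-normalised average."""
    total_weight = sum(weight for _, weight in tasks)
    return sum((5 - level) / 4 * weight for level, weight in tasks) / total_weight
```

For example, a role split evenly between a level-1 task and a level-5 task would come out at 0.5 under this sketch.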
What is the TruthSeeker protocol?
Can I trust these scores?
What assumptions does the scoring make?
- Sub-AGI AI: The methodology assumes increasingly capable, agentic AI — not artificial general intelligence (AGI). If AGI arrives, the question shifts from “what can AI do?” to “what should AI be allowed to do?” and all scores would need rethinking.
- 3–5 year horizon: Scores reflect near-to-medium term displacement risk based on current AI trajectories, not long-term predictions about 2035 or beyond.
- Roles, not individuals: We assess occupations as categories. A uniquely skilled individual in a Red-zone role may have personal expertise that the aggregate score doesn’t capture. Your mileage may vary.
- Western labour markets: Evidence sources are predominantly US/UK-centric (BLS, O*NET, Indeed, LinkedIn). Scores may not transfer to markets with different regulatory frameworks, union structures, or technology adoption rates.
- Current regulations hold: We model existing barriers (licensing, regulation, liability) but don’t predict future AI legislation. Major new regulation (like the EU AI Act or potential US laws) could materially shift scores.
- No robotics breakthrough: Physical trades score highly partly because humanoid robotics hasn’t achieved dexterity in unstructured environments. If that changes, trades scores would need revision.
- The worker adapts (or doesn’t): Many Yellow-zone assessments carry an implicit assumption that the practitioner will adopt AI tools. If you refuse to adapt, a Yellow score may understate your risk. If you’re already ahead of the curve, it may overstate it.
- Task decomposition is representative: Each role is broken into 5–10 weighted tasks. Different employers may structure the same role differently, shifting which tasks dominate.
These assumptions are stated formally in Section 1.1 of the methodology and revisited in the Limitations section. If any of them don’t hold for your situation, adjust your interpretation accordingly.