Role Definition
| Field | Value |
|---|---|
| Job Title | AI Policy Analyst |
| Seniority Level | Mid-Level |
| Primary Function | Analyses AI regulation and policy frameworks (EU AI Act, US executive orders, NIST AI RMF, ISO/IEC 42001), drafts policy briefs and position papers, conducts regulatory impact assessments for AI systems, monitors global AI policy developments, advises internal or external stakeholders on compliance obligations and policy positions, and engages with standards bodies and regulatory consultations. Works at think tanks, government agencies, tech companies' policy teams, or consultancies. |
| What This Role Is NOT | NOT a general Policy Adviser — that role covers broad government policy without AI technical depth (Yellow Urgent, 31.0). NOT an AI Governance Lead — that role manages organisational governance programmes with cross-functional coordination authority (Green Accelerated, 72.3). NOT a Political Scientist — that role is academic research-focused. NOT a Data Protection Officer — that role is privacy-specific with stronger regulatory mandate. |
| Typical Experience | 3-7 years. Typically holds a master's degree in public policy, law, international relations, or technology policy, with 2-4 years in policy analysis plus specialisation in AI/tech regulation. No formal licensing required. Familiarity with EU AI Act, NIST AI RMF, and ISO/IEC 42001 expected. |
Seniority note: Junior AI policy researchers (0-2 years) would score lower Yellow — heavy research and drafting, less interpretive judgment. Senior/Director-level AI policy leads with stakeholder authority and strategic influence would score higher Yellow or borderline Green, approaching the AI Governance Lead profile.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully desk-based. All work is research, writing, and stakeholder communication. |
| Deep Interpersonal Connection | 2 | Significant stakeholder engagement — consulting with policymakers, industry representatives, civil society groups, and regulators. Building relationships with standards bodies and participating in multi-stakeholder forums. Trust and credibility matter, but the core deliverable is analytical, not relational. |
| Goal-Setting & Moral Judgment | 2 | Interprets ambiguous regulations where guidance is still being published. Makes judgment calls about regulatory scope, risk classification, and policy positioning. But operates within established frameworks rather than setting organisational or political direction. |
| Protective Total | 4/9 | |
| AI Growth Correlation | 1 | More AI adoption creates more regulatory activity, more compliance obligations, and more policy questions — driving demand for analysts who can interpret the landscape. But the relationship is indirect: AI growth drives regulation, which drives policy analysis need. Not +2 because the role analyses AI policy rather than directly governing AI systems. |
Quick screen result: Protective 4/9 AND Correlation +1 = Likely Yellow Zone with upward pull from AI growth. Proceed to confirm.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Regulatory analysis & interpretation | 25% | 3 | 0.75 | AUGMENTATION | AI summarises regulatory texts, cross-references provisions, and maps requirements. But interpreting ambiguous provisions (e.g., EU AI Act "high-risk" classification for novel systems), assessing regulatory intent, and anticipating enforcement direction requires human judgment and political context. Human leads, AI handles sub-workflows. |
| Policy brief & report drafting | 20% | 4 | 0.80 | DISPLACEMENT | AI agents draft policy briefs, summarise regulatory developments, and generate comparison frameworks end-to-end. Structured inputs (regulatory text), defined format (policy brief), verifiable outputs. The analyst reviews for accuracy and political tone, but the generation workflow is increasingly AI-executed. |
| Stakeholder engagement & advisory | 15% | 2 | 0.30 | AUGMENTATION | Engaging with policymakers, industry groups, standards bodies, and civil society. Presenting analysis to decision-makers, participating in regulatory consultations, and building credibility in policy communities. AI prepares briefing materials; the human IS the trusted interlocutor. |
| Impact assessment & risk evaluation | 15% | 3 | 0.45 | AUGMENTATION | AI analyses regulatory requirements, models compliance costs, and generates initial risk frameworks. Human evaluates contextual factors — political feasibility, industry-specific implications, second-order effects — and makes judgment calls on materiality. Human-led with significant AI sub-workflows. |
| Regulatory monitoring & horizon scanning | 10% | 4 | 0.40 | DISPLACEMENT | Tracking regulatory developments across jurisdictions, monitoring legislative proposals, and flagging relevant changes. AI agents scan legislative databases, regulatory feeds, and news sources comprehensively. The monitoring function is largely automatable; the interpretation of what changes mean is not. |
| Cross-functional coordination | 10% | 2 | 0.20 | AUGMENTATION | Coordinating between legal, technical, and business teams to translate policy requirements into operational guidance. Requires understanding both the regulatory framework and the technical reality of AI systems. AI assists with documentation; human navigates organisational dynamics. |
| Public speaking & expert testimony | 5% | 1 | 0.05 | NOT INVOLVED | Presenting at conferences, providing expert testimony to legislative bodies, participating in panel discussions. Requires personal credibility, real-time adaptation, and the ability to handle adversarial questioning. Irreducibly human. |
| Total | 100% | | 2.95 | | |
Task Resistance Score: 6.00 - 2.95 = 3.05/5.0 (the weighted automatability total inverted on the 1-5 scale, so higher resistance means less automatable).
Displacement/Augmentation split: 30% displacement, 65% augmentation, 5% not involved.
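The decomposition arithmetic above can be sketched in a few lines. This is a minimal illustration using the values from the table; the 6.00 inversion constant is taken from the Task Resistance line, and the variable names are mine, not part of the framework.

```python
# Task decomposition inputs from the table: (task, time share, score 1-5, mode).
tasks = [
    ("Regulatory analysis & interpretation",     0.25, 3, "AUGMENTATION"),
    ("Policy brief & report drafting",           0.20, 4, "DISPLACEMENT"),
    ("Stakeholder engagement & advisory",        0.15, 2, "AUGMENTATION"),
    ("Impact assessment & risk evaluation",      0.15, 3, "AUGMENTATION"),
    ("Regulatory monitoring & horizon scanning", 0.10, 4, "DISPLACEMENT"),
    ("Cross-functional coordination",            0.10, 2, "AUGMENTATION"),
    ("Public speaking & expert testimony",       0.05, 1, "NOT INVOLVED"),
]

# Weighted automatability: sum of (time share x score) over all tasks.
weighted_total = sum(t * s for _, t, s, _ in tasks)   # 2.95

# Resistance inverts the weighted total on the 1-5 scale.
resistance = 6.00 - weighted_total                    # 3.05

# Displacement/augmentation split: total time share per mode.
split = {m: round(sum(t for _, t, _, mm in tasks if mm == m), 2)
         for m in ("DISPLACEMENT", "AUGMENTATION", "NOT INVOLVED")}
print(round(weighted_total, 2), round(resistance, 2), split)
```

Run as written, this reproduces the 2.95 weighted total, the 3.05/5.0 resistance score, and the 30/65/5 split stated above.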
Reinstatement check (Acemoglu): AI creates new tasks for this role: evaluating AI-generated policy proposals for accuracy, analysing AI-specific regulatory frameworks that did not exist 3 years ago (EU AI Act, state-level AI bills), assessing compliance obligations for novel AI systems (agentic AI, foundation models), and interpreting the intersection of AI regulation with existing sectoral rules. The role is expanding in scope even as individual tasks become more automatable.
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 1 | AI governance postings growing 37-45% CAGR. AI policy-specific roles at think tanks (Brookings, CSET, Ada Lovelace Institute), tech companies (Google, Microsoft, Meta policy teams), and consultancies (Deloitte, PwC) are growing. But the role is niche — total postings remain modest compared to AI engineering. Not +2 because the absolute volume is small. |
| Company Actions | 1 | Major tech companies expanding AI policy teams. EU AI Office hiring policy specialists. Think tanks creating dedicated AI governance programmes. No evidence of cuts. But growth is steady rather than explosive — companies often absorb AI policy into existing legal/compliance teams rather than creating standalone analyst roles. |
| Wage Trends | 0 | Mid-level salaries range $110K-$135K US (think tanks/government lower at $80K-$110K, tech companies higher at $130K-$170K). Moderate growth tracking inflation. Not commanding the 28% AI premium seen in AI engineering roles. Stable, not surging. |
| AI Tool Maturity | -1 | AI tools are already strong at the core analytical tasks: summarising regulations, comparing frameworks, drafting policy briefs, monitoring legislative changes. Claude, GPT-4, and specialised legal AI tools (e.g., Harvey, Thomson Reuters CoCounsel) perform regulatory analysis at production quality. The analyst's judgment layer remains, but the analytical grunt work that defines 30% of the role is increasingly automated. |
| Expert Consensus | 0 | Mixed. Demand for AI policy expertise is growing, but experts note that AI tools themselves can perform much of the analytical work. The role is seen as transforming rather than disappearing — analysts who combine policy skills with genuine AI technical understanding are valued; those doing purely desk research are vulnerable. No clear consensus on whether headcount grows or shrinks. |
| Total | 1 | |
Anthropic cross-reference: Political Scientists (closest O*NET parent, 19-3094) show 0.452 observed exposure (45.2%) — significant AI exposure with mixed automated/augmented share. This supports the -1 to 0 range for tool maturity and aligns with Yellow zone positioning.
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 1 | No formal licensing required. But EU AI Act creates implicit demand for human policy expertise — Article 14 mandates human oversight, and regulatory interpretation requires professional judgment that regulators expect from credentialed humans, not AI tools. Moderate barrier from regulatory complexity, not licensing. |
| Physical Presence | 0 | Fully remote capable. Some roles require presence at legislative hearings, standards body meetings, or stakeholder consultations, but these are occasional, not core. |
| Union/Collective Bargaining | 0 | Professional services and think tank sector. At-will employment in US; limited union representation in European policy institutions. Minimal barrier. |
| Liability/Accountability | 1 | Policy recommendations carry consequences — incorrect regulatory interpretation can lead to compliance failures, fines (EU AI Act penalties up to 7% global revenue), or reputational damage. Organisations want a human accountable for policy positions. But liability is diffuse — the analyst advises; the executive decides. |
| Cultural/Ethical | 1 | Policymakers, regulators, and standards bodies expect human policy analysts as interlocutors. AI-generated policy positions lack credibility in political and regulatory contexts. But this is institutional preference, not deep cultural resistance — it will erode as AI outputs improve. |
| Total | 3/10 |
AI Growth Correlation Check
Confirmed at +1 (Weak Positive). More AI adoption drives more regulatory activity: the EU AI Act, US executive orders, state-level AI legislation, and international standards (ISO/IEC 42001) all create demand for policy analysts who can interpret the regulatory landscape. But this is not +2 because the relationship is indirect — the AI Policy Analyst interprets and analyses regulation rather than directly governing AI systems. The AI Governance Lead (Growth +2, AIJRI 72.3) has the direct recursive property: every AI deployment creates governance scope. The AI Policy Analyst benefits from the same regulatory wave but is one step removed from operational AI deployment.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.05/5.0 |
| Evidence Modifier | 1.0 + (1 × 0.04) = 1.04 |
| Barrier Modifier | 1.0 + (3 × 0.02) = 1.06 |
| Growth Modifier | 1.0 + (1 × 0.05) = 1.05 |
Raw: 3.05 × 1.04 × 1.06 × 1.05 = 3.5304
JobZone Score: (3.5304 - 0.54) / 7.93 × 100 = 37.7/100
Zone: YELLOW (Green ≥48, Yellow 25-47, Red <25)
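The composite calculation above can be sketched directly. The modifier coefficients (0.04, 0.02, 0.05) and the normalisation constants (0.54 floor, 7.93 range) are taken from the lines shown; whether these are fixed framework constants or role-specific is not stated here, so treat them as assumptions of this sketch.

```python
# Inputs from the sections above.
task_resistance = 3.05   # Task Resistance Score
evidence = 1             # Evidence Score total
barriers = 3             # Barrier Assessment total
growth = 1               # AI Growth Correlation

# Modifiers as defined in the composite table.
evidence_mod = 1.0 + evidence * 0.04   # 1.04
barrier_mod  = 1.0 + barriers * 0.02   # 1.06
growth_mod   = 1.0 + growth * 0.05     # 1.05

# Raw composite, then normalised to 0-100.
raw = task_resistance * evidence_mod * barrier_mod * growth_mod  # ~3.5304
score = (raw - 0.54) / 7.93 * 100                                # ~37.7

# Zone cut-offs from the Zone line: Green >=48, Yellow 25-47, Red <25.
zone = "GREEN" if score >= 48 else "YELLOW" if score >= 25 else "RED"
print(round(raw, 4), round(score, 1), zone)
```

This reproduces the 3.5304 raw value, the 37.7/100 JobZone Score, and the YELLOW zone assignment.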
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 70% |
| AI Growth Correlation | 1 |
| Sub-label | Yellow (Urgent) — ≥40% task time scores 3+ |
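The sub-label check above reduces to one comparison: the share of task time scoring 3+ against the stated 40% Urgent threshold. A minimal sketch, assuming the non-Urgent fallback label (plain "Yellow") since only the Urgent condition is given here:

```python
# (time share, score) pairs from the task decomposition table.
tasks = [(0.25, 3), (0.20, 4), (0.15, 2), (0.15, 3), (0.10, 4), (0.10, 2), (0.05, 1)]

# Share of task time at automatability score 3 or above.
time_at_3_plus = sum(t for t, s in tasks if s >= 3)   # 0.70

# Urgent threshold from the sub-label table; fallback label is an assumption.
sub_label = "Yellow (Urgent)" if time_at_3_plus >= 0.40 else "Yellow"
print(round(time_at_3_plus, 2), sub_label)
```

This reproduces the 70% figure and the Yellow (Urgent) sub-label.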
Assessor override: None — formula score accepted. The 37.7 score is well-calibrated between the general Policy Adviser (31.0) and AI Governance Lead (72.3). The 6.7-point premium over the Policy Adviser reflects the AI technical knowledge requirement and positive growth correlation. The 34.6-point gap below the AI Governance Lead reflects the critical difference: the Governance Lead manages operational AI governance programmes with cross-functional authority and direct recursive demand (+2), while the Policy Analyst produces analysis and recommendations without organisational execution authority.
Assessor Commentary
Score vs Reality Check
The 37.7 Yellow (Urgent) label is honest and well-calibrated. The score sits 12.7 points above the Red boundary — not borderline. The key tension is between the positive growth correlation (AI adoption drives regulatory demand) and the partial automatability of core analytical tasks (policy briefs, regulatory summaries, framework comparisons). The role is more protected than the general Policy Adviser because AI technical understanding adds genuine differentiation — you cannot assess the regulatory impact of foundation model deployment without understanding what foundation models do. But it is far less protected than the AI Governance Lead because the analyst produces analysis rather than exercising organisational authority.
What the Numbers Don't Capture
- Function-spending vs people-spending. Investment in AI policy is growing, but much of it goes to AI-powered legal and compliance platforms (Harvey, CoCounsel, Credo AI) rather than to human analyst headcount. The market for AI policy work grows; the number of humans doing it may not keep pace.
- Title rotation. "AI Policy Analyst" competes with AI Governance Analyst, Responsible AI Analyst, AI Ethics Researcher, and Technology Policy Analyst. The function is real but the title is unstable, making job market data harder to interpret.
- Absorption into adjacent roles. At many organisations, AI policy analysis is absorbed into existing legal, compliance, or government affairs teams rather than staffed as a standalone function. This limits the growth of dedicated AI policy analyst positions even as the work increases.
- The AI technical knowledge differentiator is narrowing. As AI tools become more capable at explaining AI concepts, the premium for analysts who "understand AI" erodes. The bar for genuine technical differentiation rises — surface-level AI literacy is no longer sufficient.
Who Should Worry (and Who Shouldn't)
If you combine genuine AI technical understanding with policy analysis skills — you can assess the regulatory implications of specific AI architectures, evaluate whether a system meets "high-risk" classification under the EU AI Act, and advise on technical compliance measures — you are in the stronger version of this role. This intersection is still relatively rare, and regulators, companies, and think tanks need people who can bridge the technical-policy gap.
If your AI policy work is primarily desk research — summarising regulations, comparing international frameworks, drafting standard policy briefs without deep technical engagement — you are in the weaker version. AI tools already perform regulatory summarisation, framework comparison, and brief drafting at production quality. The analyst whose value is "I read the regulation and wrote a summary" faces direct displacement.
The single biggest factor: whether your analysis requires genuine AI technical judgment or is primarily research synthesis. The analyst who can tell a regulator "this provision won't work because of how transformer architectures process data" is protected. The analyst who summarises what the provision says is not.
What This Means
The role in 2028: The AI Policy Analyst of 2028 spends far less time on research synthesis and regulatory summarisation — AI tools handle these comprehensively. The surviving analyst focuses on interpretive judgment: assessing how novel AI capabilities interact with evolving regulatory frameworks, advising on compliance strategies for systems that did not exist when the regulations were drafted, and serving as the credible human voice in regulatory consultations and standards body negotiations. Teams are smaller, individual analysts cover broader regulatory portfolios, and the premium shifts from research throughput to interpretive depth.
Survival strategy:
- Build genuine AI technical literacy. Not "I understand what machine learning is" but "I can evaluate whether a specific system meets EU AI Act high-risk classification based on its architecture and deployment context." The technical bar is rising — invest in understanding foundation models, agentic AI, and AI safety.
- Develop regulatory interpretation expertise. Become the person who can navigate ambiguous regulatory provisions and anticipate enforcement direction. EU AI Act implementation is still evolving — the analysts who shape interpretation during this formative period build lasting authority.
- Invest in stakeholder credibility. Build relationships with regulators, standards bodies, and industry groups. The AI Policy Analyst whose name carries weight in regulatory consultations and whose testimony is sought by legislative bodies has protection that no AI tool can replicate.
Where to look next. If you are considering a career shift, these Green Zone roles share transferable skills with AI policy analysis:
- AI Governance Lead (AIJRI 72.3) — Policy analysis, regulatory interpretation, and stakeholder coordination skills transfer directly to operational AI governance, which is Accelerated Green with direct recursive demand.
- AI Auditor (AIJRI 64.5) — Regulatory knowledge and AI technical understanding translate well to conformity assessment under the EU AI Act, with ISACA AAIA certification adding a credentialing barrier.
- Data Protection Officer (AIJRI Green Transforming) — Regulatory analysis and compliance expertise transfer to privacy governance, which has stronger licensing barriers and established regulatory mandate.
Browse all scored roles at jobzonerisk.com to find the right fit for your skills and interests.
Timeline: 3-5 years. EU AI Act full enforcement by mid-2027 creates near-term demand, but AI tools are advancing rapidly in regulatory analysis. The window for analysts to shift from research synthesis to interpretive judgment is 2-3 years before AI tools close the gap on analytical tasks.