## Role Definition
| Field | Value |
|---|---|
| Job Title | Government Program Analyst |
| SOC Code | 13-1111 (Management Analysts) |
| Seniority Level | Mid-Level |
| Primary Function | Evaluates effectiveness of government programs at US federal or state level. Collects and analyses performance data, measures policy outcomes against legislative mandates (GPRA, Evidence Act), writes evaluation reports, develops performance metrics, briefs stakeholders and legislative bodies, and recommends programme improvements. Typically GS-11 to GS-13 in federal service. |
| What This Role Is NOT | Not a Budget Analyst (13-2031, budget compilation and spending monitoring — scored 21.1 Red). Not a Management Analyst in private consulting (process improvement — scored 26.4 Yellow Urgent). Not a Policy Adviser (legislative strategy and ministerial briefing — scored 39.7 Yellow Urgent). Not a senior program director who owns programme design and bears executive accountability. |
| Typical Experience | 3-8 years. Bachelor's in public administration, political science, economics, or related field. Many hold MPA or MPP. CGFM or PMP credentials enhance competitiveness. O*NET Job Zone 4. |
Seniority note: Entry-level programme analysts (GS-7/GS-9, 0-2 years) performing only data gathering and report assembly would score Red — their work is the most directly automated. Senior programme directors (GS-14/GS-15, SES) who design evaluation frameworks, testify before Congress, and bear executive accountability would score Green (Transforming).
## Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital, desk-based work. No physical component. |
| Deep Interpersonal Connection | 1 | Briefs programme managers and legislative staff, presents findings to oversight bodies. Interaction is information-driven rather than trust-based. |
| Goal-Setting & Moral Judgment | 2 | Interprets legislative intent behind programme mandates, judges whether programmes achieve intended outcomes, recommends continuation or termination of government initiatives. More evaluative judgment than a budget analyst but operates within established policy frameworks. |
| Protective Total | 3/9 | |
| AI Growth Correlation | -1 | AI tools automate performance data collection, metric tracking, and report generation — reducing analyst headcount per evaluation cycle. Not -2 because policy interpretation, stakeholder engagement, and evaluative judgment sustain demand. |
Quick screen result: Low-moderate protection (3/9) with weak negative correlation predicts Yellow Zone. Proceed to verify.
## Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Programme performance data collection and analysis | 20% | 4 | 0.80 | DISP | Gathering quantitative and qualitative performance data from programme databases, surveys, and agency systems. AI agents ingest structured data, run statistical analyses, and flag anomalies. GSA's USAi platform and agency-specific analytics tools already automate bulk data processing. |
| Programme evaluation and effectiveness reporting | 20% | 3 | 0.60 | AUG | Assessing whether programmes achieve intended outcomes, writing evaluation reports with findings and recommendations. AI drafts initial reports and synthesises data — but evaluating programme logic, causal attribution, and unintended consequences requires human judgment. Human-led, AI-accelerated. |
| Policy outcome analysis and recommendations | 15% | 2 | 0.30 | AUG | Interpreting legislative mandates, analysing whether policy objectives translate into measurable outcomes, recommending programme modifications. Requires understanding of legislative intent, political context, and inter-agency dynamics that AI cannot replicate. |
| Performance metric development and monitoring | 15% | 4 | 0.60 | DISP | Designing KPIs, building dashboards, and tracking programme performance against targets. Dashboard platforms (Power BI + Copilot, Tableau) automate metric monitoring. AI agents generate performance scorecards and flag deviations. Human oversight reduces to exception review. |
| Stakeholder briefings and legislative reporting | 15% | 2 | 0.30 | NOT | Presenting evaluation findings to programme managers, agency leadership, OMB, and Congressional oversight committees. Requires navigating political sensitivities, translating technical findings into actionable recommendations, and responding to adversarial questioning. |
| Regulatory compliance and GPRA documentation | 10% | 3 | 0.30 | AUG | Ensuring programme evaluations comply with GPRA Modernization Act, Foundations for Evidence-Based Policymaking Act, and agency-specific requirements. AI can check compliance templates, but interpreting evolving regulatory requirements and agency-specific implementation requires human judgment. |
| Cross-agency coordination and advisory | 5% | 2 | 0.10 | NOT | Coordinating with other agencies on shared programme goals, advising programme managers on evaluation methodology and evidence standards. Relationship-dependent, context-specific advisory work. |
| Total | 100% | | 3.00 | | |
Task Resistance Score: 6.00 - 3.00 = 3.00/5.0
Displacement/Augmentation split: 35% displacement, 45% augmentation, 20% not involved.
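The weighted arithmetic behind the task table can be checked in a few lines of Python. This is a sketch of the scoring arithmetic as quoted here, not an official AIJRI implementation; task names are abbreviated in the comments.

```python
# Each tuple: (share of time, automatability score 1-5, Aug/Disp category),
# copied from the task table above.
tasks = [
    (0.20, 4, "DISP"),  # performance data collection and analysis
    (0.20, 3, "AUG"),   # programme evaluation and effectiveness reporting
    (0.15, 2, "AUG"),   # policy outcome analysis and recommendations
    (0.15, 4, "DISP"),  # performance metric development and monitoring
    (0.15, 2, "NOT"),   # stakeholder briefings and legislative reporting
    (0.10, 3, "AUG"),   # regulatory compliance and GPRA documentation
    (0.05, 2, "NOT"),   # cross-agency coordination and advisory
]

# Weighted automatability, then inverted onto the 0-5 resistance scale.
weighted_total = sum(share * score for share, score, _ in tasks)  # 3.00
task_resistance = 6.00 - weighted_total                           # 3.00/5.0

def category_share(cat: str) -> float:
    """Share of total task time falling in a given Aug/Disp category."""
    return sum(share for share, _, c in tasks if c == cat)

# category_share("DISP") -> 0.35, "AUG" -> 0.45, "NOT" -> 0.20
```

Running the sketch reproduces the 3.00 weighted total, the 3.00/5.0 resistance score, and the 35/45/20 displacement split stated above.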
Reinstatement check (Acemoglu): AI creates new tasks — validating AI-generated evaluation findings, auditing algorithmic performance assessments, configuring AI analytics platforms, and interpreting AI-flagged anomalies in programme data. The Evidence Act's emphasis on evidence-based policymaking reinforces demand for human analysts who can validate AI outputs and translate them into policy recommendations.
## Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 0 | BLS projects 9% growth for Management Analysts (13-1111) 2024-2034, faster than average. 98,100 annual openings. However, DOGE cut the federal workforce by 9% in 2025 (~271,000 positions), with management and programme analyst roles directly impacted — NOAA, HHS, and EPA all reduced programme evaluation staff. Federal postings stable after initial shock; state-level postings growing. Net: stable. |
| Company Actions | -1 | DOGE-driven federal workforce reductions explicitly targeted management and programme analysis functions. Jacob Cross (NOAA management and programme analyst) testified about layoffs. Multiple agencies consolidating evaluation functions. State governments expanding programme evaluation capacity partly offsets federal contraction. |
| Wage Trends | 0 | BLS median $99,410 for management analysts; government sub-sector $92,310. Federal GS-12 ($103K-$134K with DC locality) and GS-13 ($122K-$159K) tracking inflation with 1.7% 2025 raise. PayScale reports $84,418 average for government programme analysts. Stable, not surging. |
| AI Tool Maturity | -1 | GSA's USAi platform gives agencies free access to AI models from Google, Meta, Anthropic, and OpenAI. Agencies piloting AI for programme evaluation workflows — FDA launched Elsa (gen AI tool), CMS uses AI for programme integrity analysis. OECD published "AI in Policy Evaluation" (June 2025) documenting government AI adoption for programme assessment. Tools in pilot/early adoption phase — performing 50-80% of data collection and reporting tasks but evaluation judgment remains human. |
| Expert Consensus | 0 | Mixed. GAO reports generative AI use cases increased ninefold (32 to 282) across agencies 2023-2024. OPM identifies programme analyst (0343 series) as critical for AI implementation. OECD and Brookings see transformation, not elimination — programme evaluation requires contextual judgment that AI lacks. Federal News Network: "AI may not be the federal buzzword for 2026" — adoption slower than hype suggested. |
| Total | -2 | |
## Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 0 | No professional licensing required. GPRA and Evidence Act mandate programme evaluation but do not require licensed evaluators. CGFM is voluntary. |
| Physical Presence | 0 | Fully remote-capable. COVID proved programme evaluation can be conducted entirely remotely. |
| Union/Collective Bargaining | 1 | Federal employees covered by AFGE with RIF protections. State employees often covered by AFSCME or state unions. Government employment procedures slow (but do not prevent) headcount reduction. |
| Liability/Accountability | 1 | Programme evaluation findings inform Congressional funding decisions and agency budgets. Inaccurate evaluations can lead to programme defunding or misallocation of public resources. However, accountability is shared across evaluation offices rather than borne personally by mid-level analysts. |
| Cultural/Ethical | 0 | Government agencies actively pursuing AI adoption for programme evaluation. OMB M-25-21 and M-25-22 mandate AI integration across federal operations. No cultural resistance to AI-assisted programme analysis. |
| Total | 2/10 | |
## AI Growth Correlation Check
Confirmed -1. AI adoption reduces the number of programme analysts needed per evaluation cycle by automating data collection, metric monitoring, and standardised reporting. However, the Evidence Act and GPRA Modernization Act create a structural floor — agencies are legally required to conduct programme evaluations and demonstrate evidence-based policymaking. AI accelerates the work but does not eliminate the statutory requirement for human-validated evaluation findings. Not -2 because the regulatory mandate sustains baseline demand.
## JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.00/5.0 |
| Evidence Modifier | 1.0 + (-2 × 0.04) = 0.92 |
| Barrier Modifier | 1.0 + (2 × 0.02) = 1.04 |
| Growth Modifier | 1.0 + (-1 × 0.05) = 0.95 |
Raw: 3.00 × 0.92 × 1.04 × 0.95 = 2.7269
JobZone Score: (2.7269 - 0.54) / 7.93 × 100 = 27.6/100
Zone: YELLOW (band 25-47)
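For readers who want to verify the composite, the modifier and normalisation arithmetic above reduces to a short Python sketch. The coefficients (0.04, 0.02, 0.05) and constants (0.54, 7.93) are those quoted in the table; the zone cut-offs outside the stated Yellow band (25-47) are inferred from the Green-zone scores cited later in this report and should be treated as an assumption.

```python
task_resistance = 3.00           # from the task decomposition
evidence, barriers, growth = -2, 2, -1

evidence_mod = 1.0 + evidence * 0.04   # 0.92
barrier_mod = 1.0 + barriers * 0.02    # 1.04
growth_mod = 1.0 + growth * 0.05       # 0.95

# Composite: resistance scaled by the three modifiers, then normalised.
raw = task_resistance * evidence_mod * barrier_mod * growth_mod  # ~2.7269
aijri = (raw - 0.54) / 7.93 * 100                                # ~27.6

# Zone bands: Yellow is 25-47 per the report; Red below and Green
# above are inferred, not stated as a formal spec.
zone = "RED" if aijri < 25 else ("YELLOW" if aijri <= 47 else "GREEN")
```

The sketch lands on 27.6 and YELLOW, matching the scored result, and makes the Red boundary's proximity explicit: a raw product below roughly 2.52 would tip the score under 25.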
## Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 65% |
| AI Growth Correlation | -1 |
| Sub-label | Yellow (Urgent) — AIJRI in the 25-47 band, and 65% of task time scoring 3+ exceeds the 40% threshold |
Assessor override: None — formula score accepted. The 27.6 score sits just above the Yellow/Red boundary (25), reflecting a role with moderate task resistance (more evaluative judgment than budget analysts) but weak evidence and minimal barriers. The gap above Budget Analyst (21.1, Red) reflects programme evaluation's stronger qualitative and advisory components.
## Assessor Commentary
### Score vs Reality Check
The Yellow (Urgent) classification is honest but borderline — 27.6 sits just 2.6 points above Red. The task resistance of 3.00 reflects a genuine split: 35% of task time faces direct displacement (data collection, metric monitoring), while the remaining 65% spans evaluation judgment, policy analysis, and stakeholder engagement — work that AI augments (45%) or does not meaningfully touch (20%). The DOGE-driven federal workforce contraction is a real headwind that the evidence score (-2) captures. If federal cuts deepen or state governments follow suit, this role could slip into Red within 1-2 years.
### What the Numbers Don't Capture
- Government adoption lag vs DOGE acceleration: Historically, government AI adoption lags the private sector by 3-5 years. But DOGE-era efficiency mandates are accelerating adoption in evaluation functions specifically — GSA's USAi platform and OMB's AI mandates compress the timeline.
- Statutory floor: GPRA and the Evidence Act create a legal requirement for programme evaluation that cannot be eliminated by AI adoption. Someone must sign off on evaluation findings submitted to Congress. This provides a structural floor that the barrier score (2/10) understates.
- Title rotation: "Program Analyst" (0343 series) is one of the broadest federal job classifications. The evaluative work may persist under titles like "Evaluation Officer," "Evidence Specialist," or "Data Analytics Lead" even as the traditional programme analyst title contracts.
- Bimodal distribution: Federal programme analysts doing GPRA compliance reporting and Congressional testimony score meaningfully higher than those doing routine performance data compilation and metric tracking.
### Who Should Worry (and Who Shouldn't)
If you are a mid-level programme analyst whose day consists primarily of pulling performance data from agency databases, compiling quarterly reports, updating performance dashboards, and tracking metrics against targets — your work is being automated now. AI analytics platforms handle this end-to-end with minimal human oversight.
If you are a programme analyst who evaluates whether government programmes achieve their intended outcomes, interprets legislative mandates, advises programme managers on evaluation methodology, and presents findings to Congressional oversight committees — you are safer than this score suggests. That evaluative judgment and stakeholder engagement work constitutes the 45% augmentation share that resists automation.
The single biggest factor separating the safer from the at-risk version is whether your work is primarily data-driven (structured inputs, standardised reports) or primarily evaluative (judgment, policy interpretation, stakeholder communication).
### What This Means
The role in 2028: Surviving programme analysts will function as evaluation strategists and policy interpreters, supported by AI platforms that handle all routine data collection, metric monitoring, and report generation. Agencies will need fewer analysts per evaluation cycle, but the remaining positions will focus on programme design evaluation, causal attribution, and translating findings into policy recommendations for legislative bodies.
Survival strategy:
- Specialise in evaluation methodology — build expertise in quasi-experimental design, logic modelling, and causal inference that AI tools cannot replicate. Become the person who designs evaluations, not just the person who compiles data for them.
- Master government AI platforms — become proficient with GSA's USAi, agency-specific analytics tools, and AI-powered evaluation workflows. The analyst who validates AI outputs absorbs the work of three who do manual data processing.
- Build cross-agency expertise — develop deep knowledge in a complex programme domain (healthcare policy, defence acquisition, environmental regulation) where contextual judgment and political navigation create moats.
Where to look next: if you are considering a career shift, these Green Zone roles share transferable skills with government programme analysis:
- Emergency Management Director (Mid-to-Senior) (AIJRI 56.8) — programme coordination, interagency liaison, and crisis evaluation skills transfer directly; physical presence and accountability barriers protect
- Compliance Manager (AIJRI 48.2) — regulatory interpretation and programme compliance skills transfer directly; licensing and liability barriers create structural protection
- Data Protection Officer (AIJRI 58.1) — policy analysis, regulatory compliance, and stakeholder advisory skills transfer; growing demand from AI governance requirements
Browse all scored roles at jobzonerisk.com to find the right fit for your skills and interests.
Timeline: 2-5 years. Federal DOGE-era contraction is happening now; state-level transformation follows. Statutory evaluation mandates (GPRA, Evidence Act) sustain a floor of demand but at reduced headcount.