Role Definition
| Field | Value |
|---|---|
| Job Title | Artillery and Missile Officer |
| Seniority Level | Mid-to-Senior (O-2 to O-4: First Lieutenant to Major) |
| Primary Function | Commands artillery batteries and missile units, plans and authorizes fire missions, coordinates fire support with supported manoeuvre commanders, interprets rules of engagement for lethal fires employment, conducts collateral damage estimation, and bears personal legal accountability under UCMJ and the Law of Armed Conflict (LOAC) for every round fired. Deployed with units in field conditions — forward observation posts, firing positions, tactical operations centres. Decides WHERE, WHEN, and WHAT to fire. |
| What This Role Is NOT | NOT an enlisted artilleryman/cannon crew member (operates the weapon system, does not authorize fire — scored separately under Military Enlisted Tactical Operations). NOT a C2 centre officer (works from fixed installations, not field-deployed). NOT a defence industry systems engineer (designs weapons, does not employ them). NOT a drone operator (different authority chain and employment model). |
| Typical Experience | 4-12 years commissioned service. Field Artillery Basic Officer Leader Course (BOLC), Captain's Career Course (CCC), possibly Command and General Staff College (CGSC). Branch 13A (Field Artillery), 14A (Air Defense Artillery). BLS does not track military occupations; employment estimated from DoD FY2024 personnel data. |
Seniority note: Junior officers (O-1, with 0-2 years of commissioned service) would score slightly lower — they execute fire missions under supervision but hold less autonomous authority. Senior officers (O-5+) shift toward strategic fire support planning and brigade-level command, remaining deeply Green with even higher goal-setting scores.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 2 | Field-deployed with batteries and missile units in unstructured environments — forward observation posts, firing positions, tactical assembly areas. Not performing manual labour but must be physically present in austere, often dangerous field conditions to command effectively. Less physical than infantry but more than C2 centre staff. |
| Deep Interpersonal Connection | 2 | Commands soldiers under extreme stress, coordinates with supported manoeuvre commanders face-to-face, builds trust with subordinate leaders. Fire support coordination requires rapid interpersonal negotiation — the supported commander must trust the artillery officer's judgment. Not therapeutic, but human authority and trust are mission-critical. |
| Goal-Setting & Moral Judgment | 3 | Core to role. The officer DECIDES whether to fire — interpreting ROE, assessing proportionality, estimating collateral damage, and determining whether a target meets legal engagement criteria. These are moral and legal judgments with lethal consequences. If the decision is wrong, the officer faces UCMJ prosecution and potential war crimes charges. This is irreducible human accountability. |
| Protective Total | 7/9 | |
| AI Growth Correlation | 0 | AI adoption (precision targeting, sensor-to-shooter networks, JADC2) enhances fire support capabilities but does not reduce the number of artillery officers. Force structure is driven by threat environment, Congressional authorization, and Army modernization priorities — not technology substitution. AI creates new tasks (validating AI-generated targeting data, managing autonomous launcher integration) without eliminating existing billets. |
Quick screen result: Protective 7/9 with neutral growth — strong Green Zone signal. Proceed to confirm.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Fire mission authorization & ROE interpretation | 25% | 1 | 0.25 | NOT INVOLVED | The officer personally authorizes each fire mission, interpreting rules of engagement, confirming target identification, and applying proportionality principles. This is the core legal accountability — someone goes to prison if the decision is wrong. DoD Directive 3000.09 mandates "appropriate levels of human judgment over the use of force." AI has zero authority here. Irreducible human work. |
| Fire support planning & coordination | 20% | 2 | 0.40 | AUGMENTATION | Planning fire support for manoeuvre operations — target lists, engagement priorities, ammunition allocation, counterfire plans. AI-powered tools (AFATDS, JADOCS) accelerate planning by optimizing firing solutions and deconflicting airspace. The officer directs strategy and makes allocation decisions; AI handles computational optimization. |
| Collateral damage estimation & proportionality | 15% | 2 | 0.30 | AUGMENTATION | Assessing civilian presence, structural damage radius, proportionality under LOAC. AI tools provide damage modelling and pattern-of-life analysis from ISR feeds. The officer makes the legal judgment — AI provides data, human decides if the strike meets proportionality requirements. Personal legal liability ensures human ownership. |
| Unit command & soldier leadership | 15% | 1 | 0.15 | NOT INVOLVED | Commanding battery/battalion, mentoring junior officers, enforcing discipline, managing welfare, conducting performance evaluations. Human leadership of soldiers under combat stress. No AI substitute for command presence — troops follow officers they trust. |
| Tactical positioning & field operations | 10% | 1 | 0.10 | NOT INVOLVED | Selecting and occupying firing positions, conducting reconnaissance for observation posts, moving with the battery in field conditions. Physical presence in austere environments, terrain assessment, survivability decisions. No remote or AI substitute. |
| Fire direction computation & targeting data | 10% | 3 | 0.30 | AUGMENTATION | Computing firing solutions, managing target acquisition data, integrating sensor feeds. AI-enabled fire direction (AFATDS, precision targeting algorithms) handles ballistic computation and sensor fusion. The officer validates outputs and resolves conflicts — AI computes, human confirms. This is the most AI-accelerated portion of the role. |
| Administrative duties & reporting | 5% | 4 | 0.20 | DISPLACEMENT | OERs, readiness reports, ammunition expenditure tracking, training schedules, maintenance records. AI and digital systems automate much documentation. Most automatable portion of the role. |
| Total | 100% | | 1.70 | | |
Task Resistance Score: 6.00 - 1.70 = 4.30/5.0
Displacement/Augmentation split: 5% displacement, 45% augmentation, 50% not involved.
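The weighted-total arithmetic above can be sketched as follows. This is a minimal illustration: the task shares and scores are copied from the table, and the convention that Task Resistance equals 6.00 minus the weighted total is as stated above; nothing here is an official AIJRI implementation.

```python
# Each task: (name, time share, agentic AI score 1-5, involvement label),
# copied from the task decomposition table.
tasks = [
    ("Fire mission authorization & ROE interpretation", 0.25, 1, "NOT INVOLVED"),
    ("Fire support planning & coordination",            0.20, 2, "AUGMENTATION"),
    ("Collateral damage estimation & proportionality",  0.15, 2, "AUGMENTATION"),
    ("Unit command & soldier leadership",               0.15, 1, "NOT INVOLVED"),
    ("Tactical positioning & field operations",         0.10, 1, "NOT INVOLVED"),
    ("Fire direction computation & targeting data",     0.10, 3, "AUGMENTATION"),
    ("Administrative duties & reporting",               0.05, 4, "DISPLACEMENT"),
]

# Weighted total: sum of (time share x AI score) across tasks.
weighted_total = sum(share * score for _, share, score, _ in tasks)  # 1.70

# Task Resistance Score: 6.00 minus the weighted total, on a 5.0 scale.
resistance = 6.00 - weighted_total  # 4.30

# Displacement / augmentation / not-involved split by time share.
split = {}
for _, share, _, label in tasks:
    split[label] = split.get(label, 0.0) + share
# DISPLACEMENT 0.05, AUGMENTATION 0.45, NOT INVOLVED 0.50
```

Running this reproduces the 1.70 weighted total, the 4.30/5.0 resistance score, and the 5/45/50 split reported above.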
Reinstatement check (Acemoglu): AI creates significant new tasks: validating AI-generated targeting solutions, supervising autonomous launcher systems (Army's Autonomous Multi-Domain Launcher), managing human-machine teaming with AI-enabled sensor networks (JADC2), and overseeing AI-assisted collateral damage estimation tools. The officer role expands to include AI oversight responsibilities — classic augmentation-driven reinstatement.
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 0 | Military billets are set by force structure tables, not market demand. Artillery officer authorizations remain stable across FY2024-2026 as the Army prioritizes Long Range Precision Fires (LRPF) modernization. Not market-driven — neutral. |
| Company Actions | 0 | DoD is not cutting artillery officer billets. The Army's modernization priorities (LRPF, HIMARS expansion, Precision Strike Missile fielding) are adding capability, not reducing officer positions. The Autonomous Multi-Domain Launcher program augments launchers, not replaces the officers who authorize their employment. |
| Wage Trends | 0 | Military officer pay follows statutory pay tables set by Congress. O-2 to O-4 compensation tracks inflation through annual NDAA adjustments. No AI-driven wage pressure — military compensation is structurally insulated from market forces. |
| AI Tool Maturity | 1 | AI fire direction tools (AFATDS upgrades, Project Maven targeting, JADC2 sensor fusion) augment the officer's capabilities significantly but create new validation tasks rather than displacing the officer. No AI system is authorized to make lethal fire decisions. Tools enhance accuracy and speed while increasing human oversight responsibilities. |
| Expert Consensus | 1 | Broad expert agreement: human-in-the-loop is mandatory for lethal force employment. CSIS, CRS, and DoD policy analysis uniformly confirm that DoD Directive 3000.09 requires human judgment for weapons employment. The FY2025 NDAA requires annual reporting on autonomous weapons deployment — Congressional oversight reinforces human control. |
| Total | 2 |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 2 | DoD Directive 3000.09 mandates "appropriate levels of human judgment over the use of force." Geneva Conventions and LOAC require human accountability for targeting decisions. Commissioned officers hold legal authority to authorize fires — no AI system can be commissioned. FY2025 NDAA Section 1066 requires annual Congressional reporting on autonomous weapons. Maximum regulatory barrier. |
| Physical Presence | 2 | Field-deployed with batteries in forward areas, observation posts, and tactical assembly areas. Must physically observe terrain, assess conditions, and maintain command presence with soldiers. Not desk-based — operates in unstructured field environments that vary with every deployment. |
| Union/Collective Bargaining | 0 | Military. No union representation. |
| Liability/Accountability | 2 | The officer is personally liable under UCMJ and international law for every fire mission authorized. War crimes prosecution, court martial, personal criminal liability. AI has no legal personhood — it cannot be court-martialled, imprisoned, or held accountable under LOAC. This is the strongest barrier: someone must go to prison if the decision is wrong. |
| Cultural/Ethical | 2 | Military culture, allied nations, and the international community will not accept autonomous lethal fire employment without human authorization. The UN Convention on Certain Conventional Weapons continues debating LAWS restrictions. Even nations developing autonomous capabilities maintain that a human must authorize lethal force. Society will not delegate kill authority to machines. |
| Total | 8/10 |
AI Growth Correlation Check
Confirmed 0. AI modernization (JADC2, precision targeting, autonomous launchers) dramatically enhances fire support capabilities but does not reduce the number of officers needed to authorize and command fires employment. The Autonomous Multi-Domain Launcher reduces the number of enlisted crew needed at the launcher — it does not eliminate the officer who decides what to fire at. Force structure is threat-driven and Congressionally authorized, not technology-driven. Neutral correlation.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 4.30/5.0 |
| Evidence Modifier | 1.0 + (2 × 0.04) = 1.08 |
| Barrier Modifier | 1.0 + (8 × 0.02) = 1.16 |
| Growth Modifier | 1.0 + (0 × 0.05) = 1.00 |
Raw: 4.30 × 1.08 × 1.16 × 1.00 = 5.39
JobZone Score: (5.39 - 0.54) / 7.93 × 100 = 61.1/100
Zone: GREEN (Green ≥48)
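The composite calculation above can be checked with a short sketch. The modifier coefficients (0.04, 0.02, 0.05) and the normalization constants (0.54 offset, 7.93 divisor) are taken from the formulas shown; note that the raw product must be carried unrounded — rounding it to 5.39 before normalizing would give 61.2 rather than the reported 61.1.

```python
# Inputs from the sections above.
task_resistance = 4.30
evidence, barriers, growth = 2, 8, 0

# Modifiers, as defined in the composite table.
evidence_mod = 1.0 + evidence * 0.04  # 1.08
barrier_mod  = 1.0 + barriers * 0.02  # 1.16
growth_mod   = 1.0 + growth   * 0.05  # 1.00

# Raw product, carried unrounded through normalization.
raw = task_resistance * evidence_mod * barrier_mod * growth_mod  # ~5.387

# Normalized JobZone score on a 0-100 scale.
score = (raw - 0.54) / 7.93 * 100  # ~61.1

zone = "GREEN" if score >= 48 else "NOT GREEN"
```

This reproduces the 5.39 raw value (to two decimals), the 61.1/100 JobZone score, and the Green Zone classification.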
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 15% |
| AI Growth Correlation | 0 |
| Sub-label | GREEN (Stable) — AIJRI ≥48, <20% of task time scores 3+ |
Assessor override: None — formula score accepted. Score aligns well with comparable military assessments (First-Line Enlisted Supervisor 63.6, Military Enlisted Tactical Operations 60.3). Artillery officers score slightly above enlisted tactical operators due to stronger goal-setting/accountability barriers but below senior NCO supervisors who have deeper interpersonal connection scores.
Assessor Commentary
Score vs Reality Check
The Green (Stable) classification at 61.1 accurately reflects the structural reality of this role. The score is barrier-dependent — 8/10 barriers provide a 16% boost — but these barriers are structural, not temporal. DoD Directive 3000.09, UCMJ accountability, and international humanitarian law are not eroding; they are being reinforced by the FY2025 NDAA and ongoing CCW deliberations. Even if AI becomes technically capable of autonomous targeting, the legal and ethical framework preventing autonomous lethal fire employment is hardening, not softening.
What the Numbers Don't Capture
- Autonomous launcher trajectory — the Army's Autonomous Multi-Domain Launcher program could reduce crew sizes at the weapon system level, but this affects enlisted billets, not officer authorization authority. The officer role may actually expand as one officer oversees more autonomous launchers.
- JADC2 transformation — the Joint All-Domain Command and Control initiative is fundamentally changing how fire support is coordinated, adding AI-assisted sensor-to-shooter linkages. This transforms HOW the officer works (faster, more data) without changing WHAT the officer decides (whether to fire).
- International LAWS debate — if the Convention on Certain Conventional Weapons or a successor treaty restricts autonomous weapons, it would further entrench human-in-the-loop requirements, pushing the score higher. Regulatory risk is asymmetrically protective.
Who Should Worry (and Who Shouldn't)
Artillery and missile officers at the company and battalion level (O-2 to O-4) are deeply protected — they hold the legal authority to authorize fires and bear personal liability for those decisions. No AI system can substitute for this accountability chain. Officers who lean into AI-assisted targeting, autonomous launcher management, and JADC2 integration will be the most valuable. The only version of this role that faces any pressure is a hypothetical future where fire direction becomes so automated that fewer officers are needed to oversee more systems — but even then, the authorization and accountability requirement remains irreducible. The single biggest factor separating safe from at-risk is command authority: if you authorize fires, you are protected by law. If you only compute firing solutions, AI is coming for that task.
What This Means
The role in 2028: Artillery and missile officers will command more capable, more automated fire support systems — autonomous launchers, AI-assisted targeting, real-time sensor fusion via JADC2 — but will remain the irreplaceable human in the kill chain. The officer who authorizes fires in 2028 will process more data, oversee more systems, and make faster decisions, but the decision authority itself cannot be delegated to a machine.
Survival strategy:
- Master AI-enabled fire support tools (AFATDS upgrades, JADC2 interfaces, autonomous launcher command systems) — become the officer who integrates AI, not the one who resists it
- Deepen expertise in collateral damage estimation methodology and proportionality assessment — as AI speeds up the kill chain, the officer's judgment on "should we fire?" becomes more valuable, not less
- Build cross-domain fire support coordination skills (cyber, space, electronic warfare) — the fires officer of the future coordinates effects across all domains, not just kinetic
Timeline: 15-25+ years. Driven by the structural permanence of human-in-the-loop requirements for lethal force under international and domestic law.