Will AI Replace Artillery and Missile Officers' Jobs?

Also known as: Battery Commander

Mid-to-Senior (O-2 to O-4: First Lieutenant to Major) | Ground Combat | Military Leadership

Live tracked: this assessment is actively monitored and updated as AI capabilities change.

GREEN (Stable) — 61.1/100

Score at a Glance

Metric | Score | What it measures
Overall | 61.1/100 | PROTECTED
Task Resistance | 4.3/5 | How resistant daily tasks are to AI automation; 5.0 = fully human, 1.0 = fully automatable
Evidence | +2/10 | Real-world market signals: job postings, wages, company actions, expert consensus; range -10 to +10
Barriers to AI | 8/10 | Structural barriers preventing AI replacement: licensing, physical presence, unions, liability, culture
Protective Principles | 7/9 | Human-only factors: physical presence, deep interpersonal connection, moral judgment
AI Growth | 0/2 | Does AI adoption create more demand for this role? 2 = strong boost, 0 = neutral, negative = shrinking

Score Composition (61.1/100): Task Resistance (50%), Evidence (20%), Barriers (15%), Protective (10%), AI Growth (5%)

Where This Role Sits

On a scale from 0 (At Risk) to 100 (Protected), Artillery and Missile Officers (Mid-to-Senior) score 61.1.

This role is protected from AI displacement. The assessment below explains why — and what's still changing.

Artillery and missile officers hold personal legal authority over lethal fire employment — a decision that international law, DoD policy, and the UCMJ mandate must remain with a human. AI accelerates fire direction computation and targeting but cannot bear command responsibility. Safe for 15-25+ years.

Role Definition

Field | Value
Job Title | Artillery and Missile Officer
Seniority Level | Mid-to-Senior (O-2 to O-4: First Lieutenant to Major)
Primary Function | Commands artillery batteries and missile units, plans and authorizes fire missions, coordinates fire support with supported manoeuvre commanders, interprets rules of engagement for lethal fires employment, conducts collateral damage estimation, and bears personal legal accountability under the UCMJ and the Law of Armed Conflict (LOAC) for every round fired. Deployed with units in field conditions — forward observation posts, firing positions, tactical operations centres. Decides WHERE, WHEN, and WHAT to fire.
What This Role Is NOT | NOT an enlisted artilleryman/cannon crew member (operates the weapon system but does not authorize fire; scored separately under Military Enlisted Tactical Operations). NOT a C2 centre officer (works from fixed installations, not field-deployed). NOT a defence industry systems engineer (designs weapons, does not employ them). NOT a drone operator (different authority chain and employment model).
Typical Experience | 4-12 years commissioned service. Field Artillery Basic Officer Leader Course (BOLC), Captain's Career Course (CCC), possibly Command and General Staff College (CGSC). Branch 13A (Field Artillery) or 14A (Air Defense Artillery). BLS does not track military occupations; employment estimated from DoD FY2024 personnel data.

Seniority note: Junior officers (O-1, with 0-2 years of service) would score slightly lower — they execute fire missions under supervision and hold less autonomous authority. Senior officers (O-5+) shift toward strategic fire support planning and brigade-level command, remaining deeply Green with even higher goal-setting scores.


Protective Principles + AI Growth Correlation

Human-Only Factors
  • Embodied Physicality: significant physical presence
  • Deep Interpersonal Connection: deep human connection
  • Moral Judgment: high moral responsibility
  • AI Effect on Demand: no effect on job numbers

Protective Total: 7/9

Principle | Score (0-3) | Rationale
Embodied Physicality | 2 | Field-deployed with batteries and missile units in unstructured environments — forward observation posts, firing positions, tactical assembly areas. Not performing manual labour but must be physically present in austere, often dangerous field conditions to command effectively. Less physical than infantry but more than C2 centre staff.
Deep Interpersonal Connection | 2 | Commands soldiers under extreme stress, coordinates with supported manoeuvre commanders face-to-face, builds trust with subordinate leaders. Fire support coordination requires rapid interpersonal negotiation — the supported commander must trust the artillery officer's judgment. Not therapeutic, but human authority and trust are mission-critical.
Goal-Setting & Moral Judgment | 3 | Core to the role. The officer DECIDES whether to fire — interpreting ROE, assessing proportionality, estimating collateral damage, and determining whether a target meets legal engagement criteria. These are moral and legal judgments with lethal consequences. If the decision is wrong, the officer faces UCMJ prosecution and potential war crimes charges. This is irreducible human accountability.
Protective Total | 7/9 |
AI Growth Correlation | 0 | AI adoption (precision targeting, sensor-to-shooter networks, JADC2) enhances fire support capabilities but does not reduce the number of artillery officers. Force structure is driven by threat environment, Congressional authorization, and Army modernization priorities — not technology substitution. AI creates new tasks (validating AI-generated targeting data, managing autonomous launcher integration) without eliminating existing billets.

Quick screen result: Protective 7/9 with neutral growth — strong Green Zone signal. Proceed to confirm.
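As a rough illustration, the quick-screen logic above can be sketched in a few lines of Python. The variable names are mine, and the 6/9 cutoff for a "strong" signal is an assumption for illustration only; the framework's actual threshold is not stated in this assessment.

```python
# Protective-principle scores (0-3 each), taken from the table above.
principles = {
    "embodied_physicality": 2,
    "interpersonal_connection": 2,
    "moral_judgment": 3,
}
protective_total = sum(principles.values())   # 7 of a possible 9
growth_correlation = 0                        # neutral AI growth effect

# Quick-screen heuristic: a high protective total with non-negative
# growth reads as a strong Green Zone signal. The >= 6 cutoff is an
# assumed value, not taken from this assessment.
strong_green_signal = protective_total >= 6 and growth_correlation >= 0
```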


Task Decomposition (Agentic AI Scoring)

Work Impact Breakdown: 5% displaced, 45% augmented, 50% not involved. Per-task detail follows in the table below.
Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale
Fire mission authorization & ROE interpretation | 25% | 1 | 0.25 | NOT INVOLVED | The officer personally authorizes each fire mission, interpreting rules of engagement, confirming target identification, and applying proportionality principles. This is the core legal accountability — someone goes to prison if the decision is wrong. DoD Directive 3000.09 mandates "appropriate levels of human judgment over the use of force." AI has zero authority here. Irreducible human work.
Fire support planning & coordination | 20% | 2 | 0.40 | AUGMENTATION | Planning fire support for manoeuvre operations — target lists, engagement priorities, ammunition allocation, counterfire plans. AI-powered tools (AFATDS, JADOCS) accelerate planning by optimizing firing solutions and deconflicting airspace. The officer directs strategy and makes allocation decisions; AI handles computational optimization.
Collateral damage estimation & proportionality | 15% | 2 | 0.30 | AUGMENTATION | Assessing civilian presence, structural damage radius, and proportionality under LOAC. AI tools provide damage modelling and pattern-of-life analysis from ISR feeds. The officer makes the legal judgment — AI provides data, the human decides if the strike meets proportionality requirements. Personal legal liability ensures human ownership.
Unit command & soldier leadership | 15% | 1 | 0.15 | NOT INVOLVED | Commanding the battery/battalion, mentoring junior officers, enforcing discipline, managing welfare, conducting performance evaluations. Human leadership of soldiers under combat stress. No AI substitute for command presence — troops follow officers they trust.
Tactical positioning & field operations | 10% | 1 | 0.10 | NOT INVOLVED | Selecting and occupying firing positions, conducting reconnaissance for observation posts, moving with the battery in field conditions. Physical presence in austere environments, terrain assessment, survivability decisions. No remote or AI substitute.
Fire direction computation & targeting data | 10% | 3 | 0.30 | AUGMENTATION | Computing firing solutions, managing target acquisition data, integrating sensor feeds. AI-enabled fire direction (AFATDS, precision targeting algorithms) handles ballistic computation and sensor fusion. The officer validates outputs and resolves conflicts — AI computes, the human confirms. This is the most AI-accelerated portion of the role.
Administrative duties & reporting | 5% | 4 | 0.20 | DISPLACEMENT | OERs, readiness reports, ammunition expenditure tracking, training schedules, maintenance records. AI and digital systems automate much documentation. The most automatable portion of the role.
Total | 100% | | 1.70 | |

Task Resistance Score: 6.00 - 1.70 = 4.30/5.0

Displacement/Augmentation split: 5% displacement, 45% augmentation, 50% not involved.
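The weighted arithmetic behind the Task Resistance figure can be reproduced directly from the table. This is a minimal sketch; the dictionary layout and names are mine, not part of any official scoring tool.

```python
# Per-task (time weight, automatability score) pairs from the table.
# Scores run 1 (fully human) to 5 (fully automatable).
tasks = {
    "Fire mission authorization & ROE interpretation": (0.25, 1),
    "Fire support planning & coordination":            (0.20, 2),
    "Collateral damage estimation & proportionality":  (0.15, 2),
    "Unit command & soldier leadership":               (0.15, 1),
    "Tactical positioning & field operations":         (0.10, 1),
    "Fire direction computation & targeting data":     (0.10, 3),
    "Administrative duties & reporting":               (0.05, 4),
}

# Time-weighted automatability, then the resistance score on a 1-5 scale.
weighted_total = sum(w * score for w, score in tasks.values())   # 1.70
task_resistance = 6.0 - weighted_total                           # 4.30 / 5.0
```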

Reinstatement check (Acemoglu): AI creates significant new tasks: validating AI-generated targeting solutions, supervising autonomous launcher systems (Army's Autonomous Multi-Domain Launcher), managing human-machine teaming with AI-enabled sensor networks (JADC2), and overseeing AI-assisted collateral damage estimation tools. The officer role expands to include AI oversight responsibilities — classic augmentation-driven reinstatement.


Evidence Score

Market Signal Balance: +2/10 (scale runs from negative to positive)
  • Job Posting Trends: 0
  • Company Actions: 0
  • Wage Trends: 0
  • AI Tool Maturity: +1
  • Expert Consensus: +1
Dimension | Score (-2 to +2) | Evidence
Job Posting Trends | 0 | Military billets are set by force structure tables, not market demand. Artillery officer authorizations remain stable across FY2024-2026 as the Army prioritizes Long Range Precision Fires (LRPF) modernization. Not market-driven — neutral.
Company Actions | 0 | DoD is not cutting artillery officer billets. The Army's modernization priorities (LRPF, HIMARS expansion, Precision Strike Missile fielding) are adding capability, not reducing officer positions. The Autonomous Multi-Domain Launcher program augments launchers; it does not replace the officers who authorize their employment.
Wage Trends | 0 | Military officer pay follows statutory pay tables set by Congress. O-2 to O-4 compensation tracks inflation through annual NDAA adjustments. No AI-driven wage pressure — military compensation is structurally insulated from market forces.
AI Tool Maturity | +1 | AI fire direction tools (AFATDS upgrades, Project Maven targeting, JADC2 sensor fusion) significantly augment the officer's capabilities but create new validation tasks rather than displacing the officer. No AI system is authorized to make lethal fire decisions. Tools enhance accuracy and speed while increasing human oversight responsibilities.
Expert Consensus | +1 | Broad expert agreement: human-in-the-loop is mandatory for lethal force employment. CSIS, CRS, and DoD policy analysis uniformly confirm that DoD Directive 3000.09 requires human judgment for weapons employment. The FY2025 NDAA requires annual reporting on autonomous weapons deployment — Congressional oversight reinforces human control.
Total | +2 |

Barrier Assessment

Structural Barriers to AI: Strong (8/10)
  • Regulatory: 2/2
  • Physical: 2/2
  • Union Power: 0/2
  • Liability: 2/2
  • Cultural: 2/2

Reframed question: What prevents AI execution even when programmatically possible?

Barrier | Score (0-2) | Rationale
Regulatory/Licensing | 2 | DoD Directive 3000.09 mandates "appropriate levels of human judgment over the use of force." The Geneva Conventions and LOAC require human accountability for targeting decisions. Commissioned officers hold legal authority to authorize fires — no AI system can be commissioned. FY2025 NDAA Section 1066 requires annual Congressional reporting on autonomous weapons. Maximum regulatory barrier.
Physical Presence | 2 | Field-deployed with batteries in forward areas, observation posts, and tactical assembly areas. Must physically observe terrain, assess conditions, and maintain command presence with soldiers. Not desk-based — operates in unstructured field environments that vary with every deployment.
Union/Collective Bargaining | 0 | Military. No union representation.
Liability/Accountability | 2 | The officer is personally liable under the UCMJ and international law for every fire mission authorized: war crimes prosecution, court martial, personal criminal liability. AI has no legal personhood — it cannot be court-martialled, imprisoned, or held accountable under LOAC. This is the strongest barrier: someone must go to prison if the decision is wrong.
Cultural/Ethical | 2 | Military culture, allied nations, and the international community will not accept autonomous lethal fire employment without human authorization. The UN Convention on Certain Conventional Weapons continues debating LAWS restrictions. Even nations developing autonomous capabilities maintain that a human must authorize lethal force. Society will not delegate kill authority to machines.
Total | 8/10 |

AI Growth Correlation Check

Confirmed 0. AI modernization (JADC2, precision targeting, autonomous launchers) dramatically enhances fire support capabilities but does not reduce the number of officers needed to authorize and command fires employment. The Autonomous Multi-Domain Launcher reduces the number of enlisted crew needed at the launcher — it does not eliminate the officer who decides what to fire at. Force structure is threat-driven and Congressionally authorized, not technology-driven. Neutral correlation.


JobZone Composite Score (AIJRI)

Score Waterfall (61.1/100)
  • Task Resistance: +43.0 pts
  • Evidence: +4.0 pts
  • Barriers: +12.0 pts
  • Protective: +7.8 pts
  • AI Growth: 0.0 pts
  • Total: 61.1

Input | Value
Task Resistance Score | 4.30/5.0
Evidence Modifier | 1.0 + (2 × 0.04) = 1.08
Barrier Modifier | 1.0 + (8 × 0.02) = 1.16
Growth Modifier | 1.0 + (0 × 0.05) = 1.00

Raw: 4.30 × 1.08 × 1.16 × 1.00 = 5.39

JobZone Score: (5.39 - 0.54) / 7.93 × 100 = 61.1/100

Zone: GREEN (Green ≥48)
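Putting the modifier and normalisation steps together, the composite calculation above can be sketched as follows. The modifier coefficients (0.04, 0.02, 0.05) and normalisation constants (0.54, 7.93) are taken from this worked example; the wider framework may parameterise them differently.

```python
def aijri_score(task_resistance, evidence, barriers, growth):
    """Compose the JobZone score from the inputs shown above."""
    evidence_mod = 1.0 + evidence * 0.04   # evidence total, range -10..+10
    barrier_mod = 1.0 + barriers * 0.02    # barrier total, range 0..10
    growth_mod = 1.0 + growth * 0.05       # growth correlation, range -2..+2
    raw = task_resistance * evidence_mod * barrier_mod * growth_mod
    # Normalise the raw product onto a 0-100 scale.
    return (raw - 0.54) / 7.93 * 100

score = aijri_score(4.30, 2, 8, 0)   # ~61.1
```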

Sub-Label Determination

Metric | Value
% of task time scoring 3+ | 15%
AI Growth Correlation | 0
Sub-label | GREEN (Stable): AIJRI ≥ 48 and <20% of task time scores 3+
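The sub-label rule can be expressed as a small check. Only the Green-zone branch is specified in this assessment; the "Adapting" name for the high-exposure branch and the fallback string are my placeholders, not labels taken from the framework.

```python
def zone_sub_label(aijri, pct_time_3_plus):
    """Green when AIJRI >= 48; 'Stable' when under 20% of task time
    scores 3+. The 'Adapting' branch name is an assumption."""
    if aijri >= 48:
        sub = "Stable" if pct_time_3_plus < 0.20 else "Adapting"
        return f"GREEN ({sub})"
    return "Below Green threshold"

label = zone_sub_label(61.1, 0.15)   # "GREEN (Stable)"
```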

Assessor override: None — formula score accepted. Score aligns well with comparable military assessments (First-Line Enlisted Supervisor 63.6, Military Enlisted Tactical Operations 60.3). Artillery officers score slightly above enlisted tactical operators due to stronger goal-setting/accountability barriers but below senior NCO supervisors who have deeper interpersonal connection scores.


Assessor Commentary

Score vs Reality Check

The Green (Stable) classification at 61.1 accurately reflects the structural reality of this role. The score is barrier-dependent — 8/10 barriers provide a 16% boost — but these barriers are structural, not temporal. DoD Directive 3000.09, UCMJ accountability, and international humanitarian law are not eroding; they are being reinforced by the FY2025 NDAA and ongoing CCW deliberations. Even if AI becomes technically capable of autonomous targeting, the legal and ethical framework preventing autonomous lethal fire employment is hardening, not softening.

What the Numbers Don't Capture

  • Autonomous launcher trajectory — the Army's Autonomous Multi-Domain Launcher program could reduce crew sizes at the weapon system level, but this affects enlisted billets, not officer authorization authority. The officer role may actually expand as one officer oversees more autonomous launchers.
  • JADC2 transformation — the Joint All-Domain Command and Control initiative is fundamentally changing how fire support is coordinated, adding AI-assisted sensor-to-shooter linkages. This transforms HOW the officer works (faster, more data) without changing WHAT the officer decides (whether to fire).
  • International LAWS debate — if the Convention on Certain Conventional Weapons or a successor treaty restricts autonomous weapons, it would further entrench human-in-the-loop requirements, pushing the score higher. Regulatory risk is asymmetrically protective.

Who Should Worry (and Who Shouldn't)

Artillery and missile officers at the company and battalion level (O-2 to O-4) are deeply protected — they hold the legal authority to authorize fires and bear personal liability for those decisions. No AI system can substitute for this accountability chain. Officers who lean into AI-assisted targeting, autonomous launcher management, and JADC2 integration will be the most valuable. The only version of this role that faces any pressure is a hypothetical future where fire direction becomes so automated that fewer officers are needed to oversee more systems — but even then, the authorization and accountability requirement remains irreducible. The single biggest factor separating safe from at-risk is command authority: if you authorize fires, you are protected by law. If you only compute firing solutions, AI is coming for that task.


What This Means

The role in 2028: Artillery and missile officers will command more capable, more automated fire support systems — autonomous launchers, AI-assisted targeting, real-time sensor fusion via JADC2 — but will remain the irreplaceable human in the kill chain. The officer who authorizes fires in 2028 will process more data, oversee more systems, and make faster decisions, but the decision authority itself cannot be delegated to a machine.

Survival strategy:

  1. Master AI-enabled fire support tools (AFATDS upgrades, JADC2 interfaces, autonomous launcher command systems) — become the officer who integrates AI, not the one who resists it
  2. Deepen expertise in collateral damage estimation methodology and proportionality assessment — as AI speeds up the kill chain, the officer's judgment on "should we fire?" becomes more valuable, not less
  3. Build cross-domain fire support coordination skills (cyber, space, electronic warfare) — the fires officer of the future coordinates effects across all domains, not just kinetic

Timeline: 15-25+ years. Driven by the structural permanence of human-in-the-loop requirements for lethal force under international and domestic law.


Other Protected Roles

Special Forces Officer (Mid-to-Senior)

GREEN (Stable) 80.3/100

Special Forces Officers command the most autonomous, high-stakes, and culturally complex military operations — unconventional warfare, foreign internal defense, and direct action — requiring irreducible human judgment, personal legal accountability for lethal force, and deep relationship-building with foreign partners that no AI system can replicate. Safe for 25+ years.

Also known as: SAS Officer, SBS Officer

Infantry (Mid-Level)

GREEN (Stable) 74.6/100

Infantry combat roles demand maximum embodied physicality in the most unstructured, hostile environments imaginable. AI and robotics augment reconnaissance and logistics but cannot replace the human soldier in close combat, terrain holding, or escalation-of-force judgment. Safe for 20+ years.

Also known as: Commando, Guardsman

Infantry Officer (Mid-to-Senior)

GREEN (Stable) 70.4/100

Infantry officers command soldiers in close combat across the most unstructured, hostile environments on earth. Personal criminal liability under UCMJ, mandated human-in-the-loop for lethal force, and irreducible physical presence in the battlespace make this role structurally immune to AI displacement. Safe for 20+ years.

Also known as: Army Officer, Platoon Commander

Aircraft Launch and Recovery Officers (Mid-to-Senior)

GREEN (Stable) 69.7/100

Launch and recovery officers hold personal authority over the lives of aircrew and the fate of aircraft worth $80-200M each — the "Shooter" literally gives the signal to launch. EMALS/AAG changes the underlying technology but the officer DIRECTS operations. No AI system will be trusted with this authority. Safe for 20+ years.

Also known as: Flight Deck Officer

