Role Definition
| Field | Value |
|---|---|
| Job Title | Computer and Information Research Scientist |
| Seniority Level | Mid-to-Senior (5-15+ years experience) |
| Primary Function | Conducts fundamental research in computer and information science as a theorist, designer, or inventor. Develops novel algorithms, computational methods, and theoretical frameworks. Designs solutions to problems in computer hardware and software, often pioneering new computing paradigms. Works in industry research labs (Google DeepMind, Microsoft Research, Meta FAIR), national laboratories, or academia. |
| What This Role Is NOT | NOT a Software Developer (does not build production applications). NOT a Data Scientist (does not primarily analyse business data). NOT an ML Engineer (does not primarily deploy models to production). NOT a Systems Analyst (does not optimise existing systems). The research scientist creates new knowledge and methods; adjacent roles apply existing ones. |
| Typical Experience | 5-15+ years. PhD typically required (O*NET Job Zone 5). Deep expertise in 1-2 subfields (machine learning, cryptography, distributed systems, computer vision, NLP, robotics, quantum computing). Publication record in top venues (NeurIPS, ICML, SIGCOMM, STOC/FOCS). |
Seniority note: Entry-level research scientists (fresh PhDs, postdocs) would score lower — more execution-heavy, narrower scope, less direction-setting. Junior research roles trend toward Yellow as AI handles more routine experimentation and literature processing. Senior/principal research scientists with agenda-setting authority would score higher Green.
Protective Principles + AI Growth Correlation
| Principle | Score (0-3) | Rationale |
|---|---|---|
| Embodied Physicality | 0 | Fully digital, desk-based. Remote-capable. No physical component. |
| Deep Interpersonal Connection | 2 | Leads research teams, mentors PhD students and junior researchers, collaborates across interdisciplinary groups. Builds trust with research partners, communicates findings to stakeholders. Not therapy-level depth, but research mentorship and cross-team collaboration are core to impact. |
| Goal-Setting & Moral Judgment | 3 | Defines research agendas, identifies which problems are worth solving, sets the direction for entire research programmes. Evaluates feasibility of novel approaches where no precedent exists. Makes judgment calls on ethical implications of new technologies. This is genuinely novel goal-setting — deciding what SHOULD be researched, not executing defined tasks. |
| Protective Total | 5/9 | |
| AI Growth Correlation | 1 | AI adoption creates additional demand for computing research — more AI means more unsolved problems in efficiency, safety, alignment, robustness, and novel architectures. However, this is a weak positive rather than strong: the role existed before AI and spans many subfields beyond AI/ML (cryptography, quantum computing, formal verification, HCI). |
Quick screen result: Protective 5/9 + Correlation 1 = Likely Green Zone. Proceed to confirm.
Task Decomposition (Agentic AI Scoring)
| Task | Time % | Score (1-5) | Weighted | Aug/Disp | Rationale |
|---|---|---|---|---|---|
| Novel research & hypothesis generation | 25% | 1 | 0.25 | NOT INVOLVED | Identifying unsolved problems, formulating new theoretical frameworks, generating original hypotheses where no precedent exists. Irreducibly human — requires genuine novelty, intuition about which problems matter, and creative leaps that AI cannot produce. |
| Algorithm design & theoretical work | 20% | 2 | 0.40 | AUGMENTATION | AI assists by suggesting optimisations, running formal proofs, exploring parameter spaces. The researcher defines the problem, designs the approach, and evaluates correctness in unprecedented contexts. AI accelerates but cannot originate novel algorithmic paradigms. |
| Experimental design & methodology | 15% | 2 | 0.30 | AUGMENTATION | AI helps design experiments, suggest baselines, and identify confounders. The researcher defines what to measure, why it matters, and how to interpret results in the context of the broader field. Judgment about experimental validity remains human. |
| Data analysis & computational modeling | 15% | 3 | 0.45 | AUGMENTATION | AI handles significant portions of data processing, model training, hyperparameter tuning, and statistical analysis. The researcher directs the analysis, interprets results, identifies anomalies, and determines significance. Human leads, AI executes substantial sub-workflows. |
| Literature review & synthesis | 5% | 4 | 0.20 | DISPLACEMENT | AI tools (Semantic Scholar, Elicit, Consensus) can now synthesise hundreds of papers, identify gaps, and summarise findings. The researcher validates and contextualises, but the core search-and-summarise work is AI-executable. |
| Writing papers, grants & reports | 10% | 3 | 0.30 | AUGMENTATION | AI drafts sections, generates figures, handles formatting, and produces first-pass literature sections. The researcher provides the original ideas, structures the argument, ensures scientific rigour, and adds the nuanced interpretation that reviewers expect. |
| Mentoring, collaboration & team leadership | 5% | 1 | 0.05 | NOT INVOLVED | Guiding PhD students, building research collaborations, navigating academic/industry politics, fostering intellectual culture. Irreducibly human — mentorship and intellectual community cannot be automated. |
| Stakeholder communication & consulting | 5% | 2 | 0.10 | AUGMENTATION | Presenting research to funders, translating findings for industry partners, advising on technology strategy. AI can prepare slides and summaries; the researcher provides credibility, answers questions, and navigates organisational dynamics. |
| Total | 100% | | 2.05 | | |
Task Resistance Score: 6.00 - 2.05 = 3.95/5.0
Displacement/Augmentation split: 5% displacement, 65% augmentation, 30% not involved.
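The task-level arithmetic above can be reproduced with a short script (a sketch: the weights and scores come straight from the table, and the 6.00 inversion constant from the resistance formula):

```python
# Reproduce the task-resistance arithmetic from the table above.
# Each entry: (task, time share, agentic-AI score 1-5, involvement label).
tasks = [
    ("Novel research & hypothesis generation", 0.25, 1, "not involved"),
    ("Algorithm design & theoretical work",    0.20, 2, "augmentation"),
    ("Experimental design & methodology",      0.15, 2, "augmentation"),
    ("Data analysis & computational modeling", 0.15, 3, "augmentation"),
    ("Literature review & synthesis",          0.05, 4, "displacement"),
    ("Writing papers, grants & reports",       0.10, 3, "augmentation"),
    ("Mentoring, collaboration & leadership",  0.05, 1, "not involved"),
    ("Stakeholder communication & consulting", 0.05, 2, "augmentation"),
]

weighted = sum(share * score for _, share, score, _ in tasks)
resistance = 6.00 - weighted  # invert so higher = more AI-resistant

# Aggregate time share by involvement label.
split = {}
for _, share, _, label in tasks:
    split[label] = split.get(label, 0.0) + share

print(f"Weighted score: {weighted:.2f}")     # 2.05
print(f"Task resistance: {resistance:.2f}")  # 3.95
print({k: round(v, 2) for k, v in split.items()})
```

Recomputing the split from the table gives 5% displacement, 65% augmentation, and 30% not involved.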
Reinstatement check (Acemoglu): AI creates substantial new tasks for this role: designing AI safety and alignment frameworks, developing methods for AI interpretability and robustness, researching adversarial machine learning, creating benchmarks for evaluating AI systems, and validating AI-generated scientific discoveries. The role is expanding into AI-adjacent research areas, not contracting.
Evidence Score
| Dimension | Score (-2 to 2) | Evidence |
|---|---|---|
| Job Posting Trends | 1 | BLS projects 20% growth 2024-2034 ("much faster than average"), with 3,200 annual openings. 40,300 employed as of 2024. Demand driven by AI/ML research, cybersecurity, quantum computing, and data science. Tech sector projects 414% growth in data science roles through 2035. Strong but not surging — niche role with small absolute numbers. |
| Company Actions | 1 | Major tech companies (Google, Meta, Microsoft, Amazon) maintain and expand research labs. Google DeepMind, Meta FAIR, and OpenAI actively competing for research talent. Some 2024-2025 tech layoffs hit broadly but largely spared research divisions. No companies cutting this role citing AI — AI is what these people build. |
| Wage Trends | 1 | Median $140,910/year (BLS May 2024). ZipRecruiter reports $139,843 average in Feb 2026. Coursera cites $145,080 median. Wages strong and stable, growing modestly above inflation. AI/ML specialisation commands significant premiums. Not surging like AI engineering roles, but well above market. |
| AI Tool Maturity | 0 | AI tools augment core tasks (literature search, data analysis, code generation) but cannot originate novel research questions or design breakthrough algorithms. Tools like AlphaFold demonstrate AI can make discoveries in specific domains, but the research scientist designed, trained, and validated AlphaFold. Tools are powerful augmentors, not replacers — unclear net headcount impact. |
| Expert Consensus | 1 | Broad agreement that computing research is transforming but not being displaced. UNC CS: "AI is likely to spur new demand for workers." WEF and McKinsey highlight research roles as beneficiaries of AI investment. CRA (Computing Research Association) reports strong demand. No credible source predicts displacement of research scientists — the role creates AI, it doesn't compete with it. |
| Total | 4 | |
Barrier Assessment
Reframed question: What prevents AI execution even when programmatically possible?
| Barrier | Score (0-2) | Rationale |
|---|---|---|
| Regulatory/Licensing | 1 | PhD requirement is a de facto barrier, not a legal licence. Some research requires IRB approval for human subjects, security clearance for government labs, or export control compliance (ITAR/EAR). Not as strict as medical licensing but creates meaningful credentialing friction. |
| Physical Presence | 0 | Fully remote-capable. Some research benefits from in-person collaboration but is not structurally required. |
| Union/Collective Bargaining | 0 | No union representation. Academic tenure provides some protection but is not collective bargaining in the traditional sense. |
| Liability/Accountability | 1 | Research integrity carries real consequences — retraction, loss of funding, career destruction. Principal investigators are personally accountable for research ethics, data integrity, and responsible disclosure of security vulnerabilities. Not prison-level liability, but professional accountability is meaningful. |
| Cultural/Ethical | 1 | Strong cultural expectation that fundamental research requires human creativity and intellectual ownership. Peer review system presumes human authors with accountable expertise. Academic and industry research communities resist the idea that AI can be an independent researcher. Emerging norms around AI co-authorship reinforce human primacy. |
| Total | 3/10 | |
AI Growth Correlation Check
Confirmed at +1 from Step 1. Computer and information research scientists have a weak positive correlation with AI growth. More AI adoption creates more unsolved problems in safety, alignment, efficiency, and new architectures — all of which require research scientists. However, the role spans subfields well beyond AI/ML (cryptography, distributed systems, quantum computing, HCI, formal methods), so AI growth is not the sole demand driver. This is not an "AI-created" role (it existed decades before the current AI wave), which prevents a +2 score. The correlation is real but not recursive in the way AI security or AI governance roles are.
JobZone Composite Score (AIJRI)
| Input | Value |
|---|---|
| Task Resistance Score | 3.95/5.0 |
| Evidence Modifier | 1.0 + (4 × 0.04) = 1.16 |
| Barrier Modifier | 1.0 + (3 × 0.02) = 1.06 |
| Growth Modifier | 1.0 + (1 × 0.05) = 1.05 |
Raw: 3.95 × 1.16 × 1.06 × 1.05 = 5.0998
JobZone Score: (5.0998 - 0.54) / 7.93 × 100 = 57.5/100
Zone: GREEN (Green >= 48, Yellow 25-47, Red <25)
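The composite calculation can be checked end to end with a few lines (a sketch: the 0.54 offset and 7.93 range are the rescaling constants given in the formula above, and the zone thresholds are those stated on the Zone line):

```python
# Reproduce the AIJRI composite score from the inputs above.
task_resistance = 3.95
evidence_mod = 1.0 + 4 * 0.04  # evidence total 4  -> 1.16
barrier_mod  = 1.0 + 3 * 0.02  # barrier total 3   -> 1.06
growth_mod   = 1.0 + 1 * 0.05  # correlation +1    -> 1.05

raw = task_resistance * evidence_mod * barrier_mod * growth_mod
score = (raw - 0.54) / 7.93 * 100  # rescale raw product to 0-100

# Zone thresholds as stated in this report.
if score >= 48:
    zone = "GREEN"
elif score >= 25:
    zone = "YELLOW"
else:
    zone = "RED"

print(f"Raw: {raw:.4f}")          # 5.0998
print(f"JobZone: {score:.1f}")    # 57.5
print(f"Zone: {zone}")            # GREEN
```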
Sub-Label Determination
| Metric | Value |
|---|---|
| % of task time scoring 3+ | 30% (data analysis 15% + writing 10% + lit review 5%) |
| AI Growth Correlation | 1 |
| Sub-label | Green (Transforming) — >= 20% task time scores 3+, correlation not 2 |
Assessor override: None — formula score accepted.
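The sub-label rule above can be expressed directly (a sketch: only the "Transforming" branch is defined in this report, so the fallback label below is a placeholder, not part of the methodology):

```python
# Sub-label determination as stated above: "Transforming" applies when
# >= 20% of task time scores 3+ and the growth correlation is not 2.
time_scoring_3_plus = 0.15 + 0.10 + 0.05  # data analysis + writing + lit review
growth_correlation = 1

if time_scoring_3_plus >= 0.20 and growth_correlation != 2:
    sublabel = "Green (Transforming)"
else:
    sublabel = "Green (other sub-label)"  # placeholder; not defined in this report

print(sublabel)  # Green (Transforming)
```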
Assessor Commentary
Score vs Reality Check
The 57.5 JobZone Score places this role comfortably in Green, 9.5 points above the boundary — not borderline. The score aligns with the Senior Software Engineer (55.4) and is consistent with the calibration expectation: high-judgment research roles with moderate evidence and low barriers land in mid-Green. The low barrier score (3/10) means protection is capability-based — AI genuinely cannot do novel research — rather than structurally enforced. This is an honest classification: the role is safe because the core work (generating new knowledge) is among the most irreducibly human activities in the economy.
What the Numbers Don't Capture
- Subfield divergence. "Computer and Information Research Scientist" spans subfields from theoretical cryptography (very safe) to applied ML research (more exposed to AI-assisted discovery). Researchers working on problems where AI itself is the tool of investigation face an unusual dynamic: their subject matter accelerates their augmentation.
- Function-spending vs people-spending. Corporate research lab budgets are growing, but some spending shifts from headcount to compute infrastructure. A $10M research budget increasingly buys fewer researchers and more GPU clusters.
- Academic vs industry split. Industry research scientists at Google, Meta, and OpenAI earn 2-3x academic counterparts and have stronger evidence signals. Academic researchers face flat funding, adjunctification pressure, and slower AI tool adoption — their individual risk is higher than the aggregate score suggests.
- Rate of AI capability improvement. AI research tools (automated ML, neural architecture search, AI-assisted theorem proving) are improving faster than in most domains. The augmentation score of 3 for data analysis could trend toward 4 as AI handles more complex experimental workflows autonomously.
Who Should Worry (and Who Shouldn't)
If you are a research scientist who defines research agendas, identifies novel problems, and leads teams — you are well-positioned. Your ability to ask the right questions, identify what is worth investigating, and synthesise across fields is exactly what AI cannot do. The more senior and agenda-setting your role, the safer you are.
If you are a research scientist whose primary value is running experiments, processing data, and producing incremental results on well-defined problems — you face compression risk. AI tools increasingly handle routine experimentation, and the premium shifts toward researchers who can do what AI cannot: generate genuinely novel ideas and set research direction.
The single biggest factor: whether your value comes from defining what to investigate (safe) or executing well-defined research protocols (increasingly automatable). The research scientist of 2028 spends more time on creative ideation and less time on data wrangling.
What This Means
The role in 2028: Research scientists spend more time formulating hypotheses, designing novel approaches, and interpreting surprising results — the irreducibly creative parts of research. AI agents handle literature synthesis, data preprocessing, model training, hyperparameter optimisation, and first-draft writing. The researcher becomes an "AI-augmented investigator" who orchestrates AI tools to explore larger problem spaces faster, but the fundamental value proposition — generating new knowledge — remains human.
Survival strategy:
- Master AI research tools deeply. Become proficient with AI-assisted experimentation, automated ML, and AI-powered literature synthesis. The researcher who can leverage AI to explore 10x more hypotheses per year will outcompete those who work manually.
- Invest in problem identification over problem execution. The scarce skill is knowing which questions matter, not running experiments. Build expertise in cross-disciplinary synthesis and research taste.
- Develop AI safety, alignment, or interpretability expertise. These subfields have the strongest growth correlation with AI adoption and the highest demand trajectory. Researchers who can work on making AI systems trustworthy and robust are in the strongest possible position.
Timeline: 5-10+ years. Protection is strong and capability-based. The core work (novel research) is among the last things AI will automate, but the supporting work (data analysis, writing, literature review) is transforming now. Researchers who adapt their workflow to leverage AI tools will thrive; those who don't will see their productivity fall behind.