Artificial Intelligence Bias
Risks for Military Intelligence Operations
By CPT Tyrese Bender
Published in the January-June 2026 edition of Military Intelligence
Read Time: < 8 mins
U.S. Air Force photo by SSG Kaitlin Frazier; altered and annotations added
by MIPB staff.
Introduction
The rise of artificial intelligence (AI) in modern warfare presents
profound opportunities and operational risks for the military intelligence
(MI) community. While AI promises to enhance analytic speed and efficiency
for MI, a critical vulnerability threatens these advantages: AI bias. AI
bias refers to systematic errors in AI systems that yield inaccurate
outputs due to unrepresentative training data, flawed design, or improper
use.1
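To make this failure mode concrete, consider a minimal, hypothetical sketch: a toy match-score classifier whose accept threshold is fit to training data dominated by one population. The group labels, score distributions, and threshold rule are illustrative assumptions, not a description of any fielded system.

```python
import random

random.seed(0)

# Toy example: genuine match scores whose distribution differs between two
# populations. Group A dominates the training data, so the decision
# threshold ends up fit to Group A's statistics.
group_a = [random.gauss(0.70, 0.10) for _ in range(950)]  # 95% of training data
group_b = [random.gauss(0.55, 0.10) for _ in range(50)]   # 5% of training data

# Threshold chosen to accept ~95% of training matches -- which are
# overwhelmingly Group A.
scores = sorted(group_a + group_b)
threshold = scores[int(0.05 * len(scores))]

def false_reject_rate(samples):
    """Fraction of genuine matches the threshold wrongly rejects."""
    return sum(s < threshold for s in samples) / len(samples)

# Evaluate on fresh, genuine matches from each group: the underrepresented
# group is rejected at a far higher rate by the very same model.
test_a = [random.gauss(0.70, 0.10) for _ in range(1000)]
test_b = [random.gauss(0.55, 0.10) for _ in range(1000)]
print(f"Group A false rejects: {false_reject_rate(test_a):.1%}")
print(f"Group B false rejects: {false_reject_rate(test_b):.1%}")
```

The model applies one rule to everyone, yet its error burden falls disproportionately on the group the training data underrepresents, which is the systematic-error pattern the definition above describes.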
Empirical evidence, in both civilian and military contexts, reveals the
extent to which AI bias can compromise AI-enabled targeting operations,
enemy analysis, and intelligence sharing. In short, AI bias-related risks
endanger not only mission success, but also servicemembers’ lives. To
mitigate these risks, the Department of Defense (DoD) should consider
implementing technical, operational, and institutional policy safeguards
to ensure AI-enabled MI operations remain combat effective and ethically
grounded.
Background
In recent years, the U.S. Government has accelerated the adoption of
military AI capabilities to advance its national security objectives.2
This push comes as no surprise. AI has the potential to drastically
transform military operations, and more specifically MI operations, by
improving target identification, accelerating intelligence production, and
strengthening decision-making support.3
In an era characterized by near-peer threats and rapid innovation, such
operational efficiencies are national security imperatives. However, it is
important to recognize that innovation and strategic necessity do not
ensure operational advantage. History shows that breakthrough technologies
present unforeseen risks on the battlefield. For example, during World War
II, new radar technology failed to differentiate between friendly and
enemy aircraft, providing U.S. forces with no early warning of the Pearl
Harbor attack.4
Additionally, network-enabled warfare in the early 21st century created
diffuse electromagnetic vulnerabilities that have rendered contemporary
operations more susceptible to paralysis.5
AI’s integration into MI operations will likely introduce several similar
bias-driven risks with the potential to compromise mission results and
endanger U.S. Soldiers.
Risks for Military Intelligence
Evidence in both civilian and military contexts demonstrates how AI
bias-driven misidentification can endanger human life. In 2019, U.S.
researchers found that a wide range of AI recognition tools (e.g., facial
recognition) relied on training data that was unrepresentative of the
general population. This led law enforcement to misidentify, interrogate,
and wrongfully arrest members of marginalized groups at disproportionate rates.6
As DoD programs develop similar AI recognition systems, these failures
raise serious questions about the operational vulnerabilities AI bias
poses.7
AI bias in military targeting operations may lead to high rates of
misidentification in non-Western environments, where U.S.-centric data can
skew results. Consequently, AI bias increases the risk of lethal strikes
on incorrect targets, noncombatants, or even friendly forces. The Israel
Defense Forces’ AI-enabled targeting in Gaza underscores the reality of
these dangers, as AI bias-driven identification flaws resulted in unlawful
interrogations and fatal strikes on Palestinian civilians.8
If AI bias errors were to similarly misguide U.S. targeting missions, the
operational, humanitarian, and moral consequences would be severe.
AI bias can also compromise MI analysis, where it threatens to distort
threat assessments and undermine decision making. In
the U.S. justice system, AI bias has often informed inaccurate high-risk
scores that contributed to harsher punitive sentences for marginalized
citizens.9
Applied to MI, similar distortions could inform flawed assessments and
biased intelligence pictures, prompting commanders to misallocate troops
and resources based on false indications of high risk.10
At best, MI staff will waste time correcting skewed intelligence
assessments; at worst, commanders may act on them. Such poorly informed
decisions grant adversaries an advantage, undermine mission success, and
expose servicemembers to greater danger. Just as flawed intelligence
shaped the United States’ decision to invade Iraq, AI bias in intelligence
will likely distort threat pictures, with equally grave consequences.
Compounding these challenges, AI bias also jeopardizes intelligence
sharing between the United States and its allies. Effective intelligence
collaboration often strengthens international partnerships and drives
effective global responses to crises. The United States’ decision to
disclose intelligence to Europe before Russia’s invasion of Ukraine
demonstrated this well,11
but AI bias could introduce mistrust into these collaborative intelligence
partnerships. Evidence already indicates that lack of trust is a
significant obstacle to forming and maintaining U.S. intelligence-sharing
relationships.12
Hence, as AI bias-driven mistrust continues to proliferate throughout the
international community, allies may become hesitant to solicit or act on
AI-enabled intelligence from the United States.13
Such impediments to the free flow of intelligence would leave America and
its allies less informed, less unified, and less prepared for future
crises. The likelihood and severity of these risks will only increase,
especially as the DoD pushes to rapidly integrate AI without implementing
effective safeguards against AI bias.
Recommendations
Mitigating these vulnerabilities will require the DoD to implement a
coordinated policy approach comprising effective technical, operational,
and institutional safeguards against AI bias. Technically, the DoD should
publish development and acquisition requirements that demand AI models be
debiased and capable of producing transparency reports. To the extent that
true debiasing is attainable, military AI models should draw from diverse,
operationally relevant datasets that can adapt to dynamic battlefield
conditions. The classification algorithms underlying these models should
also account for region-specific social realities. AI models that rely on
data reflecting local religions and that classify risk using metrics
pertinent to regional cultures will likely result in fewer
misidentifications and assessment errors. If uncertainty or malign
influence arises, DoD AI models should also include transparency report
functions, enabling users to validate the relevancy and sources of AI bias
after the fact.
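As one illustration of what such a transparency-report function might record, the following hypothetical sketch logs the provenance and regional origin of every training record so users can audit representativeness after the fact. The class name, source labels, and region categories are assumptions for illustration only, not a proposed DoD standard.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class TransparencyLog:
    """Hypothetical provenance log attached to a model's training pipeline."""
    sources: Counter = field(default_factory=Counter)
    regions: Counter = field(default_factory=Counter)

    def record(self, source: str, region: str) -> None:
        # Tally where each training record came from.
        self.sources[source] += 1
        self.regions[region] += 1

    def report(self) -> dict:
        # Summarize the data's regional balance for after-the-fact review.
        total = sum(self.regions.values())
        return {
            "total_records": total,
            "share_by_region": {r: n / total for r, n in self.regions.items()},
            "sources": dict(self.sources),
        }

log = TransparencyLog()
for _ in range(900):
    log.record("domestic_archive", "north_america")   # illustrative labels
for _ in range(100):
    log.record("partner_feed", "middle_east")

report = log.report()
print(report["share_by_region"])
```

A 90/10 regional split surfaced this way would immediately flag the model as poorly matched to a non-Western operating environment, before any targeting or assessment error materializes.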
Operationally, the DoD should develop mandates and regulations to guide
the ethical use of AI at the tactical level. Evidence indicates that
maintaining human-in-the-loop requirements for targeting operations and
intelligence production is often an effective safeguard against fallout
from AI bias errors.14
In almost all cases, trained professionals need to maintain final
authority over operational decisions affecting life and death, regardless
of the efficiencies AI might offer. Importantly, the military needs to
formalize these operational red lines, procedures, and exceptions within
existing regulations to ensure servicemembers comply with AI bias-related
mitigation measures. This can also help communicate an important
operational tenet: AI can enable warfighting—but warfighters, not AI,
should drive mission outcomes.
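A human-in-the-loop requirement of this kind can be sketched, purely hypothetically, as a gate in which the AI may only nominate and a trained analyst must explicitly confirm. The function name, threshold value, and decision strings below are illustrative assumptions, not doctrine.

```python
def release_decision(ai_confidence: float, analyst_confirms: bool,
                     threshold: float = 0.90) -> str:
    """Return a release decision; the human veto always dominates."""
    if not analyst_confirms:
        # No confidence score, however high, overrides withheld authority.
        return "HOLD: human authority withheld"
    if ai_confidence < threshold:
        # Confirmed but low-confidence nominations still route to review.
        return "HOLD: confidence below review threshold"
    return "RELEASE: analyst confirmed above threshold"

# Even a near-certain AI nomination is held without analyst confirmation.
print(release_decision(0.99, analyst_confirms=False))
```

The key design choice is ordering: the human check short-circuits first, so the AI functions as an enabler while the warfighter retains final authority, which is exactly the operational tenet above.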
Implementing institutional measures can reinforce these technical and
operational procedures. As AI use in the military becomes normalized, the
DoD should require servicemembers to complete annual AI risk management
training. Such training could help Soldiers counteract AI bias-related
vulnerabilities before any operational consequences materialize. The DoD
should also configure training programs to equip officers and senior
enlisted leaders with the tools needed to manage the ethical use of
military AI. Lastly, the DoD must, in coordination with the Office of the
Director of National Intelligence and the National Security Council,
develop international AI standards for military and intelligence
operations. These measures can establish global standards for AI use in
intelligence activities, counteract AI bias-driven mistrust, and enable
intelligence-sharing relationships to flourish.
Conclusion
As the DoD looks to integrate AI into MI operations, it should recognize
AI bias not just as a technical flaw but also as a strategic
vulnerability. A large body of evidence in civilian and military
literature points to the grave risks posed by AI bias to targeting
operations, threat analysis, and U.S. intelligence-sharing relationships.
To mitigate these vulnerabilities, the DoD must implement technical,
operational, and institutional policies to protect against AI bias and
ensure that AI delivers the operational edge it promises. With U.S.
national security at stake, the choice is clear: the DoD must continue to
adopt AI while forcefully addressing AI bias risks to secure America’s
military advantage now and in the rapidly approaching future.
Endnotes
1. Laura Bruun and
Marta Bo,
Bias in Military Artificial Intelligence and Compliance with
International Humanitarian Law
(Stockholm International Peace Research Institute, 2025),
https://www.sipri.org/sites/default/files/2025-08/0825_ai_military_bias.pdf.
2. Joseph R. Biden,
Jr., presidential action, Memorandum on Advancing the United States’
Leadership in Artificial Intelligence; Harnessing Artificial
Intelligence to Fulfill National Security Objectives; and Fostering the
Safety, Security, and Trustworthiness of Artificial Intelligence (The
White House, October 24, 2024),
https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security/; The Executive Office of the President of the United States,
Winning the Race: America’s AI Action Plan (The White House,
July 2025),
https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf; and U.S. Secretary of Defense, Pete Hegseth, memorandum,
Army Transformation and Acquisition Reform (U.S. Department of
Defense, April 30, 2025),
https://media.defense.gov/2025/May/01/2003702281/-1/-1/1/ARMY-TRANSFORMATION-AND-ACQUISITION-REFORM.PDF.
3. Courtney Albon,
“Palantir Wins Contract to Expand Access to Project Maven AI
Tools,” Defense News, May 30, 2024,
https://www.defensenews.com/ai/2024/05/30/palantir-wins-contract-to-expand-access-to-project-maven-ai-tools/; and U.S. Department of State, Office of the Spokesperson,
Freedom Online Coalition Joint Statement on Responsible Government
Practices for AI Technologies, U.S. Department of State, September 23, 2024,
https://2021-2025.state.gov/freedom-online-coalition-joint-statement-on-responsible-government-practices-for-ai-technologies/.
4. Richard B. Frank,
“The Three Missed Tactical Warnings That Could Have Made a Difference
at Pearl Harbor,” National WWII Museum, October 13, 2021,
https://www.nationalww2museum.org/war/articles/pearl-harbor-missed-tactical-warnings.
5. Jacquelyn Schneider,
“Digitally Enabled Warfare: The Capability-Vulnerability Paradox,” Center for a New American Security, August 29, 2016,
https://www.cnas.org/publications/reports/digitally-enabled-warfare-the-capability-vulnerability-paradox.
6. Douglas MacMillan et
al.,
“Arrested by AI: Police Ignore Standards after Facial Recognition
Matches,” The Washington Post, January 13, 2025,
https://www.washingtonpost.com/business/interactive/2025/police-artificial-intelligence-facial-recognition/.
7. Michael Zequeira,
“Artificial Intelligence as a Combat Multiplier: Using AI to Unburden
Army Staffs,” Online Exclusive, Military Review, September 18, 2024,
https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2024-OLE/AI-Combat-Multiplier/.
8. Joan Wong,
“Israel’s A.I. Experiments in Gaza War Raise Ethical Concerns,”
The New York Times, April 25, 2025,
https://www.nytimes.com/2025/04/25/technology/israel-gaza-ai.html.
9. Julia Angwin et al.,
“Machine Bias,” ProPublica, May 23, 2016,
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
10. Bruun and Bo,
Bias in Military Artificial Intelligence.
11. Joakim Barrett,
“Intelligence Disclosure as a Strategic Messaging Tool,”
NATO Review, December 16, 2024,
https://archives.nato.int/uploads/r/nato-archives-online/0/b/1/0/b1efea2718c16b63a16cc086c18381cfa61fac31c04064991855a97e0279/2024-12-16_Intelligence_disclosure_as_a_strategic_messaging_tool_ENG.pdf.
12. Daniel Byman,
“Improving U.S. Intelligence Sharing with Allies and Partners,”
Center for Strategic and International Studies, January 28, 2025,
https://www.csis.org/analysis/improving-us-intelligence-sharing-allies-and-partners.
13. Yoshua Bengio et
al., International AI Safety Report (DSIT 2025/001, 2025),
https://www.internationalsafetyreport.org/publication/international-ai-safety-report-2025; and U.S. Department of State,
Government Practices for AI Technologies.
14. Yoshua Bengio et
al., International AI Safety Report; Bruun and Bo,
Bias in Military Artificial Intelligence; and Kathleen M. Vogel
et al.,
“The Impact of AI on Intelligence Analysis: Tackling Issues of
Collaboration, Algorithmic Transparency, Accountability, and
Management,” Intelligence and National Security 36, no. 6 (2021): 827–848,
https://doi.org/10.1080/02684527.2021.1946952.
Authors
CPT Tyrese Bender is currently a student at the
Military Intelligence Captains Career Course (MICCC). Before attending
the MICCC, CPT Bender served as a policy advisor in the Intelligence and
Defense Policy Directorate of the National Security Council. He holds a
Bachelor of Science degree in engineering management from the United
States Military Academy and a Master of Philosophy in sociology and
demography from the University of Oxford.