Defending the Homeland Against AI-Driven Information Warfare
By Capt. Nolan R. Hedglin
Article published on: September 1, 2025 in the Gray Space Fall 2025 Edition
Introduction
Artificial intelligence (AI), broadly defined as the ability to simulate human cognition through
machine action, can empower individuals and organizations to create and disseminate digital advertising at
an unprecedented scale. Through advanced techniques in profiling individuals from their online behavior,
organizations can wage micro-targeted influence operations—be it to influence a user’s purchasing decisions
or political activity—automatically with minimal resources and across borders. The United States (U.S.) has
contended with this reality in every election since 2018 after The New York Times reported that
Cambridge Analytica misused Facebook marketing data to conduct targeted political advertising in
the 2016 U.S. election (Hakim & Rosenberg, 2018). Historically, AI’s role in micro-targeting has been
limited to individual profiling. Within the past year, however, platforms such as Meta, Amazon, and TikTok
have begun experimenting with allowing advertisers to use generative AI when creating content (Roth, 2023). By
arming individual advertisers with tools to automatically generate ads, these platforms fully connect the
stack required to identify a new target and influence their behavior without human input.
Malign influence actors (MIAs) ride on the backbone of commercially available data and profiling tactics, and
the introduction of generative AI into commercial ad-tech stacks should be perceived as a fundamental shift
in how malign influence activity may occur in the future. In this paper, I argue that the U.S. should move
away from a "defend-forward" framework for combatting foreign influence targeting U.S. citizens. Active
measures to counter malign influence activity may be less effective than in the past because the final step
of influencing an audience, content creation and delivery, no longer relies on human input. Actively
defending against automated influence campaigns would require an automated system of our own.
Furthermore, the U.S. is not effectively equipped to combat foreign influence domestically in an automated
manner that would be compliant with federal privacy laws. Instead, I propose that the U.S. should partner
with the European Union (EU) and conduct a case study on differing governance models regarding the
protection of consumer privacy. This would aid U.S. legislators in enacting a federal consumer privacy law
that curbs unfettered data collection against U.S. citizens. By providing stronger protections against
violations of consumer privacy, the U.S. would severely inhibit a malign influence actor’s ability to
profile U.S. citizens and serve AI-developed content.
Generative AI Exponentially Increases the Rate of Malign Influence Activity
AI serves two main functions as a tool in malign influence activity: target identification and content
creation (Hunter et al., 2024). Ultimately, AI rapidly accelerates the notification-to-strike decision cycle
for targeted influence operations.
With respect to target identification, MIAs employ AI in the same manner as advertisers engaged in
surveillance capitalism (Hunter et al., 2024). As an example, a website fingerprints an individual's
activities on their platform. A data broker then aggregates and connects user behavior across platforms
through their advertising identity (Ramirez et al., 2014). Advertising firms analyze a user’s digital
pattern of life, categorizing them into over 70 buckets via clustering (e.g. person X is a “tech
enthusiast”) and predicting their response to new advertisements through supervised learning techniques such
as extreme gradient boosting and recurrent neural networks (RNNs) (Chen & Guestrin, 2016; Ebadi Jokandan et
al., 2022). Ostensibly, MIAs should derive little use from an advertiser's robust profiling of consumer
behavior. However, research indicates that consumer behavior reveals political behavior, making it an inroad
for MIAs to target susceptible audiences (Jung & Mittal, 2020).
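The two-stage profiling pipeline described above can be sketched in miniature. The bucket names, the behavioral feature vectors, and the linear scoring model below are all illustrative assumptions of mine; real ad-tech stacks use the far richer clustering and supervised models (e.g., gradient boosting, RNNs) cited in the text.

```python
import math

# Toy behavioral features: fraction of browsing time spent per content
# category (tech, news, sports). Centroids are hypothetical "buckets."
BUCKETS = {
    "tech enthusiast": [0.8, 0.1, 0.1],
    "news junkie":     [0.1, 0.8, 0.1],
    "sports fan":      [0.1, 0.1, 0.8],
}

def assign_bucket(user_vec):
    """Stage 1: categorize a user into the nearest interest-bucket centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(BUCKETS, key=lambda name: dist(BUCKETS[name], user_vec))

def response_score(user_vec, ad_vec):
    """Stage 2: toy stand-in for a learned ad-response model -- dot product
    of the user profile and the ad's topic vector, squashed to (0, 1)."""
    z = sum(u * a for u, a in zip(user_vec, ad_vec))
    return 1 / (1 + math.exp(-z))

user = [0.7, 0.2, 0.1]          # a user whose browsing skews heavily to tech
print(assign_bucket(user))      # -> tech enthusiast
print(response_score(user, [1.0, 0.0, 0.0]) > 0.5)  # likely to engage a tech ad
```

The point of the sketch is the automation: once a user's behavior is fingerprinted, both the categorization and the response prediction run without human input.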
With respect to content creation, MIAs use AI in many forms. For example, generative adversarial networks
(GANs) create deepfake videos through tools such as DeepFaceLab (Liu et al., 2023). Variational
autoencoders (VAEs), like DALL-E, provide near-instantaneous text-to-image generation (Ramesh et
al., 2022). Lastly, large language models (LLMs) such as OpenAI's GPT-4 can mimic human speech
across multiple languages with minimal operator input (Naveed et al., 2024). As a kit of content creation
tools, AI can enable MIAs to create and deliver content at machine speeds.
Although AI may hallucinate while creating content or misidentify vulnerable targets, MIAs do not view this
as a major concern. Through the command and control of a bot, AI can deliver strikes at a rate that far
outpaces any manual action. In an effective information warfare campaign, MIAs employ perpetual
information-barraging during competition to gradually change the attitudes and perceptions of a target
audience. These actions then set the conditions for precision strikes to affect an individual’s decision
cycle at the optimal moment, such as who they should vote for in an upcoming election (Hunter et al., 2024).
To exacerbate matters further, introducing generative AI into the ad-tech stack means every user may
encounter bespoke advertisements. This presents a unique challenge to actively defending against influence
campaigns. A core method for understanding the intentions and tactics of MIAs is comparing which target
audiences receive the same messages. Developing a broad situational understanding of whom a MIA chooses
to target allows the U.S. to allocate resources appropriately to defend against acute threats. However, if
users are consistently served unique content, the tools we typically use to cross-reference influence
activities may become ineffective.
The U.S. is Ill-Equipped to Automatically Combat Foreign Influence Domestically
The U.S. has developed strong guardrails around data collection on its citizens as compared to Russia and
China, which puts the Department of Defense (DoD) at a severe disadvantage in controlling the information environment domestically.
Russia and China leverage their authoritarian structure to collect vast amounts of data on their citizens
without consent (Hunter et al., 2024). These regimes then use surveillance capitalism techniques to micro-
target citizen behaviors and execute AI-driven censorship campaigns domestically, akin to a “defend-forward”
construct within the information environment (Dawson, 2021). To employ similar tactics domestically, the
U.S. would have to become a master of two functions: citizen profiling and content censorship.
As illustrated by the techniques employed in surveillance capitalism, effective citizen profiling requires
persistent data collection on individual users without continually receiving their explicit consent.
However, robust U.S. digital privacy laws protect U.S. citizens from government surveillance, except when
evidence ultimately leads to a law enforcement action (Records Maintained on Individuals, 1988; Wire and
Electronic Communications Interception, 1986). Nonetheless, the U.S. should not loosen these restrictions
because such laws set the foundation of our democracy and establish trust between citizens and the
government.
Executing content deletion in a "defend-forward" construct, meanwhile, would violate the First Amendment,
as it entails government censorship. Additionally, a preponderance of malign influence activity occurs on
social media platforms such as Facebook, Twitter, and Reddit. Except in the case of criminal activity,
federal statute treats such platforms as U.S. persons, and the First Amendment protects them from government
censorship of their content ("United States Person" Defined, 1992).
Currently, the U.S. government combats malign influence activity as a reactionary measure by
countering instances of disinformation and promoting government transparency. Unfortunately, experts report
that presenting the truth as a reactionary measure does not sufficiently combat a barrage of foreign
influence because narratives establish a foothold through emotion (Heslen, 2020). Rather than reacting to
malign influence activity with truth, the U.S. ought to focus on preventing citizens from falling victim to
malign influence when it is initially encountered.
A Call for Robust Consumer Privacy Protection
Protecting U.S. citizens from malign influence begins with protection from targeted profiling. Even though
surveillance capitalism drives targeted influence operations, the U.S. largely leaves its citizens to defend
themselves against violations of consumer privacy. Although members of Congress have proposed several
privacy bills, the U.S. has not enacted a single comprehensive federal privacy law to protect citizens
against violations of consumer privacy, nor has Congress passed a law to regulate the behavior of data
brokers. This presents a problem because data brokers and online platforms expand the threat landscape by
feeding profiling algorithms with real-time information about user perceptions and attitudes.
The current federal policy vacuum leaves U.S. citizens to manage their own privacy through the FTC's
notice-and-consent mandate: before fingerprinting occurs, a platform must notify a user of its data
collection practices and request consent. Ultimately, notice-and-consent has proven ineffective because
individual users value convenience over privacy (Norman-Webler, 2024). For example, Congress banned TikTok
from the U.S. market after conducting a thorough review of its consumer privacy policies, with the ban
taking effect in January 2025. Mere hours later, an alternative Chinese app called RedNote, with similarly
problematic privacy policies, became the top-downloaded app (Cheung et al., 2025).
I recommend that the U.S. initiate a partnership with the European Union to study how consumer privacy
affects malign influence activity. In 2016, the EU armed citizens of its member countries with the ability
to file a civil lawsuit against companies, known as a private right of action, for violations of their
privacy through the General Data Protection Regulation (Art. 82 GDPR). Under the GDPR, any EU citizen can
bring forth a lawsuit if they suspect that a company has: collected personal data without their consent,
demonstrated negligence in protecting user data, or failed to comply with data deletion requests. A private
right of action deters the practice of third-party data scraping and dramatically restricts the ability of
platforms to fingerprint and sell user information without fear of economic repercussion. Presently, only
the California Consumer Privacy Act (2018) contains a similar statute, though its private right of
action limits lawsuits filed by California citizens to data breaches due to poor data security practices
(Buchanan & Cruz, 2019).
At a high level, the U.S. and EU practice fundamentally different models of governance regarding the
protection of consumer privacy, with the latter leaning heavily on top-down legislation to restrict
individual data collection. And although EU legislators did not design the GDPR to curb malign influence
activity, its effects may still be present in a potential case study since surveillance capitalism and
malign influence activity involve the same practices. The U.S. and EU could compare data on the
effectiveness of malign influence activity over the past decade within their respective countries to
determine if either model of governance is more effective at preventing individual profiling by MIAs. This
partnership provides an alternate path to adopting the authoritarian tactics used by Russia and China in
actively combatting foreign influence.
Conclusion
Advancements in AI allow organizations to conduct influence operations at an unprecedented scale with
precision down to the individual level. In the first phase of influence operations, data collection and
profiling, MIAs adopt a surveillance capitalism framework to identify individual perceptions and attitudes.
In the second phase, MIAs use generative AI to create and deliver content nearly instantaneously. To combat
malign influence activity in a "defend-forward" construct, the U.S. would need to adopt citizen profiling
techniques and censorship policies like those of Russia and China. However, such policies do not adhere to
the Fourth and First Amendments, respectively. Rather than enable government censorship, the U.S. should seek to
strengthen consumer privacy protections through a comprehensive federal privacy law to degrade the target
acquisition cycle for malign influence activity.
References
Buchanan, M., & Cruz, A. (2019, August 24). A Closer Look at the CCPA’s Private Right of Action and
Statutory Damages. Patterson Belknap Webb & Tyler LLP.
https://www.jdsupra.com/legalnews/a-closer-look-at-the-ccpa-s-private-28984/
Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794.
https://doi.org/10.1145/2939672.2939785
Cheung, E., Jiang, J., & Tayir, H. (2025, January 15). What is RedNote, the Chinese app that US 'TikTok
refugees' are flocking to? CNN.
https://www.cnn.com/2025/01/14/tech/rednote-china-popularity-us-tiktok-ban-intl-hnk/index.html
Dawson, J. (2021). Microtargeting as Information Warfare. The Cyber Defense Review, 6(1), 63–80.
https://doi.org/10.2307/26994113
Ebadi Jokandan, S. M., Bayat, P., & Farrokhbakht Foumani, M. (2022). Targeted Advertising in Social Media
Platforms Using Hybrid Convolutional Learning Method besides Efficient Feature Weights. Journal of
Electrical and Computer Engineering, 2022(1), 6159650. https://doi.org/10.1155/2022/6159650
Hakim, D., & Rosenberg, M. (2018, March 17). Data Firm Tied to Trump Campaign Talked Business With
Russians. The New York Times.
https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-russia.html
Heslen, J. J. (2020). Neurocognitive hacking: A new capability in cyber conflict? Politics and the
Life Sciences, 39(1), 87–100. https://doi.org/10.1017/PLS.2020.3
Hunter, L. Y., Albert, C. D., Rutland, J., Topping, K., & Hennigan, C. (2024). Artificial intelligence
and information warfare in major power states: how the US, China, and Russia are using artificial
intelligence in their information warfare and influence operations. Defense and Security Analysis,
40(2), 235–269.
https://doi.org/10.1080/14751798.2024.2321736
Jung, J., & Mittal, V. (2020). Political Identity and the Consumer Journey: A Research Review.
Journal of Retailing, 96(1), 55–73.
https://doi.org/10.1016/J.JRETAI.2019.09.003
Liu, K., Perov, I., Gao, D., Chervoniy, N., Zhou, W., & Zhang, W. (2023). Deepfacelab: Integrated,
flexible and extensible face-swapping framework. Pattern Recognition, 141, 109628. https://doi.org/10.1016/J.PATCOG.2023.109628
Naveed, H., Ullah Khan, A., Qiu, S., Saqib, M., Anwar, S., Usman, M., Akhtar, N., Barnes, N., & Mian, A.
(2024). A Comprehensive Overview of Large Language Models. arXiv. https://arxiv.org/abs/2307.06435
Norman-Webler, T. (2024). How Both Washington and the FTC Miss the Mark on "Notice and Consent."
CMBA News and Information.
https://www.clemetrobar.org/?pg=CMBABlog&blAction=showEntry&blogEntry=104787
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical
Text-Conditional Image Generation with CLIP Latents. https://arxiv.org/abs/2204.06125v1
Ramirez, E., Brill, J., Ohlhausen, M., Wright, J., & McSweeny, T. (2014). Data Brokers: A Call for
Transparency and Accountability. Federal Trade Commission.
Records Maintained on Individuals, Pub. L. No. 5 U.S. Code Section 552a, Legal Information Institute
(1988). https://www.law.cornell.edu/uscode/text/5/552a
Roth, E. (2023, May 18). Google, Meta, and Amazon’s next frontier: AI-generated ads. The Verge.
https://www.theverge.com/2023/5/18/23728256/google-ai-generate-content-advertisers-palm-2
“United States Person” Defined, Pub. L. No. 22 U.S. Code Section 6010, Legal Information Institute
(1992). https://www.law.cornell.edu/uscode/text/22/6010
Wire and Electronic Communications Interception, Pub. L. No. 18 U.S. Code Chapter 119, Legal Information
Institute (1986).
https://www.law.cornell.edu/uscode/text/18/part-I/chapter-119
Author
Capt. Nolan Hedglin commissioned from the United States Military Academy (USMA) in May 2018 as a Cyber
Operations Officer (17A) with a B.S. in Mathematics and a B.S. in Physics. Following West Point, Nolan
matriculated to the Massachusetts Institute of Technology (MIT) and graduated in June 2020 with an M.S.
in Electrical Engineering and an M.S. in Tech Policy.
Following MIT, CPT Hedglin deployed as a Cyber Planner in support of Combined Joint Task Force -
Operation Inherent Resolve in May 2022. Upon returning from deployment, CPT Hedglin served as the
Technical Director for Cyber National Mission Force Task Force 2 before being selected as A/781st MI BN
(Cyber) Company Commander in March 2023. CPT Hedglin served as Company Commander for 15 months,
supporting over 150 Soldiers and Civilians across Cyber National Mission Force (CNMF) Joint Task Forces
Two and Four. Following command, CPT Hedglin served as the 780th MI BDE Plans OIC. He is currently a
Math Instructor at West Point.