6 November 2025, Geneva
A new phase of information warfare has arrived, one that is defined by automation instead of ideology. Generative AI has industrialized deception, creating a global marketplace where influence operations are faster, cheaper, and harder to trace than ever before.
Across multiple 2025 election cycles, from the United States to Eastern Europe, ISRS analysts have observed coordinated campaigns deploying synthetic credibility assets: fabricated experts and journalists, together with AI-generated content that blends real and invented data. These assets are trained on localized speech patterns and cultural references, giving them a veneer of authenticity that overwhelms traditional detection systems.
What once required large troll farms now demands only a few operators armed with fine-tuned models and prompt engineering expertise. The new “disinformation stack” includes:
Narrative engines that generate thousands of context-variant posts aligned to emotional triggers.
Voice cloning services to replicate politicians and commentators for use in synthetic calls or videos.
Generative persona networks, complete with social history and interaction patterns, that evolve automatically to mimic human credibility.
The result is an always-on propaganda ecosystem operating at algorithmic scale. While platforms scramble to adapt content moderation tools, state proxies and commercial disinformation vendors have turned the diffusion of falsehood into a subscription service.
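To make the moderation challenge concrete: template-driven narrative engines leave a measurable trace, because even "context-variant" posts tend to reuse phrasing. The minimal sketch below is illustrative only; the posts, shingle size, and threshold are invented for this example. It flags pairs of posts whose word-shingle overlap suggests a common template.

```python
# Minimal sketch: flagging "context-variant" posts that share a template.
# Posts, shingle size, and threshold are invented for this illustration.
from itertools import combinations

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles (overlapping word windows) in a post."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: shared shingles over total distinct shingles."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_template_pairs(posts: dict, threshold: float = 0.5):
    """Yield (id, id, score) for post pairs likely generated from one template."""
    sets = {pid: shingles(text) for pid, text in posts.items()}
    for (p1, s1), (p2, s2) in combinations(sets.items(), 2):
        score = jaccard(s1, s2)
        if score >= threshold:
            yield p1, p2, round(score, 2)

if __name__ == "__main__":
    sample = {
        "a1": "the election was stolen from honest voters in our city",
        "a2": "the election was stolen from honest voters in our town",
        "b1": "local bakery wins regional award for best sourdough loaf",
    }
    print(list(flag_template_pairs(sample)))  # [('a1', 'a2', 0.78)]
```

As noted above, fine-tuned models increasingly defeat such lexical filters by varying wording and register, which is why the recommendations later in this brief emphasize authentication over detection.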
“The defining threat of this decade is not that truth can be faked; it’s that meaning itself is becoming optional, and the real contest is over attention instead of accuracy.”
- Dr. Dave Venable, Chair, ISRS
AI-generated content is implicated in nearly half of misinformation incidents flagged by major OSINT verification projects in Q3 2025.
The number of “synthetic influencer” accounts (e.g., AI-created personas with >10k followers) has tripled since January, with most activity traced to Eastern Europe and Southeast Asia.
Disinformation-as-a-service offerings on Telegram and dark-web markets now sell full campaign packages of 1,000 to 100,000 auto-generated assets for under $5,000, which at the top of that range works out to less than five cents per asset.
Several language models used in active influence operations have been fine-tuned on scraped social data from the targeted region, producing culturally attuned, emotionally resonant messages that evade pattern recognition filters.
The synthetic disinformation boom marks a turning point in modern influence warfare, moving operations out of traditional troll farms and into machine-accelerated psychological operations conducted with commercial tools and minimal human oversight.
Traditional deterrence models fail here because the cost of fabrication approaches zero, while the cost of verification continues to rise. The core asymmetry lies in speed and volume: each fabricated event or quote can be generated in seconds, while validation may take hours or days. This dynamic erodes institutional credibility and exhausts public attention, resulting in what ISRS describes as cognitive saturation.
The implications are profound:
Erosion of epistemic security: Shared factual baselines (what societies collectively believe to be “true”) are collapsing under the weight of synthetic consistency.
Weaponization of authenticity: Deepfakes are no longer designed to fool everyone, but to make everyone doubt everything.
Disinformation as diplomacy: Several authoritarian regimes are now outsourcing narrative campaigns to private AI influence contractors, creating plausible deniability while flooding the infosphere with manufactured consensus.
Collapse of institutional trust: Even genuine experts and journalists increasingly face "reverse credibility attacks," where authentic voices are dismissed as synthetic because they sound or look too polished.
Weaponization of the C-Suite: Highly personalized synthetic content (voice cloning, deepfake video calls) is now the primary vector for sophisticated social engineering, corporate fraud, and espionage, bypassing traditional technical security layers.
Among the indicators ISRS analysts are monitoring:
Sudden emergence of "regional analysts" or "security experts" with no verifiable background but high output consistency.
Cross-platform proliferation of cloned media personalities or synthetic anchors delivering identical narratives in multiple languages.
Growth of commercial narrative-generation APIs offering customizable ideological outputs.
Declining audience engagement with verified journalists as the public drifts away from traditional media channels.
Increased appearance of "meta-disinformation" (claims that everything is AI-generated) further eroding discernment.
ISRS recommends four priority responses:
1. Transition from detection to authentication.
Governments and media institutions must adopt cryptographic content provenance systems (digital signatures, watermarking, and blockchain-based registries) to verify the origin of authentic material; a minimal signing sketch follows.
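The example below shows the core of what provenance verification involves: a publisher signs the SHA-256 digest of a piece of content with an Ed25519 key, and anyone holding the public key can verify origin and integrity. This is an illustrative sketch using Python's cryptography package; real deployments would build on an open standard such as C2PA rather than an ad hoc scheme like this one.

```python
# Minimal sketch of content provenance via digital signatures. Illustrative
# only: real deployments would build on an open standard such as C2PA rather
# than an ad hoc scheme like this one.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_content(key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Publisher side: sign the SHA-256 digest of the content."""
    return key.sign(hashlib.sha256(content).digest())

def verify_content(pub: Ed25519PublicKey, content: bytes, sig: bytes) -> bool:
    """Consumer side: check that the signature matches the content's digest."""
    try:
        pub.verify(sig, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    article = b"Authentic press release text."
    sig = sign_content(key, article)
    print(verify_content(key.public_key(), article, sig))         # True
    print(verify_content(key.public_key(), article + b"!", sig))  # False: tampered
```

Note that the signature proves only origin and integrity, not truthfulness; provenance shifts the question from "is this fake?" to "who stands behind it?", which is precisely the transition from detection to authentication recommended here.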
2. Invest in cognitive security infrastructure.
National resilience programs should include public education on synthetic persuasion recognition, emotional manipulation awareness, and critical digital literacy. The goal is not only to expose falsehoods but to inoculate populations against manipulation itself.
3. Regulate the synthetic content supply chain.
International frameworks must address the accountability gap for model owners, synthetic media vendors, and API platforms that knowingly facilitate covert influence operations.
4. Develop AI counterintelligence capabilities.
States and institutions need autonomous detection and attribution engines capable of mapping coordinated synthetic activity in near real time, as sketched below.
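As one narrow illustration of what such an engine might look for, the sketch below flags groups of accounts that post near-identical content inside a short time window. Account names, data, and thresholds are invented for the example; an operational system would fuse many more signals, including posting cadence, follower-graph structure, media reuse, and metadata.

```python
# Minimal sketch of coordination mapping: flag groups of accounts posting
# near-identical content inside a short time window. Account names, data,
# and thresholds are invented; an operational system would fuse many more
# signals (posting cadence, follower graphs, media reuse, metadata).
from collections import defaultdict
from datetime import datetime, timedelta

def fingerprint(text: str) -> str:
    """Crude content fingerprint: lowercase words, punctuation stripped."""
    words = ("".join(ch for ch in w if ch.isalnum()) for w in text.lower().split())
    return " ".join(w for w in words if w)

def coordinated_groups(posts, window=timedelta(minutes=10), min_accounts=3):
    """posts: iterable of (account, timestamp, text) tuples.
    Returns (fingerprint, accounts) clusters that crossed the threshold."""
    recent = defaultdict(set)  # fingerprint -> {(account, timestamp)} in window
    flagged = {}
    for account, ts, text in sorted(posts, key=lambda p: p[1]):
        fp = fingerprint(text)
        # keep only matching posts that are still inside the time window
        recent[fp] = {(a, t) for a, t in recent[fp] if ts - t <= window}
        recent[fp].add((account, ts))
        accounts = {a for a, _ in recent[fp]}
        if len(accounts) >= min_accounts:
            flagged[fp] = sorted(accounts)
    return list(flagged.items())

if __name__ == "__main__":
    t0 = datetime(2025, 11, 6, 12, 0)
    feed = [
        ("acct_1", t0, "Breaking: officials admit the vote count is rigged!"),
        ("acct_2", t0 + timedelta(minutes=2), "breaking officials admit the vote count is rigged"),
        ("acct_3", t0 + timedelta(minutes=5), "Breaking: officials ADMIT the vote count is rigged"),
        ("acct_4", t0 + timedelta(hours=3), "Lovely weather in Geneva today."),
    ]
    for fp, accounts in coordinated_groups(feed):
        print(accounts, "->", fp)
```

The design choice worth noting is that coordination, not content, is the signal: any individual post may be unremarkable, while its synchronized appearance across many accounts is not.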
The synthetic disinformation boom marks a deeper shift in modern conflict: from fighting over facts to competing for belief itself. Generative AI has made deception scalable and trust expendable, eroding the shared reality on which democratic governance depends. The next frontier of national defense will not be built on firewalls or filters, but on cognitive resilience, the ability of citizens and institutions to recognize manipulation without retreating into cynicism.
Prepared by:
ISRS Strategic Advisory & Risk Analysis Unit
Geneva, Switzerland
About ISRS
The Institute for Strategic Risk and Security (ISRS) is an independent, non-profit, non-governmental organization focused on global risk and security.
Copyright (c) 2025, Institute for Strategic Risk and Security