The Information War Won’t Be Won Behind Gates
Information integrity demands public visibility as strategic defense
In 2025, we're witnessing a dramatic transformation of America's information landscape. Public data is vanishing, independent media is under attack, and protections for researchers and organizations are being rolled back. As accurate information disappears or is altered, bad actors are stepping in, manipulating search results and Wikipedia entries, poisoning AI training data, and ensuring their narratives dominate the spaces where people look for facts.
At the same time, the way Americans get information has fundamentally changed. People turn to Google and AI for answers, expecting instant information from search summaries and AI-generated responses, and increasingly accepting these answers without further verification.
Our response can't be to retreat behind paywalls or chatbot gates, or to simply disengage from these spaces. That only cedes more ground to misinformation. Instead, we must actively fill these gaps with open, accessible content that can compete with and resist manipulated narratives. In a world where public understanding is shaped by what appears in instant answers, accessibility itself becomes a form of information defense.
For anyone working in information integrity, this shift demands a fundamental recalculation of how we fight back.
The Rise of Zero-Click Information and Its Consequences
Here's what's actually happening with information consumption in 2025:
Most people trust Google and AI answers enough to form beliefs based on what they see in search results, and they're doing it faster than ever. With 60% of searches ending without a click and Google remaining the most trusted information source, Americans increasingly get their information directly from search summaries, AI answers, and featured snippets rather than clicking through to original sources.
The problem is that this system is increasingly compromised. When people want to fact-check something confusing, they Google it. But research shows people are 20% more likely to believe falsehoods after googling them, and users will adopt or reverse their beliefs based on what appears in these instant answers.
Meanwhile, bad actors are gaming this system. They're manipulating Wikipedia entries, poisoning AI training data, and optimizing content to ensure their narratives appear in the trusted spaces where Americans look for facts. With attention spans down to 6-8 seconds and users expecting answers within 3 seconds, the first information people see often becomes the information they believe.
This means that if you want to counter misinformation effectively, you have to show up in these public spaces with openly accessible content that can compete with and resist manipulated narratives.
The Poisoning of Public Information Infrastructure
The combination of systematic content removal and active manipulation campaigns creates a particularly dangerous situation for public understanding.
This systematic poisoning of our public information infrastructure is happening by design. At the same time that Project 2025 is being implemented, accurate information is being altered or removed from public knowledge, while harmful narratives proliferate unchecked. AI systems are increasingly being trained on disinformation and manipulated content.
Data voids don't stay empty. When climate data disappears, when LGBTQ+ data is erased, when public health information vanishes, or when equity-based research is altered or removed, the information environment becomes primed for manipulation. When important information also moves behind paywalls or chatbot gates, we lose resilience: the gaps that open up get filled by whatever content is accessible to the algorithms that generate instant answers. And right now, hostile actors are working to make sure that's their content.
This leaves space for manipulated content to dominate the search feeds and AI training data that millions of people rely on daily.
The Limits of Gated Content in the Fight Against Misinformation
Given this reality, gated approaches to counter-misinformation inadvertently become part of the problem.
Let’s address something directly: a false binary has emerged between intervention strategies, pitting chatbots against open-access approaches like search interventions, as if we need to prove one is universally 'better.' This misses the point entirely.
Different problems require different tools and approaches; each intervention strategy serves specific needs. Chatbots, for example, are strong behavioral intervention tools, excellent for individual belief change, privacy, and sensitive contexts. Search interventions are ecosystem-level defense tools, essential for shaping how millions of people discover information. Comparing them is like comparing a scalpel to a fire truck: both save lives, but you don't perform surgery with a fire truck or fight fires with a scalpel.
But when we're facing ecosystem-level threats to accurate information, friction works against reach. Gated content is gated for a reason: it's exclusive, not meant for broad consumption. And each step you add makes things harder: view the ad, click the ad, land on the page, engage with the chatbot. Each barrier cuts the number of people who actually receive accurate information, and the losses compound, as the sketch below illustrates. Meanwhile, misinformation appears instantly in search results with zero barriers.
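To make that compounding concrete, here's a minimal sketch in Python. The per-step conversion rates are hypothetical, chosen only to illustrate the arithmetic; real numbers vary widely by campaign and audience.

```python
# Minimal sketch: how per-step drop-off compounds in a gated funnel.
# All rates below are hypothetical, for illustration only.
funnel = [
    ("sees the ad",         1.00),  # baseline: the reachable audience
    ("clicks the ad",       0.02),  # display click-through rates are typically low
    ("stays on the page",   0.60),  # share who don't bounce immediately
    ("engages the chatbot", 0.30),  # share who start a conversation
]

reach = 1.0
for step, rate in funnel:
    reach *= rate
    print(f"{step:<20} {reach:.2%} of the original audience remains")

# With these assumed rates, only ~0.36% of the exposed audience ever
# reaches the accurate information. A manipulated search snippet, by
# contrast, reaches every searcher with zero intermediate steps.
```

Even when each individual step looks reasonable on its own, the multiplication is brutal. That structural disadvantage, not the quality of the content, is what gated strategies are up against.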
AI systems learn from available content. When accurate information is gated while misinformation flows freely, training data gets skewed toward whatever is publicly accessible. The systems providing instant answers to millions of users learn to cite (even if poorly) the propaganda that fills those gaps; a rough illustration of the skew follows.
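Here's a back-of-the-envelope version of that skew, with invented page counts purely to show the mechanism:

```python
# Hypothetical page counts on a contested topic, for illustration only.
accurate_pages    = 1000   # assumed: total accurate pages on the topic
manipulated_pages = 200    # assumed: total manipulated pages
gated_fraction    = 0.8    # assumed: share of accurate pages behind gates

# A crawler building training data only sees what is publicly accessible.
crawlable_accurate = accurate_pages * (1 - gated_fraction)
full_share = manipulated_pages / (accurate_pages + manipulated_pages)
crawlable_share = manipulated_pages / (crawlable_accurate + manipulated_pages)

print(f"manipulated share of the full corpus:      {full_share:.0%}")
print(f"manipulated share of the crawlable corpus: {crawlable_share:.0%}")
```

Under these assumptions, the manipulated share jumps from 17% to 50%. Nothing about the misinformation changed; gating the accurate side alone tripled its apparent weight in what the models see.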
I want to be clear: private tools serve important purposes, especially for sensitive situations like reproductive healthcare where confidentiality can literally save lives. But as primary strategies for countering misinformation at scale, gated approaches work against how people actually consume information in 2025.
Open Information as the First Line of Defense
Truth be told, this moment calls for multi-layered strategies that defend information integrity, ensure public access and visibility, and build trust, while also protecting private access where necessary, such as for sensitive or confidential information.
To truly defend information integrity, we need more than takedowns, fact-checks, and monitoring. We need infrastructure that actively resists manipulation and serves the public good, and we must operate in the spaces where the public engages and discovers information and where AI systems learn what to consider authoritative.
Perfect conditions for fighting misinformation don’t exist, and we can’t wait for Big Tech to fix this. While we build that broader infrastructure, millions of people are getting their information from these flawed systems every day. Bad actors are systematically filling information gaps while accurate sources retreat behind gates.
The choice organizations make about how to approach information access is no longer a strategic preference. It’s deciding whether to defend or abandon the information environment that democracy depends on.
If this resonates with you, I'd love to hear from you. Drop a comment or reach out directly.
If you think others need to see this, please share it. And subscribe if you want more honest conversations.