Among all the recent changes in the information environment, the most threatening is often overshadowed by flashier deception techniques like AI-generated videos and images. Deception has always evolved; it did the same with Photoshop years ago. The deeper, more fundamental shift is in how AI determines what information social media users encounter in the first place—and, more importantly, what they never see. Over the last two years, chronological social media feeds based on follower/following networks have quietly been replaced by algorithmically curated ones—like TikTok’s “For You Page.” Now, what users see is determined by what is most likely to capture their attention, regardless of whether they follow the source.
This shift toward “attention-driven feeds” matters because adversaries no longer have to build networks to place misleading content in front of vulnerable audiences. As long as the algorithms deem their content attention-grabbing, adversaries have a direct line to end consumers.
Introduction
TikTok’s curated feed is now ubiquitous across social media, but it was initially a revolutionary departure. Before TikTok, information spread was network-based: users followed friends and creators, whose posts appeared chronologically in their feeds. TikTok’s “For You Page,” by contrast, requires no followers or network building—users don’t even need to log in. The attention-driven feed surfaces content based on what an algorithm predicts will capture a user’s attention—no matter who posted it or when—drawing on signals like which posts a user lingers on or engages with, and refining its recommendations as the user scrolls the infinite feed.
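The mechanism can be illustrated with a deliberately simplified sketch. Nothing here reflects TikTok’s actual system—`Post`, `UserProfile`, the linger-time signal, and the scoring formula are all hypothetical—but it shows the core idea: every candidate post is scored by predicted attention, regardless of who posted it, and only the top scorers reach the feed.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topic: str
    base_engagement: float  # hypothetical platform-wide engagement signal

@dataclass
class UserProfile:
    # Running per-topic affinity, updated as the user lingers or engages.
    topic_affinity: dict = field(default_factory=dict)

    def record_linger(self, topic: str, seconds: float) -> None:
        self.topic_affinity[topic] = self.topic_affinity.get(topic, 0.0) + seconds

def rank_feed(user: UserProfile, candidates: list[Post], k: int = 3) -> list[Post]:
    """Score every candidate post -- no matter who posted it or when -- by
    predicted attention: global engagement weighted by the user's per-topic
    affinity. Return only the top-k."""
    def score(post: Post) -> float:
        return post.base_engagement * (1.0 + user.topic_affinity.get(post.topic, 0.0))
    return sorted(candidates, key=score, reverse=True)[:k]

# Hypothetical demo: a user who lingers on outrage content for 30 seconds.
user = UserProfile()
user.record_linger("outrage", 30.0)
candidates = [
    Post("a", "outrage", 0.2),
    Post("b", "news", 0.5),
    Post("c", "outrage", 0.4),
]
feed = rank_feed(user, candidates, k=2)  # both outrage posts outrank the news post
```

Note the design consequence: the ranking never consults a follower graph, so any account’s post can top the feed if its predicted attention score is high enough.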
TikTok’s approach worked. It became the fastest-growing social media app between 2021 and 2023, prompting nearly all other platforms—Instagram, YouTube, Facebook—to copy its formula with Reels and Shorts to retain users.
Attention-driven feeds didn’t rise in a vacuum
Widespread internet access, mass opportunities for content creation, instant sharing, and AI automation created a mismatch between the ever-expanding volume of online content and humans’ relatively fixed capacity to consume it, bounded by time and willingness. Attention-driven feeds appeared to be the market’s answer to this information overload: they curate content for the user.
I. The Vulnerability
The shift toward attention-based feeds introduces a new societal vulnerability: these feeds reduce users’ control over what information they see and restrict their intake to a narrow slice of content that is neither balanced nor representative of reality, but skewed toward shock value and engagement. The vulnerability has two sides: what is included and what is excluded.
- What Is Included: Attention-based feeds prioritize extreme or emotionally-charged content that aligns with a user’s biases while burying neutral or corrective content. And there are endless opportunities to do so: the vast amount of content on social media means that algorithms can always find content that is highly tailored to any user’s specific array of biases, interests, or grievances.
- What Is Excluded: Attention-based feeds strip away crucial context, misleading users without spreading verifiably false information. The brevity of short-form video makes it easy to take moments out of context (what extremely online users call “clip chimping”). And in an infinite feed that can pull content from any account, revisiting a source after leaving the app is difficult; the feed does not retain its history. Users instead have to trust that their feeds present a comprehensive picture of reality—something they are not designed to do.
In other words, these feeds resemble “echo chambers”—a term popular a decade ago that researchers have since debunked. Back then, echo chambers were imagined as social media feeds that brainwashed users by drowning them in identical opinions. But research showed that people sought out the content they wanted to see by curating their own feeds; they encountered—but ignored—diverse perspectives by choice. Echo chambers were more myth than reality.
II. The Exploit
In this new environment, where information spreads based on attention rather than social networks, adversaries will likely shift their cognitive warfare strategies away from network-based deception (e.g., bot networks and coordinated astroturfing) and toward attention-based techniques. These may include:
- The Whack-A-Mole Exploit: Adversaries no longer need to build bot networks for “astroturfing”—the practice of faking grassroots engagement around posts to propel them to a wider audience. Instead, they can focus on content alone, creating highly personalized, attention-grabbing content that the algorithm will surface to the right users. The more they produce, the more likely neutral or corrective information will be crowded out. This approach creates a “whack-a-mole” conundrum: adversaries can spin up new accounts to pump out content, bypassing the need to build credibility over time and side-stepping traditional counterstrategies that identify and dismantle inauthentic networks.
- Misleading Through Omission: Adversaries will likely produce content that selectively omits information instead of flat-out lying, sliding past traditional fact-checking efforts. After all, it is much harder to spot omissions when sorting through 1,000 pieces of information than through just 10. Rather than “true” vs. “false,” this exploit is better framed as “accurate” vs. “misleading.”
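The crowding-out dynamic behind the whack-a-mole exploit can be sketched as a toy simulation. Everything here is an assumption for illustration—the uniform random engagement scores, the `adversary_boost` multiplier standing in for highly tailored content, and the top-k cutoff standing in for a feed—but it captures the arithmetic: when only the highest-scoring items surface, sheer volume of tailored content shrinks the share of everything else.

```python
import random

def adversary_share(adversary_posts: int, organic_posts: int,
                    adversary_boost: float = 1.1, k: int = 10,
                    seed: int = 0) -> float:
    """Toy model: each post draws a random engagement score, adversarial
    posts get a tailoring boost, and only the top-k scored posts reach the
    feed. Returns the fraction of the feed occupied by adversarial content."""
    rng = random.Random(seed)
    scored = (
        [("adversary", rng.random() * adversary_boost) for _ in range(adversary_posts)]
        + [("organic", rng.random()) for _ in range(organic_posts)]
    )
    feed = sorted(scored, key=lambda item: item[1], reverse=True)[:k]
    return sum(1 for origin, _ in feed if origin == "adversary") / k
```

In this model, raising `adversary_posts` alone tends to raise the adversary’s share of the feed—no account credibility or follower network required—which is why counterstrategies that count accounts or map networks miss it.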
III. The Defense
Any defense against these exploits must adapt to the reality of attention-driven social media. Three recommendations flow from the exploits above:
- Identify Attention-Grabbing Content: Counter-efforts should prioritize detecting impactful narratives over dismantling bot networks or coordinated inauthentic behavior. New strategies must operate at the speed of relevance to identify what content is capturing a vulnerable audience's attention, and therefore what an adversary may try to exploit.
- Focus on Creating Counter-Narratives Over Fact-Checking: To combat an adversary’s exploits of attention-driven feeds, counter-efforts should operate at the narrative level. This means going beyond individual facts to understand how pieces of information are assembled into narratives that resonate with an audience. Solutions may include pre-bunking—anticipating and defusing the misleading messages adversaries will likely push—or creating counter-narratives that are just as attention-grabbing as the misleading content. A good starting point is speaking directly to those being misled and framing the correction around their concerns, since that is what will capture their attention.
- Make Citizens Aware: Counter-efforts should protect U.S. citizens by revamping media literacy education to show how attention-driven feeds omit content and shape a person’s view of current events without outright lying. Teaching about algorithmic omission is fundamental: it is impossible to evaluate information one never encounters in the first place. Media literacy should make users aware of the mechanisms that hide information.
Understanding the shift from networks to attention—or from “social” media to “attention” media—is central to countering the next generation of information operations.
Under information overload, the stakes could not be higher
What has emerged is a “needle-in-the-haystack fallacy”: the belief that every question must have a definitive answer hidden somewhere within the deluge of information. This fallacy overlooks the reality that an answer might not exist, might be incomplete, or might remain inconclusive—at least for the time being. But the desire for a definitive answer compels people to dismiss uncertainty when searching for and evaluating information. People want answers. Solutions must meet them where they are, recognizing that asking them to pay attention to content they deem unworthy of their time is a losing battle.