The 2026 Fake Account Report
TL;DR
Bot traffic exceeded human traffic for the first time in 2025. This report documents the scale of fake accounts on every major platform, the financial damage they cause, and why post-hoc moderation cannot solve a problem that should be blocked at account creation.
Executive Summary
For the first time in the history of the internet, bots outnumbered humans in 2025. Imperva’s annual bot traffic report found that 51% of all web traffic was generated by automated systems, up from 37% the year before. On social media specifically, every major platform is running a containment operation against fake accounts that it is structurally incapable of winning.
The numbers from 2024 enforcement actions illustrate the scale. X/Twitter suspended 464 million accounts in the first half of 2024 alone. Meta removed 1.4 billion fake accounts in a single quarter. TikTok removed 348 million accounts in Q3 2024. These are not edge cases being mopped up — this is the core traffic management problem of every platform that allows open account creation.
What changed in 2024 and 2025 is not the existence of fake accounts. Those have been around since social media launched. What changed is the economics. Large language models reduced the cost of creating convincing fake identities to near zero. A bot farm that previously required a team of contractors to write posts, respond to comments, and maintain plausible personalities can now run on a single GPU. The FBI seizure of the Russian “Meliorator” software in July 2024 documented exactly how this works at industrial scale: mass AI persona creation, automated cross-platform posting, and network coordination designed to manipulate public discourse.
For ordinary users, the practical result is this: you cannot assume the accounts you interact with represent real people. On several major platforms, the probability that any given account is not operated by a human is somewhere between 10% and 44%, depending on the platform and the topic.
Why This Is Getting Worse, Not Better
Platform incentives are structurally misaligned with fake account removal. More accounts mean higher reported user counts. Higher user counts attract advertisers. Advertisers pay for impressions, many of which are delivered to bots. Removing bots shrinks the numbers that platforms use to justify ad rates.
The detection arms race has also shifted decisively in favor of the fakers. Traditional detection methods flagged accounts based on behavioral signals: posting too fast, too regularly, with too-similar text. Those signals are now easy to defeat. AI personas can post with human-like irregular timing, generate original text on every post, vary their topics, and maintain consistent personalities across thousands of interactions. The behavioral tells that worked against 2019-era bots do not work against 2025-era AI personas.
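The behavioral signals described above (posting too fast, too regularly, with too-similar text) can be sketched as a toy heuristic. This is an illustrative sketch only; the function names and threshold values are assumptions for demonstration, not any platform's actual detection logic, and it shows exactly the kind of check an AI persona with irregular timing and original text defeats.

```python
from itertools import combinations
from statistics import mean, stdev

def interval_regularity(timestamps):
    """Coefficient of variation of the gaps between posts.
    Values near zero mean suspiciously clockwork-like timing."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")
    return stdev(gaps) / mean(gaps)

def text_similarity(posts):
    """Mean pairwise Jaccard similarity over word sets.
    Near-duplicate templated posts score close to 1.0."""
    sets = [set(p.lower().split()) for p in posts]
    pairs = list(combinations(sets, 2))
    if not pairs:
        return 0.0
    return mean(len(a & b) / len(a | b) for a, b in pairs)

def looks_like_2019_era_bot(timestamps, posts,
                            cv_threshold=0.1, sim_threshold=0.6):
    # Thresholds are illustrative; real systems tune them on labeled data.
    return (interval_regularity(timestamps) < cv_threshold
            or text_similarity(posts) > sim_threshold)

# A scripted bot: one post every 600 seconds, templated text.
bot_times = [i * 600 for i in range(10)]
bot_posts = ["Huge deal today click the link 1",
             "Huge deal today click the link 2"] * 5
print(looks_like_2019_era_bot(bot_times, bot_posts))  # → True
```

An LLM-driven persona passes both checks trivially: it can jitter its posting schedule and generate fresh text every time, which is why this class of heuristic no longer works.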
The Meliorator case, unsealed after the FBI seizure in July 2024, described software designed specifically to evade platform detection. It created AI personas with GAN-generated synthetic faces, fabricated backstories, and posting histories seeded to make the accounts look like established users before they were activated for influence operations. The accounts passed manual review. They passed automated detection. They were caught through intelligence gathering, not platform-side detection.
The technical solutions exist. Liveness verification at account creation would eliminate the vast majority of fake accounts overnight: a live human face cannot be fabricated in bulk. But liveness checks add friction, and friction reduces sign-up conversion rates. Every major platform has chosen growth metrics over verification integrity, which is why the problem keeps getting worse.
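A liveness-gated signup flow could be shaped like the sketch below. Everything here is hypothetical: `SignupGate`, the `(passed, identity_hash)` result, and the one-account-per-identity policy are stand-ins for an external face-liveness provider and a deduplication rule, not any platform's real API.

```python
from dataclasses import dataclass, field

@dataclass
class SignupGate:
    """Hypothetical sketch: block account creation unless a live
    human passes a liveness check, and cap accounts per identity."""
    max_accounts_per_identity: int = 1
    # Maps an identity hash (assumed to come from the liveness
    # provider) to the number of accounts created with it.
    _accounts: dict = field(default_factory=dict)

    def create_account(self, username, liveness_result):
        # liveness_result is assumed to be (passed: bool, identity_hash: str)
        # returned by a third-party face-liveness service.
        passed, identity_hash = liveness_result
        if not passed:
            return "rejected: liveness check failed"
        if self._accounts.get(identity_hash, 0) >= self.max_accounts_per_identity:
            return "rejected: identity already registered"
        self._accounts[identity_hash] = self._accounts.get(identity_hash, 0) + 1
        return f"created: {username}"

gate = SignupGate()
print(gate.create_account("alice", (True, "face-hash-1")))   # → created: alice
print(gate.create_account("alice2", (True, "face-hash-1")))  # → rejected: identity already registered
print(gate.create_account("botfarm", (False, "")))           # → rejected: liveness check failed
```

The point of the sketch is the economics: bulk account creation requires bulk distinct live humans, which is exactly the cost that LLM-driven persona farms cannot pay.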
Q&A
What percentage of social media accounts are fake?
It varies by platform, but the numbers are large. Carnegie Mellon/Nature research published in March 2025 estimated that 20% of X/Twitter accounts are bots, with rates of 15% to 44% depending on topic. Instagram hosts roughly 95 million bot accounts, about 10% of its user base. Facebook removed 1.4 billion fake accounts in Q4 2024 alone, yet an estimated 4-5% of monthly active users remain fake despite removal efforts. Imperva's 2025 report found bots account for 51% of all web traffic globally.
How many fake accounts did platforms remove in 2024?
The scale of removals is staggering. X/Twitter suspended 464 million accounts in the first half of 2024. TikTok removed 348 million accounts in Q3 2024. Meta has removed 27.67 billion fake accounts cumulatively since October 2017, including 1.4 billion in Q4 2024 alone. Despite this, fake accounts persist in large numbers: the platforms are running a treadmill, not solving the underlying problem.
What is the financial cost of fake accounts and bot activity?
Ad fraud driven by bots is projected at $41.4 billion in losses for 2025 (Juniper Research). In influencer marketing specifically, 81% of marketing professionals have encountered influencer fraud, with $4.8 billion — 12.4% of total spend — lost to fake engagement. The virtual influencer market (AI-generated personas operating as influencers) reached $6.9 billion in 2024 and is projected to hit $37.8 billion by 2030.