Fake Accounts on Social Media: The 2026 Scale

Last updated: April 5, 2026

TLDR

In 2024 alone, Meta actioned 4.3 billion fake accounts. TikTok removed 348 million in a single quarter. X suspended 464 million for manipulation in six months. Despite these removals, fake accounts regenerate faster than platforms pull them down — because the underlying incentive structure rewards their existence.

DEFINITION

Fake Account
Any social media account that misrepresents the identity of the operator — including automated bots, AI-generated personas, sock puppets (multiple accounts operated by one person), compromised accounts operated by unauthorized parties, and accounts created by humans for deceptive purposes.

DEFINITION

AI Persona
An AI-generated fake account designed to pass as a real person, typically with a consistent identity, profile photo generated by a diffusion model, and AI-written content. More sophisticated than traditional spam bots.

DEFINITION

Bot Farm
A network of automated accounts controlled by a single operator, designed to simulate large-scale human activity for ad fraud, influence operations, or engagement farming.

Platforms remove billions of fake accounts per year. The number visible at any moment is only what hasn’t been caught yet. No one knows the real total — including the platforms.

This page collects the best available numbers, organized by platform, from enforcement reports, independent research, and government actions.

Why the Scale Is Hard to Measure

There are three distinct measurement problems:

Definitional disagreement. Platforms count “fake accounts” narrowly. A bot that passed an initial check and has been dormant for months may not be flagged. An AI persona that generates coherent content may not trigger automated detection. Academic researchers and platform security teams work from different definitions.

Detection lag. Enforcement data reflects accounts caught and removed. It doesn’t reflect accounts that exist and haven’t been detected. The ratio of detected to undetected fakes is not publicly known.

Platform incentive to undercount. Fake account rates affect advertiser confidence and valuation metrics. Platforms have financial motivation to report low estimates. This doesn’t mean reported figures are wrong, but it’s context worth keeping in mind when reading official disclosures.

Platform-by-Platform Data

Meta (Facebook and Instagram)

Meta is the most transparent about fake account enforcement, partly because it’s the largest platform and partly because it faces the most regulatory scrutiny.

In 2024, Meta actioned 4.3 billion fake accounts across its platforms. Cumulatively since October 2017, that figure is 27.67 billion. In Q4 2024 alone, Facebook removed 1.4 billion fake accounts.

Despite this scale of removal, Meta’s own SEC filings estimate that fake accounts persist at approximately 4–5% of monthly active users, roughly 120–140 million accounts at any given time. Security researchers separately estimate Instagram hosts approximately 95 million bot accounts, roughly 10% of its user base.

The arithmetic is telling: Meta removes billions of fake accounts per year and still has over 100 million at any given moment. Fake account creation is automated and cheap; removal is expensive and depends on manual review.
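
To make that arithmetic concrete, here is a back-of-envelope Python sketch. The steady-state assumption and the 130 million midpoint are ours, not Meta’s: if the standing population holds roughly constant while billions are removed per year, Little’s law implies the average detected fake survives only about eleven days, and creation must run at roughly the same pace as removal.

```python
# Back-of-envelope, using the figures cited above. If removals run at
# ~4.3B/year while ~130M fakes exist at any moment, a steady state
# implies creation roughly matches removal, and Little's law
# (population = arrival rate x average lifetime) gives the average
# time a detected fake survives before takedown.
removed_per_year = 4.3e9        # Meta, 2024
standing_population = 130e6     # midpoint of the 120-140M estimate

removals_per_day = removed_per_year / 365
avg_lifetime_days = standing_population / removals_per_day

print(f"{removals_per_day / 1e6:.1f} million removals/day")        # ~11.8
print(f"~{avg_lifetime_days:.0f} days average survival per fake")  # ~11
```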

X (formerly Twitter)

X suspended 464 million accounts for platform manipulation in the first half of 2024. Carnegie Mellon University, in research published in Nature in March 2025, estimated that approximately 20% of X accounts are bots — a figure that ranges from 15% to 44% depending on topic.

X disputes these figures and uses its own methodology, which produces lower estimates. The methodological dispute is long-running and unresolved.

What is documented: approximately 10,000 daily active X accounts use AI-generated synthetic faces, based on conservative floor estimates from academic research. X’s revenue-sharing program for creators — which pays based on engagement — creates a direct financial incentive for bots to generate engagement, complicating enforcement.

TikTok

TikTok removed 348 million fake accounts in Q3 2024 — roughly 116 million per month, or about 3.9 million per day. TikTok is newer than Facebook and has invested heavily in automated enforcement, but the volume of removal indicates the underlying creation rate is enormous.
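
The per-month and per-day figures follow from straightforward division. For context, the same normalization puts all three platforms’ cited enforcement figures on a common per-day scale (a quick Python sketch; period lengths are approximated):

```python
# Normalize each cited enforcement figure to removals per day,
# approximating a year as 365 days, a half-year as 180, a quarter as 90.
figures = {
    "Meta (full year 2024)": (4.3e9, 365),
    "X (H1 2024)":           (464e6, 180),
    "TikTok (Q3 2024)":      (348e6, 90),
}

for platform, (removed, days) in figures.items():
    print(f"{platform}: {removed / days / 1e6:.1f} million removals/day")
# Meta ~11.8M/day, X ~2.6M/day, TikTok ~3.9M/day
```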

Bluesky

Bluesky is smaller and newer, which makes detailed statistics harder to cite. However, security researchers have documented 15,000+ spam accounts and found that 44% of its top-100 most-followed accounts had impersonation copycats. Bluesky’s decentralized architecture (built on the AT Protocol) creates enforcement challenges that centralized platforms don’t face — there is no single operator with authority to remove accounts across all instances.

LinkedIn

LinkedIn’s enforcement data is less granular than Meta’s. The platform faces persistent fake recruiter and business-development bots — often more sophisticated than social spam bots because they target professional contexts where a credible persona has direct commercial value.

Types of Fake Accounts

The category “fake account” covers several distinct phenomena:

Spam bots are the oldest and most common type. Automated software creates accounts to post links, promote services, or generate engagement. Detection methods are well-developed. Platforms remove billions per year.

AI personas are the fastest-growing threat. A diffusion model generates a profile photo. A language model writes the account’s posts. The persona has a name, history, and consistent voice. These accounts are designed to pass as real people and are significantly harder to detect than traditional spam bots. The FBI’s July 2024 seizure of the Russian “Meliorator” bot farm documented this approach: custom software for mass AI persona creation used in influence operations targeting Western audiences.

Sock puppets are multiple accounts operated by a single person to amplify their own views or harass targets. They are more labor-intensive than bots and harder to detect, because each account is run by a real human whose behavior looks authentic.

Compromised accounts are real accounts taken over by unauthorized operators — typically through phishing or credential stuffing. These accounts have legitimate history and pass behavioral detection.

Bot farms are organized operations running large numbers of accounts under centralized control. The business model varies: some sell followers and engagement (follow farms), some conduct influence operations for state or political actors, some engage in ad fraud.

The AI Upgrade

The fake account problem existed before large language models. AI made it significantly worse along two dimensions:

Economics. Creating a convincing fake persona historically required human labor — writing posts, maintaining consistency, generating plausible photos. AI reduced the marginal cost of persona creation to near zero. A single operator can run thousands of convincing personas where before they could run dozens.

Detection difficulty. Behavioral detection works by identifying patterns: regular posting intervals, templated language, suspicious follow/unfollow ratios. AI personas break these patterns. They post at human-like irregular intervals. They write in varied, contextually appropriate language. They engage with other content in ways that look organic. Detection systems trained on 2018 bot behavior fail on 2025 AI personas.
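
As a toy illustration of why, here is one classic behavioral signal in Python: the regularity of gaps between posts. This is not any platform’s actual detector; the feature and the numbers are purely illustrative. A timer-driven bot scores near zero, while a persona that samples human-like gaps looks like a person on this feature.

```python
import random
import statistics

def interval_regularity_score(post_timestamps: list[float]) -> float:
    """Coefficient of variation of the gaps between posts.
    Old-style bots post on a timer, so their gaps are nearly identical
    and the score is near zero; human posting is irregular."""
    gaps = [b - a for a, b in zip(post_timestamps, post_timestamps[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

# A 2018-style spam bot posting exactly once an hour: trivially flagged.
timer_bot = [i * 3600.0 for i in range(50)]

# An AI persona sampling irregular, human-looking gaps: passes this check.
random.seed(0)
t, persona = 0.0, []
for _ in range(50):
    t += random.lognormvariate(8, 1)  # noisy, human-scale gaps
    persona.append(t)

print(f"timer bot:  {interval_regularity_score(timer_bot):.2f}")  # 0.00
print(f"AI persona: {interval_regularity_score(persona):.2f}")    # well above zero
```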

The virtual influencer market — AI-generated characters marketed as real personalities for brand partnerships — reached $6.9 billion in 2024. This represents a segment where AI personas have commercial legitimacy and high production values. The same underlying technology used for virtual influencers is available to anyone running a bot farm.

What Platforms Report vs. What Independent Research Finds

The gap between self-reported figures and independent research estimates is consistent across platforms:

Platforms cite fake account rates under 5%. Independent researchers find 10–20%+. The discrepancy comes from methodology (platforms use narrow definitions; researchers cast wider nets) and incentive (low reported rates protect advertiser confidence and valuation).

Neither figure is necessarily wrong — they’re measuring different things. But for a user trying to understand what percentage of the engagement they see on any platform is genuine, the independent research figures are more relevant than the platform self-reports.
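
A toy illustration of that point: the same account sample yields very different “fake” rates depending on which rule you apply. The accounts and rules below are invented for illustration; real methodologies are far more involved.

```python
# 100 synthetic accounts, 25 of each kind (invented for illustration).
accounts = [
    {"automated": True,  "impersonates_human": True},   # spam bot
    {"automated": True,  "impersonates_human": False},  # self-declared bot
    {"automated": False, "impersonates_human": True},   # sock puppet
    {"automated": False, "impersonates_human": False},  # real user
] * 25

# Narrow, platform-style rule: fake only if automated AND impersonating.
narrow = sum(a["automated"] and a["impersonates_human"] for a in accounts)
# Wide, researcher-style rule: fake if automated OR impersonating.
wide = sum(a["automated"] or a["impersonates_human"] for a in accounts)

print(f"narrow definition: {narrow}% fake")  # 25%
print(f"wide definition:   {wide}% fake")    # 75%
```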

The Incentive Problem

Fake accounts generate engagement. Engagement drives ad revenue. Platforms measure success in engagement volume. Removing fakes reduces all three. That tension between platform integrity and growth metrics is the underlying reason enforcement has limits.

X’s revenue-sharing program illustrates this clearly. Paying creators a share of revenue based on engagement turns engagement into a commodity with direct monetary value. Bots that generate engagement are, from a platform revenue perspective, not entirely unwelcome. The cost falls on the creators who get paid less per real impression and on the users who see inflated engagement metrics.

Bad bots accounted for 37% of all web traffic in 2024, and total automated traffic (good and bad bots combined) crossed 51%. The majority of internet traffic is now automated. Social platforms are not immune to this; they are one of the primary targets.

Why Detection Alone Isn’t Enough

Detection improves. AI personas get more convincing in parallel. You’re always playing defense against an offensive capability with a faster iteration cycle. Detection catches known patterns. Novel techniques bypass it until the system catches up.

The alternative is verification at signup — not detecting fake accounts after creation, but making them structurally expensive to create. If creating an account requires a liveness check (a 60-second process that a live human passes once), the cost of running a bot farm increases dramatically. You can’t automate around a real-time liveness challenge.
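
A minimal sketch of that gate, in Python. Everything here is hypothetical: FakeLivenessProvider stands in for whatever real-time challenge a platform would run, and none of the names reflect an actual vendor API. The point is structural, not cryptographic: account creation simply cannot complete without a live, one-time pass.

```python
from dataclasses import dataclass

@dataclass
class LivenessResult:
    passed: bool

class FakeLivenessProvider:
    """Stand-in for a real liveness vendor; always passes. Demo only."""
    def run_challenge(self, session_id: str) -> LivenessResult:
        # A real provider would issue a real-time challenge here
        # (e.g. prompted head movements) that cannot be pre-recorded.
        return LivenessResult(passed=True)

def create_account(handle: str, session_id: str, provider) -> dict:
    # The one-time gate: no passing liveness result, no account.
    result = provider.run_challenge(session_id)
    if not result.passed:
        raise PermissionError("liveness check failed; account not created")
    # No real name or phone number is collected -- only the fact that
    # a live human completed the challenge at creation time.
    return {"handle": handle, "human_verified": True}

print(create_account("example_user", "sess-123", FakeLivenessProvider()))
```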

That’s the bet Truliv is making. The platform doesn’t require a real name or a phone number. Just proof that a live human started the account. One check, at the point of creation. It doesn’t prevent all abuse — a determined person can verify multiple accounts over time — but it breaks the economics of bot farms at scale.

Truliv requires one liveness check at signup. Every account is provably human. The 30-day free trial is open.

Q&A

How many fake accounts are on social media?

The total is not knowable with certainty, but platform enforcement data gives a sense of scale. Meta actioned 4.3 billion fake accounts in 2024 alone (27.67 billion cumulatively since 2017). TikTok removed 348 million in Q3 2024. X suspended 464 million for manipulation in H1 2024. Instagram has an estimated 95 million bot accounts. Despite these removals, fake accounts regenerate faster than they're removed.

Q&A

Are most social media accounts fake?

Probably not most accounts, but a significant minority. Platforms that self-report tend to cite 5% or less, but independent researchers consistently find higher figures. Carnegie Mellon research published in Nature estimated 20% of X accounts are bots, ranging from 15% to 44% by topic. The more accurate framing: a meaningful percentage of the engagement you see on any major platform comes from automated accounts.

Q&A

Why do platforms allow fake accounts to persist?

Misaligned incentives. Fake accounts generate engagement (likes, comments, shares) that drives ad revenue. Platforms measure success in engagement metrics. Removing fake accounts reduces those metrics. There are exceptions — platforms do large enforcement actions — but the economic incentive is not cleanly aligned with elimination. X's revenue-sharing program for creators effectively rewards bots that generate engagement.

Want to be first on a human-only network?

Try Truliv free — no credit card required.

See plans & pricing

Frequently asked

Common questions before you try it

What's the difference between a bot and an AI persona?
A traditional bot is automated software that takes actions (like, follow, post) without any attempt to simulate a real person. An AI persona is designed to impersonate a human — it has a generated profile photo, a consistent name and backstory, and AI-written content that mimics natural speech. AI personas are significantly harder to detect automatically and are increasingly common in influence operations.
What is a bot farm?
A bot farm is an organized operation running large numbers of fake accounts under centralized control. The operator uses custom software to coordinate account behavior — posting, engaging, amplifying — at scale. In July 2024, the FBI seized a Russian bot farm called 'Meliorator' that used AI-generated personas for mass influence operations across multiple Western social platforms.
Do platforms accurately report their fake account rates?
Platform self-reporting is consistently lower than independent research findings. Meta reports fake accounts at under 5% of MAUs; security researchers find higher rates. Carnegie Mellon found that 20% of X accounts are bots, using a methodology that X disputes. The discrepancy is partly definitional (platforms use narrow definitions of 'fake') and partly incentive-driven (higher fake account rates are bad for advertiser confidence).
Why are AI-generated fake accounts harder to detect?
Traditional bots are identified by behavioral patterns: posting at regular intervals, using templated language, following/unfollowing in bulk. AI personas mimic human variation. They post at irregular hours, use varied language, maintain consistent identities across time, and have AI-generated photos that pass reverse image search. Detection methods that worked on 2018 bots fail on 2025 AI personas.