Bot Detection Playbook
TLDR
Social media is full of accounts that are not operated by real people. This playbook catalogs every major type of fake account, the detection signals that expose them, and what actual human verification looks like.
The Fake Account Taxonomy
Not all fake accounts work the same way or serve the same purpose. Understanding the categories makes detection much easier because each type leaves different fingerprints.
Spam bots. The oldest and most obvious category. These accounts exist to post links. They reply to popular tweets with crypto scams, drop phishing URLs in Instagram comments, and DM people on LinkedIn with fake job offers. They are high volume and low sophistication, and platforms eventually catch most of them. The ones that survive usually have stolen profile photos, generic bios, and posting histories that are 100% promotional links with zero genuine interaction.
Engagement farms. Coordinated networks of accounts that boost content artificially. A company or political operation pays for thousands of likes, retweets, or comments to make content appear more popular than it is. The individual accounts often look semi-legitimate because the operators put effort into making them pass casual inspection. They have profile pictures, some original posts, and follow a plausible mix of accounts. The coordination is what gives them away, not any single account in isolation.
AI personas. The newest and fastest-growing category. These are accounts operated entirely by language models. They post original text, respond to comments, and maintain consistent personalities. Unlike spam bots, they are not trying to sell anything directly. They generate engagement that platforms reward with algorithmic visibility, which is then monetized through ads, affiliate links, or influence campaigns. Some AI personas are transparent about being AI. Most are not.
Astroturf accounts. Fake grassroots campaigns. A company, political organization, or government creates dozens or hundreds of accounts that all push the same narrative while pretending to be independent individuals. Unlike engagement farms (which just boost numbers), astroturf accounts try to shape opinion. They post in local subreddits, community Facebook groups, and niche forums where genuine voices carry weight.
Sock puppets. One person operating multiple accounts to create the illusion of broader support. Common in online arguments, product reviews, and forum discussions. A single person posting from three accounts saying “I agree with this” creates a manufactured consensus that influences bystanders.
Compromised accounts. Real accounts that have been hacked and repurposed. These are dangerous because they have real histories, real followers, and real engagement patterns. When a hacked account starts posting crypto links, the account’s genuine history makes the spam seem more credible.
Detection Signals That Work Across Platforms
These signals are not platform-specific. They work on Twitter, Instagram, LinkedIn, Reddit, Facebook, and most other social networks. No single signal is conclusive by itself. Look for clusters.
Posting cadence. Humans are inconsistent. They post in bursts, go quiet for days, and have irregular schedules. Bots and automated accounts tend to post at regular intervals, often around the clock. If an account posts every 47 minutes, 24 hours a day, it is not a person with insomnia. Check the timestamps on the last 20-30 posts. Real people have gaps. Automated accounts do not.
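Here is a minimal sketch of that cadence check in Python, assuming you have already collected the post timestamps yourself; the 0.25 cutoff is an illustrative assumption, not a calibrated threshold:

```python
from datetime import datetime, timedelta
from statistics import mean, stdev

def cadence_looks_automated(timestamps: list[str], cv_cutoff: float = 0.25) -> bool:
    """Flag accounts whose gaps between posts are suspiciously uniform.

    Humans post in bursts with long gaps, so the coefficient of
    variation (stdev / mean) of their inter-post gaps is high;
    a scheduler drives it toward zero.
    """
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    if len(gaps) < 10:
        return False  # too few posts to judge either way
    return stdev(gaps) / mean(gaps) < cv_cutoff

# A post every 47 minutes, around the clock, for 30 posts:
start = datetime(2024, 5, 1)
bot_times = [(start + i * timedelta(minutes=47)).isoformat() for i in range(30)]
print(cadence_looks_automated(bot_times))  # True: the gaps never vary
```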
Engagement ratios. An account with 50,000 followers that gets 3 likes per post is suspicious. So is an account with 200 followers that gets 5,000 likes on every post. Real accounts have messy, variable engagement. Some posts do well, some flop. Fake engagement tends to be suspiciously consistent.
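A rough sketch of those ratio checks follows; the thresholds are illustrative assumptions, not platform benchmarks:

```python
from statistics import mean, stdev

def engagement_flags(followers: int, likes_per_post: list[int]) -> list[str]:
    flags = []
    avg = mean(likes_per_post)
    ratio = avg / max(followers, 1)  # average likes per follower
    if followers > 10_000 and ratio < 0.0005:
        flags.append("big audience, almost no engagement: followers may be bought")
    if followers < 1_000 and ratio > 5:
        flags.append("tiny audience, huge engagement: likes may be bought")
    # Real engagement is noisy; near-identical like counts on every post
    # are the "suspiciously consistent" pattern described above.
    if len(likes_per_post) >= 10 and stdev(likes_per_post) < 0.1 * max(avg, 1):
        flags.append("suspiciously consistent like counts")
    return flags

# 50,000 followers but roughly 3 likes per post:
print(engagement_flags(50_000, [3, 4, 2, 3, 3, 4, 3, 2, 3, 3]))
```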
Profile age vs. activity level. An account created last month that already has 10,000 posts is either automated or bought. Real account growth is gradual. Fast-growing accounts that post heavily from day one are usually purpose-built for a campaign.
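The arithmetic is trivial but worth making explicit. A sketch, assuming you can read the creation date and total post count from the profile:

```python
from datetime import datetime

def posts_per_day(created: str, total_posts: int, as_of: str) -> float:
    """Average posting rate over the account's whole lifetime."""
    age_days = max((datetime.fromisoformat(as_of)
                    - datetime.fromisoformat(created)).days, 1)
    return total_posts / age_days

# 10,000 posts from an account created a month ago:
rate = posts_per_day("2024-04-01", 10_000, "2024-05-01")
print(f"{rate:.0f} posts/day")  # ~333/day, far past any plausible human pace
```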
Content originality. Copy a sentence from a suspicious post and search for it in quotes on Google. If the same text appears on dozens of other accounts, you've likely found a coordinated campaign. Engagement farms often distribute the same talking points across their network with minor rephrasing.
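Beyond manual quoted searches, the same idea scales to any corpus of posts you've collected yourself. A minimal sketch using word-shingle Jaccard similarity to catch "same talking points, minor rephrasing" (the example posts are invented):

```python
def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """All runs of n consecutive words, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of the two posts' word shingles (0 to 1)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa and sb else 0.0

post_a = "This new policy is exactly what our community needs right now"
post_b = "This new policy is exactly what our town needs right now"
print(f"{similarity(post_a, post_b):.2f}")  # 0.50, far above unrelated posts
```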
Reply patterns. Real humans reply to varied content. Bots reply to high-visibility posts with generic comments (“Great post!” “So true!” “This is amazing!”). Check what an account is replying to. If every reply is on a viral tweet and the reply adds nothing substantive, it is farming engagement.
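A sketch of that reply audit; the stock-phrase list is an illustrative assumption you would extend from what you actually see:

```python
GENERIC = {"great post", "so true", "this is amazing", "love this", "well said"}

def generic_reply_share(replies: list[str]) -> float:
    """Fraction of replies that are stock phrases or near-empty."""
    def is_generic(reply: str) -> bool:
        cleaned = reply.lower().strip(" !.?")
        return cleaned in GENERIC or len(cleaned.split()) <= 2
    return sum(is_generic(r) for r in replies) / max(len(replies), 1)

replies = ["Great post!", "So true!!", "This is amazing",
           "Interesting point about the rate-limit change"]
print(f"{generic_reply_share(replies):.0%}")  # 75%
```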
Bio structure. Certain bio formats are bot tells. Multiple flag emojis in a row. Crypto-related keywords combined with “dad/mom” and “freedom.” Suspiciously generic descriptions like “lover of life, coffee, and good vibes.” No single bio element is proof, but the patterns cluster.
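Those tells can be encoded as rough heuristics. The regexes below are illustrative and will need tuning; note that each flag emoji is a pair of Unicode regional-indicator characters, so a run of two flags is four such characters:

```python
import re

# Two or more flag emojis in a row (each flag = 2 regional indicators).
FLAG_RUN = re.compile(r"[\U0001F1E6-\U0001F1FF]{4,}")
CRYPTO = re.compile(r"\b(crypto|bitcoin|nft|web3)\b", re.IGNORECASE)
DAD_MOM = re.compile(r"\b(dad|mom|father|mother)\b", re.IGNORECASE)
FREEDOM = re.compile(r"\bfreedom\b", re.IGNORECASE)

def bio_tells(bio: str) -> list[str]:
    tells = []
    if FLAG_RUN.search(bio):
        tells.append("run of flag emojis")
    if CRYPTO.search(bio) and DAD_MOM.search(bio) and FREEDOM.search(bio):
        tells.append("crypto + dad/mom + freedom combination")
    return tells

print(bio_tells("Proud dad. Freedom lover. Bitcoin maximalist. 🇺🇸🇺🇸🇺🇸"))
```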
Follower networks. If you check an account’s followers and most of them have default profile pictures, no posts, and follow thousands of accounts, the main account bought followers. Legitimate follower lists are messy. They contain a mix of active and inactive accounts, but the active ones have their own genuine posting history.
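A sketch of that follower-sample audit; the dict fields are hypothetical stand-ins for whatever your scraper collects per follower, and the per-follower criteria mirror the tells above:

```python
def hollow(follower: dict) -> bool:
    """A follower with no avatar, no posts, and mass-following behavior."""
    return (follower["default_avatar"]
            and follower["post_count"] == 0
            and follower["following_count"] > 2_000)

def bought_follower_share(sample: list[dict]) -> float:
    return sum(hollow(f) for f in sample) / max(len(sample), 1)

sample = [
    {"default_avatar": True,  "post_count": 0,   "following_count": 4_800},
    {"default_avatar": False, "post_count": 312, "following_count": 450},
    {"default_avatar": True,  "post_count": 0,   "following_count": 7_100},
]
print(f"{bought_follower_share(sample):.0%} of sampled followers look purchased")  # 67%
```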