TLDR
Liveness detection is the process of verifying that a biometric sample comes from a live person — not a photo, video, or AI-generated fake. Banks and governments use it routinely. Social media platforms don't. That gap is why every major social network has a bot problem.
Liveness Detection: The technical process of determining whether a biometric sample (face, fingerprint) comes from a live person present at the time of capture, rather than from a photo, video, mask, or AI-generated forgery.

Presentation Attack: An attempt to defeat a biometric system by presenting a fake artifact (printed photo, video replay, 3D mask) instead of a live person. Presentation Attack Detection (PAD) is the countermeasure.

Injection Attack: An attack in which synthetic biometric data is injected directly into the verification pipeline at the software level, bypassing the camera entirely.

iBeta Level 2 PAD: An internationally recognized certification for liveness detection systems. Certification requires a 0% attack success rate against standardized presentation attacks using photos, videos, and 3D artifacts.
Banks have required liveness checks for remote account opening for years. The same step that stops someone from opening a bank account with a printed photo of your face is technically feasible on a smartphone, costs under $2 per check, and runs in under 60 seconds. Exactly zero major social platforms require it before you can post.
Active vs. Passive Liveness: The Two Main Approaches
Liveness detection comes in two forms with different tradeoffs.
Active liveness issues real-time challenges. Blink when the indicator flashes. Turn your head to the left. Follow the moving dot. The system verifies that your response matches the prompt — which is hard to fake with a static image or pre-recorded video because the challenge is randomized and timing-dependent. Active checks typically take 15–60 seconds. They’re more robust and more intrusive.
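The challenge-response logic can be sketched in a few lines. The challenge names, the three-second response window, and the function shapes below are illustrative assumptions, not any vendor's API:

```python
import random
import time

# Illustrative challenge set; real systems draw from a larger,
# parameterized pool (e.g. "follow the dot to position (x, y)").
CHALLENGES = ["blink", "turn_left", "turn_right", "follow_dot"]

def issue_challenge():
    """Pick a random challenge and record when it was issued."""
    return random.choice(CHALLENGES), time.monotonic()

def verify_response(challenge, issued_at, observed_action, observed_at,
                    max_delay=3.0):
    """A response passes only if it matches the prompt AND arrives inside
    the timing window. A pre-recorded video cannot anticipate a
    randomized, just-issued challenge, and a delayed deepfake render
    misses the window."""
    return observed_action == challenge and (observed_at - issued_at) <= max_delay
```

The randomization and the timing bound do the real work here: either one alone is easier to defeat than both together.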
Passive liveness runs in the background without any user action. The system analyzes signals in the capture itself — skin texture, subsurface reflection patterns (real skin behaves differently than a printed image under certain light frequencies), depth cues from camera sensors, and micro-motion from breathing. Passive checks can complete in under 5 seconds. They’re less intrusive but generally more susceptible to sophisticated spoofing.
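One way to picture passive scoring is as a weighted fusion of per-signal confidences. The signal names and weights below are invented for illustration; production systems learn these models from data rather than hand-weighting them:

```python
def passive_confidence(signals, weights=None):
    """Fuse per-signal scores (each in [0, 1]) into one confidence value.
    Signal names and weights are illustrative, not a vendor's model."""
    weights = weights or {"skin_texture": 0.40, "depth": 0.35, "micro_motion": 0.25}
    total = sum(weights.values())
    return sum(weights[name] * signals.get(name, 0.0) for name in weights) / total
```

A flat printed photo scores near zero on depth and micro-motion, so even a plausible texture score cannot lift the fused confidence very far.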
Most enterprise-grade deployments combine both. The initial check is passive; if confidence falls below a threshold, an active challenge fires. For high-stakes use cases — government ID, financial onboarding — active challenges with hardware-assisted depth sensing (structured light, time-of-flight) are the standard.
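The escalation policy described above reduces to a few lines. The 0.85 threshold and the function names are assumptions for the sketch:

```python
def liveness_decision(passive_score, run_active_challenge, threshold=0.85):
    """Escalation policy: accept a confident passive result outright,
    otherwise fall back to an active challenge. Threshold illustrative."""
    if passive_score >= threshold:
        return "pass"
    # Passive confidence too low: escalate to a real-time challenge.
    return "pass" if run_active_challenge() else "fail"
```

The threshold is the tuning knob: raise it and more legitimate users face the slower active step; lower it and more spoofs slip through on passive signals alone.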
Where It’s Already Used
Liveness detection is not experimental technology. It runs at scale in several industries:
Financial services: Remote account opening at banks, neobanks, and lending platforms uses liveness checks to satisfy Know Your Customer (KYC) regulations. You’ve probably seen a “take a selfie and blink” step when opening an account with a digital bank.
Government and travel: Border agencies use liveness-checked facial comparison to verify travelers against passport photos. Several countries use it for remote digital ID issuance.
Fintech onboarding: Cryptocurrency exchanges and payment processors use it to comply with Anti-Money Laundering (AML) requirements.
The global identity verification market — the broader category that includes liveness detection — is worth $14.3 billion. This is not a niche.
How the Attack Landscape Has Changed
For years, the main threat was unsophisticated: someone prints a photo or plays a video on a phone and holds it up to the camera. Modern systems caught these reliably. The attack surface has changed.
Deepfake-enabled attacks on verification systems surged 1,600% in early 2025. AI-generated fake IDs appeared in 2.15% of all verification sessions that year — a 4x increase from the prior period.
The more significant shift is the rise of injection attacks. Rather than fooling the camera, an injection attack bypasses it entirely: synthetic video or biometric data is fed directly into the verification software pipeline. Camera-facing defenses never see the fake, so detection requires monitoring the data pipeline itself — not just the visual feed.
This is why hardware attestation (cryptographic verification that the video stream is coming from the device’s actual camera) is becoming part of enterprise liveness deployments. Software-only liveness is increasingly inadequate against sophisticated adversaries.
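A rough sketch of frame attestation follows. Real deployments use asymmetric keys held in a hardware secure element; the symmetric HMAC here is a simplified stand-in, and all names are invented for illustration:

```python
import hashlib
import hmac

# Stand-in for a per-device key provisioned in secure hardware.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def sign_frame(frame_bytes, key=DEVICE_KEY):
    """Camera firmware tags each frame at capture time."""
    return hmac.new(key, frame_bytes, hashlib.sha256).digest()

def verify_frame(frame_bytes, tag, key=DEVICE_KEY):
    """The verification backend rejects any frame whose tag does not
    check out. Injected synthetic video never passed through the
    camera, so it carries no valid tag."""
    expected = hmac.new(key, frame_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

The point of the pattern: authenticity is checked on the data itself, so a software-level injection fails even when the fake video would fool every visual check.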
The Accuracy Gap: What Top Vendors Deliver vs. What DHS Found
Leading vendors — iProov, Jumio, Onfido/Entrust, Sumsub — have each passed iBeta Level 2 Presentation Attack Detection certification with a 0% attack success rate. NEC, tested by NIST on a 12-million-record database, achieved a 0.07% error rate.
DHS published independent testing through the RIVR program in 2025. Only 1 of 7 vendors met all benchmarks. One accepted 71–77% of fraudulent documents.
The technology varies enormously by vendor, and certification matters. “We use liveness detection” without specifying the implementation or certification level could mean almost anything.
Why Social Media Hasn’t Adopted It
Sumsub charges $1.35–$1.85 per verification. The lifetime value of a social media account dwarfs that cost. The reason platforms skip it is incentive misalignment:
Engagement metrics include fake accounts. Monthly active user counts, engagement rates, and time-on-platform figures — all of which drive advertising revenue — are inflated by bot activity. Removing bots reduces these numbers. Platforms have financial incentive not to fix this, or at least to fix it slowly.
Friction at signup reduces growth. Requiring a 60-second liveness check will turn away a percentage of legitimate users. Growth-stage platforms optimize for acquisition volume. Adding friction conflicts with that objective.
Bots generate advertising revenue. Fake accounts can be served ads. They appear as impressions. In some cases they generate clicks. The fraud cost is borne by advertisers, not the platform.
Building a platform where success is measured in engagement volume produces exactly this outcome.
Privacy: What’s Processed vs. What’s Stored
The privacy question around liveness detection comes down to one distinction: does the system store your biometric data after the check completes?
Some implementations do. Facial templates, biometric vectors, and identity verification records are retained for fraud investigation and re-verification purposes. These are legitimate use cases, but they require trusting the provider with persistent biometric data.
Other implementations — including the approach Truliv uses — process the check locally on the device and discard all biometric data once the check completes. The result (pass/fail) is recorded. The biometric sample is not.
This is not a minor distinction. Stored biometrics are a target. If the verification provider is breached, your facial template may be compromised permanently — unlike a password, you can’t change your face. The implementation choice matters, and asking about it before using a verification-gated service is reasonable.
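The store-nothing pattern reduces to persisting only the outcome. This is a conceptual sketch of that pattern, not Truliv's or any vendor's actual code:

```python
import time

def finish_check(biometric_sample: bytes, passed: bool) -> dict:
    """Persist only the outcome of a liveness check. The raw sample
    stays in local memory and is dropped here; nothing biometric is
    ever serialized or written out."""
    record = {
        "result": "pass" if passed else "fail",
        "checked_at": int(time.time()),
    }
    del biometric_sample  # discarded once the check completes
    return record
```

What matters is what the returned record can possibly leak: a timestamp and a boolean, nothing that could be re-identified or replayed if the provider is breached.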
Truliv’s Implementation
Truliv requires a liveness check before you can post — not as an ongoing surveillance mechanism, but as a one-time proof of personhood at account creation.
The check: blink and turn your head, under 60 seconds, on a standard smartphone camera. No special hardware required. No biometric data stored. The liveness result is recorded; the biometric sample is discarded immediately.
Every account on Truliv has been verified as a live human at account creation. Not a photo, not a video, not an AI persona. We built Truliv because this technology has existed for years and every social platform chose not to use it.
The 30-day free trial doesn’t require a payment method. The liveness check is the only gate.
Q&A
What is liveness detection?
Liveness detection is the technical process of verifying that a biometric sample comes from a live person present at the time of capture — not a photo, video, or AI-generated forgery. Banks use it for remote account opening. Truliv uses it to verify that every account belongs to a real human before they can post.
How does liveness detection work?
Most liveness systems use active challenges: blink when asked, turn your head left, follow a moving target. The system analyzes the response in real time — looking at micro-expressions, depth cues, lighting consistency, and motion patterns that are extremely difficult to fake. The check typically takes 15–60 seconds and runs on a smartphone camera.
Can liveness detection be defeated?
Sophisticated attacks exist and are increasing — deepfake-enabled attacks surged 1,600% in early 2025. However, top-tier vendors (iProov, Jumio, Onfido/Entrust, Sumsub) have passed iBeta Level 2 PAD certification at 0% attack success rate. The risk comes from lower-quality implementations: DHS testing found one system accepted 71–77% of fraudulent documents.
Does liveness detection store biometric data?
It depends on the implementation. Some systems store facial templates or biometric identifiers. Others — including the approach Truliv uses — process liveness locally and discard all data once the check completes. The liveness result (pass/fail) is stored, not the biometric. Verifying that a provider follows this practice is worth doing before using any verification-gated service.
Is liveness detection the same as facial recognition?
No. Facial recognition identifies who you are by matching your face to a database. Liveness detection only determines whether a live human is present — it does not identify you. Truliv's verification uses liveness detection, not facial recognition. The system doesn't know who you are, only that you're alive.
Want to be first on a human-only network?
Try Truliv free — no credit card required.