Fake account creation: Detection strategies that don’t block real users

  1. Introduction
  2. What is fake account creation?
  3. What motivates fake account creation?
    1. Free trial and promotion abuse
    2. Spam and platform manipulation
    3. API and data scraping
    4. Credential stuffing infrastructure
    5. Payment fraud preparation
  4. What are some red flags for fake account creation during sign-up?
    1. Email signals
    2. Device and network signals
    3. Behavioral signals
    4. Identity coherence signals
  5. How do you detect fake account creation through signal correlation?
  6. What strategies work to prevent fake account creation without blocking real users?
  7. Is it enough to block fake account creation at sign-up?
  8. How Stripe Radar can help

Online account openings are expected to rise by more than 13% annually through 2028, increasing the opportunity for new account fraud. Fake account creation is the process of opening accounts using fabricated or stolen identity signals. It might look like an issue with your sign-up system, but it’s really an economics problem. Attackers create accounts at volume because the cost is low and the potential return is high. Successful prevention efforts need to raise that cost for attackers until the operation is no longer worth it for them.

Below, we’ll explore how to detect fake accounts, why sign-up is a preferred entry point, and how detection at the sign-up level fits into a broader fraud defense.

Highlights

  • Fake accounts are often an entry point infrastructure for downstream abuse, such as free trial farming, application programming interface (API) scraping, and payment fraud.

  • Fake account detection largely depends on correlating device, network, behavioral, and identity signals across sign-up attempts.

  • Systems that detect fake account creation can raise the effort required for malicious sign-ups. They can be paired with monitoring after sign-up to catch fraud that gets through or converts later.

What is fake account creation?

Fake account creation is a type of fraud that involves opening accounts using fake or stolen information. This might include generated email addresses, voice over internet protocol (VoIP) phone numbers—which allow phone calls via virtual numbers instead of landlines—or invented names or combinations of real data that don’t actually belong to the same person.

The main difference between a fake account and a normal one is intent. A legitimate user signs up to use your product. A fake account is created to extract value from the platform or enable some form of abuse.

What motivates fake account creation?

Fake account creation is rarely the end goal. It’s usually the infrastructure that enables other forms of abuse.

Here’s why attackers create fake accounts.

Free trial and promotion abuse

Any sign-up incentive is a target: free trials, credits, referral bonuses, or sign-up rewards. Attackers create accounts at scale to repeatedly claim those benefits. A coordinated campaign can drain a promotional budget in hours.

Spam and platform manipulation

On marketplaces, social platforms, and review sites, fake accounts are raw material for manipulation. They enable fake reviews, inflated follower counts, coordinated engagement, and other forms of inauthentic activity.

API and data scraping

Scraping extracts data from websites and APIs at scale. Authenticated users often receive higher API rate limits than anonymous traffic, so fake accounts allow attackers to distribute scraping activity across multiple authenticated sessions and bypass per-account throttles.

Credential stuffing infrastructure

Attackers running credential stuffing campaigns need infrastructure to do so. Sign-up flows can be used to validate email formats, test which domains accept mail, or create accounts that will later be used to test stolen credentials. In some cases, newly created accounts are also sold or reused as part of larger fraud operations.

Payment fraud preparation

In financial or marketplace environments, fake accounts can be used to test stolen card numbers, run small carding attempts, or establish a temporary business identity that looks legitimate long enough to process fraudulent transactions.

What are some red flags for fake account creation during sign-up?

No single signal reliably identifies a fake account. Detection works by combining signals that, taken together, are difficult to explain as normal user behavior.

These are some red flags that might indicate misintent.

Email signals

  • Disposable email domains: Temporary inbox services are often abused because they make it easy to create accounts without maintaining long-term access. Some well-known providers are easy to block, but smaller services continuously rotate domains to avoid blocklists.

  • Generated alias patterns: Bulk account creation often produces predictable patterns and uses sequential variations such as “user1@,” “user2@,” and so on, or long strings of random characters.

  • Very new domains: Legitimate users rarely sign up using domains registered days earlier. Accounts using domains registered very recently deserve more scrutiny.
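As a rough sketch, the email signals above can be combined into a simple flagging function. The blocklist and regexes here are illustrative only; real systems pull disposable-domain lists from a maintained feed, since providers rotate domains to evade blocklists.

```python
import re

# Hypothetical blocklist for illustration; not a maintained feed.
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example", "10minutemail.com"}

SEQUENTIAL_ALIAS = re.compile(r"^[a-z]+\d{1,4}$")  # e.g. user1, user27
RANDOM_LOCAL = re.compile(r"^[a-z0-9]{16,}$")      # long random local parts

def email_risk_flags(address: str) -> list[str]:
    """Return heuristic risk flags for a sign-up email address."""
    local, _, domain = address.lower().partition("@")
    flags = []
    if domain in DISPOSABLE_DOMAINS:
        flags.append("disposable_domain")
    if SEQUENTIAL_ALIAS.match(local):
        flags.append("sequential_alias")
    if RANDOM_LOCAL.match(local):
        flags.append("random_local_part")
    return flags

print(email_risk_flags("user42@mailinator.com"))
# → ['disposable_domain', 'sequential_alias']
```

No single flag should block a sign-up on its own; these outputs feed the correlation step described later.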

Device and network signals

  • Device fingerprint reuse: Browser configuration, screen resolution, installed fonts, and graphic fingerprints often repeat across accounts created by the same automation system, even when internet protocol (IP) addresses and identity details change.

  • Data center or proxy IP addresses: Traffic that originates from cloud providers, virtual private networks (VPNs), or Tor exit nodes carries a higher risk than residential internet service provider (ISP) traffic.

  • Sign-up velocity: A burst of new accounts from a single IP range, autonomous system number (ASN), or device fingerprint can be an early indicator of a coordinated campaign.
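The sign-up velocity signal can be sketched as a sliding-window counter keyed by IP range, ASN, or device fingerprint. This is a minimal in-memory version with an illustrative threshold; production systems typically back such counters with a shared store such as Redis.

```python
from collections import defaultdict, deque

class VelocityTracker:
    """Count recent sign-ups per key inside a sliding time window."""

    def __init__(self, window_seconds: int = 3600, threshold: int = 5):
        self.window = window_seconds
        self.threshold = threshold
        self.events: dict[str, deque] = defaultdict(deque)

    def record(self, key: str, timestamp: float) -> bool:
        """Record a sign-up; return True if the key exceeds the threshold."""
        q = self.events[key]
        q.append(timestamp)
        # Evict events that have fallen out of the window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.threshold

tracker = VelocityTracker(window_seconds=60, threshold=3)
for t in range(4):
    burst = tracker.record("ip:203.0.113.0/24", float(t))
print(burst)  # → True: four sign-ups from one range inside a minute
```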

Behavioral signals

  • Unrealistic form completion speed: A real person needs time to read and fill out fields. When a sign-up form is completed in fractions of a second, automation is the likely explanation.

  • Perfect input patterns: Humans hesitate, mistype, and correct themselves. Clean, linear input with no pauses or backspaces often hints at scripted form-filling.

  • Mechanical mouse movement: Bots either produce no cursor movement or movement patterns that follow straight, uniform paths rather than the irregular patterns of human interaction.
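The first two behavioral signals lend themselves to simple timing heuristics. The sketch below assumes the front end reports form-fill telemetry; the field names and thresholds are illustrative, not from any specific product.

```python
def looks_automated(fill_seconds: float, keystrokes: int, backspaces: int) -> bool:
    """Flag sign-ups whose input telemetry is implausibly fast or clean."""
    # Whole form completed in under two seconds: almost certainly scripted.
    if fill_seconds < 2.0:
        return True
    # Long, perfectly linear input with zero corrections, finished quickly.
    if keystrokes > 30 and backspaces == 0 and fill_seconds < 10.0:
        return True
    return False

print(looks_automated(0.4, 25, 0))   # → True (sub-second completion)
print(looks_automated(24.0, 40, 3))  # → False (human-paced, with typos)
```

In practice these checks feed a risk score rather than a hard block, since a fast typist with autofill can legitimately trip a naive threshold.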

Identity coherence signals

  • VoIP phone numbers: Numbers provisioned through VoIP services are cheaper and easier to acquire in bulk than mobile carrier numbers, which makes them common in fake account campaigns.

  • Locale inconsistencies: Language settings, address formats, currency preferences, and claimed geographic location should roughly line up. When they don’t, the identity behind the account is less credible.
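A locale-consistency check can be expressed as a comparison between the claimed country and the other profile attributes. The lookup table below is a tiny illustrative sample, not a complete mapping.

```python
# Illustrative expectations per country; real systems use full locale data.
EXPECTED = {
    "US": {"lang": "en", "currency": "USD", "phone_prefix": "+1"},
    "BR": {"lang": "pt", "currency": "BRL", "phone_prefix": "+55"},
    "DE": {"lang": "de", "currency": "EUR", "phone_prefix": "+49"},
}

def locale_mismatches(profile: dict) -> list[str]:
    """List profile fields that contradict the claimed country."""
    rules = EXPECTED.get(profile.get("country"))
    if rules is None:
        return []
    mismatches = []
    for field in ("lang", "currency"):
        if profile.get(field) and profile[field] != rules[field]:
            mismatches.append(field)
    phone = profile.get("phone", "")
    if phone and not phone.startswith(rules["phone_prefix"]):
        mismatches.append("phone_prefix")
    return mismatches

print(locale_mismatches(
    {"country": "US", "lang": "pt", "currency": "USD", "phone": "+5511999990000"}
))  # → ['lang', 'phone_prefix']
```

Mismatches here are weak evidence on their own (expatriates and travelers trip them constantly), which is why they matter mainly in combination with other signals.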

How do you detect fake account creation through signal correlation?

When it comes to detecting multiaccount fraud during sign-up, individual signals are useful, but the most effective detection comes from correlation across accounts. Even when attackers vary names, emails, and IP addresses, other patterns persist. Instead of evaluating sign-ups one at a time, many systems group accounts into clusters based on shared attributes and ask whether each cluster looks like normal user behavior.

Signals commonly used for clustering include:

  • Device fingerprint similarity: Shared browser configurations, screen sizes, or graphic fingerprints

  • Network patterns: IP proximity, ASN ownership, and whether traffic originates from residential networks or cloud infrastructure

  • Behavioral sequences: The order and timing of actions during sign-up, which can fingerprint automation tools

  • Identity overlaps: Similar email roots, reused phone numbers, or partially matching addresses

  • Velocity and acceleration: Sudden bursts of activity from infrastructure that previously produced no traffic
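The clustering step can be sketched with union-find: any two sign-ups that share an attribute land in the same cluster, so varying one attribute (say, the IP address) while reusing another (the device fingerprint) still links the accounts. Attribute names here are illustrative.

```python
from collections import defaultdict

def cluster_signups(signups: list[dict],
                    keys=("fingerprint", "ip", "email_root")) -> list[set]:
    """Group sign-ups that share any of the given attributes (union-find)."""
    parent = list(range(len(signups)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    first_seen: dict[tuple, int] = {}
    for idx, s in enumerate(signups):
        for key in keys:
            value = s.get(key)
            if value is None:
                continue
            mark = (key, value)
            if mark in first_seen:
                union(idx, first_seen[mark])
            else:
                first_seen[mark] = idx

    clusters = defaultdict(set)
    for idx in range(len(signups)):
        clusters[find(idx)].add(idx)
    return list(clusters.values())

signups = [
    {"fingerprint": "fpA", "ip": "198.51.100.1"},
    {"fingerprint": "fpA", "ip": "203.0.113.9"},   # new IP, same device
    {"fingerprint": "fpB", "ip": "203.0.113.9"},   # new device, same IP
    {"fingerprint": "fpC", "ip": "192.0.2.7"},     # unrelated
]
print(cluster_signups(signups))  # first three link into one cluster
```

Once accounts are clustered, the question shifts from "is this sign-up risky?" to "is a cluster of this size and growth rate plausible for real users?"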

What strategies work to prevent fake account creation without blocking real users?

Effective defenses against fake account creation are layered. They should also be proportionate to the risk signals you’re actually seeing.

Consider the following:

  • Rate limiting and velocity controls: Limiting sign-ups per IP address, device fingerprint, or email domain is a straightforward first layer that stops automation without affecting real users.

  • Behavioral bot detection: Invisible behavioral analysis, such as mouse movement, typing patterns, and interaction timing, can filter bots without presenting any visible challenge to humans. Hard CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) are best reserved for ambiguous cases rather than applied universally.

  • Progressive verification: Instead of forcing every user through the same verification flow, escalate checks only when the risk increases. A user signing up from a residential network with normal interaction patterns might pass without friction, while a data center IP address combined with a disposable email might trigger phone verification.

  • Treat verification as a risk signal: Email and phone verification shouldn’t just be viewed as hurdles. Completing verification with a legitimate mobile carrier number lowers risk and raises the cost of creating fake accounts at scale.

  • Risk-based holds: When signals are ambiguous, accounts can be created in a restricted state, with limited API access, no trial credits, or reduced functionality. Capabilities expand once the account demonstrates legitimate behavior over time.

  • Detection software: There are smart tools to reduce fraud during the sign-up process, such as software that detects fake account creation. You can also consider tools that flag suspicious behavior in the wake of fake account creation, such as Stripe Radar, which uses artificial intelligence (AI) trained on data from millions of global businesses to identify fraud more accurately.
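The progressive verification and risk-based hold strategies above amount to mapping a risk score to a proportionate response. A minimal sketch, with signal names and weights that are purely illustrative:

```python
def next_step(signals: dict) -> str:
    """Map sign-up risk signals to a proportionate response.
    Weights and thresholds are illustrative, not tuned values."""
    score = 0
    score += 3 if signals.get("datacenter_ip") else 0
    score += 3 if signals.get("disposable_email") else 0
    score += 2 if signals.get("voip_phone") else 0
    score += 2 if signals.get("automation_suspected") else 0

    if score >= 6:
        return "restricted_account"   # limited API access, no trial credits
    if score >= 3:
        return "phone_verification"   # escalate only when risk warrants it
    return "allow"                    # no added friction for normal users

print(next_step({}))                                            # → allow
print(next_step({"datacenter_ip": True}))                       # → phone_verification
print(next_step({"datacenter_ip": True, "disposable_email": True}))  # → restricted_account
```

The key design choice is that low-risk users never see the extra steps: friction scales with evidence rather than being applied uniformly.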

Is it enough to block fake account creation at sign-up?

Sign-up controls are necessary, but they’re not enough in isolation. Attack techniques evolve constantly. VoIP detection loses effectiveness when attackers move to subscriber identity module (SIM) farms. Device fingerprinting weakens as automation tooling improves. Any static detection approach will eventually be worked around.

What sign-up detection does well is raise the cost of entry. A campaign that once required minimal effort might now require more infrastructure, more time, and more money per account. For low-margin abuse, such as free trial farming, that cost increase can make campaigns unprofitable.

Determined fraudulent actors will likely adapt, which is why an effective system treats sign-up defenses as one layer in a broader strategy.

Here’s what a practical defense-in-depth model looks like:

  • Sign-up controls: Signal correlation, velocity limits, and progressive verification reduce fake account inventory before it’s created.

  • Monitoring after sign-up: Unusual behavior, abnormal payment patterns, suspicious API usage, and coordinated actions can reveal abuse that slips through.

  • Account reputation over time: Accounts that behave normally for months before turning abusive require detection models focused on behavioral drift rather than sign-up signals.

Stopping fake accounts completely isn’t realistic. The objective needs to be to make large-scale abuse expensive, detectable, and unsustainable.

How Stripe Radar can help

Stripe Radar uses AI models to detect and prevent fraud, trained on data from Stripe’s global network. It continuously updates these models based on the latest fraud trends, protecting your business as fraud evolves.

Stripe also offers Radar for Fraud Teams, which allows users to add custom rules addressing fraud scenarios specific to their businesses and access advanced fraud insights.

Radar can help your business:

  • Prevent fraud losses: Stripe processes over $1 trillion in payments annually. This scale uniquely enables Radar to accurately detect and prevent fraud, saving you money.

  • Increase revenue: Radar’s AI models are trained on actual dispute data, customer information, browsing data, and more. This enables Radar to identify risky transactions and reduce false positives, boosting your revenue.

  • Save time: Radar is built into Stripe and requires zero lines of code to set up. You can also monitor your fraud performance, write rules, and more in a single platform, increasing efficiency.

Learn more about Stripe Radar, or get started today.

The content in this article is for general information and education purposes only and should not be construed as legal or tax advice. Stripe does not warrant or guarantee the accuracy, completeness, adequacy, or currency of the information in the article. You should seek the advice of a competent lawyer or accountant licensed to practice in your jurisdiction for advice on your particular situation.
