How to detect fake users and multiaccount sign-up abuse

  1. Introduction
  2. What are fake users and multiaccount sign-up abuse?
  3. Why is multiaccount sign-up abuse so common in sign-up flows?
  4. What signals indicate fake users and multiaccount sign-up abuse?
  5. How do you detect fake users and multiaccount sign-up abuse through link analysis?
  6. What prevention strategies stop multiaccount sign-up abuse without hurting conversion?
  7. How does multiaccount sign-up abuse connect to trial, promo, and subscription fraud?
  8. How Stripe Radar can help

Sign-up flows are built for conversion, and that design creates a predictable opening for sign-up abuse. Fake accounts and multiaccount operations exploit the same low-friction conditions that make signing up fast for everyone else: minimal verification and a system that treats every incoming account as a stranger with no history. Fraud using fake accounts is a real problem; 62% of merchants reported an increase in disputes in 2025 due to fraudulent actors misrepresenting themselves or manipulating their accounts.

Below, we’ll explore how to detect fake users and coordinated multiaccount abuse, how link analysis connects accounts that look different but share underlying infrastructure, and how to target repeat abusers without slowing down legitimate sign-ups.

Highlights

  • Fake-user and multiaccount abuse is hard to catch when each individual account looks clean, which makes link analysis across shared attributes an effective way to detect it.

  • Staged verification keeps conversion high by reserving friction for accounts with elevated risk scores rather than applying the same checks to every sign-up.

  • Multiaccounting is the common mechanism behind free trial abuse, referral fraud, and subscription fraud. Sign-up detection should be a central part of your broader fraud strategy.

What are fake users and multiaccount sign-up abuse?

Fake users are accounts created with low-trust identity signals, such as disposable emails, synthetic names, virtual phone numbers, or identities that pass basic validation but don’t correspond to a real person with genuine intent. Multiaccount abuse is a related but distinct problem: one actor creating multiple accounts, often with different-looking identities, to bypass limits, policies, or bans you’ve already enforced.

Why is multiaccount sign-up abuse so common in sign-up flows?

The conditions that make sign-up conversion strong are the same conditions that make sign-up flows easy to abuse.

Here’s what drives the problem:

  • Easy sign-up: Minimal verification at sign-up treats each new account as a stranger, and a determined actor exploits this repeatedly. Email confirmation helps stop casual abuse, but it does nothing against actors using alias-capable domains or throwaway inboxes.

  • Automation: Manually creating accounts is tedious. Creating them with a script, rotating credentials, and cycling through residential proxies takes a few hours of setup and can generate hundreds of accounts. The barrier to entry for automation has dropped as tools for email aliasing, virtual number generation, and browser fingerprint spoofing have become widely accessible.

  • Weak verification: Phone verification is a fraud deterrent, but it can be bypassed with Voice over Internet Protocol (VoIP) numbers. Neither email nor phone confirmation on its own establishes that an incoming account is distinct from accounts a business has already seen.

  • Meaningful incentives: When a free trial, referral credit, or promotional discount is attached to a new account, account creation has a measurable dollar value.

The mechanism behind free trial farming, referral fraud, ban evasion, and promo credit harvesting is the same: a business’s sign-up flow has no memory, and bad actors know it.

What signals indicate fake users and multiaccount sign-up abuse?

No single signal is conclusive, but several in combination are highly predictive.

Here are the strongest indicators of fake users and multiaccount sign-up abuse:

  • Device and browser overlap: A shared fingerprint across multiple accounts is a high-signal indicator. Fingerprints can be spoofed, but doing it consistently across many accounts requires effort that abuse operations sometimes skip.

  • Unusual network patterns: Repeated sign-ups from the same Internet Protocol (IP) address, the same Autonomous System Number (ASN), or a data center range can strongly suggest automation or coordination. Residential proxy traffic is harder to filter but can still show geographic inconsistencies.

  • Repetitive email structure: Bad actors can use disposable domains and alias patterns (e.g., the “+tag” suffix in email addresses or dot variations such as j.ohn.doe@domain.com). A high volume of sign-ups from a single nonmajor domain, or addresses that follow a detectable naming pattern across accounts, is another red flag.

  • Identity field similarity: Names, addresses, and phone numbers that are slightly varied but share structural similarities (e.g., sequential numbers, transposed characters, the same base name with different suffixes) can suggest generated rather than real identities.

  • Robotic sign-up velocity and timing: Accounts created in short bursts or at consistent intervals that suggest scripted behavior don’t match how human users sign up.

  • Behavioral patterns after sign-up: Real users explore. Abusive accounts might go directly to the highest-value action, such as activating a trial, claiming a promo code, or instigating a referral, and then stop there. A high ratio of accounts that complete one specific action and go dormant is worth investigating.
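
The alias patterns described above can be collapsed with a small normalization step so that variants of one address map to a single canonical key. The sketch below is illustrative Python, not a Stripe API; the function names are hypothetical, and the tiny disposable-domain set stands in for real deny lists that contain thousands of entries.

```python
import hashlib

# Illustrative set only; production deny lists are much larger and updated often.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

# Providers that ignore dots in the local part (Gmail-style behavior).
DOT_INSENSITIVE = {"gmail.com", "googlemail.com"}

def normalize_email(address: str) -> str:
    """Collapse common alias tricks so variants map to one canonical key."""
    local, _, domain = address.strip().lower().partition("@")
    local = local.split("+", 1)[0]          # drop "+tag" suffixes
    if domain in DOT_INSENSITIVE:
        local = local.replace(".", "")      # j.ohn.doe -> johndoe
    return f"{local}@{domain}"

def email_signals(address: str) -> dict:
    """Derive the sign-up signals this section describes for one address."""
    canonical = normalize_email(address)
    domain = canonical.rsplit("@", 1)[-1]
    return {
        "canonical": canonical,
        # Store a hash rather than the raw address for later link analysis.
        "hash": hashlib.sha256(canonical.encode()).hexdigest(),
        "disposable_domain": domain in DISPOSABLE_DOMAINS,
    }
```

With this in place, `j.ohn.doe+promo1@gmail.com` and `johndoe+promo2@gmail.com` produce the same canonical key, so repeated sign-ups under alias variants become visible as one identity.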

How do you detect fake users and multiaccount sign-up abuse through link analysis?

Blocking individual signals is necessary but not sufficient. A determined actor rotates inputs, so the more durable approach is link analysis: connecting accounts through the attributes they share.

Here’s how to do it:

  • Treat identity attributes as graph nodes: An email address, a device fingerprint, a phone number, and a billing address are each different nodes. When two accounts share a node, they’re connected. When a cluster of accounts shares multiple nodes across multiple attributes, you’re likely looking at coordinated abuse.

  • Capture attributes at sign-up, as well as at transaction: Hashing device fingerprints, IP addresses, and identity fields at the moment of sign-up gives you the raw material for link analysis when you need it later.

  • Cluster across time windows: Linking accounts created within the same campaign window surfaces coordinated sign-ups that wouldn’t look suspicious in isolation.

  • Run retroactive analysis on confirmed abuse: When you confirm an account is abusive, review your graph of nodes to find connected accounts you haven’t flagged yet. One confirmed bad actor might expose several more.

  • Use confidence scoring, not binary flags: Assign a risk score based on the number and strength of connections, so that enforcement can be proportional. An account with one weak link might get friction, while an account with four strong links gets blocked.
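
The node-and-edge model above can be sketched with a union-find structure over hashed attributes: accounts that share a value are merged into one cluster, and each shared link adds to a risk score. This is an illustrative Python sketch, not Stripe's implementation; the class name, attribute weights, and scoring scheme are all assumptions made for the example.

```python
from collections import defaultdict

# Assumed weights: stronger attributes contribute more to the risk score.
ATTRIBUTE_WEIGHTS = {"device": 3, "phone": 2, "email": 2, "ip": 1, "address": 1}

class AccountGraph:
    """Accounts become connected when they share a hashed attribute value."""

    def __init__(self):
        self.parent = {}                  # union-find forest over accounts
        self.seen = defaultdict(list)     # (attr_type, hashed value) -> accounts
        self.score = defaultdict(int)     # accumulated link strength per account

    def _find(self, a):
        while self.parent.setdefault(a, a) != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def add_account(self, account_id, attributes):
        """attributes: dict like {"device": <hash>, "ip": <hash>, ...}"""
        for attr_type, value in attributes.items():
            key = (attr_type, value)
            for other in self.seen[key]:
                # Shared node: merge the clusters and raise both risk scores.
                self.parent[self._find(other)] = self._find(account_id)
                weight = ATTRIBUTE_WEIGHTS.get(attr_type, 1)
                self.score[account_id] += weight
                self.score[other] += weight
            self.seen[key].append(account_id)

    def cluster_of(self, account_id):
        """All accounts linked (directly or transitively) to this one.
        Useful for the retroactive analysis step: confirm one bad account,
        then pull its whole cluster for review."""
        root = self._find(account_id)
        return {a for a in self.parent if self._find(a) == root}
```

Because scores accumulate per link rather than flipping a binary flag, an account with one weak connection (a shared IP) scores low, while an account sharing a device fingerprint with several others scores high, which supports the proportional enforcement described above.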

What prevention strategies stop multiaccount sign-up abuse without hurting conversion?

If your fraud detection system is too stringent, you can block legitimate users. If it’s too lax, abuse can get through. Staged verification sidesteps that problem by making responses proportional to risk.

Here’s how to structure it:

  • Low risk scores = no action: Standard sign-up with immediate access. Most users land here, and nothing about their experience changes.

  • Moderate risk scores = lightweight verification: Add a step that’s easy for a real person and expensive for an automated operation, such as phone confirmation with a real carrier look-up (in addition to standard format validation), or a time-limited email confirmation that batch operations can’t process efficiently.

  • High risk scores = gate the high-value action: Don’t block the sign-up. Gate access to trial activation, promo redemption, or referral payouts until the account has demonstrated baseline legitimate behavior. Logging in from the same device twice over two days, completing a profile, or making an initial purchase all function as soft verification. This kind of delay can be more effective than blocking the account outright. An abusive actor who waits 72 hours for a payout that never arrives learns quickly that their operation doesn’t work. A real user waiting the same period is mildly inconvenienced.
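
The three tiers above reduce to a simple score-to-action mapping. The thresholds in this Python sketch are placeholders chosen to illustrate the shape of the policy, not recommended values; in practice you would tune them against your own false-positive data.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"     # low risk: standard sign-up, no added friction
    VERIFY = "verify"   # moderate risk: lightweight check (e.g., carrier lookup)
    GATE = "gate"       # high risk: account created, high-value actions held

# Hypothetical thresholds on a 0-100 risk score, for illustration only.
def signup_action(risk_score: float) -> Action:
    if risk_score < 30:
        return Action.ALLOW
    if risk_score < 70:
        return Action.VERIFY
    return Action.GATE
```

Note that even the highest tier returns `GATE`, not an outright block: the sign-up completes, but trial activation, promo redemption, and referral payouts stay held until the account shows baseline legitimate behavior.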

Rate limit by account but also by attribute. If a device fingerprint or IP address has already been associated with three sign-ups in the last 24 hours, new sign-ups from that attribute need to get elevated investigation or a hard pause, even if each individual account looks clean.
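
Per-attribute rate limiting of this kind is a sliding-window counter keyed by the attribute value (a device fingerprint hash, an IP address) rather than by the account. A minimal Python sketch, with the three-per-24-hours limit from above as hypothetical defaults:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class AttributeRateLimiter:
    """Count sign-ups per attribute value over a sliding time window."""

    def __init__(self, limit: int = 3, window_seconds: int = 86400):
        self.limit = limit
        self.window = window_seconds
        self.events = defaultdict(deque)    # attribute value -> timestamps

    def allow(self, attribute_value: str, now: Optional[float] = None) -> bool:
        """True if this sign-up may proceed; False means escalate or pause."""
        now = time.time() if now is None else now
        q = self.events[attribute_value]
        while q and now - q[0] > self.window:
            q.popleft()                     # expire events outside the window
        if len(q) >= self.limit:
            return False                    # fourth sign-up inside the window
        q.append(now)
        return True
```

The key design point is the key itself: because the counter is indexed by the shared attribute, the fourth sign-up from one fingerprint trips the limit even though each of the four accounts looks clean in isolation.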

How does multiaccount sign-up abuse connect to trial, promo, and subscription fraud?

It’s rare for someone to create multiple accounts just to have them. Understanding the connection between multiaccount sign-up abuse and other types of fraud is what links your sign-up detection to your wider fraud strategy.

Here’s where the overlap shows up:

  • Free trial abuse: If your trial resets on account creation, anyone with a script and a supply of email addresses can access your product indefinitely without paying. Sign-up detection is your first effective line of defense. By the time a trial is being abused, the account already exists.

  • Promotional discount farming: New-user discounts and first-order incentives have a measurable dollar value per account created. That turns your sign-up flow into a direct target for coordinated abuse operations running at scale.

  • Referral fraud: Referral programs that pay out when a new account takes an action are particularly exposed. An actor controlling both the referring account and the new account can generate payouts with no real user acquisition on either side.

  • Subscription fraud: A banned or charged-back user can create a new account and resubscribe if your payment method checks don’t cross-reference identity signals from the flagged account. The sign-up is the reset mechanism, and without link analysis connecting the new account to the old one, enforcement doesn’t carry over.

How Stripe Radar can help

Stripe Radar uses AI models to detect and prevent fraud, trained on data from Stripe’s global network. It continuously updates these models based on the latest fraud trends, protecting your business as fraud evolves.

Stripe also offers Radar for Fraud Teams, which allows users to add custom rules addressing fraud scenarios specific to their businesses and access advanced fraud insights.

Radar can help your business:

  • Prevent fraud losses: Stripe processes over $1 trillion in payments annually. This scale uniquely enables Radar to accurately detect and prevent fraud, saving you money.

  • Increase revenue: Radar’s AI models are trained on actual dispute data, customer information, browsing data, and more. This enables Radar to identify risky transactions and reduce false positives, boosting your revenue.

  • Save time: Radar is built into Stripe and requires zero lines of code to set up. You can also monitor your fraud performance, write rules, and more in a single platform, increasing efficiency.

Learn more about Stripe Radar, or get started today.

The content in this article is for general information and education purposes only and should not be construed as legal or tax advice. Stripe does not warrant or guarantee the accuracy, completeness, adequacy, or currency of the information in the article. You should seek the advice of a competent lawyer or accountant licensed to practise in your jurisdiction for advice on your particular situation.
