Follow Stripe on Twitter

Pricing update for Europe

Joe Cruttwell on December 9, 2015

Good news! We’re available in 14 countries across the European Union and, today, we’re lowering our prices in all of them.

Stripe is working to grow the size of the internet economy, and new regulations from the EU have enabled us to simplify and unify pricing across the European Union markets we support.

Our new price is 1.4% + €0.25 for European cards and 2.9% + €0.25 for non-European cards. (Instead of €0.25, the fixed fee in the UK will be 20p, and in Denmark and Sweden it will be 1.8kr.) As before, we expose our fees on a per-charge basis in real time.
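For concreteness, here's how the new fees work out on a sample charge. This is an illustrative sketch: rounding the percentage portion to the nearest cent is an assumption, not a statement of Stripe's exact rounding rules.

```ruby
# Fee = percentage of the charge amount plus a fixed fee, in cents.
# Rounding to the nearest cent here is an assumption for illustration.
def fee_cents(amount_cents, rate, fixed_fee_cents)
  (amount_cents * rate).round + fixed_fee_cents
end

european = fee_cents(10_000, 0.014, 25)  # €100.00 charge at 1.4% + €0.25
non_eu   = fee_cents(10_000, 0.029, 25)  # €100.00 charge at 2.9% + €0.25
```

On a €100.00 charge, that's €1.65 for a European card and €3.15 for a non-European card.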

The pricing will go into effect immediately and you don’t need to take any action to get the lower price.

If you’ve any questions or feedback (about this change or Stripe in general), please email me.

December 9, 2015

Open-Source Retreat 2016 grantees

Michelle Bu on December 8, 2015

Like many developers, we often contribute to open-source software in bits and pieces over long periods of time. So we started the Open-Source Retreat to help open-source developers make concentrated progress on features and releases with the potential for significant impact.

For 2016’s Retreat, we’re inviting three developers to work on their projects from Stripe’s office in SF:

  • Nik Graf will be working on Belle, a configurable component library for React that is focused on great user experience, accessibility, and compatibility across devices and browsers. Developers still tend to build the same set of components from scratch for web frameworks like React, Ember, and Angular—and often without considering compatibility or UX issues. We’re excited to see Nik expand the number of components for Belle and improve their usability and accessibility during the Retreat.
  • Christopher Allan Webber wants to launch federation tooling for MediaGoblin, a free software media platform that anyone can run. Think of it as a decentralized alternative to major media publishing services such as Flickr, YouTube, SoundCloud, DeviantArt, etc. We’re huge fans of the federated web and would like to see more software in that space. We see MediaGoblin as more than just a way to share media, but a reference application for building on new, emerging standards, such as the ActivityPump API.
  • Pascal Brandt’s focus will be on OpenMRS, a platform that’s widely used to support the delivery of health care in Africa and developing countries. The introduction of a REST API in their latest version has contributed to an explosion in new front-ends being built atop their medical recording system framework. Pascal hopes to create a single, authoritative JavaScript client for OpenMRS.

Though there were many more applicants than we could host, we’d like to thank everyone who applied. We’d also especially like to recognize our finalists:

    • Doron Samech: A proposal to bring NetMQ to feature parity with ZeroMQ.
    • Alan Guo Xiang Tan: Improvements to backfilling and automated regression identification for RubyBench, which provides benchmarking for Ruby and Rails.
    • Anne Ogborn: Writing the authoritative book for programming in SWI-Prolog.
    • Nyah Check: A module for OpenMRS that would help migrate existing databases to OpenMRS.
    • Naomi Most: Metapub, a Python library that unites the National Library of Medicine databases with CrossRef metadata search to improve search, text-mining, and cross-referencing for academic articles.
We encourage you to explore (and contribute to!) these projects.

Our three grantees will be working on their proposed projects starting in January and they’ll share their work towards the end of the Retreat. In the meantime, if you have any questions or suggestions, please let me know!

December 8, 2015

Stripe on Fabric

Jack Flintermann on October 21, 2015

If you’re using Fabric, you can now add payments to your mobile app using the Stripe kit for iOS or Android. Fabric enables mobile developers to add additional services (like analytics, ads, and now payments) to their Android and iOS apps through a single integration instead of having to set each of them up manually.

We built this integration for Fabric so that it’s even easier to get started and accept your first payment on Stripe. You can see it in action in this demo app. (You can of course still use our mobile SDKs to integrate Stripe directly, too.)

If you already use Fabric, just enable the Stripe kit in your settings. If you’d like to try Fabric out, head over to their docs.

Feel free to email me if you have any questions or feedback!

October 21, 2015

Instant debit card transfers

Brendan Taylor on October 8, 2015

We’re rolling out a private beta that lets Connect users have money transferred instantly to a debit card.

Why might you want this? Many marketplaces, like Kickstarter, Postmates, Instacart, and TaskRabbit, use Connect to get their sellers paid. Rather than having to collect bank account numbers or having sellers wait on their funds, marketplaces can now use instant debit card transfers to get sellers paid faster.

While we’ve had the ability to send money to debit cards for a while, funds used to take two to three days to arrive. Now, they’re basically instant.

Functionality will be limited to U.S. cardholders to begin with—and not all banks support this yet. We currently have coverage for 64% of U.S. banks, including Bank of America and Chase. Money can still be sent to unsupported banks, but it’ll take a day to arrive rather than being instant.

Lyft is using instant transfers to create Express Pay, which lets drivers get paid out instantly rather than just once a week.

We hope you’ll use this feature to get sellers on your marketplaces paid faster. We’re adding more people to our beta as soon as we can. If you’d like to try it out, please email us.

October 8, 2015

Introducing Relay

Siddarth Chandrasekaran on September 14, 2015

Today, mobile e-commerce websites aren’t working: Ten-step shopping carts, mandatory account signup, slow page loads. When we get linked to a shopping cart on our phone, we usually just give up. That shouldn’t be surprising—most mobile shopping sites are fundamentally the same as the desktop sites that preceded them, despite the medium calling for something completely different.

The result has been predictable. Despite mobile devices representing 60% of browsing traffic for shopping sites, they only make up 15% of purchases.

What does work? Native mobile apps, like Postmates or Instacart, with buying experiences designed to let the user transact as quickly as possible, reuse existing payment details across many orders, and finish the entire transaction in the same app they started in.

Over the past year, a number of companies—Twitter, Pinterest, and Spring, to name a few—have worked to bring this kind of experience to e-commerce, pulling products from many stores into the very apps where users are already spending their time.

While this works, experiences like this are hard to build, since stores don’t usually make their products programmatically available.

So, we’re trying out something new. We’re launching Relay, an API for stores to publish their products, and for apps to read them. Relay makes it easier for developers to build great mobile e-commerce experiences, and for stores to participate in them.

It’s powered by a few new objects in the Stripe API: Products, SKUs (product variants), and Orders. Stores can provide product information to Stripe via the dashboard, the API, or by linking their existing e-commerce systems. SAP Hybris (used by stores like Levi’s, Oakley, and Ted Baker) is the first e-commerce integration we’re announcing, but expect more to come later.

For stores, you can use Relay to enable instant purchases in third-party mobile apps: one of our launch partners, Twitter, is using Relay to enable anyone to start selling within tweets. (You can try it out on this Tweet from @WarbyParker.) Or you can submit your products to be shown in a growing number of apps like ShopStyle and Spring.

For app developers, Relay is a set of APIs for building great in-app buying experiences. People can buy products directly within your app rather than getting pushed to third-party websites. Our friends at Wish have made their product catalog accessible starting today via Relay. You can play around with their data and see what kinds of buying experiences you could build.

To get started, browse the API docs or try selling a product on Twitter.

If you’d like to talk to us about it, we’d love to hear from you.

September 14, 2015

Open-Source Retreat 2016

Kyle Conroy on September 3, 2015

We increasingly rely on (and contribute back to!) a lot of open-source software to build Stripe, and we’d like to give back and get more people working on open-source.

Last year, we invited four developers to the Stripe office as part of our first Open-Source Retreat. Our grantees made significant progress on their projects in a relatively short time—from launching a pure-Python TLS 1.2 implementation to releasing a major update to urllib3. Julian Shapiro of Velocity.js wrote about his experience with the program.

Starting in January, we’re hosting another Open-Source Retreat at Stripe. Just like last year, we’re looking for existing projects where these grants can make a large difference in spurring the development of a new feature or infrastructure.

We’ll host three to four developers at our office in SF for three months to work full-time on an open-source project. While we’ll ask that they give a couple of internal tech talks over the course of the program, the grant itself is no-strings-attached.

Selection process

We’ll select projects based on their importance to the broader community, independently of Stripe itself. Applicants from any country are welcome and we’d love to fund people from backgrounds underrepresented in the open-source community. Here are the criteria we’ll use:

  • Impact of our grant. Does our grant have the ability to transform this project’s trajectory? Are you an influencer within the project? Will your ability to focus on it full-time move the project forward in some significant way?
  • Importance of the project. Is this a project that people already use and has attracted a lot of attention? If this project isn’t itself popular yet, how much potential does it have? Is it a project that, while possibly risky, would be particularly exciting if successful?
  • Likelihood of success. Is there a good plan for how these three months will be used? What indicators are there that you’ll be able to pull it off (obvious passion for the project, existing work on the project, previous work on similar projects)?

Selection team

Here's the team that will be reviewing your application:



We want to provide everything needed to focus and have a substantial impact on an open-source project:

  • The program will run from January 15th until April 15th, 2016 at Stripe HQ in the Mission District of San Francisco.
  • We’ll provide $7,500 per month in addition to desk space at our office and meals during the week.


Applications are now closed. Thanks to everyone who applied. We'll be contacting you individually soon!

September 3, 2015

Checkout in more languages

Gabriel Hubert on August 20, 2015

We’re starting to add support for displaying Checkout in your customers’ preferred languages. In addition to English, Checkout now speaks Japanese, German, French, Simplified Chinese, Spanish, Italian, and Dutch.

We tested this feature with over 220,000 customers across different types of businesses and found that displaying a translated Checkout converts significantly better for many companies—in particular, where the rest of their website is translated. For those sites, revenue from non-English speakers increased by between 7 and 12 percent—a huge jump for one tiny checkout change.

Based on our results, we strongly recommend you enable this feature if you already have a website in the eight languages we currently support.

You can opt in to using the translated version with a single line of code. If you’re using the simple integration, just pass data-locale="auto" into the <script> tag. If you have a custom integration, use locale: 'auto' when calling StripeCheckout.configure(). There’s more info in the docs.

This change illustrates part of our broader goal with Checkout: to deliver a constant stream of refinements that automatically improve your checkout flow and help you reach more customers.

If you notice anything surprising, or have questions or feedback, please let me know!

August 20, 2015

Amex Express Checkout

Christian Anderson on August 18, 2015

Starting today, you can add Amex Express Checkout to your site or app with a single code snippet.

American Express has partnered with Stripe to let over 20 million cardholders pay online and in apps using their existing American Express login, rather than entering their full credit card details.

When you integrate, American Express will also securely pass on useful customer information to you such as address, email, and phone number. They'll even automatically keep that info up to date. (You’ll get a customer.updated webhook.) You’ll have the latest information for your customers without having to prompt them to manually update their data on your site.
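A minimal sketch of consuming that customer.updated webhook is below. The event parsing follows Stripe's standard event envelope (a type field plus a data.object payload); the field selection and function name are illustrative, not part of any Stripe library.

```ruby
require "json"

# Parse a webhook payload and, if it's a customer.updated event,
# return the refreshed contact fields; otherwise return nil.
def updated_customer_fields(event_json)
  event = JSON.parse(event_json)
  return nil unless event["type"] == "customer.updated"
  customer = event["data"]["object"]
  # Pick out the fields kept fresh for you (illustrative selection).
  { "id" => customer["id"], "email" => customer["email"] }
end

payload = {
  "type" => "customer.updated",
  "data" => { "object" => { "id" => "cus_123", "email" => "a@example.com" } }
}.to_json

updated_customer_fields(payload)
```

A real handler would verify the request came from Stripe and persist the fields to your own customer records.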

During our beta, we worked to make accepting Amex Express Checkout as straightforward as possible. To include the button, just configure the helper library with a client_id that you can generate in the Dashboard:

<amex:init client_id="30e73189-96f4-4797-83d4-6730bf6bed19"
  env="production" callback="aecCallbackHandler" />
<script src="…"></script>

As with other payment types, you’ll get a card token that you can use to create charges. You’ll also see these payments alongside other payment types in the Dashboard or via the API.

If you’re interested, check out the guide to getting started. If you’ve got any questions or feedback, please let me know!

August 18, 2015

Running three hours of Ruby tests in under three minutes

Nelson Elhage on August 13, 2015

At Stripe, we make extensive use of automated testing to help ensure the stability and reliability of our services. We have expansive test coverage for our API and other core services, we run tests on a continuous integration server over every git branch, and we never deploy without green tests.

The size and complexity of our codebase has grown over the past few years—and so has the size of the test suite. As of August 2015, we have over 1400 test files that define nearly 15,000 test cases and make over 130,000 assertions. According to our CI server, the tests would take over three hours if run sequentially.

With a large (and growing) group of engineers waiting for those tests with every change they make, the speed of running tests is critical. We’ve used a number of hosted CI solutions in the past, but as test runtimes crept past 10 minutes, we brought testing in-house to give us more control and room for experimentation.

Recently, we’ve implemented our own distributed test runner that brought the runtime of our tests to just under three minutes. While some of these tactics are specific to our codebase and systems, we hope sharing what we did to improve our test runtimes will help other engineering organizations.

Forking executor

We write tests using minitest, but we've implemented our own plugin to execute tests in parallel across multiple CPUs on multiple different servers.

In order to get maximum parallel performance out of our build servers, we run tests in separate processes, allowing each process to make maximum use of the machine's CPU and I/O capability. (We run builds on Amazon's c4.8xlarge instances, which give us 36 cores each.)

Initially, we experimented with using Ruby's threads instead of multiple processes, but discovered that using a large number of threads was significantly slower than using multiple processes. This slowdown was present even if the Ruby threads were doing nothing but monitoring subprocess children. Our current runner doesn’t use Ruby threads at all.

When a test run starts, we first load all of our application code into a single Ruby process so we don’t have to parse and load our Ruby code and gem dependencies multiple times. This process then calls fork a number of times to produce N different processes that’ll each have all of the code pre-loaded and ready to go.

Each of those workers then starts executing tests. As they execute tests, our custom executor forks further: Each process forks and executes a single test file’s worth of tests inside the child process. The child process writes the results to the parent over a pipe, and then exits.

This second round of forking provides a layer of isolation between tests: If a test makes changes to global state, running the test inside a throwaway process will clean everything up once that process exits. Isolating state at a per-file level also means that running individual tests on developer machines will behave similarly to the way they behave in CI, which is an important debugging affordance.
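A minimal sketch of this fork-per-file layer is below. It is illustrative, not Stripe's actual minitest plugin: the worker forks a throwaway child per test file, the child reports over a pipe, and any global-state changes die with the child.

```ruby
# Fork a throwaway child to run one test file's worth of tests.
# The child writes its result to a pipe and exits; any changes it
# made to global state disappear when the process does.
def run_file_in_child(file)
  reader, writer = IO.pipe
  pid = fork do
    reader.close
    # ... load and run the tests in `file` here; we fake a result ...
    writer.write("#{file}: passed")
    writer.close
    exit!(0)               # skip at_exit hooks in the throwaway child
  end
  writer.close
  result = reader.read     # reads until the child closes its end of the pipe
  Process.wait(pid)        # reap the child so it doesn't linger as a zombie
  result
end

results = %w[a_test.rb b_test.rb c_test.rb].map { |f| run_file_in_child(f) }
```

In the real runner the workers themselves are forked from a single pre-loaded parent, so each child here already has all application code in memory.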


The custom forking executor spawns a lot of processes, and creates a number of scratch files on disk. We run all builds at Stripe inside of Docker, which means we don't need to worry about cleaning up all of these processes or this on-disk state. At the end of a build, all of the state—be that in-memory processes or on disk—will be cleaned up by a docker stop, every time.

Managing trees of UNIX processes is notoriously difficult to do reliably, and it would be easy for a system that forks this often to leak zombie processes or stray workers (especially during development of the test framework itself). Using a containerization solution like Docker eliminates that nuisance, and eliminates the need to write a bunch of fiddly cleanup code.

Managing build workers

In order to run each build across multiple machines at once, we need a system to keep track of which servers are currently in-use and which ones are free, and to assign incoming work to available servers.

We run all our tests inside of Jenkins. Rather than writing custom code to manage worker pools, we (ab)use a Jenkins plugin called the matrix build plugin.

The matrix build plugin is designed for projects where you want a "build matrix" that tests a project in multiple environments. For example, you might want to build every release of a library against several versions of Ruby and make sure it works on each of them.

We misuse it slightly by configuring a custom build axis, called BUILD_ROLE, and telling Jenkins to build with BUILD_ROLE=leader, BUILD_ROLE=worker1, BUILD_ROLE=worker2, and so on. This causes Jenkins to run N simultaneous jobs for each build.

Combined with some other Jenkins configuration, we can ensure that each of these builds runs on its own machine. Using this, we can take advantage of Jenkins worker management, scheduling, and resource allocation to accomplish our goal of maintaining a large pool of identical workers and allocating a small number of them for each build.


Once we have a pool of workers running, we decide which tests to run on each node.

One tactic for splitting work—used by several of our previous test runners—is to split tests up statically. You decide ahead of time which workers will run which tests, and then each worker just runs those tests start-to-finish. A simple version of this strategy just hashes each test and takes the result modulo the number of workers; sophisticated versions can record how long each test took, and try to divide tests into groups of equal total runtime.
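The simple hash-modulo scheme amounts to something like the sketch below (illustrative, not one of our actual runners). Note the stable digest in place of Ruby's Object#hash, whose seed is randomized per process: workers on different machines must compute identical shards.

```ruby
require "digest"

# Static allocation: every worker independently computes the same
# partition, so no coordination is needed -- but shards can end up
# badly unbalanced if one happens to get the slow tests.
def tests_for_worker(all_tests, worker_index, num_workers)
  all_tests.select do |t|
    Digest::MD5.hexdigest(t).to_i(16) % num_workers == worker_index
  end
end

files  = %w[alpha_test.rb beta_test.rb gamma_test.rb delta_test.rb]
shards = 3.times.map { |i| tests_for_worker(files, i, 3) }
```

Every file lands in exactly one shard, but nothing in this scheme knows how long any shard will actually take to run.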

The problem with static allocations is that they’re extremely prone to stragglers. If you guess wrong about how long tests will take, or if one server is briefly slow for whatever reason, it’s very easy for one job to finish far after all the others, which means slower, less efficient tests.

We opted for an alternate, dynamic approach, which allocates work in real-time using a work queue. We manage all coordination between workers using an nsqd instance. nsq is a super-simple queue; we already use it in a few other places, so it was natural to adopt it here.

Using the build number provided by Jenkins, we separate distinct test runs. Each run makes use of three queues to coordinate work:

  • The node with BUILD_ROLE=leader writes each test file that needs to be run into the test.<BUILD_NUMBER>.jobs queue.
  • As workers execute tests, they write the results back to the test.<BUILD_NUMBER>.results queue, where they are collected by the leader node.
  • Once the leader has results for each test, it writes "kill" signals to the test.<BUILD_NUMBER>.shutdown queue, one for each worker machine. A thread on each worker pulls off a single event and terminates all work on that node.

Each worker machine forks off a pool of processes after loading code. Each of those processes independently reads from the jobs queue and executes tests. By relying on nsq for coordination even within a single machine, we have no need for a second, machine-local, communication mechanism, which might risk limiting our concurrency across multiple CPUs.

Other than the leader node, all nodes are homogenous; they blindly pull work off the queue and execute it, and otherwise behave identically.

Dynamic allocation has proven to be hugely effective. All of our worker processes across all of our different machines reliably finish within a few seconds of each other, which means we're making excellent use of our available resources.

Because workers only accept jobs as they go, work remains well-balanced even if things go slightly awry: Even if one of the servers starts up slightly slowly, or if there isn't enough capacity to start all four servers right at once, or if the servers happen to be on different-sized hardware, we still tend to see every worker finishing essentially at once.


Reasoning about and understanding performance of a distributed system is always a challenging task. If tests aren't finishing quickly, it's important that we can understand why so we can debug and resolve the issue.

The right visualization can often capture performance characteristics and problems in a very powerful (and visible) way, letting operators spot the problems immediately, without having to pore through reams of log files and timing data.

To this end, we've built a waterfall visualizer for our test runner. The test processes record timing data as they run, and save the result in a central file on the build leader. Some JavaScript d3 code can then assemble that data into a waterfall diagram showing when each individual job started and stopped.

Waterfall diagrams of a slow test run and a fast test run.

Each group of blue bars shows tests run by a single process on a single machine. The black lines that drop down near the right show the finish times for each process. In the first visualization, you can see that the first process (and to a lesser extent, the second) took much longer to finish than all the others, meaning a single test was holding up the entire build.

By default, our test runner uses test files as the unit of parallelism, with each process running an entire file at a time. Because of stragglers like the above case, we implemented an option to split individual test files further, distributing the individual test classes in the file instead of the entire file.

If we apply that option to the slow files and re-run, all the "finished" lines collapse into one, indicating that every process on every worker finished at essentially the same time—an optimal usage of resources.

Notice also that the waterfall graphs show processes generally going from slower tests to faster ones. The test runner keeps a persistent cache recording how long each test took on previous runs, and enqueues tests starting with the slowest. This ensures that slow tests start as soon as possible and is important for ensuring an optimal work distribution.
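The slowest-first ordering is a one-liner given the timing cache (an illustrative sketch; the function name and the cache shape are assumptions). Files with no recorded time get an infinite estimate, so brand-new tests are also scheduled early rather than risking a late straggler.

```ruby
# Enqueue files slowest-first based on durations from previous runs.
# Unmeasured files sort to the very front (treated as infinitely slow).
def enqueue_order(files, timing_cache)
  files.sort_by { |f| -timing_cache.fetch(f, Float::INFINITY) }
end

cache = { "a_test.rb" => 12.0, "b_test.rb" => 340.5, "c_test.rb" => 2.2 }
order = enqueue_order(%w[a_test.rb b_test.rb c_test.rb new_test.rb], cache)
```

Combined with dynamic allocation, this means a long-running file is picked up at the start of the build, when there's still plenty of other work to fill the remaining workers.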

The decision to invest effort in our own testing infrastructure wasn't necessarily obvious: we could have continued to use a third-party solution. However, spending a comparatively small amount of effort allowed the rest of our engineering organization to move significantly faster—and with more confidence. I'm also optimistic this test runner will continue to scale with us and support our growth for several years to come.

If you end up implementing something like this (or have already), send me a note! I'd love to hear what you've done, and what's worked or hasn't for others with similar problems.

August 13, 2015