Andy Brody, April 9, 2014

By now, you've probably been bombarded with announcements about the Heartbleed bug from a number of different sources. People have released vulnerability-checking tools, walkthroughs of the vulnerable code, and commentary about the overall implications of this bug.

The long and short of it is that most of the Internet was vulnerable to Heartbleed. Most SSL bugs at worst let attackers decrypt intercepted traffic. This one was more severe because it also allowed an attacker to read the memory of a remote SSL process, meaning that cryptographic keys could also have been compromised.

While we have no reason to believe that this vulnerability has been used to attack us, we take a very cautious approach to security. Sometimes that's adding stripe.com to the Chrome HSTS pre-loaded list; sometimes that's tuning our ciphers for perfect forward secrecy (which prevents an attacker with your compromised keys from decrypting past SSL sessions). In this case, it was responding under the assumption that public exploits were just hours away.
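As a generic illustration (not our exact production configuration), forward secrecy in nginx comes down to preferring ephemeral key exchange in the cipher list:

```nginx
# Sketch only, not our actual config. ECDHE key exchange generates
# fresh session keys per connection, so a later compromise of the
# server's long-term private key can't decrypt recorded traffic.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:!aNULL:!MD5:!RC4';
```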

Our response

One of the most important responsibilities of a security team is to respond to critical vulnerabilities as quickly as possible. With a bug like Heartbleed, there's a limited window between when the vulnerability is announced, public patches are released, and exploit code becomes freely available for any script kiddie to use. The right strategy is sometimes to wait for vendor-supplied packages to be available, but in other cases (such as with the CRIME vulnerability) we've been able to patch faster by building our own packages.
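If you run your own servers, a quick way to gauge your exposure is your OpenSSL version and build date (1.0.1 through 1.0.1f were vulnerable; 1.0.1g has the fix):

```shell
# Print the OpenSSL version; 1.0.1 through 1.0.1f are vulnerable to
# Heartbleed, and 1.0.1g is fixed.
openssl version
# Distros often backport security fixes without bumping the version
# string, so the build date matters too: patched builds date from
# April 7, 2014 or later.
openssl version -b
```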

Here was the timeline of our response (all in Pacific time on Monday):

  • 11:29 AM: We were alerted to Heartbleed. We noticed Ubuntu had yet to release packages, so we proactively started building our own.
  • 2:30 PM: Shortly after we finished building our packages, Ubuntu released theirs.
  • 3:45 PM: We had fixes rolled to all our Internet-facing servers.
  • 4:10 PM: The first public exploit code we know of was released.

Since then, we've worked around the clock on rolling our SSL keys, upgrading our internal servers, and revoking the old keys (all now completed). We'll be invalidating all existing login sessions shortly, so don't be surprised if you have to log back into your Stripe account. We are also upgrading our client libraries to support certificate revocation; we'll post an update when this is done.

What you should do

Here are some concrete steps you should take to improve the security of your Stripe account:

  1. Change your password — remember to use a unique password rather than sharing across sites.
  2. Enable two-step verification — this will make it harder for attackers to access your account if your password is compromised.
  3. Rotate your API keys — as an added security measure, we'll start recommending that all our users roll their keys at least every 6 months.

In the coming days, we'll send out an email to all of our users describing these steps in more detail.

If you have any questions or concerns, please don't hesitate to get in touch.

Further reading

  • Matthew Green has a good blog post explaining Heartbleed and its implications.
  • The Hacker News thread about Heartbleed is quite informative.
  • Stripe CTF and Stripe CTF 2.0 are both good ways to get hands-on experience with security vulnerabilities like this.
  • Adam Langley's blog is a great source on SSL internals. (Adam was incidentally one of the co-authors of the Heartbleed patch.)

Dynamic statement descriptions

Jeff Balogh, March 14, 2014

You can now add a per-charge description—a “dynamic descriptor” in the industry jargon—to transactions on your customers’ credit card statements. This description shows up alongside your business name: if RunClub specified “5K Race Ticket” when creating the charge, the customer would see “RUNCLUB 5K RACE TICKET” on their statement. The additional context can help reduce confusion, customer enquiries, and chargebacks.
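As a rough sketch of how the statement line in that example comes together (exact formatting and truncation are up to Stripe and the card networks, which typically cap descriptors at around 22 characters):

```shell
# Illustrative only: uppercase the business name plus the per-charge
# description, the way the RunClub example reads on a statement.
business="RunClub"
description="5K Race Ticket"
printf '%s %s\n' "$business" "$description" | tr '[:lower:]' '[:upper:]'
# → RUNCLUB 5K RACE TICKET
```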

You can use the description for anything you like: the product purchased, the plan the customer subscribed to, or the name of the seller on a marketplace. To get started, just pass the statement_description parameter when creating a charge:

curl https://api.stripe.com/v1/charges \
  -u sk_test_mkGsLqEW6SLnZa487HYfJVLf: \
  -d customer=cus_6SLnZan487He \
  -d amount=400 \
  -d currency=usd \
  -d statement_description="5K Race Ticket"

You can also specify a statement_description on a plan. If you do so, every charge created for a subscription to that plan will automatically include the description.

We added this feature based on your feedback—if you’ve got other suggestions, we'd love to hear from you.

The new Checkout

Michaël Villar, March 5, 2014

We launched the first version of Checkout a year ago. From the start, the goal has been to create the best possible payments flow: no site should have to duplicate the work of device optimization and A/B testing. When you use Checkout, you have a team of Stripe engineers and designers continually optimizing your payment interface. Today, we’ve released a new version of Checkout.

You should really play around with our showcase that demonstrates the UI. But, briefly, the new parts are:

Expanded address support

  • Our revamped address handling now supports separate billing and shipping addresses—one of the most common feature requests.
  • Entering addresses is now streamlined—Stripe will automatically select the user’s country, and we can fill in their city and state when they enter a zip code.
  • We now support integrated billing address verification. This means that users are notified immediately if they’ve mistyped anything.

Even more device optimization

Mobile usage continues to explode. We’ve learned a lot about what works and what doesn’t over the past year, and we’ve redesigned Checkout from scratch for every device. Checkout is gorgeous on Android, iOS, Windows Phone, OS X, Windows, tablet, desktop, and mobile.

Remember me everywhere

We’ve added a small “Remember me” checkbox that allows your customers to save their payment info by providing a mobile phone number. This in turn enables a rapid checkout experience in the future on any site using Checkout. Stripe securely identifies your users via text message so they don’t have to retype their payment information or remember an additional password. We’ve been testing this for the past couple of months—our hypothesis was that it would increase conversion rates—and we’re delighted that it has been confirmed. In a recent test, a customer with their details saved was several times less likely to abandon their purchase than one without. You should watch our demo if you’re curious how it works.


We’ve already tested the above across thousands of sites and millions of transactions. We closely monitor conversion rates across every device and browser on an ongoing basis. Our goal is to identify opportunities to increase your revenue.

Get started

If you’ve already integrated Checkout, you get this new functionality without any additional work on your part. We’ve updated our documentation with info on how to integrate our new features, like shipping addresses and pre-filling email addresses.

If you’re ready to start, learn more about using Checkout or try our tutorial to get up and running in minutes. If you have any feedback, we’d love to hear from you!


Events in Melbourne and Sydney

Susan Wu, February 13, 2014

We're running our first set of events in Australia next week. Hopefully you can come by for one or more!

Melbourne Tech Talk: Engineering that scales

Inspire9's Nathan Sampimon will moderate a conversation between Stripe CTO Greg Brockman and 99designs CTO Lachlan Donald about how the two companies have structured their engineering cultures to optimise for growth. Event starts at 6pm.

GitHub and Stripe Drinks in Sydney

In honour of this year’s RubyConf, we're coming together with GitHub to co-host drinks at the King Street Brewhouse on Wednesday 19 February. Come by starting at 8pm for a beer on us and to catch up with Australia's developer community.

Office Hours in Sydney at ATP Innovations/Startmate HQ

Need specific integration help or have questions about getting started with Stripe? Come to our open office hours where we'll help answer any questions you may have. We'll be available from 5-6 PM on a first-come, first-served basis.

Sydney Tech Talk: Engineering that scales

Join Greg and Startmate/Blackbird Ventures' Partner Niki Scevak for an interactive conversation about building an engineering organisation that can scale to billions of dollars in transactions. Come prepared with questions, since we'll open the floor for Q&A. Event starts at 6pm.

Keynote at RubyConf

Greg will discuss what the developer community needs to do in order to maintain Ruby's position in the long run. Talk starts at 9am. After the talk, come hang out with us in the conference lobby — we'll be happy to chat about Stripe, Ruby, or anything else that's on your mind.

Have questions about Stripe in Oz? Email me. We're looking forward to seeing you!

135 new currencies

Thairu, February 11, 2014

As the internet’s global reach grows, Stripe users increasingly sell to worldwide audiences. Companies like DailyMotion in Paris, HubSpot in Boston, and Tito in Dublin build products that are popular everywhere, not just in their home markets.

Starting today, businesses using Stripe in the US and Europe can accept payments in 139 currencies. You can create charges in any of these currencies, and we'll automatically handle converting and transferring funds to you in your home currency. Currency conversion incurs a 2% fee atop market exchange rates.
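To make the fee concrete, here's the arithmetic for a hypothetical charge (the exchange rate below is made up for illustration, and rounding details are simplified):

```shell
# A ¥30.00 CNY charge settling to a USD account, using a hypothetical
# rate of 1 CNY = 0.1500 USD. All amounts are in minor units.
amount=3000                            # charge amount in fen
rate_bp=1500                           # rate in basis points (0.1500 USD/CNY)
gross=$((amount * rate_bp / 10000))    # 450 cents USD at market rate
fee=$((gross * 2 / 100))               # 2% conversion fee: 9 cents
net=$((gross - fee))                   # 441 cents deposited
echo "gross=$gross fee=$fee net=$net (cents USD)"
```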

With this change, you can easily tailor your pricing for different geographies. Localized pricing increases checkout completion rates by eliminating uncertainty for your customers and letting them avoid conversion fees.

You don’t need to do anything to enable this in your account—you can simply start passing the currencies throughout the API:

curl https://api.stripe.com/v1/charges \
  -u sk_test_mkGsLqEW6SLnZa487HYfJVLf: \
  -d amount=3000 \
  -d currency=cny \
  -d description="Premium plan" \
  -d customer=cus_X3he9Ex2Aenkc

The currency conversion rate is calculated in real time and can be checked immediately by retrieving the relevant balance transaction via the API or simply by navigating to the charge in the dashboard.

Particular thanks to our beta testers for helping us refine this feature, especially Couchsurfing and Edmodo. As ever, we’d love to hear your feedback!

CTF3 architecture

Greg Brockman, February 4, 2014

In philosophy, CTF3 was the same as our previous CTFs: we gave people a chance to solve problems they normally would only get to read about. However, in terms of infrastructure, this was by far our most complex CTF: we needed to build, run, and test arbitrary distributed systems code. In the course of the week it was live, our 7,500 participants pushed over 640,000 times, meaning we needed a scalable and robust architecture that provided isolation between users.

Participants have released a number of walkthroughs for the actual levels, so we won't be releasing official solutions here. Instead, we'll give you a tour of how we made the systems work. (If you'd prefer to see this in video form, we've just released the video from our CTF3 wrapup.)

As an aside, the architecture for CTF reflects a lot of what we've learned in building Stripe. If you're interested in this kind of thing, we're hiring engineers in San Francisco and remotely within US timezones. I also wrote a Quora post about the problems we're working on. (It turns out we do things besides just building CTFs :).)


CTF3 consisted of five levels. Most of the levels looked pretty similar from a high level: the user would push some code to us, we'd run it in a sandbox environment, and then we'd return a score. The one exception was the Gitcoin level, where we would just validate Git commits people had mined locally (or on their cloud vendor).

Code was submitted to us in the simplest possible way: you just ran git push. On the backend, we received your code via git-shell and used wrappers and commit hooks to implement the CTF-specific logic.

The "wrappers and commit hooks" had a lot of moving parts, though. One important design goal was to decouple components and make it possible to horizontally scale any given piece of the system. Stateful pieces were few in number and were constrained to be low volume. In the following sections, we'll go into detail about how all the pieces worked, but here's how things roughly fit together:

Submission pipeline

Wondering what actually happened after you ran git push? The following steps were common between all levels.

  1. You resolved stripe-ctf.com to the public IP for one of our gate frontend servers.

  2. You connected to port 22 on your chosen gate server. An haproxy daemon load-balanced your traffic to one of our submitter boxes. We had three submitter boxes in the pool for much of the event.

    As an optimization, the load balancing used IP stickiness to route you to the same submitter backend on each connection. The submitters were mostly stateless: all that they held was the code you were pushing and convenience tags for each submission. If you'd committed a large blob, though, being routed to the same submitter was nice, since you wouldn't have to re-upload it on each push.

    In previous CTFs, rather than load balancing, we'd just exposed our machine hostnames (so you'd connect directly to e.g. level0-01.stripe-ctf.com). In that case, it was hard to drop a machine out of the pool or rebalance traffic. Controlling the load balancing here gave us operational flexibility at the cost of additional constraints on our system design (e.g. haproxy knew only your IP address, so we couldn't do stickiness based on username).

  3. The public-facing sshd on your chosen submitter received the username we'd given you in the web interface, which looked like lvl0-ohngii5M.

    We'd configured our PAM stack to use LDAP. So that we could share the user database with the web interface, we put together a quick-and-dirty LDAP server implementation (called fakeuser) that grabbed usernames directly out of our central database. The users had empty passwords, which (given appropriate settings in sshd.conf and PAM) meant that you could log in without pasting a password or giving us your SSH key. Of course, the downside was that your username became a secret credential.

  4. At this point, the sshd ran your user's shell, which was a custom script in /usr/local/bin/login-shell. The shell was pretty simple: it set some environment variables, took out an exclusive flock on a per-user file, and then (conceptually) ran a bunch of Ruby code that did all of the level-specific work.

    At first, we'd actually spawn a new Ruby interpreter and load our code on each login. This turned out to be untenable. First of all, loading Bundler plus all our code took a few seconds, which was way too slow for a login session. So we split out the code intended for just the login session into a module we called CTF3NoBundler. This was painful to manage, and meant the no-bundler code couldn't use most of the libraries we were writing over in Bundler-land.

    Even with this split, it still took about 100-200 milliseconds to load our code, which was effectively all CPU time. When we tested continuously running about 20 concurrent logins, the submitter box ground to a halt under the load. We had effectively DoSed ourselves through the work of loading the same code over and over again.

    At this point, perhaps the most obvious thing to do would be to rewrite in a faster-loading language. However, there's actually a decent amount of code involved in submission, and there was nothing wrong with the code once it was up and running. So instead, we decided to try a load-once, fork-for-each-login model. We took a look at using Zeus for this purpose. It's a cool tool, but unfortunately it's aimed at development rather than production, and doesn't have the kind of robust failure handling we'd need for something as core as this. Instead, we wrote a simpler implementation based on similar ideas, called Poseidon.
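To give a feel for the login shell in step 4, here's a minimal sketch (the paths and names are illustrative, not our actual script):

```shell
#!/bin/sh
# Hypothetical sketch of a login-shell: serialize each user's sessions
# with an exclusive per-user lock before handing off to the level logic.
LOCK_DIR="${TMPDIR:-/tmp}/ctf-locks"
user="${USER:-player}"
mkdir -p "$LOCK_DIR"
# Open fd 9 on the lock file and block until we hold the lock;
# concurrent logins for the same user queue up here.
exec 9>"$LOCK_DIR/$user.lock"
flock 9
# ...in the real system, a Poseidon client would run the level-specific
# work here; we just report success...
echo "lock acquired for $user"
```

The real script also set some environment variables before handing off, as described above.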

Standard pipeline

Here's the point at which Gitcoin and the standard pipelines diverged. The remainder of the standard submission pipeline looked like the following:

  1. Next, we constructed your user's level repository (that is, the actual repository that you would clone) if it didn't already exist on disk. This lazy assembly meant we didn't have to waste disk space on users until they'd actually fetched some code.

  2. In the case of a pull, we would just run git-shell and be done with it. Pushes had a lot more going on, however.

  3. In order to make submission as easy to test drive as possible, we wanted it to be possible to git push straight from a fresh clone. So before running git-shell, we played some branch renaming tricks.

  4. We then invoked git-shell, which in turn invoked a post-receive hook. The hook was also implemented as a Poseidon client for fast boot.

  5. The post-receive code in the Poseidon master then served as the coordinator of your scoring run. First, it called out to a test_case_assigner service, which ran on the singleton colossus server. For this and other services that required synchronous responses, we used the Ruby Thrift abstractions we use internally at Stripe.

    The test_case_assigner simply grabbed some free test case records from the database, marked them as allocated, and then returned the resulting cases. These test cases were originally created by the test_case_generator daemon (running on the testasaurus boxes — ok, we ran out of good names at some point). The generator simply ran our benchmark solution against random test cases. We stored metadata in our database, with the actual blob data stored on S3 so your client could later download it.

  6. Once the post-receive hook had its test cases, it started listening on two new RabbitMQ queues: one for results and one for output to display to the user. The hook then submitted a build RPC over RabbitMQ. We used RabbitMQ as a buffer for RPCs that we expected might get backed up, or where a synchronous response wasn't needed.

  7. At the other end of the queue was a builder daemon, running on one of our aptly-named build boxes. Upon receiving the RPC, the daemon fetched the code from the relevant submitter's git-daemon into a temporary directory.

    The builder then asked a central build_cacher Thrift service if the built commit was cached. Assuming not, the builder spawned a Docker container with your code mounted at your user's home directory and ran ./build.sh in the container. We then streamed back the first few hundred KB of output.

    The builder then tarred up your output directory and generated a RabbitMQ score RPC for each test case. The score RPC contained a URL to fetch the tarball from an nginx running on the build box. Finally, the builder uploaded the built tarball to S3 and informed the build_cacher about the new SHA1.

    In the cached case, the builder just short-circuited this logic and sent the score RPCs right away.

  8. Each score RPC was serviced by an executor daemon on a worker box. The executor fetched the build product and then spawned a new container with the code mounted into it. It then (finally!) ran your code, again streaming output back to you. Once complete, the executor determined the results of your trial and then sent a result RPC back to the post-receive hook.

  9. The post-receive hook aggregated your results and from there compiled a final result. It sent a single FinalScoreRPC representing the results of the test run to RabbitMQ.

  10. At the other end of the wire, a resulter daemon hung around on the colossus box waiting to consume the FinalScoreRPC. Upon consuming the RPC, it updated your user's high score.
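The build caching in step 7 is easy to get a feel for with a toy version (a flat directory stands in for the real build_cacher Thrift service and S3):

```shell
# Toy sketch of commit-keyed build caching: build once per SHA-1,
# short-circuit on a cache hit.
CACHE_DIR=$(mktemp -d)
build_once() {
  sha="$1"
  if [ -e "$CACHE_DIR/$sha.tar" ]; then
    echo "cache hit: $sha"
  else
    # ...the real builder ran ./build.sh in a Docker container and
    # uploaded the tarball to S3; we just record the key...
    touch "$CACHE_DIR/$sha.tar"
    echo "built and cached: $sha"
  fi
}
build_once 4b825dc6   # first push: builds
build_once 4b825dc6   # same commit again: cache hit
```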

Gitcoin pipeline

Gitcoin had its own architecture. Since we didn't need to run any of your code (we just needed to validate the purported Gitcoin), we could get by with a lot less complexity.

Our mining bots

To clear the level, you just needed to mine faster than our bots. The obvious design is to spawn a new miner for each end-user. However, this would be pretty expensive, as we'd have to be mining hundreds of Gitcoins at any one time.

So instead, we ran miners against a single central repository on the gitcoin box, which produced a steady stream of Gitcoins. Each submitter had a gitcoin daemon whose job was to periodically fetch from the central repository and then release at most one new commit to a machine-local Gitcoin instance.

We'd started out with a coin release interval of 25 + rand(20) seconds, but after seeing how many people were struggling to mine that quickly, we slowed it to a flat 90 seconds.
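The mining loop itself (which our bots and participants both had to implement) boils down to a brute-force search. Here's a simplified stand-in that hashes a toy payload rather than real Git commit objects:

```shell
# Toy miner: bump a nonce until the SHA-1 of the payload starts with
# the difficulty prefix. Real Gitcoin hashed actual commit objects
# (e.g. via git hash-object) against a much longer prefix.
difficulty="00"
nonce=0
while :; do
  sha=$(printf 'tree deadbeef\nnonce %s\n' "$nonce" | sha1sum | cut -c1-40)
  case "$sha" in
    "$difficulty"*) break ;;
  esac
  nonce=$((nonce + 1))
done
echo "mined: nonce=$nonce sha=$sha"
```

Each extra leading zero of difficulty multiplies the expected number of hashes by 16.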

When you pushed, we had a git update hook which would perform a bunch of sanity checks to ensure it was a valid Gitcoin. Once your commit was accepted, the bots had to stop because our pre-mined Gitcoins wouldn't apply cleanly to your repository.

Gitcoin bonus round

In this round, we pitted everyone against each other in a master Gitcoin instance. Conveniently for us, we didn't have to run our own miners, since people provided plenty of competition for each other.

The architecture here was a single shared global Gitcoin repository (created using git init --bare --shared=all) on the gitcoin box. The submitters maintained their own clones of this repository.

On pull, you just hit the submitter repository. On push, the commit was validated by the submitter, which then pushed (via a new SSH connection) to the backend gitcoin box. If the backend push was successful, a Thrift service on the gitcoin box would synchronously push the new commit to all other submitters.

One consequence of this architecture was that submitting Gitcoins was decently slow — we weren't maintaining persistent connections to the backend gitcoin server, so there was a decent amount of overhead. We compensated for this by tuning the difficulty to ensure the time to mine a coin was large compared to the time to complete a push. By the contest's end, the difficulty was 0000000005, a full 4 (!) orders of magnitude harder than the difficulty we'd started with.

I hope you had as much fun playing CTF3 as we had building it. If you're curious about any details I didn't cover here, feel free to send me an email.

Multiple subscriptions

Jim Danz, February 4, 2014

Stripe has offered subscription billing functionality from day 1. From day 2, we’ve been hearing requests to support more than one active subscription per customer. As with cards last July, you can now (finally!) create, retrieve, list, update, and delete subscriptions as first-class API resources on customers:

curl https://api.stripe.com/v1/customers/cus_3R12DmsmM9/subscriptions \
  -u sk_test_mkGsLqEW6SLnZa487HYfJVLf: \
  -d plan=extra_bandwidth_package

Subscriptions can also be managed from the dashboard and there’s full support for per-subscription invoices, invoice items, and discounts.

As with a lot of what we launch, this has been in the works for a while, and many early testers provided great feedback—thanks to all of you, and especially to Michael Belfrage from Udacity. (If you’d like to test out their integration, just enroll in multiple Udacity courses. Personally, I recommend CS101 and CS271.)

Our API documentation has been updated to reflect these changes, but if you’re an existing user looking to see exactly what’s new, this overview might be useful.

Lastly, if you’re interested in testing out new functionality like this in the future, you may find it useful to subscribe to our api-discuss mailing list. (Fortunately, the list also supports multiple subscriptions.) And if you have any feedback, please let me know!

CTF3 wrap-up

Siddarth Chandrasekaran, January 28, 2014

To wrap up Stripe CTF3, we're hosting meetups in San Francisco and London. The architects of CTF3 will go over each of the levels, the motivation behind them, and the ways that participants solved them.

Come by, enjoy the free drinks and snacks, learn to solve the levels, and hang out with fellow participants! People of any technical skill level are welcome.

San Francisco
Thursday, January 30th, 2014
7:00 PM
Stripe HQ
3180 18th Street
San Francisco, CA 94110

London
Friday, January 31st, 2014
6:30 PM
General Assembly
4th Floor, 9 Back Hill
London EC1R 5EN

Hope to see you there!

Open betas in Europe

Michelle Bu, January 22, 2014

Over the last year, we’ve launched betas—and added production users—in Australia, Belgium, Finland, France, Germany, Luxembourg, the Netherlands, and Spain. We’ve also launched fully in the UK and Ireland.

Up to now, we’ve required an invite if you’re based in one of our beta countries. Starting today, anyone in a beta European country (Belgium, Finland, France, Germany, Luxembourg, the Netherlands, and Spain) can sign up without an invite. These countries are still in beta, but we’ve realized that they’re useful to enough people that we should allow anyone to sign up immediately.

We look forward to adding many more countries and continuing to launch across Europe over the course of 2014. As ever, please let us know if you have any feedback!


Stripe CTF3: Distributed Systems

Greg Brockman, January 22, 2014

We’re proud to launch Capture the Flag 3: Distributed Systems. Without further ado, you can now jump in and start playing. If you complete all the levels, we'll send you a special-edition Stripe CTF3 T-shirt.

For those seeking further ado: we’ve found that the best way to teach people to build good systems is by giving them hands-on experience with problems that even expert developers may only occasionally get the chance to solve. We’ve run two previous Capture the Flags, both of which gave participants an interesting, hands-on way to experiment with exploiting security vulnerabilities.

Problems that follow this pattern—interesting, educational, rarely encountered—occur in many places outside security, though, and we've made Capture the Flag 3 focus on distributed systems. There are five levels, each focused on a different problem in the field. In all cases, the problem is one you’ve likely read about many times but never had a chance to try out in practice.

If you’d like to see how others are doing, we have leaderboards (for those who’ve opted in). You can also create a leaderboard for your group or company if you’d like to compete against your friends. We have CTF community chat on IRC at irc://irc.stripe.com:+6697/#ctf (also available via our web client). If you'd rather use Twitter than IRC, #stripectf is the hashtag for the event.

Above all, we want you to have fun and hopefully learn something in the process. If you get lost, we’ve provided beginners’ guides for each level which should point you in the right direction.

CTF3 will run for a week (so until 11am Pacific on January 29th). Happy hacking!
