Stripe’s payments APIs: the first ten years

Michelle Bu on December 15, 2020 in Engineering

A few years ago, Bloomberg Businessweek published a feature story on Stripe. Four words spanned the center of the cover: “seven lines of code,” suggesting that’s all it took for a business to power payments on Stripe. The assertion was bold—and became a theme and meme for us.

To this day, it’s not entirely clear which seven lines the article referenced. The prevailing theory is that it’s the roughly seven lines of curl it took to create a Charge. However, a search for the seven lines of code ultimately misses the point: the ability to open up a terminal, run this curl snippet, then immediately see a successful credit card payment felt as simple as seven lines of code. It’s unlikely that a developer believed a production-ready payments integration involved literally only seven lines of code. But taking something as complex as credit card processing and reducing the integration to only a few lines of code that, when run, immediately returns a successful Charge object is really quite magical.

Abstracting away the complexity of payments has driven the evolution of our APIs over the last decade. This post provides the context, inflection points, and conceptual frameworks behind our API design. It’s exceedingly rare for an approach to APIs to make the cover of a business magazine; this post shares a bit more of how we’ve grown around and beyond those seven lines.

A condensed history of Stripe’s payments APIs

Successful products tend to organically expand over time, resulting in product debt. Similar to tech debt, product debt accumulates gradually, making the product harder to understand for users and change for product teams. For API products, it’s particularly tempting to accrue product debt because it’s hard to get your users to fundamentally restructure their integration; it’s much easier to get them to add a parameter or two to their existing API requests.

In retrospect, we see clearly how our APIs have evolved—and which decisions were pivotal in shaping them. Here are the milestones that defined our payments APIs and led to the PaymentIntents API.

To design and develop an interactive globe

Nick Jones on September 1, 2020 in Engineering

As humans, we’re driven to build models of our world.

A traditional globemaker molds a sphere, mounts it on an axle, balances it with hidden weights, and precisely applies gores—triangular strips of printed earth—to avoid overlap and align latitudes. Cartographers face unenviable trade-offs when making maps. They can either retain the shape of countries, but warp their size—or maintain the size of countries, but contort their shape. In preserving one aspect of our world, they distort another.

Stripe's remote engineering hub, one year in

Jay Shirley on May 28, 2020 in Engineering

Last May, Stripe launched our remote engineering hub, a virtual office coequal with our physical engineering offices in San Francisco, Seattle, Dublin, and Singapore. We set out to hire 100 new remote engineers over the year—and did. They now work across every engineering group at Stripe. Over the last year, we’ve tripled the number of permanently remote engineers, up to 22% of our engineering population. We also hired more remote employees across all other teams, and tripled the number of remote Stripes across the company.

Like many organizations, Stripe has temporarily become fully remote to support our employees and customers during the COVID-19 pandemic. Distributed work isn’t new to Stripe. We’ve had remote employees since inception—and formally began hiring remote engineers in 2013. But as we grew, we developed a heavily office-centric organizational structure. Last year, we set out to rebalance our mix of remote and centralized working by establishing our virtual hub. It’s now the backbone of a new working model for the whole company.

We think our experience might be interesting, particularly for businesses that haven’t been fully distributed from the start or are considering flipping the switch to being fully remote, even after the pandemic. We’ve seen promising gains in how we communicate, build more resilient and relevant products, and reach and retain talented engineers. Here is what we learned.

Similarity clustering to catch fraud rings

Andrew Tausz on February 20, 2020 in Engineering

Stripe enables businesses in many countries worldwide to onboard easily so they can accept payments as quickly as possible. Stripe’s scale makes our platform a common target for payments fraud and cybercrime, so we’ve built a deep understanding of the patterns bad actors use. We take these threats seriously because they harm both our users and our ecosystem; every fraudulent transaction we prevent spares someone a bad day.

We provide our risk analysts with automated tools to make informed decisions while sifting legitimate users from potentially fraudulent accounts. One of the most useful tools we’ve developed uses machine learning to identify similar clusters of accounts created by fraudsters trying to scale their operations. Many of these attempts are easy to detect, and we can reverse engineer the fingerprints they leave behind to shut them down in real time. In turn, this allows our analysts to spend more time on sophisticated cases that have the potential to do more harm to our users.

Fraud in the payments ecosystem

Fraud can generally be separated into two large categories: transaction fraud and merchant fraud. Transaction fraud applies to individual charges (such as those protected by Radar), where a fraudster may purchase items with a stolen credit card to resell later.

Merchant fraud occurs when someone signs up for a Stripe account to later defraud cardholders. For example, a fraudster may attempt to use stolen card numbers through their account, so they’ll try to provide a valid website, account activity, and charge activity to appear legitimate. The fraudster hopes to be paid out to their bank account before Stripe finds out. Eventually, the actual cardholders will request a chargeback from their bank for the unauthorized transaction. Stripe will reimburse chargebacks to issuing banks (and by proxy, the cardholders) and attempt to debit the fraudster’s account. However, if the fraudster has already been paid out, it may be too late to recover those funds, and Stripe ultimately covers those costs as fraud losses.

Fraudsters also may attempt to defraud Stripe at a larger scale by setting up a predatory or scam business. For example, the fraudster will create a Stripe account, claiming to sell expensive apparel or electronics for low prices. Unsuspecting customers think they are getting a great deal, but they never receive the product they ordered. Once again, the fraudster hopes to be paid out before they are shut down or overwhelmed with chargebacks.

Using similarity information to reduce fraud

Fraudsters tend to create Stripe accounts with reused information and attributes. Typically, low-effort fraudsters will not try to hide links to previous accounts, and this activity can be detected immediately at signup. More sophisticated fraudsters will put more work into hiding their tracks in order to prevent any association with prior fraud attempts. Some attributes like name or date of birth are trivial to fabricate, whereas others are more difficult—for example, it requires significant effort to obtain a new bank account.

Linking accounts together via shared attributes is reasonably effective at catching obvious fraud attempts, but we wanted to move from a system based on heuristics to one powered by machine learning models. While heuristics may be effective in certain cases, machine learning models are significantly more effective at learning predictive rules.

Suppose a pair of accounts is assigned a similarity score based on the number of attributes they share. This similarity score could then help predict future behavior: if an account looks similar to a known fraudulent account, it is significantly more likely to also be fraudulent. The challenge is to accurately quantify similarity. For example, two accounts that share a date of birth should have a lower similarity score than two accounts that share a bank account.

By training a machine learning model, we remove the need for guesswork and hand-constructed heuristics. Now, we can automatically retrain the model over time as we obtain more data. Automatic retraining enables our models to continually improve in accuracy, adapt to new fraud trends, and learn the signatures of particular adversarial groups.

Choosing a clustering approach

Machine learning tasks are generally classified as either supervised or unsupervised. The goal of supervised learning is to make predictions given an existing dataset of labeled examples (for example, a label that indicates whether an account is fraudulent), whereas in unsupervised learning the usual goal is to learn a generative model for the raw data (in other words, to understand the underlying structure of the data). Traditionally, clustering tasks fall into the class of unsupervised learning: unlabeled data needs to be grouped into clusters that capture some notion of similarity or likeness.

Fortunately, we’re able to use supervised models, which are generally easier to train and may be more accurate. We already have a large body of data demonstrating whether a given account has been created by a fraudster based on the downstream impact (e.g. we observe a significant number of chargebacks and fraud losses). This allows us to confidently label millions of legitimate and illegitimate businesses from our dataset.

In particular, our approach is an example of similarity learning where the objective is to learn a symmetric function based on training data. Over the years, our risk underwriting teams have manually compiled many examples of existing clusters of fraudulent accounts through our investigations of fraud rings, and we can use these reference clusters as training data to learn our similarity function. By sampling edges from these groups, we obtain a dataset consisting of pairs of accounts along with a label for each pair indicating whether or not the two accounts belong to the same cluster. We use intra-cluster edges as positive training examples and inter-cluster edges as negative training examples, where an edge denotes a pair of accounts.

We use known clusters of accounts to train our predictive models.

Now that we have the labels specified, we must decide what features to use for our model. We want to convert pairs of Stripe accounts into useful model inputs that have predictive power. The feature generation process takes two Stripe accounts and produces a number of features that are defined on the pair. Due to the rich nature of Stripe accounts and their associated data, we can construct an extensive set of features for any given pair. Some examples of the features we’d include are categorical features that store the values of common attributes such as the account’s email domain, any overlap in card numbers used on both accounts, and measures of text similarity.
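
To make this concrete, here’s a minimal sketch of what pairwise feature generation could look like. The attributes and helpers here (email domain, card fingerprints, business-name similarity) are illustrative stand-ins, not Stripe’s actual schema or feature set:

import itertools
from dataclasses import dataclass, field
from difflib import SequenceMatcher

@dataclass
class Account:
    # Hypothetical attributes; real Stripe accounts carry far more data.
    email_domain: str
    business_name: str
    card_fingerprints: set = field(default_factory=set)

def pair_features(a: Account, b: Account) -> dict:
    """Turn a pair of accounts into model inputs defined on the pair."""
    return {
        # Categorical agreement on a shared attribute
        "same_email_domain": int(a.email_domain == b.email_domain),
        # Overlap in card numbers (fingerprints) used on both accounts
        "shared_card_count": len(a.card_fingerprints & b.card_fingerprints),
        # A simple text-similarity measure between business names
        "name_similarity": SequenceMatcher(
            None, a.business_name.lower(), b.business_name.lower()
        ).ratio(),
    }

def featurize_edges(accounts):
    """Produce one feature dict per candidate pair of accounts."""
    return [pair_features(a, b) for a, b in itertools.combinations(accounts, 2)]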

Using gradient-boosted decision trees

Because of the wide variety of features we can construct from given pairs of accounts, we decided to use gradient-boosted decision trees (GBDTs) to represent our similarity model. In practice, we’ve found GBDTs strike the right balance between being easy to train, having strong predictive power and being robust despite variations in the data. When we started this project we wanted to get something out the door quickly that was effective, had well-understood properties, and was straightforward to fine-tune. The variant that we use, XGBoost, is one of the best performing off-the-shelf models for cases with structured (also known as tabular) data, and we have well-developed infrastructure to train and serve them. You can read more about the infrastructure we use to train machine learning models at Stripe in a previous post.
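
As a rough illustration of this setup (not our production code), training a pairwise similarity model with XGBoost’s scikit-learn interface might look like the following; the feature matrix, labels, and hyperparameters are placeholders:

import numpy as np
import xgboost as xgb

# X: one row of pair features per labeled edge (for example, the vectorized
# output of the feature generation sketch above).
# y: 1 for intra-cluster (same fraud ring) pairs, 0 for inter-cluster pairs.
# Both are random placeholders here.
X = np.random.rand(1000, 3)
y = np.random.randint(0, 2, size=1000)

# Illustrative hyperparameters, not Stripe's production settings.
similarity_model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=6,
    learning_rate=0.05,
)
similarity_model.fit(X, y)

# The predicted probability of "same cluster" serves as the similarity score.
scores = similarity_model.predict_proba(X)[:, 1]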

Now that we have a trained model, we can use it to predict fraudulent activity. Since this model operates on pairs of Stripe accounts, it’s not feasible to feed it all possible pairs of accounts and compute scores across all pairs. Instead, we first generate a candidate set of edges to be scored. We do this by taking recently created Stripe accounts and creating edges between accounts that share certain attributes. Although this isn’t an exhaustive approach, this heuristic works well in practice to prune the set of candidate edges to a reasonable number.

Once the candidate edges are scored, we then filter edges by selecting those with a similarity score above some threshold. We then compute the connected components on the resulting graph. The final output is a set of high-fidelity account clusters which we can analyze, process, or manually inspect together as a unit. In particular, a fraud analyst may want to examine clusters which contain known fraudulent accounts and investigate the remaining accounts in that cluster.
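
A simplified sketch of that thresholding and connected-components step, using networkx as a stand-in graph library and made-up account IDs and threshold, could look like this:

import networkx as nx

def build_clusters(scored_edges, threshold=0.9):
    """scored_edges: iterable of (account_id_a, account_id_b, score) tuples.

    Keep only edges whose similarity score clears the threshold, then take
    connected components of the resulting graph as account clusters.
    """
    graph = nx.Graph()
    for a, b, score in scored_edges:
        if score >= threshold:
            graph.add_edge(a, b, weight=score)
    return [set(component) for component in nx.connected_components(graph)]

# Example: two high-scoring edges form one cluster; the low-scoring edge is dropped.
clusters = build_clusters([
    ("acct_1", "acct_2", 0.97),
    ("acct_2", "acct_3", 0.95),
    ("acct_4", "acct_5", 0.40),
])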

This is an iterative process: as each individual cluster grows, we can more quickly flag new fake accounts created as part of a fraudster’s operation. And the more fraud rings we detect and shut down at Stripe, the more accurate our clustering model becomes at identifying new clusters in the future.

Each edge is weighted by a similarity score; we identify clusters by finding connected components in the resulting graph.

Benefits of the clustering system

So far, we’ve discussed the overall structure of the account clustering system. Although we have other models and systems in place to catch fraudulent accounts, using clustering information has the following advantages:

  • We’re even better at catching obvious fraud. It’s difficult for fraudsters to completely separate new accounts from accounts they’ve created in the past, or from accounts created by other fraudsters. Whether this is due to reusing basic attribute data or more complex measures of similarity, the account clustering system catches and blocks hundreds of fraudulent accounts weekly with very few false positives.
  • Fraudsters can only use their resources once. Whenever someone decides to defraud Stripe, they need to invest in resources such as stolen IDs and bank accounts, each of which incurs a monetary cost or inconvenience. In effect, by requiring fraudsters to use a new set of resources every time they create a Stripe account, we slow them down and increase the cost of defrauding Stripe. Clustering is a key tool here since it invalidates resources such as bank accounts that have previously been used on fraudulent accounts.
  • Our risk analysts conduct more efficient reviews. When accounts require manual inspection by an analyst, they spend time trying to understand the intentions and motivations of the person behind the account. Analysts focus on the details of the business to sift legitimate users from a set of identified potentially fraudulent accounts. With the help of our clustering technique, analysts can easily identify common patterns and outliers and apply the same judgments to multiple accounts at once with a smaller likelihood of error.
  • Account clusters are a building block for other systems. Understanding whether two accounts are duplicates or measuring their degree of similarity is a useful primitive that extends beyond the use cases described here. For example, we use the similarity model to expand our training sets for models which have sparse training data.

Catching fraud in action

Stripe faces a multitude of threats from fraudsters who attempt to steal money in creative and complex ways. Identifying similarities between accounts and clustering them together improves our ability to block fraudulent accounts and makes life more difficult for fraudsters who try to create duplicates. One goal of our models is to change the economics of fraud by raising the cost of the unused bank accounts, IP addresses, devices, and other resources fraudsters need. This leads to a negative expected value for fraudsters, weakens the underlying supply chain for stolen credentials and user data, and disincentivizes committing fraud at scale.

We often think about fraud as an adversarial game; uncovering fraudulent clusters allows us to tip the game in our favor. Using common tools like XGBoost enabled us to quickly deploy a solution that fits naturally into our machine learning platform and allows us to easily adapt our approach over time. We’re continuing to explore new techniques to catch fraud and ensure Stripe can reliably operate a low-friction global payment network for millions of businesses.

Stripe CLI

Tomer Elmalem on November 5, 2019 in Engineering

Building and testing a Stripe integration can require frequent switching between the terminal, your code editor, and the Dashboard. Today, we’re excited to launch the Stripe command-line interface (CLI). It lets you interact with Stripe right from the terminal and makes it easier to build, test, and manage your integration.

Designing accessible color systems

Daryl Koopersmith and Wilson Miner on October 15, 2019 in Engineering

Color contrast is an important aspect of accessibility. Good contrast makes it easier for people with visual impairments to use products, and helps in imperfect conditions like low-light environments or older screens. With this in mind, we recently updated the colors in our user interfaces to be more accessible. Text and icon colors now reliably have legible contrast throughout the Stripe Dashboard and all other products built with our internal interface library.

Fast and flexible observability with canonical log lines

Brandur Leach on July 30, 2019 in Engineering

Logging is one of the oldest and most ubiquitous patterns in computing. Key to gaining insight into problems ranging from basic failures in test environments to the most tangled problems in production, it’s common practice across all software stacks and all types of infrastructure, and has been for decades.

Although logs are powerful and flexible, their sheer volume often makes it impractical to extract insight from them in an expedient way. Relevant information is spread across many individual log lines, and even with the most powerful log processing systems, searching for the right details can be slow and requires intricate query syntax.

We’ve found using a slight augmentation to traditional logging immensely useful at Stripe—an idea that we call canonical log lines. It’s quite a simple technique: in addition to their normal log traces, requests also emit one long log line at the end that includes many of their key characteristics. Having that data colocated in single information-dense lines makes queries and aggregations over it faster to write, and faster to run.

Out of all the tools and techniques we deploy to help get insight into production, canonical log lines in particular have proven to be so useful for added operational visibility and incident response that we’ve put them in almost every service we run—not only are they used in our main API, but there’s one emitted every time a webhook is sent, a credit card is tokenized by our PCI vault, or a page is loaded in the Stripe Dashboard.

Structured logging

Just like in many other places in computing, logging is used extensively in APIs and web services. In a payments API, a single request might generate a log trace that looks like this:

[2019-03-18 22:48:32.990] Request started
[2019-03-18 22:48:32.991] User authenticated
[2019-03-18 22:48:32.992] Rate limiting ran
[2019-03-18 22:48:32.998] Charge created
[2019-03-18 22:48:32.999] Request finished

Structured logging augments the practice by giving developers an easy way to annotate lines with additional data. The use of the word structured is ambiguous—it can refer to a natively structured data format like JSON, but it often means that log lines are enhanced by appending key=value pairs (sometimes called logfmt, even if not universally). The added structure makes it easy for developers to tag lines with extra information without having to awkwardly inject it into the log message itself.

An enriched form of the trace above might look like:

[2019-03-18 22:48:32.990] Request started http_method=POST http_path=/v1/charges request_id=req_123
[2019-03-18 22:48:32.991] User authenticated auth_type=api_key key_id=mk_123 user_id=usr_123
[2019-03-18 22:48:32.992] Rate limiting ran rate_allowed=true rate_quota=100 rate_remaining=99
[2019-03-18 22:48:32.998] Charge created charge_id=ch_123 permissions_used=account_write team=acquiring
[2019-03-18 22:48:32.999] Request finished alloc_count=9123 database_queries=34 duration=0.009 http_status=200

The added structure also makes the emitted logs machine readable (the key=value convention is designed to be a compromise between machine and human readability), which makes them ingestible for a number of different log processing tools, many of which provide the ability to query production logs in near real-time.
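
As an illustration of the idea (not Stripe’s actual logging library), a tiny helper that appends key=value pairs to a human-readable log message might look like this in Python:

import logging

logging.basicConfig(format="%(message)s", level=logging.INFO)
logger = logging.getLogger("api")

def log_event(message, **fields):
    """Append sorted key=value pairs to a human-readable message."""
    suffix = " ".join(f"{key}={value}" for key, value in sorted(fields.items()))
    logger.info("%s %s", message, suffix)

log_event("Request started",
          http_method="POST", http_path="/v1/charges", request_id="req_123")
log_event("Rate limiting ran",
          rate_allowed=True, rate_quota=100, rate_remaining=99)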

For example, we might want to know what the last requested API endpoints were. We could figure that out using a log processing system like Splunk and its built-in query language:

"Request started" | head

Or whether any API requests have recently been rate limited:

"Rate limiting ran" rate_allowed=false

Or gather statistics on API execution duration over the last hour:

"Request finished" earliest=-1h | stats count p50(duration) p95(duration) p99(duration)

In practice, it would be much more common to gather these sorts of simplistic vitals from dashboards generated from metrics systems like Graphite and statsd, but they have limitations. The emitted metrics and dashboards that interpret them are designed in advance, and in a pinch they’re often difficult to query in creative or unexpected ways. Where logging really shines in comparison to these systems is flexibility.

Logs usually capture so much data that it’s possible to procure just about anything from them, even information that no one thought they’d need. For example, we could check to see which API path is the most popular:

"Request started" | stats count by http_path

Or let’s say we see that the API is producing 500s (internal server errors). We could check the request duration on the errors to get a good feel as to whether they’re likely caused by database timeouts:

"Request finished" http_status=500 | stats count p50(duration) p95(duration) p99(duration)

Sophisticated log processing systems tend to also support visualizing information in much the same way as a metrics dashboard, so instead of reading through raw log traces we can have our system graph the results of our ad-hoc queries. Visualizations are more intuitive to interpret, and can make it much easier for us to understand what’s going on.

Canonical log lines: one line per request per service

Although logs offer additional flexibility in the examples above, we’re still left in a difficult situation if we want to query information across the lines in a trace. For example, if we notice there’s a lot of rate limiting occurring in the API, we might ask ourselves the question, “Which users are being rate limited the most?” Knowing the answer helps differentiate between legitimate rate limiting because users are making too many requests, and accidental rate limiting that might occur because of a bug in our system.

The information on whether a request was rate limited and which user performed it is spread across multiple log lines, which makes it harder to query. Most log processing systems can still do so by collating a trace’s data on something like a request ID and querying the result, but that involves scanning a lot of data, and it’s slower to run. It also requires more complex syntax that’s harder for a human to remember, and is more time consuming for them to write.

We use canonical log lines to help address this. They’re a simple idea: in addition to their normal log traces, requests (or some other unit of work that’s executing) also emit one long log line at the end that pulls all its key telemetry into one place. They look something like this:

[2019-03-18 22:48:32.999] canonical-log-line alloc_count=9123 auth_type=api_key database_queries=34 duration=0.009 http_method=POST http_path=/v1/charges http_status=200 key_id=mk_123 permissions_used=account_write rate_allowed=true rate_quota=100 rate_remaining=99 request_id=req_123 team=acquiring user_id=usr_123

This sample shows the kind of information that a canonical line might include:

  • The HTTP request verb, path, and response status.
  • The authenticated user and related information like how they authenticated (API key, password) and the ID of the API key they used.
  • Whether rate limiters allowed the request, and statistics like their allotted quota and what portion remains.
  • Timing information like the total request duration, and time spent in database queries.
  • The number of database queries issued and the number of objects allocated by the VM.

We call the log line canonical because it’s the authoritative line for a particular request, in the same vein that the IETF’s canonical link relation specifies an authoritative URL.

Canonical lines are an ergonomic feature. By colocating everything that’s important to us, we make it accessible through queries that are easy for people to write, even under the duress of a production incident. Because the underlying logging system doesn’t need to piece together multiple log lines at query time they’re also cheap for computers to retrieve and aggregate, which makes them fast to use. The wide variety of information being logged provides almost limitless flexibility in what can be queried. This is especially valuable during the discovery phase of an incident where it’s understood that something’s wrong, but it’s still a mystery as to what.

Getting insight into our rate limiting problem above becomes as simple as:

canonical-log-line rate_allowed=false | stats count by user_id

If only one or a few users are being rate limited, it’s probably legitimate rate limiting because they’re making too many requests. If it’s many distinct users, there’s a good chance that we have a bug.

As a slightly more complex example, we could visualize the performance of the charges endpoint for a particular user over time while also making sure to filter out 4xx errors caused by the user. 4xx errors tend to short circuit quickly, and therefore don’t tell us anything meaningful about the endpoint’s normal performance characteristics. The query to do so might look something like this:

canonical-log-line user_id=usr_123 http_method=POST http_path=/v1/charges http_status!=4* | timechart p50(duration) p95(duration) p99(duration)

API request durations at the 50th, 95th, and 99th percentiles: generated on-the-fly from log data.

Implementation in middleware and beyond

Logging is such a pervasive technique and canonical log lines are a simple enough idea that implementing them tends to be straightforward regardless of the tech stack in use.

The implementation in Stripe’s main API takes the form of a middleware with a post-request step that generates the log line. Modules that execute during the lifecycle of the request decorate the request’s environment with information intended for the canonical log line, which the middleware will extract when it finishes.

Here’s a greatly simplified version of what that looks like:

class CanonicalLineLogger
  def call(env)
    # Call into the core application and inner middleware
    status, headers, body = @app.call(env)

    # Emit the canonical line using response status and other
    # information embedded in the request environment
    log_canonical_line(status, env)

    # Return results upstream
    [status, headers, body]
  end
end

Over the years our implementation has been hardened to maximize the chance that canonical log lines are emitted for every request, even if an internal failure or other unexpected condition occurred. The line is logged in a Ruby ensure block in case the middleware stack is being unwound because an exception was thrown from somewhere below. The logging statement itself is wrapped in its own begin/rescue block so that any problem constructing a canonical line will never fail a request, and so that someone is notified immediately if one occurs. Canonical lines are such an important tool for us during incident response that it’s crucial any problems with them are fixed promptly—not having them would be like flying blind.
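
Our implementation is Ruby, but the same hardening pattern translates to most stacks. As a hypothetical analogue, a Python WSGI-style middleware with the equivalent of the ensure and begin/rescue blocks might look like this (the emit and notify hooks, and the environ key, are stand-ins):

class CanonicalLineMiddleware:
    """Hypothetical WSGI analogue of the hardened Ruby middleware."""

    def __init__(self, app, emit, notify):
        self.app = app        # downstream application and inner middleware
        self.emit = emit      # function that writes the canonical log line
        self.notify = notify  # function that alerts us when logging itself breaks

    def __call__(self, environ, start_response):
        try:
            # Call into the core application
            return self.app(environ, start_response)
        finally:
            # Analogue of Ruby's `ensure`: emit the line even if the stack is
            # unwinding because an exception was raised below
            try:
                self.emit(environ.get("canonical_line_fields", {}))
            except Exception as exc:
                # Analogue of the begin/rescue: a broken canonical line must
                # never fail the request, but someone should be notified
                self.notify(exc)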

Warehousing history

A problem with log data is that it tends to be verbose. This means long-term retention in anything but cold storage is expensive, especially when considering that the chances it’ll be used again are low. Along with being useful operationally, the succinctness of canonical log lines also makes them a convenient medium for archiving historical requests.

At Stripe, canonical log lines are used by engineers so often for introspecting production that we’ve developed muscle memory around the naming of particular fields. So for a long time we’ve made an effort to keep that naming stable—changes are inconvenient for the whole team as everyone has to relearn it. Eventually, we took it a step further and formalized the contract by codifying it with a protocol buffer.

Along with emitting canonical lines to the logging system, the API also serializes data according to that contract and sends it out asynchronously to a Kafka topic. A consumer reads the topic and accumulates the lines into batches that are stored to S3. Periodic processes ingest those into Presto archives and Redshift, which lets us easily perform long-term analytics that can look at months’ worth of data.
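
The real pipeline serializes protocol buffers according to that codified contract; as a simplified sketch of just the producer side, sending JSON-encoded lines to a hypothetical topic with the kafka-python client might look like this:

import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["kafka-broker:9092"],
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

def archive_canonical_line(fields):
    # Fire-and-forget keeps the request path fast; delivery errors can be
    # surfaced out of band via the returned future.
    producer.send("canonical-log-lines", value=fields)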

In practice, this lets us measure almost everything we’d ever want to. For example, here’s a graph that tracks the adoption of major Go versions over time from API requests that are issued with our official API libraries:

Go version usage measured over time. Data is aggregated from an archive of canonical log lines ingested into a data warehouse.

Better yet, because these warehousing tools are driven by SQL, engineers and non-engineers alike can aggregate and analyze the data. Here’s the source code for the query above:

SELECT
    DATE_TRUNC('week', created) AS week,
    REGEXP_SUBSTR(language_version, '\\d*\\.\\d*') AS major_minor,
    COUNT(DISTINCT user)
FROM events.canonical_log_lines
WHERE created > CURRENT_DATE - interval '2 months'
    AND language = 'go'
GROUP BY 1, 2
ORDER BY 1, 3 DESC

Product leverage

We already formalized the schema of our canonical log lines with a protocol buffer to use in analytics, so we took it a step further and started using this data to drive parts of the Stripe product itself. A year ago we introduced our Developer Dashboard which gives users access to high-level metrics on their API integrations.

The Developer Dashboard shows the number of successful API requests for this Stripe account. Data is generated from canonical log lines archived to S3.

The charts produced for this dashboard are also produced from canonical log lines. A MapReduce backend crunches archives stored in S3 to create visualizations tailored to specific users navigating their dashboards. As with our analytics tools, the schema codified in the protocol buffer definition ensures a stable contract so they’re not broken.

Canonical lines are still useful even if they’re never used to power products, but because they contain such a rich trove of historical data, they make an excellent primary data source for this sort of use.

Sketching a canonical logging pipeline

Canonical log lines are well-suited for practically any production environment, but let’s take a brief look at a few specific technologies that might be used to implement a full pipeline for them.

In most setups, servers log to their local disk and those logs are sent by local collector agents to a central processing system for search and analysis. The Kubernetes documentation on logging suggests the use of Elasticsearch, or when on GCP, Google’s own Stackdriver Logging. For an AWS-based stack, a conventional solution is CloudWatch. All three require an agent like fluentd to handle log transmission to them from server nodes. These solutions are common, but far from exclusive—log processing is a thriving ecosystem with dozens of options to choose from, and it’s worth setting aside some time to evaluate and choose the one that works best for you.

Emitting to a data warehouse requires a custom solution, but not one that’s unusual or particularly complex. Servers should emit canonical log data into a stream structure, and asynchronously to keep user operations fast. Kafka is far and away the preferred stream of choice these days, but it’s not particularly cheap or easy to run, so in a smaller-scale setup something like Redis streams is a fine substitute. A group of consumers cooperatively reads the stream and bulk inserts its contents into a warehouse like Redshift or BigQuery. Just as with log processors, there are many data warehousing solutions to choose from.
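
As a sketch of what the consumer side could look like in a smaller-scale setup (hypothetical stream and group names, JSON payloads, and a generic warehouse-insert callback), a Redis streams consumer group that batches lines into a warehouse might be written like this:

import json
import redis

r = redis.Redis()
STREAM, GROUP, CONSUMER = "canonical-log-lines", "warehouse-loaders", "loader-1"

# Create the consumer group once; ignore the error if it already exists.
try:
    r.xgroup_create(STREAM, GROUP, id="0", mkstream=True)
except redis.ResponseError:
    pass

def load_batches(insert_rows, batch_size=500):
    """Cooperatively read the stream and bulk-insert batches into the warehouse."""
    while True:
        entries = r.xreadgroup(GROUP, CONSUMER, {STREAM: ">"},
                               count=batch_size, block=5000)
        if not entries:
            continue
        _, messages = entries[0]
        rows = [json.loads(fields[b"line"]) for _, fields in messages]
        insert_rows(rows)  # e.g. a bulk INSERT or COPY into Redshift/BigQuery
        r.xack(STREAM, GROUP, *[message_id for message_id, _ in messages])

# Producers would add entries with:
#   r.xadd("canonical-log-lines", {"line": json.dumps(fields)})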

Flexible, lightweight observability

To recap the key elements of canonical log lines and why we find them so helpful:

  • A canonical line is one line per request per service that collates each request’s key telemetry.
  • Canonical lines are not as quick to reference as metrics, but are extremely flexible and easy to use.
  • We emit them asynchronously into Kafka topics for ingestion into our data warehouse, which is very useful for analytics.
  • The stable contract provided by canonical lines even makes them a great fit to power user-facing products! We use ours to produce the charts on Stripe’s Developer Dashboard.

They’ve proven to be a lightweight, flexible, and technology-agnostic technique for observability that’s easy to implement and very powerful. Small and large organizations alike will find them useful for getting visibility into production services, garnering insight through analytics, and even shaping their products.

The secret life of DNS packets: investigating complex networks

Jeff Jo on May 21, 2019 in Engineering

DNS is a critical piece of infrastructure used to facilitate communication across networks. It’s often described as a phonebook: in its most basic form, DNS provides a way to look up a host’s address by an easy-to-remember name. For example, looking up the domain name stripe.com will direct clients to the IP address 53.187.159.182, where one of Stripe’s servers is located. Before any communication can take place, one of the first things a host must do is query a DNS server for the address of the destination host. Since these lookups are a prerequisite for communication, maintaining a reliable DNS service is extremely important. DNS issues can quickly lead to crippling, widespread outages, and you could find yourself in a real bind.

It’s important to establish good observability practices for these systems so when things go wrong, you can clearly understand how they’re failing and act quickly to minimize any impact. Well-instrumented systems provide visibility into how they operate; establishing a monitoring system and gathering robust metrics are both essential to effectively respond to incidents. This is critical for post-incident analysis when you’re trying to understand the root cause and prevent recurrences in the future.

In this post, I’ll describe how we monitor our DNS systems and how we used an array of tools to investigate and fix an unexpected spike in DNS errors that we encountered recently.

DNS infrastructure at Stripe

At Stripe, we operate a cluster of DNS servers running Unbound, a popular open-source DNS resolver that can recursively resolve DNS queries and cache the results. These resolvers are configured to forward DNS queries to different upstream destinations based on the domain in the request. Queries that are used for service discovery are forwarded to our Consul cluster. Queries for domains we configure in Route 53 and any other domains on the public Internet are forwarded to our cluster’s VPC resolver, which is a DNS resolver that AWS provides as part of their VPC offering. We also run resolvers locally on every host, which provides an additional layer of caching.

Unbound runs locally on every host as well as on the DNS servers.

Unbound exposes an extensive set of statistics that we collect and feed into our metrics pipeline. This provides us with visibility into metrics like how many queries are being served, the types of queries, and cache hit ratios.

We recently observed that for several minutes every hour, the cluster’s DNS servers were returning SERVFAIL responses for a small percentage of internal requests. SERVFAIL is a generic response that DNS servers return when an error occurs, but it doesn’t tell us much about what caused the error.

Without much to go on initially, we found another clue in the request list depth metric. (You can think of this as Unbound’s internal todo list, where it keeps track of all the DNS requests it needs to resolve.)

An increase in this metric indicates that Unbound is unable to process messages in a timely fashion, which may be caused by an increase in load. However, the metrics didn’t show a significant increase in the number of DNS queries, and resource consumption didn’t appear to be hitting any limits. Since Unbound resolves queries by contacting external nameservers, another explanation could be that these upstream servers were taking longer to respond.

Tracking down the source

We followed this lead by logging into one of the DNS servers and inspecting Unbound’s request list.

$ unbound-control dump_requestlist
thread #0
#   type cl name    seconds    module status
  0    A IN s3.amazonaws.com. - iterator wait for 10.0.0.2
  1  PTR IN 3.8.25.104.in-addr.arpa. - iterator wait for 10.0.0.2
  2  PTR IN 5.101.25.104.in-addr.arpa. - iterator wait for 10.0.0.2
  3  PTR IN 5.156.25.104.in-addr.arpa. - iterator wait for 10.0.0.2
  4  PTR IN 123.71.25.104.in-addr.arpa. - iterator wait for 10.0.0.2
  5  PTR IN 212.28.24.104.in-addr.arpa. - iterator wait for 10.0.0.2
  6  PTR IN 18.81.25.104.in-addr.arpa. - iterator wait for 10.0.0.2
  7  PTR IN 103.78.24.104.in-addr.arpa. - iterator wait for 10.0.0.2
  8  PTR IN 22.43.25.104.in-addr.arpa. - iterator wait for 10.0.0.2
  9  PTR IN 24.17.25.104.in-addr.arpa. - iterator wait for 10.0.0.2
 10  PTR IN 21.100.25.104.in-addr.arpa. - iterator wait for 10.0.0.2
...
...

This confirmed that requests were accumulating in the request list. We noticed some interesting details: most of the entries in the list corresponded to reverse DNS lookups (PTR records) and they were all waiting for a response from 10.0.0.2, which is the IP address of the VPC resolver.

We then used tcpdump to capture the DNS traffic on one of the servers to get a better sense of what was happening and try to identify any patterns. We wanted to make sure we captured the traffic during one of these spikes, so we configured tcpdump to write data to files over a period of time. We split the files across 60 second collection intervals to keep file sizes small, which made it easier to work with them.

# Capture all traffic on port 53 (DNS traffic)
# Write data to files in 60 second intervals for 30 minutes
# and format the filenames with the current time
$ tcpdump -n -tt -i any -W 30 -G 60 -w '%FT%T.pcap' port 53

The packet captures revealed that during the hourly spike, 90% of requests made to the VPC resolver were reverse DNS queries for IPs in the 104.16.0.0/12 CIDR range. The vast majority of these queries failed with a SERVFAIL response. We used dig to query the VPC resolver with a few of these addresses and confirmed that it took longer to receive responses.
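
We haven’t reproduced the exact analysis scripts here, but a pass over one of those capture files with scapy, counting PTR queries by name, gives a flavor of the approach (the file name is hypothetical):

from collections import Counter
from scapy.all import rdpcap, DNS, DNSQR

PTR = 12  # DNS query type for reverse (PTR) lookups

def count_reverse_queries(pcap_path):
    """Tally the PTR query names seen in a capture file."""
    counts = Counter()
    for packet in rdpcap(pcap_path):
        # qr == 0 means the packet is a query rather than a response
        if packet.haslayer(DNS) and packet[DNS].qr == 0 and packet.haslayer(DNSQR):
            query = packet[DNSQR]
            if query.qtype == PTR:
                counts[query.qname.decode()] += 1
    return counts

# e.g. count_reverse_queries("2019-03-18T22:48:00.pcap").most_common(10)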

By looking at the source IPs of clients making the reverse DNS queries, we noticed they were all coming from hosts in our Hadoop cluster. We maintain a database of when Hadoop jobs start and finish, so we were able to correlate these times to the hourly spikes. We finally narrowed down the source of the traffic to one job that analyzes network activity logs and performs a reverse DNS lookup on the IP addresses found in those logs.

One more surprising detail we discovered in the tcpdump data was that the VPC resolver was not sending back responses to many of the queries. During one of the 60-second collection periods the DNS server sent 257,430 packets to the VPC resolver. The VPC resolver replied back with only 61,385 packets, which averages to 1,023 packets per second. We realized we may be hitting the AWS limit for how much traffic can be sent to a VPC resolver, which is 1,024 packets per second per interface. Our next step was to establish better visibility in our cluster to validate our hypothesis.

Counting packets

AWS exposes its VPC resolver through a static IP address relative to the base IP of the VPC, plus two (for example, if the base IP is 10.0.0.0, then the VPC resolver will be at 10.0.0.2). We need to track the number of packets sent per second to this IP address. One tool that can help us here is iptables, since it keeps track of the number of packets matched by a rule.
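
For example, a small Python helper using the standard ipaddress module can compute that address from a VPC’s CIDR block:

import ipaddress

def vpc_resolver_ip(vpc_cidr):
    """The VPC resolver sits at the VPC's base address plus two."""
    return ipaddress.ip_network(vpc_cidr).network_address + 2

print(vpc_resolver_ip("10.0.0.0/16"))  # 10.0.0.2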

We created a rule that matches traffic headed to the VPC resolver IP address and added it to the OUTPUT chain, which is a set of iptables rules that are applied to all packets sent from the host.

# Create a new chain called VPC_RESOLVER
$ iptables -N VPC_RESOLVER

# Match packets destined to VPC resolver and jump to the new chain
$ iptables -A OUTPUT -d 10.0.0.2 -j VPC_RESOLVER

# Add an empty rule to the new chain to help parse the output
$ iptables -A VPC_RESOLVER

We configured the rule to jump to a new chain called VPC_RESOLVER and added an empty rule to that chain. Since our hosts could contain other rules in the OUTPUT chain, we added this rule to isolate matches and make it a little easier to parse the output.

Listing the rules, we see the number of packets sent to the VPC resolver in the output:

$ iptables -L -v -n -x

Chain OUTPUT (policy ACCEPT 41023 packets, 2569001 bytes)
  pkts   bytes target     prot opt in     out     source               destination
 41023 2569001 VPC_RESOLVER  all  --  *      *       0.0.0.0/0            10.0.0.2

Chain VPC_RESOLVER (1 references)
  pkts   bytes target     prot opt in     out     source               destination
 41023 2569001            all  --  *      *       0.0.0.0/0            0.0.0.0/0

With this, we wrote a simple service that reads the statistics from the VPC_RESOLVER chain and reports this value through our metrics pipeline.

while :
do
  PACKET_COUNT=$(iptables -L VPC_RESOLVER 1 -x -n -v | awk '{ print $1 }')
  report-metric $PACKET_COUNT "vpc_resolver.packet_count"
  sleep 1
done

Once we started collecting this metric, we could see that the hourly spikes in SERVFAIL responses lined up with periods where the servers were sending too much traffic to the VPC resolver.

Traffic amplification

The data we saw from iptables (the number of packets per second sent to the VPC resolver) indicated a significant increase in traffic to the VPC resolvers during these periods, and we wanted to better understand what was happening. Taking a closer look at the shape of the traffic coming into the DNS servers from the Hadoop job, we noticed the clients were sending the request five times for every failed reverse lookup. Since the reverse lookups were taking so long or being dropped at the server, the local caching resolver on each host was timing out and continually retrying the requests. On top of this, the DNS servers were also retrying requests, leading to request volume amplifying by an average of 7x.

Spreading the load

One thing to remember is that the VPC resolver limit is imposed per network interface. Instead of performing the reverse lookups solely on our DNS servers, we could instead distribute the load and have each host contact the VPC resolver independently. With Unbound running on each host we can easily control this behavior. Unbound allows you to specify different forwarding rules per DNS zone. Reverse queries use the special domain in-addr.arpa, so configuring this behavior was a matter of adding a rule that forwards requests for this zone to the VPC resolver.

We knew that reverse lookups for private addresses stored in Route 53 would likely return faster than reverse lookups for public IPs that required communication with an external nameserver. So we decided to create two forwarding configurations, one for resolving private addresses (the 10.in-addr.arpa. zone) and one for all other reverse queries (the .in-addr.arpa. zone). Both rules were configured to send requests to the VPC resolver. Unbound calculates retry timeouts based on a smoothed average of historical round trip times to upstream servers and maintains separate calculations per forwarding rule. Even if two rules share the same upstream destination the retry timeouts are computed independently, which helps isolate the impact of inconsistent query performance on timeout calculations.
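
We haven’t reproduced our exact configuration, but in unbound.conf the two forwarding rules described above might look roughly like this:

# Reverse lookups for private 10.x addresses (answered quickly via Route 53)
forward-zone:
    name: "10.in-addr.arpa."
    forward-addr: 10.0.0.2

# All other reverse lookups (may require slower external resolution)
forward-zone:
    name: "in-addr.arpa."
    forward-addr: 10.0.0.2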

After applying the forwarding configuration change to the local Unbound resolvers on the Hadoop nodes, we saw that the hourly load spike to the VPC resolvers went away, eliminating the surge of SERVFAILs we had been seeing.

Adding the VPC resolver packet rate metric gives us a more complete picture of what’s going on in our DNS infrastructure. It alerts us if we approach any resource limits and points us in the right direction when systems are unhealthy. Some other improvements we’re considering include collecting a rolling tcpdump of DNS traffic and periodically logging the output of some of Unbound’s debugging commands, such as the contents of the request list.

Visibility into complex systems

When operating a piece of infrastructure as critical as DNS, it’s crucial to understand the health of each component of the system. The metrics and command-line tools that Unbound provides give us great visibility into one of the core components of our DNS systems. As we saw in this scenario, these types of investigations often uncover areas where monitoring can be improved, and it’s important to address those gaps to better prepare for incident response. Gathering data from multiple sources lets you see what’s going on in the system from different angles, which helps you narrow in on the root cause during an investigation and shows whether the remediations you put in place have the intended effect. As these systems grow to handle more scale and increase in complexity, how you monitor them must evolve as well, so you understand how different components interact and can build confidence that your systems are operating effectively.

Railyard: how we rapidly train machine learning models with Kubernetes

Rob Story on May 7, 2019 in Engineering

Stripe uses machine learning to respond to our users’ complex, real-world problems. Machine learning powers Radar to block fraud, and Billing to retry failed charges on the network. Stripe serves millions of businesses around the world, and our machine learning infrastructure scores hundreds of millions of predictions across many machine learning models. These models are powered by billions of data points, with hundreds of new models being trained each day. Over time, the volume, quality of data, and number of signals have grown enormously as our models continuously improve in performance.

Running infrastructure at this scale poses a very practical data science and ML problem: how do we give every team the tools they need to train their models without requiring them to operate their own infrastructure? Our teams also need a stable and fast ML pipeline to continuously update and train new models as they respond to a rapidly changing world. To solve this, we built Railyard, an API and job manager for training these models in a scalable and maintainable way. It’s powered by Kubernetes, a platform we’ve been working with since late 2017. Railyard enables our teams to independently train their models on a daily basis with a centrally managed ML service.

In many ways, we’ve built Railyard to mirror our approach to products for Stripe’s users: we want teams to focus on their core work training and developing machine learning models rather than operating infrastructure. In this post, we’ll discuss Railyard and best practices for operating machine learning infrastructure we’ve discovered while building this system.

Effective machine learning infrastructure for organizations

We’ve been running Railyard in production for a year and a half, and our ML teams have converged on it as their common training environment. After training tens of thousands of models on this architecture over that period, here are our biggest takeaways:

  • Build a generic API, not tied to any single machine learning framework. Teams have extended Railyard in ways we did not anticipate. We first focused on classifiers, but teams have since adopted the system for applications such as time series forecasting and word2vec style embeddings.
  • A fully managed Kubernetes cluster reduces operational burden across an organization. Railyard interacts directly with the Kubernetes API (as opposed to a higher level abstraction), but the cluster is operated entirely by another team. We’re able to learn from their domain knowledge to keep the cluster running reliably so we can focus on ML infrastructure.
  • Our Kubernetes cluster gives us great flexibility to scale up and out. We can easily scale our cluster volume when we need to train more models, or quickly add new instance types when we need additional compute resources.
  • Centrally tracking model state and ownership allows us to easily observe and debug training jobs. We’ve moved from asking, “Did you save the output of your job anywhere so we can look at it?” to “What’s your job ID? We’ll figure the rest out.” We observe aggregate metrics and track the overall performance of training jobs across the cluster.
  • Building an API for model training enables us to use it everywhere. Teams can call our API from any service, scheduler, or task runner. We now use Railyard to train models using an Airflow task definition as part of a larger graph of data jobs.

The Railyard architecture

In the early days of model training at Stripe, an engineer or data scientist would SSH into an EC2 instance and manually launch a Python process to train a model. This served Stripe’s needs at the time, but had a number of challenges and open questions for our Machine Learning Infrastructure team to address as the company grew:

  • How do we scale model training from ad-hoc Python processes on shared EC2 instances to automatically training hundreds of models a day?
  • How do we build an interface that is generic enough to support multiple training libraries, frameworks, and paradigms while remaining expressive and concise?
  • What metrics and metadata do we want to track for each model run?
  • Where should training jobs be executed?
  • How do we scale different compute resource needs (CPU, GPU, memory) for different model types?

Our goal when designing this system was to enable our data scientists to think less about how their machine learning jobs are run on our infrastructure, and instead focus on their core inquiry. Machine learning workflows typically involve multiple steps that include loading data, training models, serializing models, and persisting evaluation data. Because Stripe runs its infrastructure in the cloud, we can manage these processes behind an API: this reduces cognitive burden for our data science and engineering teams and moves local processes to a collaborative, shared environment. After a year and a half of iteration and collaboration with teams across Stripe, we’ve converged on the following system architecture for Railyard. Here’s a high-level overview:

Railyard runs on a Kubernetes cluster and pairs jobs with the right instance type.

Railyard provides a JSON API and is a Scala service that manages job history, state, and provenance in a Postgres database. Jobs are executed and coordinated using the Kubernetes API, and our Kubernetes cluster provides multiple instance types with different compute resources. The cluster can pair jobs with the right instance type: for example, most jobs default to our high-CPU instances, data-intensive jobs run on high-memory instances, and specialized training jobs like deep learning run on GPU instances.

We package the Python code for model training using Subpar, a Google library that creates a standalone executable that includes all dependencies in one package. This is included in a Docker container, deployed to the AWS Elastic Container Registry, and executed as a Kubernetes job. When Railyard receives an API request, it runs the matching training job and logs are streamed to S3 for inspection. A given job will run through multiple steps, including fetching training and holdout data, training the model, and serializing the trained model and evaluation data to S3. These training results are persisted in Postgres and exposed in the Railyard API.
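
Railyard itself is a Scala service, but as a rough sketch of what submitting one training run through the Kubernetes API could look like, here’s a hypothetical example using the official Python client (the image, namespace, and node-selector label are made up):

from kubernetes import client, config

def submit_training_job(job_id, docker_image, instance_type="high-cpu"):
    """Create a Kubernetes Job for one training run (illustrative only)."""
    config.load_kube_config()  # or load_incluster_config() inside the cluster

    container = client.V1Container(
        name=f"railyard-train-{job_id}",
        image=docker_image,  # the Subpar-packaged trainer baked into a container
        args=["--job-id", job_id],
    )
    pod_spec = client.V1PodSpec(
        restart_policy="Never",
        containers=[container],
        # Hypothetical label used to steer jobs onto the right instance type
        node_selector={"railyard.example.com/instance-type": instance_type},
    )
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=f"railyard-{job_id}"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(spec=pod_spec),
            backoff_limit=1,
        ),
    )
    return client.BatchV1Api().create_namespaced_job(namespace="railyard", body=job)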

Railyard’s API design

The Railyard API allows you to specify everything you need to train a machine learning model, including data sources and model parameters. In designing this API we needed to answer the following question: how do we provide a generic interface for multiple training frameworks while remaining expressive and concise for users?

We iterated on a few designs with multiple internal customers to understand each use case. Some teams only needed ad-hoc model training and could simply use SQL to fetch features, while others needed to call an API programmatically hundreds of times a day using features stored in S3. We explored a number of different API concepts, arriving at two extremes on either end of the design spectrum.

On one end, we explored designing a custom DSL to specify the entire training job by encoding scikit-learn components directly in the API itself. Users could include scikit-learn pipeline components in the API specification and would not need to write any Python code themselves.

On the other end of the spectrum we reviewed designs to allow users to write their own Python classes for their training code with clearly defined input and output interfaces. Our library would be responsible for both the necessary inputs to train models (fetching, filtering, and splitting training and test data) and the outputs of the training pipeline (serializing the model, and writing evaluation and label data). The user would otherwise be responsible for writing all training logic.

In the end, any DSL-based approach ended up being too inflexible: it either tied us to a given machine learning framework or required that we continuously update the API to keep pace with changing frameworks or libraries. We converged on the following split: our API exposes fields for changing data sources, data filters, feature names, labels, and training parameters, but the core logic for a given training job lives entirely in Python.

Here’s an example of an API request to the Railyard service:

{
  // What does this model do?
  "model_description": "A model to predict fraud",
  // What is this model called?
  "model_name": "fraud_prediction_model",
  // What team owns this model?
  "owner": "machine-learning-infrastructure",
  // What project is this model for?
  "project": "railyard-api-blog-post",
  // Which team member is training this model?
  "trainer": "robstory",
  "data": {
    "features": [
      {
        // Columns we’re fetching from Hadoop Parquet files
        "names": ["created_at", "charge_type", "charge_amount",
                  "charge_country", "has_fraud_dispute"],
        // Our data source is S3
        "source": "s3",
        // The path to our Parquet data
        "path": "s3://path/to/parquet/fraud_data.parq"
      }
    ],
    // The canonical date column in our dataset
    "date_column": "created_at",
    // Data can be filtered multiple times
    "filters": [
      // Filter out data before 2018-01-01
      {
        "feature_name": "created_at",
        "predicate": "GtEq",
        "feature_value": {
          "string_val": "2018-01-01"
        }
      },
      // Filter out data after 2019-01-01
      {
        "feature_name": "created_at",
        "predicate": "LtEq",
        "feature_value": {
          "string_val": "2019-01-01"
        }
      },
      // Filter for charges greater than $10.00
      {
        "feature_name": "charge_amount",
        "predicate": "Gt",
        "feature_value": {
          "float_val": 10.00
        }
      },
      // Filter for charges in the US or Canada
      {
        "feature_name": "charge_country",
        "predicate": "IsIn",
        "feature_value": {
          "string_vals": ["US", "CA"]
        }
      }
    ],
    // We can specify how to treat holdout data
    "holdout_sampling": {
      "sampling_function": "DATE_RANGE",
      // Split holdout data from 2018-10-01 to 2019-01-01
      // into a new dataset
      "date_range_sampling": {
        "date_column": "created_at",
        "start_date": "2018-10-01",
        "end_date": "2019-01-01"
      }
    }
  },
  "train": {
    // The name of the Python workflow we're training
    "workflow_name": "StripeFraudModel",
    // The list of features we're using in our classifier
    "classifier_features": [
      "charge_type", "charge_amount", "charge_country"
    ],
    "label": "is_fraudulent",
    // We can include hyperparameters in our model
    "custom_params": {
      "objective": "reg:linear",
      "max_depth": 6,
      "n_estimators": 500,
      "min_child_weight": 50,
      "learning_rate": 0.02
    }
  }
}
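
Submitting this request is a single HTTP POST to the service. As a rough illustration (the endpoint URL and the response field shown here are hypothetical, and authentication is omitted), a team calling the API programmatically might do something like:

import json
import requests

# Hypothetical internal endpoint; the real URL and auth are Stripe-internal.
RAILYARD_URL = "https://railyard.internal.example/v1/train"

with open("fraud_model_request.json") as f:
    train_request = json.load(f)

response = requests.post(RAILYARD_URL, json=train_request, timeout=30)
response.raise_for_status()

# The response includes an identifier used to look up job state, logs,
# and (once training finishes) the serialized model and evaluation data.
job = response.json()
print(job["id"])  # field name assumed for illustration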

We learned a few lessons while designing this API:

  • Be flexible with model parameters. Providing a free-form custom_params field that accepts any valid JSON was very important for our users. We validate most of the API request, but you can’t anticipate every parameter a machine learning engineer or data scientist needs for all of the model types they want to use. This field is most frequently used to include a model’s hyperparameters; there’s a sketch of this validate-what-you-know approach after this list.
  • Not providing a DSL was the right choice (for us). Finding the sweet spot for expressiveness in an API for machine learning is difficult, but so far the approach outlined above has worked out well for our users. Many users only need to change dates, data sources, or hyperparameters when retraining. We haven’t gotten any requests to add more DSL-like features to the API itself.
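
Railyard itself is written in Scala, but the validate-what-you-know approach behind custom_params is easy to sketch in Python (the field names mirror the train section of the request above; everything else here is illustrative):

from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class TrainSpec:
    # Fields Railyard understands and validates
    workflow_name: str
    classifier_features: List[str]
    label: str
    # Free-form hyperparameters: any valid JSON object is accepted and
    # handed to the Python workflow unchanged
    custom_params: Dict[str, Any] = field(default_factory=dict)

def parse_train_section(payload: Dict[str, Any]) -> TrainSpec:
    # Validate the known fields; leave custom_params opaque
    return TrainSpec(
        workflow_name=payload["workflow_name"],
        classifier_features=list(payload["classifier_features"]),
        label=payload["label"],
        custom_params=dict(payload.get("custom_params", {})),
    )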

The Python workflow

Stripe uses Python for all ML model training because of its support for many best-in-class ML libraries and frameworks. When the Railyard project started we only had support for scikit-learn, but have since added XGBoost, PyTorch, and FastText. The ML landscape changes very quickly and we needed a design that didn’t pick winners or constrain users to specific libraries. To enable this extensibility, we defined a framework-agnostic workflow that presents an API contract with users: we pass data in, you pass a trained model back out, and we’ll score and serialize the model for you. Here’s what a minimal Python workflow looks like:

# xgboost is open source; StripeMLWorkflow and stripe_ml are provided by
# Stripe-internal libraries available in Railyard's training environment.
import xgboost

class StripeFraudModel(StripeMLWorkflow):
  # A basic model training workflow: all workflows inherit
  # Railyard’s StripeMLWorkflow class
  def train(self, training_dataframe, holdout_dataframe):
    # Construct an estimator using the hyperparameters passed in
    # through the API request's custom_params field
    estimator = xgboost.XGBRegressor(**self.custom_params)

    # Wrap the estimator with our in-house serialization library so the
    # fitted model can be serialized once training is finished
    serializable_estimator = stripe_ml.make_serializable(estimator)

    # Train our model
    fitted_model = serializable_estimator.fit(
      training_dataframe[self.classifier_features],
      training_dataframe[self.classifier_label]
    )

    # Hand our fitted model back to Railyard to serialize
    return fitted_model

Teams start adopting Railyard by writing an API specification and a workflow that defines a train method to train a classifier on the data fetched for the API request. The StripeMLWorkflow interface supports extensive customization to adapt to different training approaches and model types. You can preprocess your data before it gets passed to the train function, define your own data fetching implementation, specify how training/holdout data should be scored, and run any other Python code you need. For example, some of our deep learning models define custom data fetching code to stream batches of training data. When your training job finishes, you end up with two outputs: a model identifier for the serialized model that can be put into production, and your evaluation data in S3.
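
For instance, a workflow can override the preprocess hook to transform features before train is called. The hook signature here is an assumption based on the description above, so treat it as a sketch rather than the real interface:

import numpy as np
import xgboost

class CustomFraudModel(StripeMLWorkflow):
  # Hypothetical workflow showing a preprocessing customization
  def preprocess(self, dataframe):
    # Feature engineering before the data reaches train(), e.g.
    # log-scaling a heavy-tailed column
    dataframe["charge_amount_log"] = np.log1p(dataframe["charge_amount"])
    return dataframe

  def train(self, training_dataframe, holdout_dataframe):
    # Same contract as before: return a fitted, serializable model
    estimator = xgboost.XGBClassifier(**self.custom_params)
    serializable_estimator = stripe_ml.make_serializable(estimator)
    return serializable_estimator.fit(
      training_dataframe[self.classifier_features],
      training_dataframe[self.classifier_label]
    )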

If you build a machine learning API specification, here are a few things to keep in mind:

  • Interfaces are important. Users will want to load and transform data in ways you didn’t anticipate, train models using unsupported patterns, and write out unfamiliar types of evaluation data. It’s important to provide standard API interfaces like fetch_data, preprocess, train, and write_evaluation_data that specify some standard data containers (e.g., Pandas DataFrame and Torch Dataset) but are flexible in how they are generated and used (see the sketch after this list).
  • Users should not need to think about model serialization or persistence. Reducing their cognitive burden makes their lives easier and gives them more time to be creative and focus on modeling and feature engineering. Data scientists and ML engineers already have enough to think about between feature engineering, modeling, evaluation, and more. They should be able to train and hand over their model to your scoring infrastructure without ever needing to think about how it gets serialized or persisted.
  • Define metrics for each step of the training workflow. Make sure you’re gathering fine-grained metrics for each training step: data loading, model training, model serialization, evaluation data persistence, etc. We store high-level success and failure metrics that can be examined by team, project, or the individual machine performing the training. On a functional level, our team uses these metrics to debug and profile long-running or failed jobs and to provide feedback to the appropriate team when there’s a problem with a given training job. On a collaborative level, these metrics have changed how our team operates: moving from a reactive stance (“My model didn’t train, can you help?”) to a proactive one (“Hey, I noticed your model didn’t train, here’s what happened”) has helped us be better partners to the many teams we work with.
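
The real StripeMLWorkflow and our metrics client are internal, but the first and third bullets above can be sketched together: a runner that calls the standard hooks (fetch_data, preprocess, train, write_evaluation_data) in order, times each step, and reports success or failure. Everything beyond the hook names is an illustrative assumption:

import time

class WorkflowRunner:
    """Illustrative runner: calls each workflow hook in order and emits a
    per-step success/failure metric with its duration."""

    def __init__(self, workflow, metrics_client):
        self.workflow = workflow
        self.metrics = metrics_client  # e.g., a statsd-style client

    def run(self):
        training_df, holdout_df = self._timed("fetch_data", self.workflow.fetch_data)
        training_df = self._timed("preprocess", self.workflow.preprocess, training_df)
        model = self._timed("train", self.workflow.train, training_df, holdout_df)
        self._timed("write_evaluation_data",
                    self.workflow.write_evaluation_data, model, holdout_df)
        return model

    def _timed(self, step, fn, *args):
        start = time.time()
        try:
            result = fn(*args)
            self.metrics.timing("railyard.%s.success" % step, time.time() - start)
            return result
        except Exception:
            self.metrics.timing("railyard.%s.failure" % step, time.time() - start)
            raise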

Scaling Kubernetes

Railyard coordinates hundreds of machine learning jobs across our cluster, so effective resource management across our instances is crucial. The first version of Railyard simply ran individual subprocesses from the Scala service that manages all jobs across our cluster. We would get a request, start Java’s ProcessBuilder, and kick off a subprocess to build a Python virtualenv and train the model. This basic implementation allowed us to quickly iterate on our API in our early days, but managing subprocesses wasn’t going to scale very well. We needed a proper job management system that met a few requirements:

  • Scaling the cluster quickly for different resource/instance types
  • Routing models to specific instances based on their resource needs
  • Job queueing to prioritize resources for pending work

Luckily, our Orchestration team had been working hard to build a reliable Kubernetes cluster and suggested this new cluster would be a good platform for Railyard’s needs. It was a great fit; a fully managed Kubernetes cluster provides all of the pieces we needed to meet our system’s requirements.

Containerizing Railyard

To run Railyard jobs on Kubernetes, we needed a way to reliably package our Python code into a fully executable binary. We use Google’s Subpar library, which allows us to package all of our Python requirements and source code into a single .par file for execution. The library also includes support for the Bazel build system out of the box. Over the past few years, Stripe has been moving many of its builds to Bazel; we appreciate its speed, correctness, and flexibility in a multi-language environment.

With Subpar you can define an entrypoint to your Python executable and Bazel will build your .par executable to bundle into a Dockerfile:

par_binary(
    name = "railyard_train",
    srcs = ["@.../ml:railyard_srcs"],
    data = ["@.../ml:railyard_data"],
    main = "@.../ml:railyard/train.py",
    deps = all_requirements,
)

With the Subpar package built, the Kubernetes command only needs to execute it with Python:

command: ["sh"]
args: ["-c", "python /railyard_train.par"]

Within the Dockerfile we package up any other third-party dependencies that we need for model training, such as the CUDA runtime to provide GPU support for our PyTorch models. After our Docker image is built, we deploy it to the AWS Elastic Container Registry so our Kubernetes cluster can fetch and run the image.

Running diverse workloads

Some machine learning tasks can benefit from a specific instance type with resources optimized for a given workload. For example, a deep learning task may be best suited for a GPU instance while fraud models that employ huge datasets should be paired with high-memory instances. To support these mixed workloads we added a new top-level field to the Railyard API request to specify the compute resource for jobs running on Kubernetes:

{
    "compute_resource": "GPU"
}

Railyard supports training models on CPU, GPU, or memory-optimized instances. Models for our largest datasets can require hundreds of gigabytes of memory to train, while our smaller models can train quickly on smaller (and less expensive) instance types.

Scheduling and distributing jobs

Railyard exerts fine-grained control over how Kubernetes distributes jobs across the cluster. For each request, we look at the requested compute resource and set both a Kubernetes Toleration and an Affinity to specify the type of node the job should run on. These parameters effectively tell the Kubernetes cluster:

  • the affinity, or which nodes the job should run on
  • the toleration, or which nodes should be reserved for specific tasks

Kubernetes uses the affinity and toleration properties of a given pod to compute how jobs should best be distributed across or within each node.

Kubernetes supports per-job CPU and memory requirements to ensure that workloads don’t experience resource starvation due to neighboring jobs on the same host. In Railyard, we determine limits for all jobs based on their historic and future expected usage of resources. In the case of high-memory or GPU training jobs, these limits are set so that each job gets an entire node to itself; if all nodes are occupied, then the scheduler will place the job in a queue. Jobs with less intensive resource requirements are scheduled on nodes to run in parallel.
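
Railyard drives all of this through the Kubernetes API from Scala, so the snippet below is purely illustrative: using the official Kubernetes Python client, a GPU job’s toleration, node affinity, and resource requests might be expressed roughly like this (the taint key, instance type, image, namespace, and resource sizes are made-up placeholders):

from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

# Only jobs that explicitly tolerate the GPU taint can land on GPU nodes...
gpu_toleration = client.V1Toleration(
    key="dedicated", operator="Equal", value="gpu", effect="NoSchedule"
)

# ...and this job additionally requires a GPU instance type.
gpu_affinity = client.V1Affinity(
    node_affinity=client.V1NodeAffinity(
        required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
            node_selector_terms=[
                client.V1NodeSelectorTerm(
                    match_expressions=[
                        client.V1NodeSelectorRequirement(
                            key="node.kubernetes.io/instance-type",
                            operator="In",
                            values=["p3.2xlarge"],
                        )
                    ]
                )
            ]
        )
    )
)

container = client.V1Container(
    name="railyard-train",
    image="<account>.dkr.ecr.us-west-2.amazonaws.com/railyard:latest",
    command=["sh", "-c", "python /railyard_train.par"],
    # Requests and limits sized so a GPU job gets a node to itself.
    resources=client.V1ResourceRequirements(
        requests={"cpu": "8", "memory": "55Gi", "nvidia.com/gpu": "1"},
        limits={"cpu": "8", "memory": "55Gi", "nvidia.com/gpu": "1"},
    ),
)

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="fraud-model-train"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                containers=[container],
                tolerations=[gpu_toleration],
                affinity=gpu_affinity,
                restart_policy="Never",
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="railyard", body=job)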

With these parameters in place, we can lean on the Kubernetes resource scheduler to balance our jobs across available nodes. Given a set of job and resource requests, the scheduler will intelligently distribute those jobs to nodes across the cluster.

One year later: running at scale

Moving our training jobs to a Kubernetes cluster has enabled us to rapidly spin up new resources for different models and expand the cluster to support more training jobs. We can use a single command to expand the cluster and new instance types only require a small configuration change. When the memory requirements of running jobs outgrew our CPU-optimized instance types, we started training on memory-optimized instances the very next day; when we observe a backlog of jobs, we can immediately expand the cluster to process the queue. Model training on Kubernetes is available to any data scientist or engineer at Stripe: all that’s needed is a Python workflow and an API request and they can start training models on any resource type in the cluster.

To date, we’ve trained almost 100,000 models on Kubernetes, with new models trained each day. Our fraud models automatically retrain on a regular basis using Railyard and Kubernetes, and we’re steadily moving more of Stripe’s models onto an automated retraining cycle. Radar’s fraud model is built on hundreds of distinct ML models and has a dedicated service that trains and deploys all of those models on a daily cadence. Other models retrain regularly using an Airflow task that uses the Railyard API.
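
A scheduled retraining task is a thin wrapper around the Railyard API. As a hedged sketch (the DAG name, endpoint, file path, and schedule are placeholders, and the operator import path is the Airflow 2.x one), it might look like:

from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

RAILYARD_URL = "https://railyard.internal.example/v1/train"  # placeholder

def trigger_retrain():
    # Re-use the same JSON request body as an interactive training run,
    # typically with the date filters shifted to the latest window.
    with open("/path/to/fraud_model_request.json") as f:  # placeholder path
        payload = f.read()
    response = requests.post(
        RAILYARD_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        timeout=30,
    )
    response.raise_for_status()

with DAG(
    dag_id="retrain_fraud_model",
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="trigger_railyard_train", python_callable=trigger_retrain)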

We’ve learned a few key considerations for scaling Kubernetes and effectively managing instances:

  • Instance flexibility is really important. Teams can have very different machine learning workloads: on any given day we might train thousands of time series forecasts, a long-running word embedding model, or a fraud model with hundreds of gigabytes of data. The ability to quickly add new instance types and the ability to expand the cluster are equally important for scalability.
  • Managing memory-intensive workflows is hard. Even using various instance sizes and a managed cluster, we still sometimes have jobs that run out of memory and are killed. This is a downside to providing so much flexibility in the Python workflow: modelers are free to write memory-intensive workflows. Kubernetes allows us to proactively kill jobs that are consuming too many resources, but it still results in a failed training job for the modeler. We’re thinking about ways to better manage this, including smart retry behavior to automatically reschedule failed jobs on higher-capacity instances (sketched after this list) and moving to distributed libraries like dask-ml.
  • Subpar is an excellent solution for packaging Python code. Managing Python dependencies can be tricky, particularly when you’d like to bundle them as an executable that can be shipped to different instances. If we were to build this from scratch again we would probably take a look at Facebook’s XARs, but Subpar is very compatible with Bazel and it’s been running well in production for over a year.
  • Having a good Kubernetes team is a force multiplier. Railyard could not have been a success without the support of our Orchestration team, which manages our Kubernetes cluster and pushes the platform forward for the whole organization. If we had to manage and operate the cluster in addition to building our services, we would have needed more engineers and taken significantly longer to ship.
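
To make the retry idea from the second bullet concrete, here is a hypothetical escalation rule. The tier names mirror the compute_resource field from earlier; the escalation order and failure reason are assumptions:

# Hypothetical escalation order for OOM-killed jobs; the values mirror
# the API's compute_resource field.
ESCALATION = {"CPU": "MEMORY_OPTIMIZED", "MEMORY_OPTIMIZED": None, "GPU": None}

def escalate_request(request, failure_reason):
    """Return a copy of the training request bumped to a larger instance
    tier, or None if the failure isn't memory-related or there is nowhere
    left to escalate."""
    if failure_reason != "OOMKilled":
        return None
    bigger = ESCALATION.get(request["compute_resource"])
    if bigger is None:
        return None
    retried = dict(request)
    retried["compute_resource"] = bigger
    return retried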

Building ML infrastructure

We’ve learned that building common machine learning infrastructure enables teams across Stripe to operate independently and focus on their local ML modeling goals. Over the last year we’ve used Railyard to train thousands of models spanning use cases from forecasting to deep learning. This system has enabled us to build rich functionality for model evaluation and design services to optimize hyperparameters for our models at scale.

While there is a wealth of information available on data science and machine learning from the modeling perspective, there isn’t nearly as much published about how companies build and operate their production machine learning infrastructure. Uber, Airbnb, and Lyft have all discussed how their infrastructure operates, and we’re following their lead in introducing the design patterns that have worked for us. We plan to share more lessons from our ML architecture in the months ahead. In the meantime, we’d love to hear from you: please let us know which lessons are most useful and if there are any specific topics about which you’d like to hear more.


May 7, 2019

Stripe’s fifth engineering hub is Remote

David Singleton on May 2, 2019 in Engineering

Stripe has engineering hubs in San Francisco, Seattle, Dublin, and Singapore. We are establishing a fifth hub that is less traditional but no less important: Remote. We are doing this to situate product development closer to our customers, improve our ability to tap the 99.74% of talented engineers living outside the metro areas of our first four hubs, and further our mission of increasing the GDP of the internet.

Stripe will hire over a hundred remote engineers this year. They will be deployed across every major engineering workstream at Stripe.

Read more

May 2, 2019