Infrastructure Engineer, Streaming Data Platform
Our users trust us with their businesses and livelihoods, and every request that Stripe handles is critical. We process billions of dollars every year for millions of users, from the largest enterprises to a startup making their first sale. We invest deeply in the reliability of our infrastructure to earn their trust.
You’ll join one of the teams behind the streaming platforms used by the rest of engineering: our event bus, stream processing systems, real-time analytics, and asynchronous processing platforms such as task queues and workflow engines. You’ll make decisions with a significant impact on Stripe. There is a lot of work to do to make Stripe engineers’ work easier and our platforms even more reliable than they are today, and we’d love for you to be a part of it. We stay close to the people using our systems, and we constantly get feedback that we use to make them better.
We have a few dozen infrastructure engineers today spread across several different teams, and you’ll work with other infrastructure engineers as well as the product engineers who use the systems we build.
We’re looking for people with a strong background (or interest!) in data. We’d love to hear from you whether you’re a seasoned systems developer or you’ve just learned you might like working with real-time systems. Many of our infrastructure engineers work remotely, and we’d be happy to talk to you about the possibility of working remotely.
You will:
- Design, build, and maintain streaming data infrastructure systems such as Kafka, Flink, and Pinot, used by all of Stripe’s engineering teams
- Design alerting and testing systems to ensure the accuracy and timeliness of these pipelines (e.g., improving instrumentation and optimizing logging)
- Debug production issues across services and levels of the stack
- Plan for the growth of Stripe’s infrastructure
- Build a great customer experience for developers using your infrastructure
- Work with teams to build and evolve data models and data flows that enable data-driven decision-making
- Identify shared data needs across Stripe, understand their specific requirements, and build efficient, scalable data pipelines to meet them
We’re looking for someone who has:
- A strong engineering background and an interest in data
- Experience developing, maintaining and debugging distributed systems built with open source tools
- Experience building infrastructure as a product centered around users’ needs
- Experience optimizing the end-to-end performance of distributed systems
- Experience with scaling distributed systems in a rapidly moving environment
- Experience managing and designing data pipelines
- The ability to follow the flow of data through various pipelines to debug data issues
- Experience working on stream processing systems such as Kafka, Kinesis, Flink or Beam
- Experience working on real-time analytics systems such as Presto, Druid or Pinot
- Experience with orchestration platforms such as Cadence
- Experience with Java, Scala and Ruby
- Experience with Lambda Architecture systems
It’s not expected that you’ll have deep expertise in every dimension above, but you should be interested in learning any of the areas that are less familiar.
At Stripe, we're looking for people with passion, grit, and integrity. You're encouraged to apply even if your experience doesn't precisely match the job description. Your skills and passion will stand out—and set you apart—especially if your career has taken some extraordinary twists and turns. At Stripe, we welcome diverse perspectives and people who think rigorously and aren't afraid to challenge assumptions. Join us.