Data Engineer San Francisco
As a platform company powering businesses all over the world, Stripe processes payments, runs marketplaces, detects fraud, and helps entrepreneurs start an internet business from anywhere in the world. Stripe’s Data Engineers manage all of that data for both our internal and external users.
While we don’t have as much data as Twitter or Facebook, we care a great deal about the quality of our data. Because every record in our data warehouse can be vitally important for the businesses that use Stripe, we’re looking for people with a strong background in big data systems to help us scale while maintaining correct and complete data. You’ll work with a variety of internal teams, some engineering and some business, to help them solve their data needs. Your work will give teams visibility into how Stripe’s products are being used and where we can improve to serve our users’ needs better.
- Work with teams to build and continue to evolve data models and data flows that enable data-driven decision-making
- Design alerting and testing to ensure the accuracy and timeliness of these pipelines (e.g., improving instrumentation and optimizing logging)
- Identify shared data needs across Stripe, understand their specific requirements, and build efficient and scalable data pipelines to meet them
- Work with our data platform team to identify and integrate new tools into our data stack. For example, we’re currently evaluating Presto for use as an ad-hoc query tool.
You might be a fit for this role if you:
- Have a strong engineering background and are interested in data. You’ll be writing production Scala and Python code along with occasional ad-hoc SQL queries
- Have experience writing and debugging ETL jobs using a distributed data framework (e.g., Hadoop or Spark)
- Have experience managing and designing data pipelines
- Can follow the flow of data through various pipelines to debug data issues
- Have experience with Scalding or Spark
- Have experience with Airflow or similar scheduling tools
We don’t expect you to have deep expertise in every dimension above, but you should be interested in learning the areas that are less familiar.
Some things you might work on:
- Write a unified user data model that gives a complete view of our users across a varied set of products, like Stripe Connect and Stripe Atlas
- Continue to lower the latency and bridge the gap between our production systems and our data warehouse
- Work on our customer support data pipeline to track our response times and total support ticket volume, helping us staff our support team appropriately
- Embed with our billing team to create billing pipelines that enable more granular bills and help our users better understand their costs
We look forward to hearing from you.