Access data in your data warehouse with Data Pipeline

Sync your Stripe account with Snowflake or Amazon Redshift.

Data Pipeline is a no-code product that sends all your Stripe data and reports to Snowflake or Amazon Redshift. This allows you to centralize your Stripe data with other business data to close your books faster and get more detailed business insights.

With Data Pipeline, you can:

  • Automatically export your complete Stripe data in a fast and reliable manner.
  • Stop relying on third-party extract, transform, and load (ETL) pipelines or home-built API integrations.
  • Combine data from all your Stripe accounts into one data warehouse.
  • Integrate Stripe data with your other business data for more complete business insights.

Database support

Data Pipeline currently supports Snowflake (deployed on AWS) and Amazon Redshift. For details on supported instances and AWS regions, view the table below.

AWS Region                      Snowflake     Amazon Redshift RA3 (with encryption)   Amazon Redshift DS2/DC2
us-west-2 (Oregon)              Supported     Supported                               Supported
us-east-2 (Ohio)                Supported     Supported                               Supported
us-east-1 (N. Virginia)         Supported     Supported                               Supported
us-west-1 (N. California)       Supported     Coming soon                             Coming soon
ca-central-1 (Central Canada)   Coming soon   Coming soon                             Coming soon

If you’re using another data warehouse besides Snowflake or Amazon Redshift, let us know at data-pipeline@stripe.com.

You can access your non-US Stripe data in Snowflake or Amazon Redshift as long as you export the data to a warehouse region Stripe supports. Data Pipeline doesn’t yet support non-AWS instances, such as Google Cloud Storage (GCS) or Microsoft Azure.

Because of data localization requirements, Stripe doesn’t offer Data Pipeline services to customers, merchants, or users in India.

Get started

When you subscribe to Data Pipeline, Stripe sends a data share to your Snowflake or Amazon Redshift account. After you accept the data share, you can access your core Stripe data in Snowflake or Amazon Redshift within 12 hours. After the initial load, your Stripe data refreshes regularly.

First, send all your up-to-date Stripe data and reports through the Stripe Dashboard:

  1. Click the Subscribe button in the Data Pipeline settings of the Stripe Dashboard.
  2. From the modal, select Snowflake, then click Continue.
  3. Enter your Snowflake Account Identifier and your AWS region, then click Continue.
  4. Confirm the information, then click Subscribe to start creation of your data share. After you subscribe in the Stripe Dashboard, your data is available in Snowflake within 12 hours.

Next, after 48 hours, access your data share from your Snowflake account:

  1. Navigate to your Snowflake account to accept the Stripe data share.
  2. In Snowflake, have an ACCOUNTADMIN navigate to Data > Shared Data. In the Ready to Get section, navigate to a share entitled SHARE_[ACCOUNT_IDENTIFIER] from one of three Stripe accounts, depending on your data warehouse region:
    • GSWUDFY_STRIPE_AWS_US_EAST_1: data warehouses in us-east-1
    • JZA07263: data warehouses in us-west-2
    • VM70738: data warehouses in us-east-2
    Then, click Get shared data to accept the share.
  3. In the modal that opens, give the database a name (for example: Stripe), select the roles to grant access to (for example: SYSADMIN), then click Get Data.
  4. Confirm that you can view your Stripe data in Data From Direct Shares and Databases. You can now query your Stripe data directly in Snowflake.
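
If you prefer to accept the share with SQL instead of the Snowflake UI, a rough equivalent of steps 2 and 3 looks like the following sketch. The database name Stripe and the role SYSADMIN mirror the examples above, the provider account shown corresponds to us-east-1, and SHARE_[ACCOUNT_IDENTIFIER] remains a placeholder for your own identifier:

-- run as ACCOUNTADMIN; use the Stripe provider account that matches your region (see step 2)
create database Stripe from share GSWUDFY_STRIPE_AWS_US_EAST_1.SHARE_[ACCOUNT_IDENTIFIER];
-- grant query access to the roles of your choice (SYSADMIN is only an example)
grant imported privileges on database Stripe to role SYSADMIN;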

Query Stripe data in your data warehouse

In Snowflake and Amazon Redshift, your data is available as secure views. To query your data, follow the steps below.

View your available Stripe data by navigating to Views in the database you created. For each table, you can also see the available columns by clicking on the table and navigating to Columns.
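
You can also inspect the shared views with SQL instead of the UI. A brief sketch, assuming you named the database Stripe when you accepted the share:

-- list the secure views shared in live mode
show views in schema Stripe.STRIPE;
-- list the columns of a specific view, for example balance_transactions
describe view Stripe.STRIPE.balance_transactions;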

Database schemas

Your warehouse data is split into two database schemas, based on the API mode used to create the data.

Schema name       Description
STRIPE            Data populated from live mode
STRIPE_TESTMODE   Data populated from test mode
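
For example, the same view can be queried from either schema, depending on which mode’s data you want (again assuming a database named Stripe):

-- live mode data
select count(*) from Stripe.STRIPE.balance_transactions;
-- test mode data
select count(*) from Stripe.STRIPE_TESTMODE.balance_transactions;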

Example use case

In some cases, you might want to combine your proprietary data with Stripe data. The following example shows an orders table that records a company’s order data:

date        order_no   Stripe_txn_no        customer_name   price   items
3/20/2023   1          bt_xcVXgHcBfi83m94   John Smith      5       1 book

The table above doesn’t contain data regarding transaction fees or payouts because that data is contained solely within Stripe. In Stripe, the balance_transactions table contains the following information, but lacks proprietary data regarding customer names and items purchased:

id                   amount   available_on   fee   net   automatic_transfer_id
bt_xcVXgHcBfi83m94   500      3/20/2023      50    450   po_rC4ocAkjGy8zl3j

To access your proprietary data alongside your Stripe data, combine the orders table with Stripe’s balance_transactions table:

select
  orders.date,
  orders.order_no,
  orders.stripe_txn_no,
  bts.amount,
  bts.fee,
  bts.automatic_transfer_id
from mycompany.orders
join stripe.balance_transactions bts
  on orders.stripe_txn_no = bts.id;

After the query completes, the following information is available:

date        order_no   Stripe_txn_no        amount   fee   automatic_transfer_id
3/20/2023   1          bt_xcVXgHcBfi83m94   500      50    po_rC4ocAkjGy8zl3j
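
Building on the join above, you can also aggregate across both data sets, for example totaling Stripe fees and net amounts by order date. This is only an illustrative sketch that reuses the hypothetical mycompany.orders table:

select
  orders.date,
  sum(bts.fee) as total_fees,
  sum(bts.net) as total_net
from mycompany.orders
join stripe.balance_transactions bts
  on orders.stripe_txn_no = bts.id
group by orders.date;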

Financial reports in Data Pipeline

To speed up your financial close, you can access Stripe’s financial reports directly in your data warehouse.

At this time, financial reports aren’t available for Amazon Redshift.

Financial report templates have a FINANCIAL_REPORT prefix and are available as views in your data warehouse.
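
To see which report templates your share includes, you can list the views that carry this prefix. A quick sketch, assuming the db_name.stripe schema used in the steps below:

-- list the financial report views available in live mode
show views like 'FINANCIAL_REPORT%' in schema db_name.stripe;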

Generating financial reports in Snowflake

Generating financial reports from Data Pipeline requires setting a few custom variables. These are the same variables you set when generating the report through the Dashboard or API:

  • START_DATE (varchar): The starting date of the report (inclusive).
  • END_DATE (varchar): The ending date of the report (exclusive).
  • TIMEZONE (varchar): The time zone of non-UTC datetime columns.

You can format your dates with varying levels of precision:

START_DATE = '2021-09-01';
START_DATE = '2021-09-01 00:00:00';
START_DATE = '2021-09-01 00:00:00.000';

To set these variables and run the report query:

  1. Create a new worksheet.
  2. Set the database schema and required variables to your desired values.
-- set schema based on the name you gave your Stripe database
use schema db_name.stripe;
-- set financial report template variables
set (TIMEZONE, START_DATE, END_DATE) = ('UTC', '2021-09-01', '2021-10-01');

Run these lines of code separately before attempting to query tables that require them. Otherwise, you might receive an error that a session variable doesn’t exist.
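
If you want to confirm which variables are set in your current session before querying a report view, Snowflake lets you list them or reference a single variable with a $ prefix:

-- show all variables defined in the current session
show variables;
-- or check one variable's value
select $START_DATE;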

If you’re using the Snowflake Connector for Python, set the session parameter TIMEZONE. You can do this using the command ALTER SESSION SET TIMEZONE = 'UTC'.

  3. After running the code that sets the necessary variables, query the view of the report you want to generate. For example, running:
select * from FINANCIAL_REPORT_BALANCE_CHANGE_FROM_ACTIVITY_ITEMIZED;

yields the same results as the itemized balance change from activity report in the Stripe Dashboard or through the API.

Unsubscribing from Data Pipeline

If you currently have an active Data Pipeline subscription and want to cancel it, you can unsubscribe from Data Pipeline in the settings page of the Stripe Dashboard by clicking Unsubscribe. After you unsubscribe, you lose access to your data share immediately. To maintain your data tables, copy them to your local data warehouse instance before unsubscribing.
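
One way to preserve the data is to materialize copies of the shared views into a database you own before unsubscribing. A minimal sketch for Snowflake, where my_warehouse_db.stripe_backup is an assumed target database and schema:

-- copy a shared view into a local table before the share is revoked
create table my_warehouse_db.stripe_backup.balance_transactions as
select * from Stripe.STRIPE.balance_transactions;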

See also

  • Query transactional data
  • Query Billing data
  • Sigma and Data Pipeline for Connect platforms
  • Query Issuing data