Python Guide: Loading Stripe Data to Azure Synapse with dlt
This page provides technical documentation on how to use the open-source Python library dlt to load data from Stripe, a comprehensive payments platform, into Azure Synapse, a limitless analytics service. Stripe offers a simple API and easy integration, allowing businesses to scale faster with transparent pricing across 135+ currencies. Azure Synapse combines enterprise data warehousing and Big Data analytics, making it an ideal destination for the data processed by Stripe. With dlt, you can seamlessly transfer this valuable data from Stripe to Azure Synapse for insightful analysis. For more information on Stripe, visit https://stripe.com.
dlt Key Features
- Azure Synapse Integration: dlt provides seamless integration with Azure Synapse, a powerful analytics service. You can install the dlt library with the Synapse dependencies using the command pip install "dlt[synapse]". Read more
- Governance Support: dlt pipelines offer robust governance support through key mechanisms such as pipeline metadata utilization, schema enforcement and curation, and schema change alerts. These contribute to better data management practices, compliance adherence, and overall data governance. Read more
- Data Lineage and Schema Lineage: Understanding the lineage of your data is crucial for maintaining data integrity and trust. dlt supports data and schema lineage, providing visibility into the lifecycle of your data. Read more
- Staging Support: Azure Synapse supports Azure Blob Storage as a file staging destination. dlt first uploads Parquet files to the blob container, then instructs Synapse to read the Parquet files and load their data into a Synapse table (see the sketch after this list). Read more
- Data Extraction: Extracting data with dlt is simple and scalable. It leverages iterators, chunking, and parallelization for efficient data processing, and it utilizes implicit extraction Directed Acyclic Graphs (DAGs) for efficient API calls for data enrichments or transformations. Read more
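To illustrate the staging flow, here is a minimal sketch of a pipeline that stages Parquet files before loading them into Synapse. It assumes you have configured the filesystem destination with an az:// bucket_url and Azure credentials in .dlt/secrets.toml; the pipeline, dataset, and table names are illustrative.

import dlt

# minimal staging sketch: dlt writes Parquet files to the configured
# Azure Blob Storage container, then instructs Synapse to load them
pipeline = dlt.pipeline(
    pipeline_name="stripe_staged",      # illustrative name
    destination="synapse",
    staging="filesystem",               # blob container is configured in secrets.toml
    dataset_name="stripe_staged_data",  # illustrative name
)

# any small iterable works as a smoke test of the staged load path
load_info = pipeline.run(
    [{"id": 1, "status": "succeeded"}],
    table_name="demo_payments",
    loader_file_format="parquet",
)
print(load_info)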
Getting started with your pipeline locally
0. Prerequisites
dlt requires Python 3.8 or higher. Additionally, you need to have the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.
1. Install dlt
First you need to install the dlt library with the correct extras for Azure Synapse:
pip install "dlt[synapse]"
The dlt cli has a useful command to get you started with any combination of source and destination. For this example, we want to load data from Stripe to Azure Synapse. Run the following commands to create a starting point for your pipeline:
# create a new directory
mkdir stripe_analytics_pipeline
cd stripe_analytics_pipeline
# initialize a new pipeline with your source and destination
dlt init stripe_analytics synapse
# install the required dependencies
pip install -r requirements.txt
The last command will install the required dependencies for your pipeline. The dependencies are listed in requirements.txt:
pandas>=2.0.0
stripe>=5.0.0
dlt[synapse]>=0.3.5
You now have the following folder structure in your project:
stripe_analytics_pipeline/
├── .dlt/
│ ├── config.toml # configs for your pipeline
│ └── secrets.toml # secrets for your pipeline
├── stripe_analytics/ # folder with source specific files
│ └── ...
├── stripe_analytics_pipeline.py # your main pipeline script
├── requirements.txt # dependencies for your pipeline
└── .gitignore # ignore files for git (not required)
2. Configuring your source and destination credentials
The dlt cli will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline. The automatically created versions of these files look like this:
generated config.toml
# put your configuration values here
[runtime]
log_level="WARNING" # the system log level of dlt
# use the dlthub_telemetry setting to enable/disable anonymous usage data reporting, see https://dlthub.com/docs/telemetry
dlthub_telemetry = true
generated secrets.toml
# put your secret values and credentials here. do not share this file and do not push it to github
[sources.stripe_analytics]
stripe_secret_key = "stripe_secret_key" # please set me up!
[destination.synapse]
create_indexes = false
default_table_index_type = "heap"
staging_use_msi = false
[destination.synapse.credentials]
database = "database" # please set me up!
password = "password" # please set me up!
username = "username" # please set me up!
host = "host" # please set me up!
port = 1433
connect_timeout = 15
driver = "driver" # please set me up!
2.1. Adjust the generated code to your use case
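For example, if you only need a subset of Stripe objects, you can narrow the endpoints the source fetches and pick a dataset name that matches your use case. A minimal sketch, assuming "Customer" and "Product" are among the endpoint names this source supports:

import dlt
from stripe_analytics import stripe_source

pipeline = dlt.pipeline(
    pipeline_name="stripe_analytics",
    destination="synapse",
    dataset_name="stripe_customers",  # illustrative dataset name
)
# fetch only the Stripe objects you need ("Customer" and "Product" are assumed endpoint names)
source = stripe_source(endpoints=("Customer", "Product"))
load_info = pipeline.run(source)
print(load_info)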
3. Running your pipeline for the first time
The dlt cli has also created a main pipeline script for you at stripe_analytics_pipeline.py, as well as a folder stripe_analytics that contains additional Python files for your source. These files are your local copies, which you can modify to fit your needs. In some cases you may only need to make small changes or add some configuration; in other cases these files serve as a working starting point that you will need to adjust to do what you need them to do.
The main pipeline script will look something like this:
from typing import Optional, Tuple

import dlt
from pendulum import DateTime, datetime

from stripe_analytics import (
    ENDPOINTS,
    INCREMENTAL_ENDPOINTS,
    incremental_stripe_source,
    metrics_resource,
    stripe_source,
)


def load_data(
    endpoints: Tuple[str, ...] = ENDPOINTS + INCREMENTAL_ENDPOINTS,
    start_date: Optional[DateTime] = None,
    end_date: Optional[DateTime] = None,
) -> None:
    """
    This demo script uses the resources with non-incremental
    loading based on "replace" mode to load all data from the provided endpoints.

    Args:
        endpoints: A tuple of endpoint names to retrieve data from. Defaults to the most popular Stripe API endpoints.
        start_date: An optional start date to limit the data retrieved. Defaults to None.
        end_date: An optional end date to limit the data retrieved. Defaults to None.
    """
    pipeline = dlt.pipeline(
        pipeline_name="stripe_analytics",
        destination="synapse",
        dataset_name="stripe_updated",
    )
    source = stripe_source(
        endpoints=endpoints, start_date=start_date, end_date=end_date
    )
    load_info = pipeline.run(source)
    print(load_info)


def load_incremental_endpoints(
    endpoints: Tuple[str, ...] = INCREMENTAL_ENDPOINTS,
    initial_start_date: Optional[DateTime] = None,
    end_date: Optional[DateTime] = None,
) -> None:
    """
    This demo script demonstrates the use of resources with incremental loading, based on the "append" mode.
    This approach loads all the data on the first run and only retrieves
    the newest data on later runs, without downloading and duplicating
    a massive amount of data.

    Make sure you're loading objects that don't change over time.

    Args:
        endpoints: A tuple of incremental endpoint names to retrieve data from.
            Defaults to Stripe API endpoints with uneditable data.
        initial_start_date: An optional parameter that specifies the initial value for dlt.sources.incremental.
            If not None, only data created after initial_start_date is loaded on the first run.
            Defaults to None. Format: datetime(YYYY, MM, DD).
        end_date: An optional end date to limit the data retrieved.
            Defaults to None. Format: datetime(YYYY, MM, DD).
    """
    pipeline = dlt.pipeline(
        pipeline_name="stripe_analytics",
        destination="synapse",
        dataset_name="stripe_incremental",
    )
    # on the first run, load all data created before end_date
    source = incremental_stripe_source(
        endpoints=endpoints,
        initial_start_date=initial_start_date,
        end_date=end_date,
    )
    load_info = pipeline.run(source)
    print(load_info)

    # # loads nothing, because of incremental loading and the end date limit
    # source = incremental_stripe_source(
    #     endpoints=endpoints,
    #     initial_start_date=initial_start_date,
    #     end_date=end_date,
    # )
    # load_info = pipeline.run(source)
    # print(load_info)
    #
    # # load only the new data created after end_date
    # source = incremental_stripe_source(
    #     endpoints=endpoints,
    #     initial_start_date=initial_start_date,
    # )
    # load_info = pipeline.run(source)
    # print(load_info)


def load_data_and_get_metrics() -> None:
    """
    With the pipeline, you can calculate the most important metrics
    and store them in a database as a resource.
    Store metrics, get calculated metrics from the database, build dashboards.

    Supported metrics:
        Monthly Recurring Revenue (MRR),
        Subscription churn rate.

    The pipeline returns both metrics.

    Use the Subscription and Event endpoints to calculate the metrics.
    """
    pipeline = dlt.pipeline(
        pipeline_name="stripe_analytics",
        destination="synapse",
        dataset_name="stripe_metrics",
    )

    # Event is an endpoint with uneditable data, so we can use 'incremental_stripe_source'.
    source_event = incremental_stripe_source(endpoints=("Event",))
    # Subscription is an endpoint with editable data, so use 'stripe_source'.
    source_subs = stripe_source(endpoints=("Subscription",))

    # convert dates to the timestamp format
    source_event.resources["Event"].apply_hints(
        columns={
            "created": {"data_type": "timestamp"},
        }
    )
    source_subs.resources["Subscription"].apply_hints(
        columns={
            "created": {"data_type": "timestamp"},
        }
    )

    load_info = pipeline.run(data=[source_subs, source_event])
    print(load_info)

    resource = metrics_resource()
    load_info = pipeline.run(resource)
    print(load_info)


if __name__ == "__main__":
    # load only data created between Jan 1, 2024 (inclusive) and Feb 1, 2024 (exclusive)
    load_data(start_date=datetime(2024, 1, 1), end_date=datetime(2024, 2, 1))
    # load only data created between May 3, 2023 (inclusive) and March 1, 2024 (exclusive)
    load_incremental_endpoints(
        endpoints=("Event",),
        initial_start_date=datetime(2023, 5, 3),
        end_date=datetime(2024, 3, 1),
    )
    # load Subscription and Event data, calculate metrics, and store them in the database
    load_data_and_get_metrics()
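During development you may want each run to start from a clean slate instead of resuming incremental state. A minimal sketch, assuming the full_refresh flag is available in your dlt version; it suffixes the dataset name so every run loads into a fresh dataset:

import dlt

# development-only sketch: full_refresh creates a new, suffixed dataset on
# each run, so incremental state does not carry over between runs (assumed flag)
pipeline = dlt.pipeline(
    pipeline_name="stripe_analytics_dev",  # illustrative name
    destination="synapse",
    dataset_name="stripe_dev",
    full_refresh=True,
)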
Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:
python stripe_analytics_pipeline.py
4. Inspecting your load result
You can now inspect the state of your pipeline with the dlt cli:
dlt pipeline stripe_analytics info
You can also use streamlit to inspect the contents of your Azure Synapse destination:
# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline stripe_analytics show
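You can also inspect the loaded data programmatically. A minimal sketch, assuming the pipeline has already run on this machine and that the Subscription resource was loaded into a table named subscription (dlt normalizes resource names to snake case):

import dlt

# attach to the existing local pipeline state by name
pipeline = dlt.attach(pipeline_name="stripe_analytics")

# run a quick row-count query against the Synapse destination
with pipeline.sql_client() as client:
    rows = client.execute_sql("SELECT COUNT(*) FROM subscription")
    print("subscription rows:", rows[0][0])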
5. Next steps to get your pipeline running in production
One of the beauties of dlt is that it is just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:
The Deploy section will show you how to deploy your pipeline to various environments:

- Deploy with GitHub Actions: dlt supports deployment using GitHub Actions. This provides a free CI/CD runner for your pipeline.
- Deploy with Airflow: You can deploy your dlt pipeline using Airflow. This guide provides detailed instructions on deploying with Google Composer, a managed Airflow environment.
- Deploy with Google Cloud Functions: dlt can also be deployed using Google Cloud Functions. This allows you to run your pipeline on Google's serverless platform.
- Other Deployment Options: There are other ways to deploy your dlt pipeline as well. Check out the deployment guide for more options.
The running in production section will teach you about:
- Monitor your pipeline: dlt provides tools to monitor your pipeline and ensure it's running smoothly in production. You can check the status of your pipelines, view logs, and inspect the data being loaded. Learn how to monitor your pipeline.
- Set up alerts: With dlt, you can set up alerts to notify you of any issues or changes in your pipeline. This feature ensures that you are always informed about the status of your pipelines. Find out how to set up alerts.
- Set up tracing: Tracing is a powerful tool in dlt that allows you to track the execution of your pipelines. It provides detailed information about the execution of your pipeline, helping you to identify any potential issues. Learn how to set up tracing. (A minimal monitoring sketch follows this list.)
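As a starting point for monitoring, the objects returned by a pipeline run already carry useful signals. A minimal sketch based on the script above; raise_on_failed_jobs and last_trace are assumed to be available in your dlt version:

import dlt
from stripe_analytics import stripe_source

pipeline = dlt.pipeline(
    pipeline_name="stripe_analytics",
    destination="synapse",
    dataset_name="stripe_updated",
)
load_info = pipeline.run(stripe_source(endpoints=("Subscription",)))

# fail loudly (e.g. in CI) if any load job did not complete
load_info.raise_on_failed_jobs()

# the trace of the last run holds step timings and row counts for inspection
print(pipeline.last_trace)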
Available Sources and Resources
For this verified source, the following sources and resources are available:

Source incremental_stripe_source

This source provides detailed transactional and subscription data from Stripe's payment platform.

Resource Name | Write Disposition | Description
--- | --- | ---
Event | append | This resource retrieves significant activities in a Stripe account. It includes detailed information about various transactions like payments, invoices, subscriptions, etc.
Source stripe_source

The Stripe source provides transactional data, subscription details, and key business metrics from the Stripe platform.

Resource Name | Write Disposition | Description
--- | --- | ---
Metrics | append | This resource provides key metrics for the Stripe account, such as churn rate, creation date, and monthly recurring revenue (MRR).
Subscription | replace | This resource includes detailed information about subscriptions in the Stripe account, including billing details, discount coupons, invoice settings, and more.
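If you only need some of these resources, you can select them from a source before running the pipeline. A minimal sketch using dlt's with_resources; the resource name matches the table above, and the dataset name is illustrative:

import dlt
from stripe_analytics import stripe_source

pipeline = dlt.pipeline(
    pipeline_name="stripe_analytics",
    destination="synapse",
    dataset_name="stripe_subscriptions",  # illustrative dataset name
)
# load only the Subscription resource from stripe_source
source = stripe_source().with_resources("Subscription")
load_info = pipeline.run(source)
print(load_info)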
Additional pipeline guides
- Load data from Attio to Supabase in python with dlt
- Load data from GitHub to PostgreSQL in python with dlt
- Load data from Slack to Timescale in python with dlt
- Load data from MySQL to Microsoft SQL Server in python with dlt
- Load data from Spotify to AWS Athena in python with dlt
- Load data from Braze to AWS S3 in python with dlt
- Load data from Attio to YugabyteDB in python with dlt
- Load data from Google Cloud Storage to AlloyDB in python with dlt
- Load data from IFTTT to Google Cloud Storage in python with dlt
- Load data from DigitalOcean to Azure Cloud Storage in python with dlt