Loading Google Analytics Data to Redshift Using Python's dlt Library
This technical documentation provides guidance on loading data from Google Analytics, a platform that gathers data from your websites and apps to generate reports offering business insights, into Redshift, a fully managed, petabyte-scale data warehouse service in the cloud. The process is facilitated by dlt, an open source Python library that lets you handle data volumes ranging from a few hundred gigabytes to over a petabyte. Detailed information about the Google Analytics source can be found at https://analytics.google.com. The objective is to enable efficient data management and provide insights for improved decision-making.
dlt Key Features
- Google Analytics: Google Analytics is a web analytics service that tracks and reports data on how users engage with your website or application. This Google Analytics dlt verified source loads data using the Google Analytics API to the destination of your choice.
- Governance Support in dlt Pipelines: dlt pipelines offer robust governance support through three key mechanisms: pipeline metadata utilization, schema enforcement and curation, and schema change alerts.
- Extracting data with dlt: Extracting data with dlt is simple: you decorate your data-producing functions with loading or incremental extraction metadata, which enables dlt to extract and load them according to your custom logic (see the short sketch after this list). dlt offers scalable data extraction by leveraging iterators, chunking, and parallelization techniques.
- Amazon Redshift: Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. The dlt library provides a simple, declarative interface for loading data into Redshift.
- Data Lineage: dlt provides robust data lineage features, allowing for easy tracking and tracing of data throughout its lifecycle. This helps ensure data integrity and facilitates compliance with data governance policies.
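To make the extraction model above concrete, here is a minimal, hypothetical sketch of a decorated resource and a pipeline run. The resource name my_rows, its sample rows, and the pipeline and dataset names are made up for illustration; the Google Analytics source used in this guide already ships such resources for you.

import dlt

# A made-up resource for illustration: any generator decorated with @dlt.resource
# becomes a table in the destination, named after the function by default.
@dlt.resource(write_disposition="append", primary_key="id")
def my_rows():
    # yield rows from any iterable, API client, or file reader
    yield from [{"id": 1, "value": "a"}, {"id": 2, "value": "b"}]

if __name__ == "__main__":
    # Redshift credentials are resolved from .dlt/secrets.toml (configured later in this guide)
    pipeline = dlt.pipeline(
        pipeline_name="toy_pipeline",
        destination="redshift",
        dataset_name="toy_data",
    )
    print(pipeline.run(my_rows()))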
Getting started with your pipeline locally
0. Prerequisites
dlt requires Python 3.8 or higher. Additionally, you need to have the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.
1. Install dlt
First you need to install the dlt library with the correct extras for Redshift:
pip install "dlt[redshift]"
The dlt cli has a useful command to get you started with any combination of source and destination. For this example, we want to load data from Google Analytics to Redshift. You can run the following commands to create a starting point for loading data from Google Analytics to Redshift:
# create a new directory
mkdir google_analytics_pipeline
cd google_analytics_pipeline
# initialize a new pipeline with your source and destination
dlt init google_analytics redshift
# install the required dependencies
pip install -r requirements.txt
The last command will install the required dependencies for your pipeline. The dependencies are listed in requirements.txt:
google-analytics-data
google-api-python-client
google-auth-oauthlib
requests_oauthlib
dlt[redshift]>=0.3.25
You now have the following folder structure in your project:
google_analytics_pipeline/
├── .dlt/
│   ├── config.toml # configs for your pipeline
│   └── secrets.toml # secrets for your pipeline
├── google_analytics/ # folder with source specific files
│   └── ...
├── google_analytics_pipeline.py # your main pipeline script
├── requirements.txt # dependencies for your pipeline
└── .gitignore # ignore files for git (not required)
2. Configuring your source and destination credentials
The dlt cli will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline. The automatically created versions of these files look like this:
generated config.toml
# put your configuration values here
[runtime]
log_level="WARNING" # the system log level of dlt
# use the dlthub_telemetry setting to enable/disable anonymous usage data reporting, see https://dlthub.com/docs/telemetry
dlthub_telemetry = true
[sources.google_analytics]
property_id = 0 # please set me up!
queries = ["a", "b", "c"] # please set me up!
generated secrets.toml
# put your secret values and credentials here. do not share this file and do not push it to github
[sources.google_analytics.credentials]
client_id = "client_id" # please set me up!
client_secret = "client_secret" # please set me up!
refresh_token = "refresh_token" # please set me up!
project_id = "project_id" # please set me up!
[destination.redshift.credentials]
database = "database" # please set me up!
password = "password" # please set me up!
username = "username" # please set me up!
host = "host" # please set me up!
port = 5439
connect_timeout = 15
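If you prefer not to keep secrets in a file (for example on a CI runner), dlt can also resolve the same values from environment variables whose names mirror the TOML sections, with segments joined by double underscores. A minimal sketch with placeholder values, set before the pipeline script runs:

import os

# same values as in secrets.toml, supplied via the environment instead
os.environ["SOURCES__GOOGLE_ANALYTICS__CREDENTIALS__CLIENT_ID"] = "client_id"
os.environ["SOURCES__GOOGLE_ANALYTICS__CREDENTIALS__CLIENT_SECRET"] = "client_secret"
os.environ["SOURCES__GOOGLE_ANALYTICS__CREDENTIALS__REFRESH_TOKEN"] = "refresh_token"
os.environ["SOURCES__GOOGLE_ANALYTICS__CREDENTIALS__PROJECT_ID"] = "project_id"
os.environ["DESTINATION__REDSHIFT__CREDENTIALS__PASSWORD"] = "password"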
2.1. Adjust the generated code to your use case
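For example, you will typically want to point the source at your own GA4 property and define the reports you care about. A hypothetical adjustment, where the property id, resource name, dimensions, and metrics are placeholders to replace with your own:

from google_analytics import google_analytics

# pass the GA4 property id and a custom report definition directly,
# instead of reading them from config.toml
source = google_analytics(
    property_id=123456789,  # placeholder: your GA4 property id
    queries=[
        {
            "resource_name": "daily_users_by_country",  # becomes the table name
            "dimensions": ["country", "date"],
            "metrics": ["totalUsers"],
        }
    ],
)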
3. Running your pipeline for the first time
The dlt cli has also created a main pipeline script for you at google_analytics_pipeline.py, as well as a folder google_analytics that contains additional Python files for your source. These files are your local copies, which you can modify to fit your needs. In some cases you may only need to make small changes to your pipeline or add some configuration; in other cases these files can serve as a working starting point for your code, but they will need to be adjusted to do what you need them to do.
The main pipeline script will look something like this:
""" Loads the pipeline for Google Analytics V4. """
import time
from typing import Any
import dlt
from google_analytics import google_analytics
# this can also be filled in config.toml and be left empty as a parameter.
QUERIES = [
{
"resource_name": "sample_analytics_data1",
"dimensions": ["browser", "city"],
"metrics": ["totalUsers", "transactions"],
},
{
"resource_name": "sample_analytics_data2",
"dimensions": ["browser", "city", "dateHour"],
"metrics": ["totalUsers"],
},
]
def simple_load() -> Any:
"""
Just loads the data normally. Incremental loading for this pipeline is on,
the last load time is saved in dlt_state, and the next load of the pipeline will have the last load as a starting date.
Returns:
Load info on the pipeline that has been run.
"""
# FULL PIPELINE RUN
pipeline = dlt.pipeline(
pipeline_name="dlt_google_analytics_pipeline",
destination='redshift',
full_refresh=False,
dataset_name="sample_analytics_data",
)
# Google Analytics source function - taking data from QUERIES defined locally instead of config
# TODO: pass your google analytics property id as google_analytics(property_id=123,..)
data_analytics = google_analytics(queries=QUERIES)
info = pipeline.run(data=data_analytics)
print(info)
return info
def simple_load_config() -> Any:
"""
Just loads the data normally. QUERIES are taken from config. Incremental loading for this pipeline is on,
the last load time is saved in dlt_state, and the next load of the pipeline will have the last load as a starting date.
Returns:
Load info on the pipeline that has been run.
"""
# FULL PIPELINE RUN
pipeline = dlt.pipeline(
pipeline_name="dlt_google_analytics_pipeline",
destination='redshift',
full_refresh=False,
dataset_name="sample_analytics_data",
)
# Google Analytics source function - taking data from QUERIES defined locally instead of config
data_analytics = google_analytics()
info = pipeline.run(data=data_analytics)
print(info)
return info
def chose_date_first_load(start_date: str = "2000-01-01") -> Any:
"""
Chooses the starting date for the first pipeline load. Subsequent loads of the pipeline will be from the last loaded date.
Args:
start_date: The string version of the date in the format yyyy-mm-dd and some other values.
More info: https://developers.google.com/analytics/devguides/reporting/data/v1/rest/v1beta/DateRange
Returns:
Load info on the pipeline that has been run.
"""
# FULL PIPELINE RUN
pipeline = dlt.pipeline(
pipeline_name="dlt_google_analytics_pipeline",
destination='redshift',
full_refresh=False,
dataset_name="sample_analytics_data",
)
# Google Analytics source function
data_analytics = google_analytics(start_date=start_date)
info = pipeline.run(data=data_analytics)
print(info)
return info
if __name__ == "__main__":
start_time = time.time()
simple_load()
end_time = time.time()
print(f"Time taken: {end_time-start_time}")
Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:
python google_analytics_pipeline.py
4. Inspecting your load result
You can now inspect the state of your pipeline with the dlt cli:
dlt pipeline dlt_google_analytics_pipeline info
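You can also attach to the same pipeline from Python and query what was loaded. A small sketch, assuming the pipeline has completed at least one run; the table name comes from the resource_name entries in QUERIES:

import dlt

# re-attach to the pipeline state stored locally by previous runs
pipeline = dlt.attach(pipeline_name="dlt_google_analytics_pipeline")

# open a SQL client against the Redshift dataset the pipeline writes to
with pipeline.sql_client() as client:
    table = client.make_qualified_table_name("sample_analytics_data1")
    print(client.execute_sql(f"SELECT count(*) FROM {table}"))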
You can also use Streamlit to inspect the contents of your Redshift destination:
# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline dlt_google_analytics_pipeline show
5. Next steps to get your pipeline running in production
One of the beauties of dlt is that it is just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:
The Deploy section will show you how to deploy your pipeline to various platforms:
- Deploy with Github Actions: Github Actions is a CI/CD runner that you can use for free. dlt provides a simple command, dlt deploy <script>.py github-action --schedule "*/30 * * * *". Learn more about the process in this guide.
- Deploy with Airflow: Google Composer is a managed Airflow environment provided by Google. dlt can create an Airflow DAG for your pipeline script that you can customize. Find more details in this guide.
- Deploy with Google Cloud Functions: Google Cloud Functions is a serverless execution environment for building and connecting cloud services. dlt offers a simple way to deploy your pipelines using Google Cloud Functions. Follow this guide to learn more.
- Other Deployment Options: dlt supports a variety of deployment options to suit your needs. Check out other methods in this guide.
The running in production section will teach you about:
- Monitor your Pipeline: dlt allows you to monitor your data pipeline in real time, helping you identify potential issues and ensure optimal performance (a minimal sketch follows this list). Learn more about it here.
- Set up Alerts: With dlt, you can set up alerts to be notified about any changes or issues in your data pipeline, so you can respond quickly and effectively to problems. Find out how to set up alerts here.
- Enable Tracing: Tracing in dlt provides detailed information about the execution of your data pipeline, which is useful for debugging and performance tuning. Learn how to set up tracing here.
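As a starting point for monitoring and alerting, the load info returned by pipeline.run() already carries per-job status. A minimal sketch, assuming it runs in the same project as the script above; raise_on_failed_jobs() turns failed jobs into an exception that a scheduler or alerting hook can pick up:

import dlt
from google_analytics import google_analytics

pipeline = dlt.pipeline(
    pipeline_name="dlt_google_analytics_pipeline",
    destination="redshift",
    dataset_name="sample_analytics_data",
)
load_info = pipeline.run(google_analytics())

print(load_info)                  # human-readable summary of load packages and jobs
load_info.raise_on_failed_jobs()  # raise if any job failed, so scheduled runs fail loudly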
Additional pipeline guides
- Load data from Capsule CRM to Databricks in python with dlt
- Load data from Notion to CockroachDB in python with dlt
- Load data from Zendesk to AWS Athena in python with dlt
- Load data from Bitbucket to MotherDuck in python with dlt
- Load data from Chess.com to Neon Serverless Postgres in python with dlt
- Load data from Looker to Azure Synapse in python with dlt
- Load data from Vimeo to MotherDuck in python with dlt
- Load data from Braze to PostgreSQL in python with dlt
- Load data from Box Platform API to MotherDuck in python with dlt
- Load data from Box Platform API to Dremio in python with dlt