Load data from Google Analytics to DuckDB using dlt in Python
Join our Slack community or book a call with our support engineer Violetta.
This page provides technical documentation on how to load data from Google Analytics into DuckDB using the open-source Python library dlt. Google Analytics is a service that gathers data from your websites and applications, generating reports to offer business insights. DuckDB, on the other hand, is a swift, in-process analytical database with a feature-rich SQL dialect and deep integrations into client APIs. By using dlt, you can effectively bridge these two platforms, transferring valuable data from Google Analytics to DuckDB for further analysis. For more details about the source, visit Google Analytics.
dlt Key Features
- Pipeline Metadata: dlt pipelines leverage metadata to provide governance capabilities. This metadata includes load IDs, which consist of a timestamp and pipeline name. Load IDs enable incremental transformations and data vaulting by tracking data loads and facilitating data lineage and traceability (a small sketch of inspecting load IDs follows this list). Read more about lineage.
- Schema Enforcement and Curation: dlt empowers users to enforce and curate schemas, ensuring data consistency and quality. Schemas define the structure of normalized data and guide the processing and loading of data. By adhering to predefined schemas, pipelines maintain data integrity and facilitate standardized data handling practices. Read more: Adjust a schema docs.
- Schema evolution: dlt enables proactive governance by alerting users to schema changes. When modifications occur in the source data's schema, such as table or column alterations, dlt notifies stakeholders, allowing them to take necessary actions, such as reviewing and validating the changes, updating downstream processes, or performing impact analysis.
- Scaling and finetuning: dlt offers several mechanisms and configuration options to scale up and fine-tune pipelines. This includes running extraction, normalization, and load in parallel, writing sources and resources that are run in parallel via thread pools and async execution, and fine-tuning the memory buffers, intermediary file sizes, and compression options. Read more about performance.
- Advanced topics: dlt is a constantly growing library that supports many features and use cases needed by the community. You can join the Slack community to find recent releases or discuss what you can build with dlt.
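For illustration, here is a minimal sketch of the load IDs described above. It is not part of the Google Analytics pipeline generated later in this guide; it runs a throwaway pipeline with made-up rows (the pipeline, dataset, and table names are placeholders) and prints the load IDs dlt recorded.
import dlt

# Minimal sketch with placeholder names: run a tiny pipeline into DuckDB and
# inspect the load IDs that dlt attaches to every load package.
pipeline = dlt.pipeline(
    pipeline_name="metadata_demo",
    destination="duckdb",
    dataset_name="demo_data",
)

# Load two made-up rows into a table named "example_rows".
load_info = pipeline.run([{"id": 1}, {"id": 2}], table_name="example_rows")

# Each load gets a timestamp-based load ID; dlt also stamps rows with _dlt_load_id
# and records completed loads in the _dlt_loads table, enabling lineage and traceability.
print(load_info.loads_ids)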
Getting started with your pipeline locally
0. Prerequisites
dlt requires Python 3.8 or higher. Additionally, you need to have the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.
1. Install dlt
First you need to install the dlt library with the correct extras for DuckDB:
pip install "dlt[duckdb]"
The dlt cli has a useful command to get you started with any combination of source and destination. For this example, we want to load data from Google Analytics to DuckDB. You can run the following commands to create a starting point for loading data from Google Analytics to DuckDB:
# create a new directory
mkdir google_analytics_pipeline
cd google_analytics_pipeline
# initialize a new pipeline with your source and destination
dlt init google_analytics duckdb
# install the required dependencies
pip install -r requirements.txt
The last command will install the required dependencies for your pipeline. The dependencies are listed in the requirements.txt:
google-analytics-data
google-api-python-client
google-auth-oauthlib
requests_oauthlib
dlt[duckdb]>=0.3.25
You now have the following folder structure in your project:
google_analytics_pipeline/
├── .dlt/
│ ├── config.toml # configs for your pipeline
│ └── secrets.toml # secrets for your pipeline
├── google_analytics/ # folder with source specific files
│ └── ...
├── google_analytics_pipeline.py # your main pipeline script
├── requirements.txt # dependencies for your pipeline
└── .gitignore # ignore files for git (not required)
2. Configuring your source and destination credentials
The dlt cli will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline. The automatically created versions of these files look like this:
generated config.toml
# put your configuration values here
[runtime]
log_level="WARNING" # the system log level of dlt
# use the dlthub_telemetry setting to enable/disable anonymous usage data reporting, see https://dlthub.com/docs/telemetry
dlthub_telemetry = true
[sources.google_analytics]
property_id = 0 # please set me up!
queries = ["a", "b", "c"] # please set me up!
generated secrets.toml
# put your secret values and credentials here. do not share this file and do not push it to github
[sources.google_analytics.credentials]
client_id = "client_id" # please set me up!
client_secret = "client_secret" # please set me up!
refresh_token = "refresh_token" # please set me up!
project_id = "project_id" # please set me up!
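If you prefer not to keep these values in secrets.toml (for example when running in CI), dlt can also read the same configuration and secrets from environment variables. The sketch below mirrors the TOML sections above using dlt's double-underscore naming convention; all values are placeholders you would replace with your own.
import os

# Sketch: environment variable names follow the sections in config.toml / secrets.toml,
# with double underscores separating the levels. Values below are placeholders.
os.environ["SOURCES__GOOGLE_ANALYTICS__PROPERTY_ID"] = "123456789"
os.environ["SOURCES__GOOGLE_ANALYTICS__CREDENTIALS__CLIENT_ID"] = "client_id"
os.environ["SOURCES__GOOGLE_ANALYTICS__CREDENTIALS__CLIENT_SECRET"] = "client_secret"
os.environ["SOURCES__GOOGLE_ANALYTICS__CREDENTIALS__REFRESH_TOKEN"] = "refresh_token"
os.environ["SOURCES__GOOGLE_ANALYTICS__CREDENTIALS__PROJECT_ID"] = "project_id"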
2.1. Adjust the generated code to your use case
3. Running your pipeline for the first time
The dlt cli has also created a main pipeline script for you at google_analytics_pipeline.py, as well as a folder google_analytics that contains additional Python files for your source. These files are your local copies, which you can modify to fit your needs. In some cases you may find that you only need to make small changes to your pipeline or add some configuration; in other cases these files can serve as a working starting point for your code, but they will need to be adjusted to do what you need them to do.
The main pipeline script will look something like this:
""" Loads the pipeline for Google Analytics V4. """
import time
from typing import Any
import dlt
from google_analytics import google_analytics
# this can also be filled in config.toml and be left empty as a parameter.
QUERIES = [
{
"resource_name": "sample_analytics_data1",
"dimensions": ["browser", "city"],
"metrics": ["totalUsers", "transactions"],
},
{
"resource_name": "sample_analytics_data2",
"dimensions": ["browser", "city", "dateHour"],
"metrics": ["totalUsers"],
},
]
def simple_load() -> Any:
"""
Just loads the data normally. Incremental loading for this pipeline is on,
the last load time is saved in dlt_state, and the next load of the pipeline will have the last load as a starting date.
Returns:
Load info on the pipeline that has been run.
"""
# FULL PIPELINE RUN
pipeline = dlt.pipeline(
pipeline_name="dlt_google_analytics_pipeline",
destination='duckdb',
full_refresh=False,
dataset_name="sample_analytics_data",
)
# Google Analytics source function - taking data from QUERIES defined locally instead of config
# TODO: pass your google analytics property id as google_analytics(property_id=123,..)
data_analytics = google_analytics(queries=QUERIES)
info = pipeline.run(data=data_analytics)
print(info)
return info
def simple_load_config() -> Any:
"""
Just loads the data normally. QUERIES are taken from config. Incremental loading for this pipeline is on,
the last load time is saved in dlt_state, and the next load of the pipeline will have the last load as a starting date.
Returns:
Load info on the pipeline that has been run.
"""
# FULL PIPELINE RUN
pipeline = dlt.pipeline(
pipeline_name="dlt_google_analytics_pipeline",
destination='duckdb',
full_refresh=False,
dataset_name="sample_analytics_data",
)
# Google Analytics source function - taking data from QUERIES defined locally instead of config
data_analytics = google_analytics()
info = pipeline.run(data=data_analytics)
print(info)
return info
def chose_date_first_load(start_date: str = "2000-01-01") -> Any:
"""
Chooses the starting date for the first pipeline load. Subsequent loads of the pipeline will be from the last loaded date.
Args:
start_date: The string version of the date in the format yyyy-mm-dd and some other values.
More info: https://developers.google.com/analytics/devguides/reporting/data/v1/rest/v1beta/DateRange
Returns:
Load info on the pipeline that has been run.
"""
# FULL PIPELINE RUN
pipeline = dlt.pipeline(
pipeline_name="dlt_google_analytics_pipeline",
destination='duckdb',
full_refresh=False,
dataset_name="sample_analytics_data",
)
# Google Analytics source function
data_analytics = google_analytics(start_date=start_date)
info = pipeline.run(data=data_analytics)
print(info)
return info
if __name__ == "__main__":
start_time = time.time()
simple_load()
end_time = time.time()
print(f"Time taken: {end_time-start_time}")
Provided you have set up your credentials, you can run your pipeline like a regular python script with the following command:
python google_analytics_pipeline.py
4. Inspecting your load result
You can now inspect the state of your pipeline with the dlt cli:
dlt pipeline dlt_google_analytics_pipeline info
You can also use streamlit to inspect the contents of your DuckDB destination:
# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline dlt_google_analytics_pipeline show
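If you prefer to query the destination directly, you can also open the DuckDB database file with the duckdb Python package. The sketch below assumes the default file name <pipeline_name>.duckdb in your working directory and the dataset and resource names used in the script above; adjust them to match your setup.
import duckdb

# The duckdb destination writes to <pipeline_name>.duckdb by default, so this
# file name is derived from the pipeline_name used in the pipeline script.
con = duckdb.connect("dlt_google_analytics_pipeline.duckdb")

# List the tables dlt created in the dataset (schema) configured in the pipeline.
print(
    con.sql(
        "SELECT table_name FROM information_schema.tables "
        "WHERE table_schema = 'sample_analytics_data'"
    )
)

# Peek at one of the resources defined in QUERIES (table name taken from the example).
print(con.sql("SELECT * FROM sample_analytics_data.sample_analytics_data1 LIMIT 5"))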
5. Next steps to get your pipeline running in production
One of the beauties of dlt is that we are just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:
The Deploy section will show you how to deploy your pipeline to:
- Deploy with Github Actions: Github Actions is a CI/CD runner that is basically free to use. You can deploy your pipeline with a cron schedule expression. Learn how to do it here.
- Deploy with Airflow and Google Composer: Google Composer is a managed Airflow environment provided by Google. It creates an Airflow DAG for your pipeline script that you should customize. Learn more about it here.
- Deploy with Google Cloud Functions: Google Cloud Functions is a lightweight, event-based, asynchronous compute solution that allows you to create small, single-purpose functions that respond to cloud events. Learn how to deploy with Google Cloud Functions here.
- Other Deployment Options: Apart from the above-mentioned methods, there are several other ways to deploy your pipeline. Learn more about them here.
The running in production section will teach you about:
- Monitor Your Pipeline: dlt provides a comprehensive suite of monitoring tools to help you keep track of your pipeline's performance and health. You can monitor your pipeline in real time, identify bottlenecks, and troubleshoot issues quickly. Learn more about how to monitor your pipeline.
- Set Up Alerts: Stay informed about your pipeline's status with dlt's alerting capabilities. You can set up alerts to notify you of any critical issues or changes in your pipeline, ensuring you can respond promptly to any problems. Discover how to set up alerts.
- Set Up Tracing: dlt allows you to trace your pipeline's execution, providing you with detailed insights into its performance and behavior. This feature is particularly useful for debugging and optimizing your pipeline; a small sketch of reading the last run trace follows this list. Find out how to set up tracing.
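One simple building block for monitoring and tracing is the run trace that dlt keeps for a pipeline. The sketch below assumes the pipeline from this guide has already run at least once on the same machine; it only attaches to the pipeline and prints the trace, it does not load any data.
import dlt

# Attach to the existing pipeline by name; no data is loaded here.
pipeline = dlt.pipeline(
    pipeline_name="dlt_google_analytics_pipeline",
    destination="duckdb",
    dataset_name="sample_analytics_data",
)

# last_trace holds step timings and outcomes (extract, normalize, load) from the
# most recent run, which you can log or forward to your monitoring system.
print(pipeline.last_trace)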
Additional pipeline guides
- Load data from Rest API to EDB BigAnimal in python with dlt
- Load data from Zuora to MotherDuck in python with dlt
- Load data from Slack to PostgreSQL in python with dlt
- Load data from Oracle Database to EDB BigAnimal in python with dlt
- Load data from Braze to AWS Athena in python with dlt
- Load data from Clubhouse to BigQuery in python with dlt
- Load data from AWS S3 to Azure Synapse in python with dlt
- Load data from Fivetran to CockroachDB in python with dlt
- Load data from Imgur to Google Cloud Storage in python with dlt
- Load data from Sentry to AlloyDB in python with dlt