Loading Data from Looker to ClickHouse with dlt in Python
Looker is a modern data platform that enables businesses to explore, analyze, and share real-time business analytics easily. It provides powerful tools for data visualization, dashboards, and interactive reports. Looker helps businesses make data-driven decisions by connecting directly to their databases and allowing users to create custom queries and visualizations without needing extensive SQL knowledge. ClickHouse is a fast, open-source, column-oriented database management system that allows generating analytical data reports in real time using SQL queries. This documentation covers how to load data from Looker to ClickHouse using the open-source Python library dlt. For further information about Looker, visit their website.
dlt Key Features
- Scalability via iterators, chunking, and parallelization: dlt offers scalable data extraction by leveraging iterators, chunking, and parallelization techniques. This approach allows for efficient processing of large datasets by breaking them down into manageable chunks (see the sketch after this list). Learn more
- Implicit extraction DAGs: dlt incorporates the concept of implicit extraction DAGs to handle the dependencies between data sources and their transformations automatically. Learn more
- Governance Support: dlt pipelines offer robust governance support through pipeline metadata, schema enforcement, and schema change alerts. Learn more
- Schema Evolution: dlt enables proactive governance by alerting users to schema changes, allowing them to take necessary actions. Learn more
- Data Types: dlt supports a wide range of data types including text, double, bool, timestamp, date, time, bigint, binary, complex, decimal, and wei. Learn more
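To make the scalability point concrete, here is a minimal, hypothetical sketch of a chunked dlt resource. It is not part of the Looker source generated below; the events data is a stand-in for any API or database cursor, and duckdb is used only to keep the sketch self-contained.

import dlt

# Hypothetical resource: yields rows in chunks instead of one big list,
# so dlt can normalize and load them incrementally.
@dlt.resource(table_name="events", write_disposition="append")
def events(chunk_size: int = 1000):
    data = [{"id": i, "value": i * 2} for i in range(5000)]  # stand-in for an API cursor
    for start in range(0, len(data), chunk_size):
        yield data[start:start + chunk_size]

if __name__ == "__main__":
    pipeline = dlt.pipeline(pipeline_name="chunking_demo", destination="duckdb", dataset_name="demo")
    print(pipeline.run(events()))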
Getting started with your pipeline locally
dlt-init-openapi
0. Prerequisites
dlt and dlt-init-openapi require Python 3.9 or higher. Additionally, you need to have the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.
1. Install dlt and dlt-init-openapi
First you need to install the dlt-init-openapi CLI tool:
pip install dlt-init-openapi
The dlt-init-openapi CLI is a powerful generator which you can use to turn any OpenAPI spec into a dlt source to ingest data from that API. The quality of the generated source depends on how well the API is designed and how accurate the OpenAPI spec you are using is. You may need to make tweaks to the generated code; you can learn more about this here.
# generate pipeline
# NOTE: add_limit adds a global limit, you can remove this later
# NOTE: you will need to select which endpoints to render, you
# can just hit Enter and all will be rendered.
dlt-init-openapi looker --url https://raw.githubusercontent.com/dlt-hub/openapi-specs/main/open_api_specs/Business/looker.yaml --global-limit 2
cd looker_pipeline
# install generated requirements
pip install -r requirements.txt
The last command will install the required dependencies for your pipeline. The dependencies are listed in the requirements.txt:
dlt>=0.4.12
You now have the following folder structure in your project:
looker_pipeline/
├── .dlt/
│ ├── config.toml # configs for your pipeline
│ └── secrets.toml # secrets for your pipeline
├── rest_api/ # The rest api verified source
│ └── ...
├── looker/
│ └── __init__.py # TODO: possibly tweak this file
├── looker_pipeline.py # your main pipeline script
├── requirements.txt # dependencies for your pipeline
└── .gitignore # ignore files for git (not required)
1.1. Tweak looker/__init__.py
This file contains the generated configuration of your rest_api source. You can continue with the next steps and leave it as is, but you might want to come back here and make adjustments if you need your rest_api source set up in a different way (see the example sketch after the generated file). The generated file for the looker source will look like this:
from typing import List

import dlt
from dlt.extract.source import DltResource

from rest_api import rest_api_source
from rest_api.typing import RESTAPIConfig


@dlt.source(name="looker_source", max_table_nesting=2)
def looker_source(
    base_url: str = dlt.config.value,
) -> List[DltResource]:

    # source configuration
    source_config: RESTAPIConfig = {
        "client": {
            "base_url": base_url,
        },
        "resources": [
            # Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set.
            {
                "name": "resourceget_iam_policy",
                "table_name": "audit_config",
                "endpoint": {
                    "data_selector": "auditConfigs",
                    "path": "/v1/{resource}:getIamPolicy",
                    "params": {
                        # the parameters below can optionally be configured
                        # "$.xgafv": "OPTIONAL_CONFIG",
                        # "access_token": "OPTIONAL_CONFIG",
                        # "alt": "OPTIONAL_CONFIG",
                        # "callback": "OPTIONAL_CONFIG",
                        # "fields": "OPTIONAL_CONFIG",
                        # "key": "OPTIONAL_CONFIG",
                        # "oauth_token": "OPTIONAL_CONFIG",
                        # "prettyPrint": "OPTIONAL_CONFIG",
                        # "quotaUser": "OPTIONAL_CONFIG",
                        # "upload_protocol": "OPTIONAL_CONFIG",
                        # "uploadType": "OPTIONAL_CONFIG",
                        # "options.requestedPolicyVersion": "OPTIONAL_CONFIG",
                    },
                    "paginator": "auto",
                }
            },
            # Lists Instances in a given project and location.
            {
                "name": "instances",
                "table_name": "instance",
                "endpoint": {
                    "data_selector": "instances",
                    "path": "/v1/{parent}/instances",
                    "params": {
                        "parent": "FILL_ME_IN",  # TODO: fill in required path parameter
                        # the parameters below can optionally be configured
                        # "$.xgafv": "OPTIONAL_CONFIG",
                        # "access_token": "OPTIONAL_CONFIG",
                        # "alt": "OPTIONAL_CONFIG",
                        # "callback": "OPTIONAL_CONFIG",
                        # "fields": "OPTIONAL_CONFIG",
                        # "key": "OPTIONAL_CONFIG",
                        # "oauth_token": "OPTIONAL_CONFIG",
                        # "prettyPrint": "OPTIONAL_CONFIG",
                        # "quotaUser": "OPTIONAL_CONFIG",
                        # "upload_protocol": "OPTIONAL_CONFIG",
                        # "uploadType": "OPTIONAL_CONFIG",
                        # "pageSize": "OPTIONAL_CONFIG",
                        # "pageToken": "OPTIONAL_CONFIG",
                    },
                    "paginator": "auto",
                }
            },
            # Lists information about the supported locations for this service.
            {
                "name": "locations",
                "table_name": "location",
                "primary_key": "name",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "locations",
                    "path": "/v1/{name}/locations",
                    "params": {
                        "name": "FILL_ME_IN",  # TODO: fill in required path parameter
                        # the parameters below can optionally be configured
                        # "$.xgafv": "OPTIONAL_CONFIG",
                        # "access_token": "OPTIONAL_CONFIG",
                        # "alt": "OPTIONAL_CONFIG",
                        # "callback": "OPTIONAL_CONFIG",
                        # "fields": "OPTIONAL_CONFIG",
                        # "key": "OPTIONAL_CONFIG",
                        # "oauth_token": "OPTIONAL_CONFIG",
                        # "prettyPrint": "OPTIONAL_CONFIG",
                        # "quotaUser": "OPTIONAL_CONFIG",
                        # "upload_protocol": "OPTIONAL_CONFIG",
                        # "uploadType": "OPTIONAL_CONFIG",
                        # "filter": "OPTIONAL_CONFIG",
                        # "pageSize": "OPTIONAL_CONFIG",
                        # "pageToken": "OPTIONAL_CONFIG",
                    },
                    "paginator": "auto",
                }
            },
            # Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.
            {
                "name": "",
                "table_name": "operation",
                "primary_key": "name",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/v1/{name}",
                    "params": {
                        "name": "FILL_ME_IN",  # TODO: fill in required path parameter
                        # the parameters below can optionally be configured
                        # "$.xgafv": "OPTIONAL_CONFIG",
                        # "access_token": "OPTIONAL_CONFIG",
                        # "alt": "OPTIONAL_CONFIG",
                        # "callback": "OPTIONAL_CONFIG",
                        # "fields": "OPTIONAL_CONFIG",
                        # "key": "OPTIONAL_CONFIG",
                        # "oauth_token": "OPTIONAL_CONFIG",
                        # "prettyPrint": "OPTIONAL_CONFIG",
                        # "quotaUser": "OPTIONAL_CONFIG",
                        # "upload_protocol": "OPTIONAL_CONFIG",
                        # "uploadType": "OPTIONAL_CONFIG",
                    },
                    "paginator": "auto",
                }
            },
            # Lists operations that match the specified filter in the request. If the server doesn't support this method, it returns `UNIMPLEMENTED`.
            {
                "name": "operations",
                "table_name": "operation",
                "primary_key": "name",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "operations",
                    "path": "/v1/{name}/operations",
                    "params": {
                        "name": "FILL_ME_IN",  # TODO: fill in required path parameter
                        # the parameters below can optionally be configured
                        # "$.xgafv": "OPTIONAL_CONFIG",
                        # "access_token": "OPTIONAL_CONFIG",
                        # "alt": "OPTIONAL_CONFIG",
                        # "callback": "OPTIONAL_CONFIG",
                        # "fields": "OPTIONAL_CONFIG",
                        # "key": "OPTIONAL_CONFIG",
                        # "oauth_token": "OPTIONAL_CONFIG",
                        # "prettyPrint": "OPTIONAL_CONFIG",
                        # "quotaUser": "OPTIONAL_CONFIG",
                        # "upload_protocol": "OPTIONAL_CONFIG",
                        # "uploadType": "OPTIONAL_CONFIG",
                        # "filter": "OPTIONAL_CONFIG",
                        # "pageSize": "OPTIONAL_CONFIG",
                        # "pageToken": "OPTIONAL_CONFIG",
                    },
                    "paginator": "auto",
                }
            },
        ]
    }

    return rest_api_source(source_config)
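As an illustration of the kind of tweak you might make in looker/__init__.py, the sketch below trims the configuration to a single resource and applies a shared write disposition through resource_defaults. It is an example only: the resource selection and the "replace" disposition are assumptions for this sketch, not part of the generated file.

from typing import List

import dlt
from dlt.extract.source import DltResource

from rest_api import rest_api_source
from rest_api.typing import RESTAPIConfig


# Illustrative rewrite (not the generated original): keep only the "instances"
# resource and set a default write disposition for all resources.
@dlt.source(name="looker_source", max_table_nesting=2)
def looker_source(
    base_url: str = dlt.config.value,
) -> List[DltResource]:
    source_config: RESTAPIConfig = {
        "client": {"base_url": base_url},
        "resource_defaults": {
            "write_disposition": "replace",  # example default; pick what fits your tables
        },
        "resources": [
            {
                "name": "instances",
                "table_name": "instance",
                "endpoint": {
                    "data_selector": "instances",
                    "path": "/v1/{parent}/instances",
                    "params": {
                        "parent": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                    "paginator": "auto",
                },
            },
        ],
    }
    return rest_api_source(source_config)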
2. Configuring your source and destination credentials
dlt-init-openapi will try to detect which authentication mechanism (if any) is used by the API in question and add a placeholder in your secrets.toml.
The dlt cli will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline. The automatically created version of these files looks like this:
generated config.toml
[runtime]
log_level="INFO"
[sources.looker]
# Base URL for the API
base_url = "https://looker.googleapis.com/"
generated secrets.toml
[sources.looker]
# secrets for your looker source
# example_api_key = "example value"
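If you prefer not to store values in these files, dlt can also read configuration and secrets from environment variables that mirror the TOML layout (section and key names joined with double underscores, upper-cased). A minimal sketch; the commented API key is just the placeholder from the generated secrets.toml:

import os

import dlt
from looker import looker_source

# Environment variables mirror the TOML layout: SECTION__SUBSECTION__KEY.
os.environ["SOURCES__LOOKER__BASE_URL"] = "https://looker.googleapis.com/"
# os.environ["SOURCES__LOOKER__EXAMPLE_API_KEY"] = "example value"  # placeholder secret

pipeline = dlt.pipeline(pipeline_name="looker_pipeline", destination="duckdb", dataset_name="looker_data")
print(pipeline.run(looker_source()))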
2.1. Adjust the generated code to your use case
At this time, the dlt-init-openapi CLI tool will always create pipelines that load to a local duckdb instance. Switching to a different destination is trivial: all you need to do is change the destination parameter in looker_pipeline.py to clickhouse and supply the credentials as outlined in the destination doc linked below.
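A minimal sketch of that change is shown below. It assumes you have added ClickHouse credentials as described in the destination documentation; the credential field names mentioned in the comments are indicative rather than exhaustive.

import dlt
from looker import looker_source

pipeline = dlt.pipeline(
    pipeline_name="looker_pipeline",
    destination="clickhouse",  # was 'duckdb' in the generated script
    dataset_name="looker_data",
    progress="log",
)
# Credentials are read from .dlt/secrets.toml (e.g. a [destination.clickhouse.credentials]
# section with host, port, username, password, database) or from equivalent
# DESTINATION__CLICKHOUSE__CREDENTIALS__* environment variables.
print(pipeline.run(looker_source()))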
3. Running your pipeline for the first time
The dlt cli has also created a main pipeline script for you at looker_pipeline.py, as well as a folder looker that contains additional Python files for your source. These files are your local copies which you can modify to fit your needs. In some cases you may find that you only need to make small changes to your pipelines or add some configurations; in other cases these files can serve as a working starting point for your code, but will need to be adjusted to do what you need them to do.
The main pipeline script will look something like this:
import dlt

from looker import looker_source


if __name__ == "__main__":
    pipeline = dlt.pipeline(
        pipeline_name="looker_pipeline",
        destination='duckdb',
        dataset_name="looker_data",
        progress="log",
        export_schema_path="schemas/export"
    )
    source = looker_source()
    info = pipeline.run(source)
    print(info)
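If you only want to load some endpoints, or keep a small row cap while testing, you can adjust the source before running it. A sketch, assuming the required FILL_ME_IN path parameters in looker/__init__.py have been filled in:

import dlt
from looker import looker_source

if __name__ == "__main__":
    pipeline = dlt.pipeline(
        pipeline_name="looker_pipeline",
        destination='duckdb',
        dataset_name="looker_data",
        progress="log",
    )
    # Load only selected endpoints and cap rows while testing; drop add_limit()
    # once you want full loads (this is similar to what the generator's
    # --global-limit flag configures).
    source = looker_source().with_resources("instances", "locations").add_limit(10)
    print(pipeline.run(source))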
Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:
python looker_pipeline.py
4. Inspecting your load result
You can now inspect the state of your pipeline with the dlt
cli:
dlt pipeline looker_pipeline info
You can also use Streamlit to inspect the contents of your ClickHouse destination:
# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline looker_pipeline show
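You can also check the load result from Python by querying the destination through the pipeline's SQL client. A small sketch; it assumes the instance resource was loaded and that your ClickHouse credentials are configured as before:

import dlt

# Attach to the same pipeline that ran the load; destination credentials
# are picked up from .dlt/secrets.toml.
pipeline = dlt.pipeline(
    pipeline_name="looker_pipeline",
    destination="clickhouse",
    dataset_name="looker_data",
)
with pipeline.sql_client() as client:
    table = client.make_qualified_table_name("instance")  # resolves the destination-specific name
    rows = client.execute_sql(f"SELECT count(*) FROM {table}")
    print("instance rows:", rows[0][0])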
5. Next steps to get your pipeline running in production
One of the beauties of dlt is that we are just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:
The Deploy section will show you how to deploy your pipeline:
- Deploy with GitHub Actions: Learn how to deploy your dlt pipeline using GitHub Actions for automated workflows. Follow the steps here.
- Deploy with Airflow: Use Airflow and Google Composer to manage and schedule your dlt pipelines. Detailed instructions can be found here.
- Deploy with Google Cloud Functions: Discover how to deploy dlt pipelines using Google Cloud Functions for serverless execution. Check out the guide here.
- Explore other deployment options: For more deployment strategies and options, visit the comprehensive guide here.
The running in production section will teach you about:
- How to Monitor your pipeline: Learn how to effectively monitor your dlt pipelines to ensure smooth operation and quick identification of issues. Read more
- Set up alerts: Configure alerts to stay informed about the status and performance of your dlt pipelines, allowing you to respond promptly to any issues. Read more
- Set up tracing: Implement tracing to gain detailed insights into the execution of your dlt pipelines, helping you to diagnose and resolve problems efficiently. Read more
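Much of this monitoring starts with the objects a pipeline run already returns. A minimal sketch; how you forward this information to your alerting channel is up to you:

import dlt
from looker import looker_source

pipeline = dlt.pipeline(pipeline_name="looker_pipeline", destination="clickhouse", dataset_name="looker_data")

load_info = pipeline.run(looker_source())
# Fail loudly (e.g. in a scheduled job) if any load package did not make it in.
load_info.raise_on_failed_jobs()
# The trace of the last run holds timings and step outcomes useful for monitoring.
print(pipeline.last_trace)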
Available Sources and Resources
For this verified source, the following sources and resources are available:
Source Looker: Streams Looker data including configurations, operations, and audit logs.
| Resource Name | Write Disposition | Description |
| --- | --- | --- |
| instance | append | Represents an instance in Looker, containing configuration details and status of the instance. |
| audit_config | append | Contains audit configurations, which track user activities and changes within Looker. |
| location | append | Details regarding the geographical location and associated settings of the Looker instance. |
| operation | append | Represents various operations performed within Looker, including their status and metadata. |
Additional pipeline guides
- Load data from Capsule CRM to AlloyDB in python with dlt
- Load data from Chargebee to Azure Cosmos DB in python with dlt
- Load data from Qualtrics to AWS S3 in python with dlt
- Load data from PostgreSQL to Dremio in python with dlt
- Load data from HubSpot to Redshift in python with dlt
- Load data from Chess.com to Microsoft SQL Server in python with dlt
- Load data from Spotify to Supabase in python with dlt
- Load data from Fivetran to Azure Cosmos DB in python with dlt
- Load data from Attio to AWS S3 in python with dlt
- Load data from Google Analytics to ClickHouse in python with dlt