
Load Data from ClickHouse Cloud to DuckDB Using dlt in Python

Need help deploying these pipelines, or figuring out how to run them in your data stack?

Join our Slack community or book a call with our support engineer Violetta.

ClickHouse Cloud is a high-performance, scalable cloud-based data warehousing solution designed for real-time analytics. It enables businesses to run complex queries on large datasets with exceptional speed and efficiency. Integrating ClickHouse Cloud with DuckDB, a fast in-process analytical database, can enhance your data processing capabilities. DuckDB supports a feature-rich SQL dialect with deep integrations into client APIs. Using the open-source Python library dlt, you can seamlessly load data from ClickHouse Cloud to DuckDB, ensuring efficient data management and advanced analytics. For more information, visit the ClickHouse Cloud website.

dlt Key Features

  • Automated maintenance: With schema inference and evolution, alerts, and short declarative code, maintenance becomes simple. Learn more
  • Run it where Python runs: On Airflow, serverless functions, notebooks. No external APIs, backends, or containers; it scales on micro and large infra alike. Learn more
  • User-friendly interface: A declarative interface that removes knowledge obstacles for beginners while empowering senior professionals. Learn more
  • Getting started guide: Dive into our Getting started guide for a quick intro to the essentials of dlt.
  • Community support: Ask questions and share how you use the library on Slack or report problems and make feature requests on GitHub. Learn more

Getting started with your pipeline locally

OpenAPI Source Generator dlt-init-openapi

This walkthrough makes use of the dlt-init-openapi generator cli tool. You can read more about it here. The code generated by this tool uses the dlt rest_api verified source, docs for this are here.

0. Prerequisites

dlt and dlt-init-openapi require Python 3.9 or higher. Additionally, you need to have the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.
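
If you are starting from scratch, a minimal environment setup could look like this (a sketch; adapt the activation command to your shell):

python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
python --version           # should report Python 3.9 or higher
pip install --upgrade pip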

1. Install dlt and dlt-init-openapi

First you need to install the dlt-init-openapi cli tool.

pip install dlt-init-openapi

The dlt-init-openapi cli is a powerful generator which you can use to turn any OpenAPI spec into a dlt source to ingest data from that API. The quality of the generated source depends on how well the API is designed and how accurate the OpenAPI spec you are using is. You may need to tweak the generated code; you can learn more about this here.

# generate pipeline
# NOTE: add_limit adds a global limit, you can remove this later
# NOTE: you will need to select which endpoints to render, you
# can just hit Enter and all will be rendered.
dlt-init-openapi clickhouse_cloud --url https://raw.githubusercontent.com/dlt-hub/openapi-specs/main/open_api_specs/Business/click_house_cloud.yaml --global-limit 2
cd clickhouse_cloud_pipeline
# install generated requirements
pip install -r requirements.txt

The last command will install the required dependencies for your pipeline. The dependencies are listed in requirements.txt:

dlt>=0.4.12

You now have the following folder structure in your project:

clickhouse_cloud_pipeline/
├── .dlt/
│   ├── config.toml                # configs for your pipeline
│   └── secrets.toml               # secrets for your pipeline
├── rest_api/                      # the rest_api verified source
│   └── ...
├── clickhouse_cloud/
│   └── __init__.py                # TODO: possibly tweak this file
├── clickhouse_cloud_pipeline.py   # your main pipeline script
├── requirements.txt               # dependencies for your pipeline
└── .gitignore                     # ignore files for git (not required)

1.1. Tweak clickhouse_cloud/__init__.py

This file contains the generated configuration of your rest_api. You can continue with the next steps and leave it as is, but you might want to come back here and make adjustments if you need your rest_api source set up in a different way. The generated file for the clickhouse_cloud source will look like this:


from typing import List

import dlt
from dlt.extract.source import DltResource
from rest_api import rest_api_source
from rest_api.typing import RESTAPIConfig


@dlt.source(name="clickhouse_cloud_source", max_table_nesting=2)
def clickhouse_cloud_source(
    base_url: str = dlt.config.value,
) -> List[DltResource]:

    # source configuration
    source_config: RESTAPIConfig = {
        "client": {
            "base_url": base_url,
        },
        "resources": [
            # Returns a list of all organization activities.
            {
                "name": "organization_id_activities",
                "table_name": "activity",
                "primary_key": "id",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "result",
                    "path": "/v1/organizations/:organizationId/activities",
                    "paginator": "auto",
                },
            },
            # Returns a single organization activity by ID.
            {
                "name": "organization_id_activities_activity_id",
                "table_name": "activity_id",
                "primary_key": "requestId",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/v1/organizations/:organizationId/activities/:activityId",
                    "paginator": "auto",
                },
            },
            # Returns a list of all keys in the organization.
            {
                "name": "organization_id_keys",
                "table_name": "api_key",
                "primary_key": "id",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "result",
                    "path": "/v1/organizations/:organizationId/keys",
                    "paginator": "auto",
                },
            },
            # Returns a list of all backups for the service. The most recent backups come first in the list.
            {
                "name": "organization_id_services_service_id_backups",
                "table_name": "backup",
                "primary_key": "id",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "result",
                    "path": "/v1/organizations/:organizationId/services/:serviceId/backups",
                    "paginator": "auto",
                },
            },
            # Returns a single backup info.
            {
                "name": "organization_id_services_service_id_backups_backup_id",
                "table_name": "backup_id",
                "primary_key": "requestId",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/v1/organizations/:organizationId/services/:serviceId/backups/:backupId",
                    "paginator": "auto",
                },
            },
            # Returns a list of all organization invitations.
            {
                "name": "organization_id_invitations",
                "table_name": "invitation",
                "primary_key": "id",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "result",
                    "path": "/v1/organizations/:organizationId/invitations",
                    "paginator": "auto",
                },
            },
            # Returns details for a single organization invitation.
            {
                "name": "organization_id_invitations_invitation_id",
                "table_name": "invitation_id",
                "primary_key": "requestId",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/v1/organizations/:organizationId/invitations/:invitationId",
                    "paginator": "auto",
                },
            },
            # Returns a single key's details.
            {
                "name": "organization_id_keys_key_id",
                "table_name": "key_id",
                "primary_key": "requestId",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/v1/organizations/:organizationId/keys/:keyId",
                    "paginator": "auto",
                },
            },
            # Returns a list of all members in the organization.
            {
                "name": "organization_id_members",
                "table_name": "member",
                "endpoint": {
                    "data_selector": "result",
                    "path": "/v1/organizations/:organizationId/members",
                    "paginator": "auto",
                },
            },
            # Returns a list with a single organization associated with the API key in the request.
            {
                # NOTE: the generator emitted an empty name here; "organizations" is an assumed fill
                "name": "organizations",
                "table_name": "organization",
                "primary_key": "id",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "result",
                    "path": "/v1/organizations",
                    "paginator": "auto",
                },
            },
            # Returns details of a single organization. In order to get the details, the auth key must belong to the organization.
            {
                "name": "organization_id",
                "table_name": "organization_id",
                "primary_key": "requestId",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/v1/organizations/:organizationId",
                    "paginator": "auto",
                },
            },
            # Information required to set up a private endpoint.
            {
                "name": "organization_id_services_service_id_private_endpoint_config",
                "table_name": "private_endpoint_config",
                "primary_key": "requestId",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/v1/organizations/:organizationId/services/:serviceId/privateEndpointConfig",
                    "paginator": "auto",
                },
            },
            # Information required to set up a private endpoint.
            {
                "name": "organization_id_private_endpoint_config",
                "table_name": "private_endpoint_config",
                "primary_key": "requestId",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/v1/organizations/:organizationId/privateEndpointConfig",
                    "params": {
                        "Cloud provider identifier": "FILL_ME_IN",  # TODO: fill in required query parameter
                        "Cloud provider region": "FILL_ME_IN",  # TODO: fill in required query parameter
                    },
                    "paginator": "auto",
                },
            },
            # Returns Prometheus metrics for a service. Please contact support to enable this feature.
            {
                "name": "organization_id_services_service_id_prometheus",
                "table_name": "prometheu",
                "endpoint": {
                    "path": "/v1/organizations/:organizationId/services/:serviceId/prometheus",
                    "paginator": "auto",
                },
            },
            # Returns a list of all services in the organization.
            {
                "name": "organization_id_services",
                "table_name": "service",
                "primary_key": "id",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "result",
                    "path": "/v1/organizations/:organizationId/services",
                    "paginator": "auto",
                },
            },
            # Returns a service that belongs to the organization.
            {
                "name": "organization_id_services_service_id",
                "table_name": "service_id",
                "primary_key": "requestId",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/v1/organizations/:organizationId/services/:serviceId",
                    "paginator": "auto",
                },
            },
            # Returns a single organization member's details.
            {
                "name": "organization_id_members_user_id",
                "table_name": "user_id",
                "primary_key": "requestId",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/v1/organizations/:organizationId/members/:userId",
                    "paginator": "auto",
                },
            },
        ],
    }

    return rest_api_source(source_config)
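
If you only need a subset of the endpoints, you do not have to delete entries from the generated config; you can select resources on the source instead. A minimal sketch (resource names taken from the generated file above):

import dlt

from clickhouse_cloud import clickhouse_cloud_source

# select only the resources you need; with_resources is standard dlt API
source = clickhouse_cloud_source().with_resources(
    "organization_id_services",
    "organization_id_members",
)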

2. Configuring your source and destination credentials

info

dlt-init-openapi will try to detect which authentication mechanism (if any) is used by the API in question and add a placeholder in your secrets.toml.

  • If you know your API needs authentication but none was detected, you can learn more about adding authentication to the rest_api here (a minimal sketch follows the generated secrets.toml below).
  • OAuth detection is currently not supported, but you can supply your own authentication mechanism as outlined here.

The dlt cli will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline. The automatically created versions of these files look like this:

generated config.toml


[runtime]
log_level="INFO"

[sources.clickhouse_cloud]
# Base URL for the API
base_url = "https://api.clickhouse.cloud"

generated secrets.toml


[sources.clickhouse_cloud]
# secrets for your clickhouse_cloud source
# example_api_key = "example value"
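
The ClickHouse Cloud API authenticates with HTTP basic auth, using an API key ID as the username and the key secret as the password. As a minimal, illustrative sketch (the secret names and the use of the rest_api "http_basic" auth shorthand are assumptions, not generator output), you could extend secrets.toml and the client config in clickhouse_cloud/__init__.py like this:

[sources.clickhouse_cloud]
api_key_id = "please set me up!"      # assumed name, not generated
api_key_secret = "please set me up!"  # assumed name, not generated

# in clickhouse_cloud/__init__.py -- a sketch, assuming the rest_api
# source's "http_basic" auth shorthand; secret names are illustrative
source_config: RESTAPIConfig = {
    "client": {
        "base_url": base_url,
        "auth": {
            "type": "http_basic",
            "username": dlt.secrets["sources.clickhouse_cloud.api_key_id"],
            "password": dlt.secrets["sources.clickhouse_cloud.api_key_secret"],
        },
    },
    # ... resources unchanged ...
}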

2.1. Adjust the generated code to your use case

Further help setting up your source and destinations

At this time, the dlt-init-openapi cli tool will always create pipelines that load to a local DuckDB instance. Switching to a different destination is trivial: change the destination parameter in clickhouse_cloud_pipeline.py from duckdb to your chosen destination and supply the credentials as outlined in the destination doc linked below (a minimal sketch follows the links).

  • Read more about setting up the rest_api source in our docs.
  • Read more about setting up the DuckDB destination in our docs.
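
For example, a minimal sketch of switching the generated pipeline to BigQuery (BigQuery is purely illustrative; any supported destination works, with its credentials supplied in .dlt/secrets.toml per the destination docs):

import dlt

from clickhouse_cloud import clickhouse_cloud_source

pipeline = dlt.pipeline(
    pipeline_name="clickhouse_cloud_pipeline",
    destination="bigquery",  # illustrative: the generated script uses "duckdb"
    dataset_name="clickhouse_cloud_data",
)
info = pipeline.run(clickhouse_cloud_source())
print(info)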

3. Running your pipeline for the first time

The dlt cli has also created a main pipeline script for you at clickhouse_cloud_pipeline.py, as well as a folder clickhouse_cloud that contains additional Python files for your source. These files are your local copies, which you can modify to fit your needs. In some cases you may only need small changes or some added configuration; in other cases these files serve as a working starting point that you will need to adjust to do what you need.

The main pipeline script will look something like this:


import dlt

from clickhouse_cloud import clickhouse_cloud_source


if __name__ == "__main__":
    pipeline = dlt.pipeline(
        pipeline_name="clickhouse_cloud_pipeline",
        destination='duckdb',
        dataset_name="clickhouse_cloud_data",
        progress="log",
        export_schema_path="schemas/export",
    )
    source = clickhouse_cloud_source()
    info = pipeline.run(source)
    print(info)

Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:

python clickhouse_cloud_pipeline.py

4. Inspecting your load result

You can now inspect the state of your pipeline with the dlt cli:

dlt pipeline clickhouse_cloud_pipeline info

You can also use streamlit to inspect the contents of your DuckDB destination:

# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline clickhouse_cloud_pipeline show
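
You can also query the DuckDB file directly from Python. A minimal sketch, assuming the duckdb destination's default behavior of writing <pipeline_name>.duckdb into your working directory:

import duckdb

# connect to the database file created by the pipeline run (default name assumed)
conn = duckdb.connect("clickhouse_cloud_pipeline.duckdb")
# list all tables across schemas, including the clickhouse_cloud_data dataset
print(conn.sql("SHOW ALL TABLES"))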

5. Next steps to get your pipeline running in production

One of the beauties of dlt is that it is just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:

The Deploy section will show you how to deploy your pipeline (a one-command sketch of the GitHub Actions route follows this list):

  • Deploy with GitHub Actions: Learn how to set up a CI/CD pipeline using GitHub Actions to automate your deployments. Read more
  • Deploy with Airflow and Google Composer: Discover how to use Airflow and Google Composer for managing and scheduling your dlt pipelines. Read more
  • Deploy with Google Cloud Functions: Follow this guide to deploy your dlt pipeline using Google Cloud Functions for a serverless experience. Read more
  • More Deployment Options: Explore additional methods and best practices for deploying your dlt pipelines. Read more
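
For instance, dlt can generate a GitHub Actions workflow for you with the dlt deploy command; a minimal sketch (the schedule is illustrative):

# generate a GitHub Actions workflow that runs the pipeline every 30 minutes
dlt deploy clickhouse_cloud_pipeline.py github-action --schedule "*/30 * * * *"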

The running in production section will teach you about:

  • How to Monitor your pipeline: Learn how to effectively monitor your dlt pipeline in production to ensure smooth operation and quickly identify any issues. How to Monitor your pipeline
  • Set up alerts: Set up alerts to get notified about important events and potential issues in your dlt pipeline, helping you maintain its reliability. Set up alerts
  • Set up tracing: Implement tracing to get detailed insights into the execution of your dlt pipeline, helping you debug and optimize performance (a config sketch follows this list). And set up tracing
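
As a small sketch of what alerting and tracing can look like in config.toml (assuming dlt's runtime.slack_incoming_hook and runtime.sentry_dsn options; the URLs are placeholders you must replace):

[runtime]
# send alerts about schema changes and load outcomes to Slack
slack_incoming_hook = "https://hooks.slack.com/services/..."
# send traces and exceptions to Sentry
sentry_dsn = "https://<key>@sentry.io/<project>"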

Available Sources and Resources

For this verified source, the following sources and resources are available:

Source ClickHouse Cloud

Streams various organizational, user activity, and configuration data from ClickHouse Cloud.

| Resource Name | Write Disposition | Description |
| --- | --- | --- |
| activity | append | Logs and tracks user activities within the ClickHouse Cloud platform. |
| api_key | append | Stores API keys used for authenticating and authorizing API requests. |
| invitation_id | append | Unique identifiers for invitations sent to users for accessing the platform. |
| organization_id | append | Unique identifiers for different organizations using the ClickHouse Cloud service. |
| prometheu | append | Stores Prometheus monitoring data for performance and health metrics. |
| invitation | append | Contains details of invitations sent to users for joining the platform. |
| activity_id | append | Unique identifiers for specific activities logged within the platform. |
| member | append | Information about members of various organizations within ClickHouse Cloud. |
| private_endpoint_config | append | Configuration settings for private endpoints used to access ClickHouse Cloud securely. |
| service | append | Details about various services provided by ClickHouse Cloud. |
| service_id | append | Unique identifiers for different services within the platform. |
| backup | append | Information about backups created for data stored in ClickHouse Cloud. |
| user_id | append | Unique identifiers for users accessing the ClickHouse Cloud platform. |
| key_id | append | Unique identifiers for API keys used within the platform. |
| backup_id | append | Unique identifiers for backups created within the platform. |
| organization | append | Details about organizations using ClickHouse Cloud, including names and contact information. |
