dltHub

Lightweight Python code to move data

We focus on the needs & constraints of Python-first data platform teams: how to build pipelines for any data source, achieve data democracy, modernise legacy systems, and reduce cloud costs.

Trusted By

6M+

PyPI Downloads

8,000+

OSS companies in production

600+

Snowflake customers in production

OPEN SOURCE

pip install dlt and go

dlt (data load tool) is the most popular production-ready open source Python library for moving data. It loads data from various and often messy data sources into well-structured, live datasets.

Unlike non-Python solutions, the dlt library requires no separate backends or containers. It does not replace your data platform, deployments, or security models. Simply import it in your favorite AI code editor, or add it to your Jupyter Notebook. You can load data from any source that produces Python data structures, including APIs, files, databases, and more.

import dlt
from dlt.sources.filesystem import filesystem

# Read all CSV files from an S3 bucket as a dlt resource
resource = filesystem(
    bucket_url="s3://example-bucket",
    file_glob="*.csv"
)

# Load the files into a local DuckDB dataset
pipeline = dlt.pipeline(
    pipeline_name="filesystem_example",
    destination="duckdb",
    dataset_name="filesystem_data",
)

pipeline.run(resource)
[Image: the command pip install "dlt[hub]" surrounded by logos of REST API sources]
DLTHUB CONTEXT

Made for LLMs: Data source to Live Reports in Python

dltHub Context is a hub of AI-native context assets - skills, commands, hooks, AGENT.md, and other coding files - that let you and an LLM code any dlt pipeline, from any REST API to any dlt destination, within minutes.

We already support more than 10,100 sources, and see a clear path toward hundreds of thousands. Go from writing pipeline code to ingesting data and delivering reports via Notebooks, all in one flow, with outputs tailored to data users.

DLTHUB VISION

From Open Source EL to Data Infrastructure That Feels Like Python

dlt makes extracting and loading data simple and Pythonic. With dltHub, we’re taking the next step - extending into ELT, storage, and runtime.

dltHub turns complex data workflows into something any Python developer can run: deployed pipelines, transformations, and notebooks.

We’re building dltHub in close collaboration with users in highly regulated industries like finance and healthcare - where governance, security, and compliance (like BCBS 239 for risk reporting) are non-negotiable. dltHub brings those guarantees while preserving Pythonic simplicity, complete data lineage, observability, and quality control - all in a platform that feels as natural as writing code.

Our goal is to make dltHub available to individual developers, small teams, and enterprises alike. The first release - dltHub for individual developers - is coming in Q1 2026.


The current machine learning revolution has been enabled by the Cambrian explosion of Python open-source tools that have become so accessible that a wide range of practitioners can use them. As a simple-to-use Python library, dlt is the first tool that this new wave of people can use. By leveraging this library, we can extend the machine learning revolution into enterprise data.

Julien Chaumond
CTO/Co-Founder at Hugging Face

Python and machine learning under security constraints are key to our success. We found that our cloud ETL provider could not meet our needs. dlt is a lightweight yet powerful open source tool we can run together with Snowflake. Our event streaming and batch data loading performs at scale and low cost. Now anyone who knows Python can self-serve to fulfil their data needs.

Maximilian Eber
CPTO & Co-Founder at Taktile

Get started building