Getting started

What is dlt?
dlt is an open-source Python library that loads data from various, often messy data sources into well-structured datasets. It provides lightweight Python interfaces to extract, load, inspect, and transform data. dlt and its docs are built from the ground up to be used with LLMs: the LLM-native workflow takes you from pipeline code to data in a notebook for over 5,000 sources.
dlt is designed to be easy to use, flexible, and scalable:
- dlt extracts data from REST APIs, SQL databases, cloud storage, Python data structures, and many more.
- dlt infers schemas and data types, normalizes the data, and handles nested data structures.
- dlt supports a variety of popular destinations and has an interface to add custom destinations to create reverse ETL pipelines.
- dlt automates pipeline maintenance with incremental loading, schema evolution, and schema and data contracts.
- dlt offers Python and SQL data access and transformations, and supports pipeline inspection and data visualization in Marimo notebooks.
- dlt can be deployed anywhere Python runs, be it on Airflow, serverless functions, or any other cloud deployment of your choice.
To get started with dlt, install the library using pip (use a clean virtual environment for your experiments!):

```sh
pip install dlt
```
If you'd like to try out dlt without installing it on your machine, check out the Google Colab demo or use our simple marimo/WASM-based playground on this docs page.
Load data with dlt from …
- REST APIs
- SQL databases
- Cloud storage or files
- Python data structures
Use dlt's REST API source to extract data from any REST API. Define the API endpoints you'd like to fetch data from, the pagination method, and authentication, and dlt will handle the rest:
```py
import dlt
from dlt.sources.rest_api import rest_api_source

source = rest_api_source({
    "client": {
        "base_url": "https://api.example.com/",
        "auth": {
            "token": dlt.secrets["your_api_token"],
        },
        "paginator": {
            "type": "json_link",
            "next_url_path": "paging.next",
        },
    },
    "resources": ["posts", "comments"],
})

pipeline = dlt.pipeline(
    pipeline_name="rest_api_example",
    destination="duckdb",
    dataset_name="rest_api_data",
)

load_info = pipeline.run(source)

# print load info and the posts table as a data frame
print(load_info)
print(pipeline.dataset().posts.df())
```
LLMs are great at generating REST API pipelines!
- Follow the LLM tutorial and start with one of 5,000+ sources
- Follow the REST API source tutorial to learn more about the source configuration and pagination methods.
Use the SQL source to extract data from databases like PostgreSQL, MySQL, SQLite, Oracle, and more.
```py
import dlt
from dlt.sources.sql_database import sql_database

source = sql_database(
    "mysql+pymysql://rfamro@mysql-rfam-public.ebi.ac.uk:4497/Rfam"
)

pipeline = dlt.pipeline(
    pipeline_name="sql_database_example",
    destination="duckdb",
    dataset_name="sql_data",
)

load_info = pipeline.run(source)

# print load info and the "family" table as a data frame
print(load_info)
print(pipeline.dataset().family.df())
```
Follow the SQL source tutorial to learn more about the source configuration and supported databases.
The Filesystem source extracts data from AWS S3, Google Cloud Storage, Google Drive, Azure, or a local file system.
```py
import dlt
from dlt.sources.filesystem import filesystem

resource = filesystem(
    bucket_url="s3://example-bucket",
    file_glob="*.csv"
)

pipeline = dlt.pipeline(
    pipeline_name="filesystem_example",
    destination="duckdb",
    dataset_name="filesystem_data",
)

load_info = pipeline.run(resource)

# print load info and the "example" table as a data frame
print(load_info)
print(pipeline.dataset().example.df())
```
Follow the filesystem source tutorial to learn more about the source configuration and supported storage services.
dlt can load data from Python generators or directly from Python data structures:
```py
import dlt

@dlt.resource(table_name="foo_data")
def foo():
    for i in range(10):
        yield {"id": i, "name": f"This is item {i}"}

pipeline = dlt.pipeline(
    pipeline_name="python_data_example",
    destination="duckdb",
)

load_info = pipeline.run(foo)

# print load info and the "foo_data" table as a data frame
print(load_info)
print(pipeline.dataset().foo_data.df())
```
Check out the Python data structures tutorial to learn about dlt fundamentals and advanced usage scenarios.