Postgres
Install dlt with PostgreSQL
To install the dlt library with PostgreSQL dependencies, run:
pip install "dlt[postgres]"
Setup Guide
1. Initialize a project with a pipeline that loads to Postgres by running:
dlt init chess postgres
2. Install the necessary dependencies for Postgres by running:
pip install -r requirements.txt
This will install dlt with the postgres extra, which contains the psycopg2 client.
3. After setting up a Postgres instance and a psql / query editor, create a new database by running:
CREATE DATABASE dlt_data;
Add the dlt_data database to .dlt/secrets.toml.
4. Create a new user by running:
CREATE USER loader WITH PASSWORD '<password>';
Add the loader user and <password> password to .dlt/secrets.toml.
5. Give the loader user owner permissions by running:
ALTER DATABASE dlt_data OWNER TO loader;
You can set more restrictive permissions instead (e.g., grant the user access to a specific schema only).
6. Enter your credentials into .dlt/secrets.toml. It should now look like this:
[destination.postgres.credentials]
database = "dlt_data"
username = "loader"
password = "<password>" # replace with your password
host = "localhost" # or the IP address location of your database
port = 5432
connect_timeout = 15
You can also pass a database connection string similar to the one used by the psycopg2 library or SQLAlchemy. The credentials above will look like this:
# keep it at the top of your toml file! before any section starts
destination.postgres.credentials="postgresql://loader:<password>@localhost/dlt_data?connect_timeout=15"
To pass credentials directly, use an explicit instance of the destination:
pipeline = dlt.pipeline(
    pipeline_name='chess',
    destination=dlt.destinations.postgres("postgresql://loader:<password>@localhost/dlt_data"),
    dataset_name='chess_data'
)
Write disposition
All write dispositions are supported.
If you set the replace strategy to staging-optimized, the destination tables will be dropped and replaced by the staging tables.
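A minimal sketch of opting into this strategy, assuming you keep dlt settings in .dlt/config.toml (the same replace_strategy option can also be provided through environment variables):
[destination]
replace_strategy = "staging-optimized"  # assumption: applies to all destinations used by the pipeline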
Data loading
dlt will load data using large INSERT VALUES statements by default. Loading is multithreaded (20 threads by default).
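The number of parallel load jobs is configurable. A minimal sketch, assuming the standard load.workers setting in .dlt/config.toml:
[load]
workers = 4  # assumption: lowers the number of load threads from the default of 20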
Fast loading with arrow tables and csv
You can use arrow tables and csv to quickly load tabular data. Pick the csv loader file format as shown below:
info = pipeline.run(arrow_table, loader_file_format="csv")
In the example above, arrow_table will be converted to csv with pyarrow and then streamed into postgres with the COPY command. This method skips the regular dlt normalizer used for Python objects and is several times faster.
Supported file formats
- insert-values is used by default.
- csv is supported.
Supported column hints
postgres will create unique indexes for all columns with the unique hint. This behavior can be disabled; see Additional destination options below.
Table and column identifiers
Postgres supports both case-sensitive and case-insensitive identifiers. All unquoted and lowercase identifiers resolve case-insensitively in SQL statements. Case-insensitive naming conventions, such as the default snake_case, will generate case-insensitive identifiers. Case-sensitive naming conventions (like sql_cs_v1) will generate case-sensitive identifiers that must be quoted in SQL statements.
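A minimal sketch of switching to a case-sensitive convention, assuming the naming convention is selected via the schema section of .dlt/config.toml:
[schema]
naming = "sql_cs_v1"  # assumption: produces case-sensitive identifiers that are quoted in SQL statements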
Additional destination options
The Postgres destination creates UNIQUE indexes by default on columns with the unique hint (i.e., _dlt_id). To disable this behavior:
[destination.postgres]
create_indexes=false
Setting up csv format
You can provide non-default csv settings via a configuration file or explicitly in code.
[destination.postgres.csv_format]
delimiter="|"
include_header=false
or
from dlt.destinations import postgres
from dlt.common.data_writers.configuration import CsvFormatConfiguration
csv_format = CsvFormatConfiguration(delimiter="|", include_header=False)
dest_ = postgres(csv_format=csv_format)
Above, we set the csv format to use no header and | as the separator. You'll need these settings when importing external files.
dbt support
This destination integrates with dbt via dbt-postgres.
Syncing of dlt state
This destination fully supports dlt state sync.
Additional Setup guides
- Load data from Braze to Azure Cosmos DB in python with dlt
- Load data from GitHub to EDB BigAnimal in python with dlt
- Load data from Cisco Meraki to EDB BigAnimal in python with dlt
- Load data from Imgur to CockroachDB in python with dlt
- Load data from Pinterest to YugabyteDB in python with dlt
- Load data from IBM Db2 to CockroachDB in python with dlt
- Load data from Braze to Supabase in python with dlt
- Load data from Capsule CRM to Timescale in python with dlt
- Load data from Apple App-Store Connect to YugabyteDB in python with dlt
- Load data from AWS S3 to CockroachDB in python with dlt