# Create new destination

You can use the `@dlt.destination` decorator and implement a sink function. This is a perfect way to implement reverse ETL components that push data back to REST APIs.
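For example, a minimal sketch of such a reverse ETL sink (the endpoint and payload below are made up for illustration) could look like this:

```py
import dlt
from dlt.common.typing import TDataItems
from dlt.common.schema import TTableSchema


@dlt.destination(batch_size=10)
def api_sink(items: TDataItems, table: TTableSchema) -> None:
    # each call receives a batch of rows destined for one table;
    # here we would POST them to a (hypothetical) REST API
    for item in items:
        print(f"pushing {item} to /api/{table['name']}")


if __name__ == "__main__":
    pipeline = dlt.pipeline("reverse_etl_demo", destination=api_sink)
    pipeline.run([{"id": 1}, {"id": 2}], table_name="events")
```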
`dlt` can import destinations from external Python modules. Below we show how to quickly add a `dbapi`-based destination. `dbapi` is a standardized interface to access databases in Python. If you have used, i.e., postgres (i.e., `psycopg2`), you are already familiar with it.

🧪 This guide is not comprehensive. The internal interfaces are still evolving. Besides reading the info below, you should check out the source code of existing destinations.
## 0. Prerequisites

Destinations are implemented in Python packages under `dlt.destinations.impl.<destination_name>`. Generally, a destination consists of the following modules:

- `__init__.py` - this module contains the destination capabilities
- `<destination_name>.py` - this module contains the job client and load job implementations for the destination
- `configuration.py` - this module contains the destination and credentials configuration classes
- `sql_client.py` - this module contains the SQL client implementation for the destination; it is a wrapper over `dbapi` that provides a consistent interface to `dlt` for executing queries
- `factory.py` - this module contains a `Destination` subclass that is the entry point for the destination
## 1. Copy existing destination to your `dlt` project

Initialize a new project with `dlt init`:

```sh
dlt init github postgres
```

This adds the `github` verified source (it produces quite complicated datasets, which is good for testing, and does not require credentials to use) and `postgres` credentials (connection-string-like) that we'll repurpose later.

Clone the `dlt` repository to a separate folder. In the repository, look for the `dlt/destinations/impl` folder and copy one of the destinations to your project. Pick your starting point:

- `postgres` - a simple destination without staging storage support and COPY jobs
- `redshift` - based on `postgres`, adds staging storage support and remote COPY jobs
- `snowflake` - a destination supporting additional authentication schemes, local and remote COPY jobs, and no support for direct INSERTs

Below we'll use `postgres` as the starting point.
## 2. Adjust the destination configuration and credentials

`dbapi`-based destinations use `ConnectionStringCredentials` as the credentials base, which accepts SQLAlchemy-style connection strings. Typically you should derive from it to change the `drivername` and make desired properties (like `host` or `password`) mandatory.

We keep config and credentials in `configuration.py`. You should:

- rename the classes to match your destination name
- if you need more properties (i.e. look at `iam_role` in `redshift` credentials), add them and remember about typing. Under the hood, credentials and configs are dataclasses.
- adjust the `__init__` arguments in your `Destination` class in `factory.py` to match the new credentials and config classes
- expose the configuration type in the `spec` attribute in `factory.py`
💡 Each destination implements the `Destination` abstract class defined in `reference.py`.

💡 See how the `snowflake` destination adds additional authorization methods and configuration options.
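As an illustration, a trimmed-down `configuration.py` for a hypothetical `presto` destination could look roughly like the sketch below. The class names, base classes, and defaults here are assumptions modeled on the `postgres` module you copied; verify the exact bases and field conventions in that module.

```py
from typing import Final

from dlt.common.configuration import configspec
from dlt.common.configuration.specs import ConnectionStringCredentials
from dlt.common.destination.reference import DestinationClientDwhWithStagingConfiguration


@configspec
class PrestoCredentials(ConnectionStringCredentials):
    drivername: Final[str] = "presto"  # fix the driver name for this destination
    database: str = None  # typed fields without usable defaults become mandatory
    username: str = None
    host: str = None
    port: int = 8080  # destination-specific default port


@configspec
class PrestoClientConfiguration(DestinationClientDwhWithStagingConfiguration):
    destination_type: Final[str] = "presto"
    credentials: PrestoCredentials = None
```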
## 3. Set the destination capabilities

`dlt` needs to know a few things about the destination to work with it correctly. Those are stored in the `capabilities()` function in `__init__.py`.
- supported loader file formats, both for direct and staging loading (see below)
- `escape_identifier` - a function that escapes database identifiers, i.e. table or column names. Look in the `dlt.common.data_writers.escape` module to see how this is implemented for existing destinations.
- `escape_literal` - a function that escapes string literals. It is only used if the destination supports the insert-values loader format (also see existing implementations in `dlt.common.data_writers.escape`).
- `decimal_precision` - precision and scale of decimal/numeric types. Also used to create the right decimal types in loader files, i.e. parquet.
- `wei_precision` - precision and scale of decimal/numeric types used to store very large (up to 2**256) integers. Specify the maximum precision for scale 0.
- `max_identifier_length` - max length of table and schema/dataset names
- `max_column_identifier_length` - max length of column names
- `naming_convention` - a name or naming convention module that maps the input alphabet (i.e. JSON identifiers) to the destination alphabet. Leave the default - it is very conservative.
- `max_query_length`, `is_max_query_length_in_bytes`, `max_text_data_type_length`, `is_max_text_data_type_length_in_bytes` - tell `dlt` the maximum length of a text query and of text data types
- `supports_transactions` - tells if the destination supports transactions
- `timestamp_precision` - sets the fidelity of the timestamp/datetime type: 0 - 9 (from seconds to nanoseconds), default is 6
- `supports_ddl_transactions` - tells if the destination supports DDL transactions
- `alter_add_multi_column` - tells if the destination can add multiple columns in an ALTER statement
- `supports_truncate_command` - tells `dlt` if the TRUNCATE command is used; otherwise it will use DELETE to clear tables
- `schema_supports_numeric_precision` - whether numeric data types support precision/scale configuration
- `max_rows_per_insert` - max number of rows supported per INSERT statement, used with the `insert-values` loader file format (set to `None` for no limit). E.g. MS SQL has a limit of 1000 rows per statement, but most databases have no limit and the statement is divided according to `max_query_length`.
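For orientation, a `capabilities()` function for a simple INSERT-only destination might look roughly like the sketch below. The concrete values and the reuse of the Postgres escape helpers are illustrative assumptions; copy the real values from the destination you started from.

```py
from dlt.common.data_writers.escape import escape_postgres_identifier, escape_postgres_literal
from dlt.common.destination import DestinationCapabilitiesContext


def capabilities() -> DestinationCapabilitiesContext:
    caps = DestinationCapabilitiesContext()
    # only direct loading via INSERT statements, no staging support
    caps.preferred_loader_file_format = "insert_values"
    caps.supported_loader_file_formats = ["insert_values"]
    caps.preferred_staging_file_format = None
    caps.supported_staging_file_formats = []
    # reuse existing escape helpers if your dialect is close to Postgres
    caps.escape_identifier = escape_postgres_identifier
    caps.escape_literal = escape_postgres_literal
    caps.decimal_precision = (38, 9)
    caps.wei_precision = (78, 0)  # 78 digits fit 2**256 at scale 0
    caps.max_identifier_length = 127
    caps.max_column_identifier_length = 127
    caps.max_query_length = 4 * 1024 * 1024
    caps.is_max_query_length_in_bytes = True
    caps.max_text_data_type_length = 1024 * 1024
    caps.is_max_text_data_type_length_in_bytes = True
    caps.supports_transactions = True
    caps.supports_ddl_transactions = True
    return caps
```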
### Supported loader file formats

Specify which loader file formats your destination supports directly and via staging storage. Direct support means that the destination is able to load a local file or supports the INSERT command. Loading via staging means using the `filesystem` destination to send the load package to (typically) bucket storage and then loading from there.

💡 The insert-values file format generates large INSERT statements that are executed on the destination. If you have any other option for local loading, avoid using this format. It is typically slower and requires a bullet-proof `escape_literal` function.

- `preferred_loader_file_format` - the file format that will be used by default to load data from the local file system. Set to `None` if direct loading is not supported.
- `supported_loader_file_formats` - file formats that can be loaded from the local file system to the destination. Set to `[]` if direct loading is not supported.
- `preferred_staging_file_format` - the file format that will be used by default when `staging` is enabled. Set to `None` if the destination can't load from staging.
- `supported_staging_file_formats` - file formats that can be loaded from staging storage. Set to `[]` if the destination can't load from staging.

💡 Mind that for each file format you'll need to implement a load job (which in most cases is a `COPY` command to which you pass a file path and file type).

💡 Postgres does not support staging or any file format beyond insert-values. Check the `snowflake` capabilities for a destination that supports all possible formats.
### Escape identifiers and literals

The default `escape_identifier` function escapes `"` and `\` and quotes the identifier with `"`. This is standard SQL behavior. Mind that if you use the default naming convention, `dlt` normalizes identifiers to an alphabet that does not accept any special characters. Users are able to change the naming convention in the configuration, so a correct escape function is still important.

💡 The postgres destination that you modify uses the standard implementation, which you may keep.

You should avoid providing a custom `escape_literal` function by not enabling `insert-values` for your destination.
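If your engine needs different quoting, the escape function is easy to replace; a minimal sketch for an engine that only requires embedded double quotes to be doubled might be:

```py
def escape_my_identifier(v: str) -> str:
    # quote with double quotes and escape embedded quotes by doubling them
    return '"' + v.replace('"', '""') + '"'
```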
### Enable / disable case sensitive identifiers

Specify whether the destination supports case sensitive identifiers by setting `has_case_sensitive_identifiers` to `True` (or `False` otherwise). Some case sensitive destinations (i.e. Snowflake or Postgres) support case insensitive identifiers via case folding, i.e. Snowflake considers all upper case identifiers as case insensitive (set `casefold_identifier` to `str.upper`), while Postgres does the same with lower case identifiers (`str.lower`).

Some case insensitive destinations (i.e. Athena or Redshift) case-fold (i.e. lower case) all identifiers and store them as such. In that case, set `casefold_identifier` to `str.lower` as well.
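Inside the `capabilities()` function sketched earlier this boils down to two assignments, for example for a Snowflake-like engine (illustrative values):

```py
    # case sensitive identifiers are supported, but unquoted/upper case
    # identifiers are treated as case insensitive
    caps.has_case_sensitive_identifiers = True
    caps.casefold_identifier = str.upper
```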
## 4. Adjust the SQL client

The SQL client is a wrapper over `dbapi` and its main role is to provide a consistent interface for executing SQL statements, managing transactions, and (probably most importantly) helping to handle errors by classifying exceptions. Here are a few things you should pay attention to:

- When opening the connection: add the current dataset name to the search path and set the session timezone to UTC.
- Transactions: typically, to begin a transaction you need to disable auto-commit (like the `postgres` implementation does).
- `execute_query`: `dlt` uses `%s` to represent dbapi query parameters. See the `duckdb` `sql_client` for a crude way to align your `dbapi` client if it uses other parameter placeholders.
- `execute_fragments`: if your `dbapi` client does not provide a method to join SQL fragments without a full string copy, just delete the `postgres` override. The base class just joins strings.
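For example, if your driver expects `qmark` (`?`) placeholders instead of `%s`, a crude alignment applied inside `execute_query` could look like this simplified sketch (the real duckdb client does something more careful):

```py
def to_qmark_placeholders(query: str) -> str:
    # crude translation of dlt's %s placeholders to qmark (?) style;
    # assumes the query text contains no literal "%s" sequences
    return query.replace("%s", "?")
```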
### Fully qualified names

When created, the `sql_client` is bound to a particular dataset name (which typically corresponds to a database schema). Most database engines follow the usual rules of qualifying and quoting ("schema"."table"."column"), but there are exceptions like `BigQuery` or `Motherduck`. You have full control over generating identifiers via:

- `fully_qualified_dataset_name` - returns a fully qualified dataset name
- `make_qualified_table_name` - the same, but for a given table name
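For a typical engine both methods reduce to escaping and joining identifiers, roughly like this standalone sketch (the `escape` helper stands in for your capability's `escape_identifier`):

```py
def escape(identifier: str) -> str:
    # stand-in for capabilities.escape_identifier
    return '"' + identifier.replace('"', '""') + '"'


def fully_qualified_dataset_name(dataset_name: str) -> str:
    return escape(dataset_name)


def make_qualified_table_name(dataset_name: str, table_name: str) -> str:
    # "dataset"."table" - BigQuery-style engines need project/catalog prefixes instead
    return f"{escape(dataset_name)}.{escape(table_name)}"
```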
### `dbapi` exceptions

`dlt` must be able to distinguish a few error cases for the loading to work properly. Unfortunately, error reporting is not very well defined by `dbapi` and even the existing exception tree is not used consistently across implementations.

The `_make_database_exception` method wraps an incoming `Exception` in one of the exception types required by `dlt`:

- `DatabaseUndefinedRelation`: raised when a schema or table that `dlt` tries to reference is undefined. It is important to detect this case exactly: via specific `dbapi` exceptions (like in the case of `postgres` and `duckdb`) or by detecting the proper category of exceptions and inspecting the error codes or messages (see `redshift` and `snowflake`).
- `DatabaseTerminalException`: errors during loading that will permanently fail a job and should not be retried. `IntegrityError`, `ProgrammingError`, and most of the `DataError` exceptions belong to this class (example: decimal value out of range, inserting NULL into a non-NULL column).
- `DatabaseTransientException`: all other exceptions. We also include `SyntaxError` (if it exists in the particular `dbapi` implementation) here.
💡 How this works in practice: we have a set of tests for all relevant error cases in test_sql_client.py; this way we make sure that a new sql_client behaves correctly.
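For a `psycopg2`-based driver, the classification could look roughly like the sketch below (the copied `postgres` client is the authoritative reference; in the real sql_client this lives as the `_make_database_exception` static method):

```py
import psycopg2
from psycopg2 import errors as pg_errors

from dlt.destinations.exceptions import (
    DatabaseTerminalException,
    DatabaseTransientException,
    DatabaseUndefinedRelation,
)


def make_database_exception(ex: Exception) -> Exception:
    if isinstance(ex, (pg_errors.UndefinedTable, pg_errors.InvalidSchemaName)):
        # a missing table or schema must be detected exactly
        return DatabaseUndefinedRelation(ex)
    if isinstance(ex, (psycopg2.IntegrityError, psycopg2.DataError, psycopg2.ProgrammingError)):
        # permanent job failure - do not retry
        return DatabaseTerminalException(ex)
    if isinstance(ex, psycopg2.Error):
        # anything else may be transient and is worth retrying
        return DatabaseTransientException(ex)
    return ex
```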
### What base class assumes

- that `INFORMATION_SCHEMA` exists, from which we can take basic information on `SCHEMATA` and `COLUMNS`
- `CREATE SCHEMA` and `DROP SCHEMA` (see how `BigQuery` overrides that)
- `DELETE` or `TRUNCATE` is available to clear tables without dropping them
- `DROP TABLE` is used only for the CLI command (`pipeline drop`)
## 5. Adjust the job client

The job client is responsible for creating/starting load jobs and managing schema updates. Here we'll adjust the `SqlJobClientBase` base class, which uses the `sql_client` to manage the destination. Typically, only a few methods need to be overridden by a particular implementation. The job client code customarily resides in a file named `<destination_name>.py`, i.e. `postgres.py`, and is exposed in `factory.py` via the `client_class` property on the destination class.
### Database type mappings

You must map `dlt` data types to destination data types. For this, you can implement a subclass of `TypeMapper`. There you can specify dicts that map `dlt` data types to destination data types, with or without precision. A few tricks to remember:

- the database types must be exactly those used in `INFORMATION_SCHEMA.COLUMNS`
- decimal precision and scale are filled from the capabilities (in all our implementations)
- so far, all destinations have been able to handle binary types
- we always try to map the `complex` type into a `JSON` type in the destination. If that does not work, you can try mapping it into a string. See how we do that for various destinations.
- the reverse mapping of types is sometimes tricky, i.e. you may not be able to detect complex types (if your destination lacks JSON support). This is not really needed during schema updates and loading (just for testing), so in general you should be fine.
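The shape of such a mapper is sketched below, using made-up Presto-ish type names. The base class import path and the dict attribute names are assumptions taken from the destination module you copied and may differ between `dlt` versions; verify them there.

```py
from dlt.destinations.type_mapping import TypeMapper  # path as found in the copied module


class PrestoTypeMapper(TypeMapper):
    # dlt type -> database type, used when precision does not matter
    sct_to_unbound_dbt = {
        "complex": "json",
        "text": "varchar",
        "double": "double",
        "bool": "boolean",
        "date": "date",
        "timestamp": "timestamp",
        "bigint": "bigint",
        "binary": "varbinary",
    }
    # dlt type -> database type templates that take precision/scale
    sct_to_dbt = {
        "decimal": "decimal(%i,%i)",
        "wei": "decimal(%i,%i)",
        "timestamp": "timestamp(%i)",
    }
    # reverse mapping, used when reading INFORMATION_SCHEMA.COLUMNS
    dbt_to_sct = {
        "varchar": "text",
        "json": "complex",
        "double": "double",
        "boolean": "bool",
        "date": "date",
        "timestamp": "timestamp",
        "bigint": "bigint",
        "varbinary": "binary",
        "decimal": "decimal",
    }
```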
### Table and column hints

You can map hints present on tables and columns (i.e. `cluster`, `sort`, `partition`) to generate specific DDL for columns and tables. See `_get_column_def_sql` in various destinations.

You can also add hints (i.e. indexes, partition clauses) to tables via `_get_table_update_sql` - see the `BigQuery` implementation for a good example.
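As an illustration, a standalone helper that turns column hints into extra DDL keywords could look like this; the hint-to-keyword mapping is made up, and real destinations put this logic directly into `_get_column_def_sql`:

```py
from typing import Any, Dict

# hypothetical mapping of dlt column hints to DDL keywords of the destination
HINT_TO_DDL = {
    "sort": "SORTKEY",
    "cluster": "DISTKEY",
    "unique": "UNIQUE",
}


def column_hints_sql(column: Dict[str, Any]) -> str:
    # collect keywords for all hints that are set to True on the column
    return " ".join(ddl for hint, ddl in HINT_TO_DDL.items() if column.get(hint))
```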
### Participate in staging dataset merge and replace

`dlt` supports merging and transactional replace via a staging dataset that lives alongside the destination dataset. `SqlJobClientBase` participates in this mechanism by default. In essence: each time a job is completed, `dlt` checks which table got updated and whether there are no remaining jobs for that table and its child and parent tables (all together called a table chain). If the table chain is fully loaded, `dlt` executes SQL transformations that move/merge data from the staging dataset to the destination dataset (that, as you may expect, also happens via jobs, of type `sql`, which are created dynamically).

The generated SQL is quite simple, and we were able to run it on all existing destinations (we may introduce `sqlglot` to handle future cases). The SQL used requires:

- SELECT, INSERT, DELETE/TRUNCATE statements
- WINDOW functions for merge

In case of destinations that do not allow data modifications, you can opt out from both replace and merge (see the sketch after this list):

- override the `get_truncate_destination_table_dispositions` method and return an empty list so your tables are never truncated
- override `get_stage_dispositions` and return an empty list to opt out from any operations on the staging dataset
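A sketch of such overrides on your job client (the method names come from the text above; match the exact return type annotation to the base class signature you find in the copied module):

```py
from typing import List

from dlt.destinations.job_client_impl import SqlJobClientBase


class PrestoClient(SqlJobClientBase):
    # ... the rest of your job client implementation ...

    def get_truncate_destination_table_dispositions(self) -> List[str]:
        # never truncate destination tables
        return []

    def get_stage_dispositions(self) -> List[str]:
        # opt out of the staging dataset entirely (no merge, no transactional replace)
        return []
```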
### What base class assumes

- DDL to create tables and add columns to them is available
- it is possible to SELECT data
- it is possible to INSERT data (in order to complete the load package and store the updated schema)

💡 Talk to us on Slack if your destination is fully read only.
## 6. Implement load jobs

Load jobs make sure that all files in the load package are loaded to the destination. `dlt` creates a single job per file and makes sure that it transitions to the `completed` state (look for `LoadJob`).

The file name of the job is used as the job id, and both sync and async execution are supported. The executor is multi-threaded: each job starts in a separate thread and its completion status is checked from the main thread.

Jobs are typically very simple and just execute INSERT or COPY commands. They do not replace or merge data themselves.
### Enable insert jobs

If you use the insert-values loader file format, derive your job client from `InsertValuesJobClient`. `postgres.py` does exactly that.

Look at `snowflake.py` for a destination that does not use insert-values.
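The derivation itself is a one-liner; a sketch (constructor wiring omitted, copy it from `postgres.py`):

```py
from dlt.destinations.insert_job_client import InsertValuesJobClient


class PrestoClient(InsertValuesJobClient):
    # inherits the INSERT ... VALUES job handling; you mostly provide the sql_client,
    # the type mapper and DDL generation (see postgres.py for the full wiring)
    pass
```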
### Copy jobs from local and remote files

`dlt` allows chaining two destinations to create a storage stage (typically on a bucket). The staging destination (currently `filesystem`) will copy the new files, complete the corresponding jobs, and for each of them create a reference job that is passed to the destination to execute.

The `postgres` destination does not implement any copy jobs.

- See `RedshiftCopyFileLoadJob` in `redshift.py` for how we create and start a copy job from a bucket. It uses the `CopyRemoteFileLoadJob` base to handle the references and creates a `COPY` SQL statement in the `execute()` method.
- See `SnowflakeLoadJob` in `snowflake.py` for how to implement a job that can load local and reference files. It also forwards AWS credentials from the staging destination. In the end, the code just generates a COPY command for various loader file formats.
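The heart of such a job is usually just rendering a COPY statement for the referenced file; an illustrative standalone helper (the syntax below is made up, not any particular engine's):

```py
def build_copy_sql(qualified_table_name: str, bucket_path: str, file_format: str) -> str:
    # render an engine-specific COPY statement for a file referenced by a staging job
    source_format = "PARQUET" if file_format == "parquet" else "JSON"
    return (
        f"COPY INTO {qualified_table_name} "
        f"FROM '{bucket_path}' "
        f"FILE_FORMAT = {source_format}"
    )
```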
## 7. Expose your destination to dlt

The `Destination` subclass in the `dlt.destinations.impl.<destination_name>.factory` module is the entry point for the destination.

Add an import to your factory in `dlt.destinations.__init__`. `dlt` looks in this module when you reference a destination by name, i.e. `dlt.pipeline(..., destination="postgres")`.
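Assuming the destination is called `presto`, the registration is just an import plus the `__all__` entry; a sketch of the relevant lines:

```py
# dlt/destinations/__init__.py (only the lines relevant to the new destination)
from dlt.destinations.impl.presto.factory import presto

__all__ = [
    # ... existing destination factories ...
    "presto",
]
```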
## Testing

We can quickly repurpose the existing github source and the `secrets.toml` already present in the project to test the new destination. Let's assume that the module name is `presto`, and the same for the destination name and config section name. Here's our testing script `github_pipeline.py`:
```py
import dlt

from github import github_repo_events
from presto import presto  # importing destination factory


def load_airflow_events() -> None:
    """Loads airflow events. Shows incremental loading. Forces anonymous access token"""
    pipeline = dlt.pipeline(
        "github_events", destination=presto(), dataset_name="airflow_events"
    )
    data = github_repo_events("apache", "airflow", access_token="")
    print(pipeline.run(data))


if __name__ == "__main__":
    load_airflow_events()
```
Here's `secrets.toml`:
```toml
[destination.presto]
# presto config

[destination.presto.credentials]
database = "dlt_data"
password = "loader"
username = "loader"
host = "localhost"
port = 5432
```
Mind that in the script above we import the `presto` module and then pass it as the `destination` argument to `dlt.pipeline`. The github pipeline will load the events in `append` mode. You may force `replace` and `merge` modes in `pipeline.run` to check more advanced behavior of the destination.
After executing the pipeline script:

```sh
python github_pipeline.py
got page https://api.github.com/repos/apache/airflow/events?per_page=100, requests left: 59
got page https://api.github.com/repositories/33884891/events?per_page=100&page=2, requests left: 58
got page https://api.github.com/repositories/33884891/events?per_page=100&page=3, requests left: 57
Pipeline github_events completed in 4.56 seconds
1 load package(s) were loaded to destination presto and into dataset airflow_events
The presto destination used postgres://loader:***@localhost:5432/dlt_data location to store data
Load package 1690628947.953597 is LOADED and contains no failed jobs
```
You can use `dlt pipeline github_events show` to view the data in the destination.