
30+ SQL Databases

Need help deploying these sources, or figuring out how to run them in your data stack?
Join our Slack community or book a call with our support engineer Violetta.

SQL database management systems (DBMS) store data in a structured format and are commonly used for efficient and reliable data retrieval.

Our SQL Database verified source loads data to your specified destination using SQLAlchemy, pyarrow, pandas, or ConnectorX.


View the pipeline example here.

Sources and resources that can be loaded using this verified source are:

  • sql_database: reflects the tables and views in an SQL database and retrieves the data
  • sql_table: retrieves data from a particular SQL database table

Supported databases

We support all SQLAlchemy dialects, which include, but are not limited to, the following database engines:

  • PostgreSQL
  • MySQL
  • SQLite
  • Oracle
  • Microsoft SQL Server
  • MariaDB
  • IBM DB2 and Informix
  • Google BigQuery
  • Snowflake
  • Redshift
  • Apache Hive and Presto
  • SAP Hana
  • CockroachDB
  • Firebird
  • Teradata Vantage

Note that there are many unofficial dialects, such as DuckDB.

Setup Guide

  1. Initialize the verified source

To get started with your data pipeline, follow these steps:

  1. Enter the following command:

    dlt init sql_database duckdb

    It will initialize the pipeline example with an SQL database as the source and DuckDB as the destination.


    If you'd like to use a different destination, simply replace duckdb with the name of your preferred destination.

  2. After running this command, a new directory will be created with the necessary files and configuration settings to get started.

For more information, read the guide on how to add a verified source.

  2. Add credentials

  1. In the .dlt folder, there's a file called secrets.toml. It's where you store sensitive information, such as access tokens, securely. Keep this file safe.

    Here's what the secrets.toml looks like:

    [sources.sql_database.credentials]
    drivername = "mysql+pymysql" # driver name for the database
    database = "Rfam" # database name
    username = "rfamro" # username associated with the database
    password = "" # password, if required
    host = "" # host address
    port = "4497" # port required for connection
  2. Alternatively, you can also provide the credentials in secrets.toml as a single database URL:


    See the pipeline example for details.

  3. Finally, follow the instructions in Destinations to add credentials for your chosen destination. This will ensure that your data is properly routed.

For more information, read the General Usage: Credentials.
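For illustration, the connection-string form of the credentials might look as follows in secrets.toml (the host below is a placeholder, not the real server address):

```toml
[sources.sql_database]
credentials = "mysql+pymysql://rfamro:PWD@mysql.example.com:4497/Rfam"
```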

Credentials format

sql_database uses SQLAlchemy to create database connections and reflect table schemas. You can pass credentials using database URLs. For example:


This will connect to a MySQL database named Rfam using the pymysql dialect, on port 4497, as user rfamro with password PWD.
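A database URL encodes the dialect, driver, credentials, host, port, and database name in one string. As a sketch (the host here is a placeholder, not Rfam's real server address), you can see the parts with the standard library:

```python
from urllib.parse import urlsplit

# Hypothetical URL: the host is a placeholder, substitute your own server.
dsn = "mysql+pymysql://rfamro:PWD@mysql.example.com:4497/Rfam"
parts = urlsplit(dsn)
print(parts.scheme)                          # dialect+driver: mysql+pymysql
print(parts.username, parts.hostname, parts.port, parts.path.lstrip("/"))
```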

  3. Run the pipeline

  1. Install the necessary dependencies by running the following command:

    pip install -r requirements.txt

  2. Run the verified source by entering:

    python sql_database_pipeline.py

  3. Make sure that everything loaded as expected with:

    dlt pipeline <pipeline_name> show

    The pipeline_name for the above example is rfam; you may also use any custom name instead.

Source and resource functions

Import sql_database and sql_table functions as follows:

from sql_database import sql_database, sql_table

and read the docstrings to learn about available options.


We intend our sources to be fully hackable. Feel free to change the code of the source to customize it to your needs.

Pick the right backend to load table data

Table backends convert streams of rows from database tables into batches in various formats. The default backend, sqlalchemy, follows standard dlt behavior of extracting and normalizing Python dictionaries. We recommend it for smaller tables, initial development work, and when minimal dependencies or a pure Python environment are required. It is also the slowest. Database tables hold structured data, and the other backends speed up processing of such data significantly. The pyarrow backend converts rows into Arrow tables, has good performance, and preserves exact database types; we recommend it for large tables.

sqlalchemy backend

sqlalchemy (the default) yields table data as a list of Python dictionaries. This data goes through the regular extract and normalize steps and does not require additional dependencies to be installed. It is the most robust backend (it works with any destination and correctly represents data types) but also the slowest. You can use detect_precision_hints to pass exact database types to the dlt schema.

pyarrow backend

pyarrow yields data as Arrow tables. It uses SQLAlchemy to read rows in batches, but then immediately converts them into an ndarray, transposes it, and uses it to set the columns of an Arrow table. This backend always fully reflects the database table and preserves original types, i.e. decimal / numeric values are extracted without loss of precision. If the destination loads parquet files, this backend skips the dlt normalizer and you can gain a significant (20x - 30x) speed increase.

Note that if pandas is installed, we'll use it to convert SQLAlchemy tuples into an ndarray, as it seems to be 20-30% faster than using numpy directly.

import dlt
import sqlalchemy as sa

pipeline = dlt.pipeline(
    pipeline_name="rfam_cx", destination="postgres", dataset_name="rfam_data_arrow"
)

def _double_as_decimal_adapter(table: sa.Table) -> None:
    """Return doubles as doubles, not decimals; this is a MySQL thing"""
    for column in table.columns.values():
        if isinstance(column.type, sa.Double):
            column.type.asdecimal = False

sql_alchemy_source = sql_database(
    credentials,  # database credentials, as configured above
    backend="pyarrow",
    table_adapter_callback=_double_as_decimal_adapter,
).with_resources("family", "genome")

info = pipeline.run(sql_alchemy_source)
print(info)

pandas backend

The pandas backend yields data as pandas data frames. dlt uses pyarrow dtypes by default as they generate more stable typing.

With default settings, several database types will be coerced to dtypes in the yielded data frame:

  • decimal types are mapped to doubles, so it is possible to lose precision;
  • date and time types are mapped to strings;
  • all types are nullable.

Note: dlt will still use the reflected source database types to create destination tables. It is up to the destination to reconcile / parse type differences. Most destinations will be able to parse date/time strings and convert doubles into decimals (please note that you'll still lose precision on decimals with default settings). However, we strongly suggest not using the pandas backend if your source tables contain date, time, or decimal columns.
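Why the decimal-to-double coercion loses precision can be seen with the standard library alone: a double cannot represent more significant digits than its 53-bit mantissa allows.

```python
from decimal import Decimal

# A decimal with more significant digits than a 64-bit double can hold
exact = Decimal("123456789.123456789123456789")
# Round-tripping through float (as the pandas backend's coercion does) loses digits
approx = Decimal(float(exact))
print(exact)
print(approx)  # no longer equal to the original value
```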

Example: use backend_kwargs to pass backend-specific settings, e.g. coerce_float. Internally, dlt uses pandas to generate the data frames.

import dlt
import sqlalchemy as sa

pipeline = dlt.pipeline(
    pipeline_name="rfam_cx", destination="postgres", dataset_name="rfam_data_pandas_2"
)

def _double_as_decimal_adapter(table: sa.Table) -> None:
    """Emits decimals instead of floats."""
    for column in table.columns.values():
        if isinstance(column.type, sa.Float):
            column.type.asdecimal = True

sql_alchemy_source = sql_database(
    credentials,  # database credentials, as configured above
    backend="pandas",
    table_adapter_callback=_double_as_decimal_adapter,
    # set coerce_float to False to represent decimals as strings
    backend_kwargs={"coerce_float": False, "dtype_backend": "numpy_nullable"},
).with_resources("family", "genome")

info = pipeline.run(sql_alchemy_source)
print(info)

connectorx backend

The connectorx backend completely skips SQLAlchemy when reading table rows, in favor of doing that in Rust. This is claimed to be significantly faster than any other method (we confirmed this only on Postgres; see the next chapter). With the default settings it will emit pyarrow tables, but you can configure it via backend_kwargs.

There are certain limitations when using this backend:

  • it will ignore chunk_size; connectorx cannot yield data in batches.
  • in many cases it requires a connection string that differs from the sqlalchemy connection string. Use the conn argument in backend_kwargs to set it up.
  • it will convert decimals to doubles, so you'll lose precision.
  • nullability of the columns is ignored (always true).
  • it uses different database type mappings for each database type; check the connectorx documentation for more details.
  • JSON fields (at least those coming from Postgres) are double wrapped in strings. Here's a transform, to be added with add_map, that will unwrap them:

from sources.sql_database.helpers import unwrap_json_connector_x

Note: dlt will still use the reflected source database types to create destination tables. It is up to the destination to reconcile / parse type differences. Please note that you'll still lose precision on decimals with default settings.

"""Uses unsw_flow dataset (~2mln rows, 25+ columns) to test connectorx speed"""
import os
from dlt.destinations import filesystem

unsw_table = sql_table(
# this is ignored by connectorx
# keep source data types
# just to demonstrate how to setup a separate connection string for connectorx
backend_kwargs={"conn": "postgresql://loader:loader@localhost:5432/dlt_data"}

pipeline = dlt.pipeline(

info =

With the dataset above and a local Postgres instance, connectorx is 2x faster than the pyarrow backend.

Notes on source databases


Oracle

  1. When using the oracledb dialect in thin mode, we are getting protocol errors. Use thick mode or the cx_oracle (old) client.
  2. Mind that sqlalchemy translates Oracle identifiers into lower case! Keep the default dlt naming convention (snake_case) when loading data. We'll support more naming conventions soon.
  3. Connectorx is, for some reason, slower for Oracle than the pyarrow backend.


DB2

  1. Mind that sqlalchemy translates DB2 identifiers into lower case! Keep the default dlt naming convention (snake_case) when loading data. We'll support more naming conventions soon.
  2. The DB2 DOUBLE type is mapped to the Numeric SQLAlchemy type with default precision, but float Python types are still returned. That requires dlt to perform additional casts. The cost of the cast, however, is minuscule compared to the cost of reading rows from the database.


MySQL

  1. The SqlAlchemy dialect converts doubles to decimals; we disable that behavior via a table adapter in our demo pipeline.

Postgres / MSSQL

No issues found. Postgres is the only database where we observed a 2x speedup with connectorx. On other database systems it performs the same as the pyarrow backend, or slower.

Incremental Loading

Efficient data management often requires loading only new or updated data from your SQL databases, rather than reprocessing the entire dataset. This is where incremental loading comes into play.

Incremental loading uses a cursor column (e.g., timestamp or auto-incrementing ID) to load only data newer than a specified initial value, enhancing efficiency by reducing processing time and resource use.

Configuring Incremental Loading

  1. Choose a Cursor Column: Identify a column in your SQL table that can serve as a reliable indicator of new or updated rows. Common choices include timestamp columns or auto-incrementing IDs.
  2. Set an Initial Value: Choose a starting value for the cursor to begin loading data. This could be a specific timestamp or ID from which you wish to start loading data.
  3. Deduplication: When using incremental loading, the system automatically handles the deduplication of rows based on the primary key (if available) or row hash for tables without a primary key.
  4. Set end_value for backfill: Set end_value if you want to backfill data from a certain range.
  5. Order returned rows: Set row_order to asc or desc to order the returned rows.
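The steps above boil down to a simple idea, sketched here in plain Python (the row data is made up for illustration): on each run, only rows whose cursor column exceeds the last stored cursor value are loaded.

```python
# Conceptual sketch of cursor-based incremental loading.
rows = [
    {"id": 1, "last_modified": "2024-01-01"},
    {"id": 2, "last_modified": "2024-02-15"},
    {"id": 3, "last_modified": "2024-03-10"},
]
last_value = "2024-02-01"  # cursor state persisted from the previous run
new_rows = [r for r in rows if r["last_modified"] > last_value]
print([r["id"] for r in new_rows])  # only rows 2 and 3 are loaded this run
```

dlt persists the cursor state between runs for you; this sketch only shows the filtering semantics.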

Incremental Loading Example

  1. Consider a table with a last_modified timestamp column. By setting this column as your cursor and specifying an initial value, the loader generates a SQL query filtering rows with last_modified values greater than the specified initial value.

    import dlt
    from sql_database import sql_table
    from datetime import datetime

    # Example: incrementally loading a table based on a timestamp column
    table = sql_table(
        table="your_table",  # illustrative table name
        incremental=dlt.sources.incremental(
            'last_modified', # Cursor column name
            initial_value=datetime(2024, 1, 1), # Initial cursor value
        ),
    )

    info = pipeline.extract(table, write_disposition="merge")  # pipeline configured as above
    print(info)
  2. To incrementally load the "family" table using the sql_database source method:

    source = sql_database().with_resources("family")
    # using the "updated" field as an incremental field with an initial value of January 1, 2022, at midnight
    source.family.apply_hints(
        incremental=dlt.sources.incremental("updated", initial_value=pendulum.DateTime(2022, 1, 1, 0, 0, 0))
    )
    # running the pipeline
    info = pipeline.run(source, write_disposition="merge")
    print(info)

    In this example, we load data from the family table, using the updated column for incremental loading. In the first run, the process loads all data starting from midnight (00:00:00) on January 1, 2022. Subsequent runs perform incremental loading, guided by the values in the updated field.

  3. To incrementally load the "family" table using the 'sql_table' resource.

    family = sql_table(
        table="family",
        incremental=dlt.sources.incremental(
            "updated", initial_value=pendulum.datetime(2022, 1, 1, 0, 0, 0)
        ),
    )
    # Running the pipeline
    info = pipeline.extract(family, write_disposition="merge")
    print(info)

    This process initially loads all data from the family table starting at midnight on January 1, 2022. For later runs, it uses the updated field for incremental loading as well.

    • For merge write disposition, the source table needs a primary key, which dlt automatically sets up.
    • apply_hints is a powerful method that enables schema modifications after resource creation, like adjusting write disposition and primary keys. You can choose from various tables and use apply_hints multiple times to create pipelines with merged, appended, or replaced resources.

Run on Airflow

When running on Airflow

  1. Use dlt Airflow Helper to create tasks from sql_database source. You should be able to run table extraction in parallel with parallel-isolated source->DAG conversion.
  2. Reflect tables at runtime with defer_table_reflect argument.
  3. Set allow_external_schedulers to load data using Airflow intervals.

Parallel extraction

You can extract each table in a separate thread (no multiprocessing at this point). This will decrease loading time if your queries take long to execute or your network latency is high.

database = sql_database().parallelize()
table = sql_table().parallelize()
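Conceptually, per-table parallel extraction works like the standard-library sketch below; dlt's .parallelize() manages the threads for you, and extract_table here is a made-up stand-in for reading one table's rows.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_table(name: str) -> str:
    # stand-in for reading all rows of one table
    return f"{name}: extracted"

tables = ["family", "genome"]
with ThreadPoolExecutor(max_workers=4) as ex:
    # each table is handled on its own thread; map preserves input order
    results = list(ex.map(extract_table, tables))
print(results)
```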


Connect to mysql with SSL

Here, we use the mysql and pymysql dialects to set up an SSL connection to a server, with all information taken from the SQLAlchemy docs.

  1. To enforce SSL on the client without a client certificate you may pass the following DSN:

  2. You can also pass the server's public certificate (potentially bundled with your pipeline) and disable host name checks:

  3. For servers requiring a client certificate, provide the client's private key (a secret value). In Airflow, this is usually saved as a variable and exported to a file before use. The server certificate is omitted in the example below:
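As a sketch of the three cases above, the DSNs might look as follows; host, password, and certificate paths are placeholders, and the ssl_* query arguments are passed through to pymysql:

```python
# 1. Enforce SSL without a client certificate (an empty ssl_ca still enables SSL):
enforce_ssl = "mysql+pymysql://root:PASS@mysql.example.com:3306/mysql?ssl_ca="
# 2. Pin the server's public certificate and disable host-name checks:
pinned_cert = (
    "mysql+pymysql://root:PASS@mysql.example.com:3306/mysql"
    "?ssl_ca=server-ca.pem&ssl_check_hostname=false"
)
# 3. Client-certificate authentication (the key is a secret value):
client_cert = (
    "mysql+pymysql://root:PASS@mysql.example.com:3306/mysql"
    "?ssl_cert=client-cert.pem&ssl_key=client-key.pem"
)
print(enforce_ssl)
```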


SQL Server connection options

To connect to an mssql server using Windows authentication, include trusted_connection=yes in the connection string.


To connect to a local SQL Server instance running without SSL, pass the encrypt=no parameter:


To allow a self-signed SSL certificate when you are getting "certificate verify failed: unable to get local issuer certificate":


To use long strings (>8k) and avoid collation errors:
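Hypothetical connection strings for the scenarios above; server, database, user, and ODBC driver version are placeholders you would replace with your own:

```python
# Windows authentication (no username/password in the DSN):
windows_auth = (
    "mssql+pyodbc://server.example.com/my_db"
    "?trusted_connection=yes&driver=ODBC+Driver+17+for+SQL+Server"
)
# Local instance without SSL:
no_ssl = (
    "mssql+pyodbc://user:PASS@localhost/my_db"
    "?encrypt=no&driver=ODBC+Driver+17+for+SQL+Server"
)
# Accept a self-signed server certificate:
self_signed = (
    "mssql+pyodbc://user:PASS@server.example.com/my_db"
    "?TrustServerCertificate=yes&driver=ODBC+Driver+17+for+SQL+Server"
)
# Long strings (>8k) without collation errors:
long_strings = (
    "mssql+pyodbc://user:PASS@server.example.com/my_db"
    "?LongAsMax=yes&driver=ODBC+Driver+17+for+SQL+Server"
)
print(windows_auth)
```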



Transform the data in Python before it is loaded

You have direct access to all resources (which represent tables), and you can modify hints, add Python transforms, parallelize execution, etc., as for any other resource. Below we show an example of how to pseudonymize data before it is loaded, using deterministic hashing.

  1. Configure the pipeline by specifying the pipeline name, destination, and dataset as follows:

    pipeline = dlt.pipeline(
        pipeline_name="rfam",  # Use a custom name if desired
        destination="duckdb",  # Choose the appropriate destination (e.g., duckdb, redshift, postgres)
        dataset_name="rfam_data",  # Use a custom name if desired
    )
  2. Pass your credentials using any of the methods described above.

  3. To load the entire database, use the sql_database source as:

    source = sql_database()
    info = pipeline.run(source, write_disposition="replace")
    print(info)
  4. If you just need the "family" table, use:

    source = sql_database().with_resources("family")
    # running the pipeline
    info = pipeline.run(source, write_disposition="replace")
    print(info)
  5. To pseudonymize columns and hide personally identifiable information (PII), refer to the documentation. As an example, here's how to pseudonymize the "rfam_acc" column in the "family" table:

    import hashlib

    def pseudonymize_name(doc):
        """Pseudonymization is a deterministic type of PII-obscuring.
        Its role is to allow identifying users by their hash,
        without revealing the underlying info.
        """
        # add a constant salt to generate stable hashes
        salt = 'WI@N57%zZrmk#88c'
        salted_string = doc['rfam_acc'] + salt
        sh = hashlib.sha256()
        sh.update(salted_string.encode())
        hashed_string = sh.digest().hex()
        doc['rfam_acc'] = hashed_string
        return doc

    pipeline = dlt.pipeline(
        # Configure the pipeline as shown above
    )
    # using the sql_database source to load the family table and pseudonymize the column "rfam_acc"
    source = sql_database().with_resources("family")
    # modify this source instance's resource
    source.family.add_map(pseudonymize_name)
    # Run the pipeline. For a large db this may take a while
    info = pipeline.run(source, write_disposition="replace")
    print(info)
  6. To exclude columns, such as the "rfam_id" column from the "family" table before loading:

    def remove_columns(doc):
        del doc["rfam_id"]
        return doc

    pipeline = dlt.pipeline(
        # Configure the pipeline as shown above
    )
    # using the sql_database source to load the family table and remove the column "rfam_id"
    source = sql_database().with_resources("family")
    # modify this source instance's resource
    source.family.add_map(remove_columns)
    # Run the pipeline. For a large db this may take a while
    info = pipeline.run(source, write_disposition="replace")
    print(info)
  7. Remember to keep the pipeline name and destination dataset name consistent. The pipeline name is crucial for retrieving the state from the last run, which is essential for incremental loading. Altering these names could trigger a full refresh, interfering with the metadata tracking necessary for incremental loads.

