Version: 1.5.0 (latest)

Databricks

Big thanks to Evan Phillips and swishbi.com for contributing code, time, and a test environment.

Install dlt with Databricks

To install the dlt library with Databricks dependencies:

pip install "dlt[databricks]"

Set up your Databricks workspace

To use the Databricks destination, you need:

  • A Databricks workspace with a Unity Catalog metastore connected
  • A Gen 2 Azure storage account and container

If you already have your Databricks workspace set up, you can skip to the Loader setup guide.

1. Create a Databricks workspace in Azure

  1. Create a Databricks workspace in Azure

    In your Azure Portal, search for Databricks and create a new workspace. In the "Pricing Tier" section, select "Premium" to be able to use the Unity Catalog.

  2. Create an ADLS Gen 2 storage account

    Search for "Storage accounts" in the Azure Portal and create a new storage account. Make sure it's a Data Lake Storage Gen 2 account by enabling "hierarchical namespace" when creating the account. Refer to the Azure documentation for further information.

  3. Create a container in the storage account

    In the storage account, create a new container. This will be used as a datastore for your Databricks catalog.

  4. Create an Access Connector for Azure Databricks

    This will allow Databricks to access your storage account. In the Azure Portal, search for "Access Connector for Azure Databricks" and create a new connector.

  5. Grant access to your storage container

    Navigate to the storage container you created earlier and select "Access control (IAM)" in the left-hand menu.

    Add a new role assignment and select "Storage Blob Data Contributor" as the role. Under "Members" select "Managed Identity" and add the Databricks Access Connector you created in the previous step.

2. Set up a metastore and Unity Catalog

  1. Now go to your Databricks workspace

    To get there from the Azure Portal, search for "Databricks", select your Databricks workspace, and click "Launch Workspace".

  2. In the top right corner, click on your email address and go to "Manage Account"

  3. Go to "Data" and click on "Create Metastore"

    Name your metastore and select a region. If you'd like to set up a storage container for the whole metastore, you can add your ADLS URL and Access Connector Id here. You can also do this on a granular level when creating the catalog.

    In the next step, assign your metastore to your workspace.

  4. Go back to your workspace and click on "Catalog" in the left-hand menu

  5. Click "+ Add" and select "Add Storage Credential"

    Create a name and paste in the resource ID of the Databricks Access Connector from the Azure portal. It will look something like this: /subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Databricks/accessConnectors/<connector_name>

  6. Click "+ Add" again and select "Add external location"

    Set the URL of your storage container. This should be in the form: abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<path>

    Once created, you can test the connection to make sure the container is accessible from Databricks.

  7. Now you can create a catalog

    Go to "Catalog" and click "Create Catalog". Name your catalog and select the storage location you created in the previous step.

Authentication

dlt currently supports two options for authentication:

  1. OAuth2 (recommended) allows you to authenticate to Databricks using a service principal via OAuth2 M2M.
  2. Access token, using a developer access token. This method may be deprecated by Databricks in the future.

Using OAuth2

You can authenticate to Databricks using a service principal via OAuth2 M2M. To enable it:

  1. Follow the instructions in the Databricks documentation: Authenticate access to Databricks using OAuth M2M to create a service principal and retrieve the client_id and client_secret.

  2. Once you have the service principal credentials, update your credentials with any of the options shown below:

# secrets.toml
[destination.databricks.credentials]
server_hostname = "MY_DATABRICKS.azuredatabricks.net"
http_path = "/sql/1.0/warehouses/12345"
catalog = "my_catalog"
client_id = "XXX"
client_secret = "XXX"
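
As an alternative to secrets.toml, the same values can be passed to the destination factory in code. The following is a minimal sketch, assuming you read the secret values from environment variables you define yourself (the MY_DATABRICKS_* variable names are placeholders, not a dlt convention):

import os

import dlt

# A sketch: pass OAuth2 service principal credentials directly to the destination factory.
# The MY_DATABRICKS_* environment variable names are placeholders you would define yourself.
bricks = dlt.destinations.databricks(
    credentials={
        "server_hostname": "MY_DATABRICKS.azuredatabricks.net",
        "http_path": "/sql/1.0/warehouses/12345",
        "catalog": "my_catalog",
        "client_id": os.environ["MY_DATABRICKS_CLIENT_ID"],
        "client_secret": os.environ["MY_DATABRICKS_CLIENT_SECRET"],
    }
)

pipeline = dlt.pipeline(
    pipeline_name="chess_pipeline",
    destination=bricks,
    dataset_name="chess_data",
)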

Using access token

To create your access token:

  1. Click your email in the top right corner and go to "User Settings". Go to "Developer" -> "Access Tokens". Generate a new token and save it.
  2. Set up the credentials in one of the ways shown below:
# secrets.toml
[destination.databricks.credentials]
server_hostname = "MY_DATABRICKS.azuredatabricks.net"
http_path = "/sql/1.0/warehouses/12345"
catalog = "my_catalog"
access_token = "XXX"
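
dlt can also read these values from environment variables instead of secrets.toml. The sketch below assumes dlt's usual naming convention of upper-cased, double-underscore-separated config paths; the values are placeholders:

import os

import dlt

# A sketch: environment-variable equivalent of the secrets.toml entries above.
# Set these before the pipeline runs; the values are placeholders.
os.environ["DESTINATION__DATABRICKS__CREDENTIALS__SERVER_HOSTNAME"] = "MY_DATABRICKS.azuredatabricks.net"
os.environ["DESTINATION__DATABRICKS__CREDENTIALS__HTTP_PATH"] = "/sql/1.0/warehouses/12345"
os.environ["DESTINATION__DATABRICKS__CREDENTIALS__CATALOG"] = "my_catalog"
os.environ["DESTINATION__DATABRICKS__CREDENTIALS__ACCESS_TOKEN"] = "XXX"

pipeline = dlt.pipeline(pipeline_name="chess_pipeline", destination="databricks", dataset_name="chess_data")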

Loader setup guide

1. Initialize a project with a pipeline that loads to Databricks by running

dlt init chess databricks

2. Install the necessary dependencies for Databricks by running

pip install -r requirements.txt

This will install dlt with the databricks extra, which contains the Databricks Python dbapi client.

3. Enter your credentials into .dlt/secrets.toml.

This should include your connection parameters and your authentication credentials.

You can find your server hostname and HTTP path in the Databricks workspace dashboard. Go to "SQL Warehouses", select your warehouse (default is called "Starter Warehouse"), and go to "Connection details".

Example:

[destination.databricks.credentials]
server_hostname = "MY_DATABRICKS.azuredatabricks.net"
http_path = "/sql/1.0/warehouses/12345"
client_id = "XXX"
client_secret = "XXX"
catalog = "my_catalog"

You can find other options for specifying credentials in the Authentication section.

See Staging support for authentication options when dlt copies files from buckets.
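
With the credentials in place, the generated pipeline can be run against Databricks. Below is a minimal sketch of such a run; the project created by dlt init already contains a complete, ready-to-run script:

import dlt
from chess import source  # the chess source scaffolded by `dlt init chess databricks` (assumed module layout)

# A sketch: load chess.com player data into Databricks.
pipeline = dlt.pipeline(
    pipeline_name="chess_pipeline",
    destination="databricks",
    dataset_name="chess_players_games_data",
)
load_info = pipeline.run(source(players=["magnuscarlsen", "rpragchess"]))
print(load_info)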

Write disposition

All write dispositions are supported.
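
For example, a resource can declare a merge disposition that dlt will apply when loading to Databricks. A minimal sketch with a hypothetical users resource:

import dlt

# A sketch: a hypothetical resource merged into Databricks on its primary key.
@dlt.resource(write_disposition="merge", primary_key="id")
def users():
    yield [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]

# Staging is required for Databricks; see the Data loading section below.
pipeline = dlt.pipeline(
    pipeline_name="users_pipeline",
    destination="databricks",
    staging="filesystem",
    dataset_name="app_data",
)
pipeline.run(users())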

Data loading

To load data into Databricks, you must set up a staging filesystem by configuring an Amazon S3, Azure Blob Storage, or Google Cloud Storage bucket. Parquet is the default file format used for data uploads. As an alternative to Parquet, you can switch to using JSONL.

dlt will upload the data in Parquet files (or JSONL, if configured) to the bucket and then use COPY INTO statements to ingest the data into Databricks.

For more information on staging, see the Staging support section below.
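
A pipeline set up this way can look like the following sketch; the bucket URL and credentials for the filesystem staging are assumed to be configured as shown in the Staging support section:

import dlt

# A sketch: Databricks destination with a filesystem (S3, Azure, or GCS) staging bucket.
# bucket_url and credentials for the filesystem are read from config, as shown below.
pipeline = dlt.pipeline(
    pipeline_name="chess_pipeline",
    destination="databricks",
    staging="filesystem",
    dataset_name="chess_data",
)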

Supported file formats

  • Parquet: supported when staging is enabled.
  • JSONL: supported when staging is enabled (see limitations below).

The JSONL format has some limitations when used with Databricks:

  1. Compression must be disabled to load JSONL files in Databricks. Set data_writer.disable_compression to true in the dlt config when using this format.
  2. The following data types are not supported when using the JSONL format with Databricks: decimal, json, date, binary. Use Parquet if your data contains these types.
  3. The bigint data type with precision is not supported with the JSONL format.
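
To switch to JSONL, pass the file format when running the pipeline. A minimal sketch (remember to also disable compression, as described in point 1):

import dlt

# A sketch: stage data as JSONL instead of Parquet.
# data_writer.disable_compression must be set to true in the dlt config (see limitation 1 above).
pipeline = dlt.pipeline(
    pipeline_name="chess_pipeline",
    destination="databricks",
    staging="filesystem",
    dataset_name="chess_data",
)
pipeline.run([{"id": 1, "name": "alice"}], table_name="example", loader_file_format="jsonl")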

Staging support

Databricks supports Amazon S3, Azure Blob Storage, and Google Cloud Storage as staging locations. dlt will upload files in Parquet format to the staging location and will instruct Databricks to load data from there.

Databricks and Amazon S3

Please refer to the S3 documentation for details on connecting your S3 bucket with the bucket_url and credentials.

Example to set up Databricks with S3 as a staging destination:

# secrets.toml
[destination.filesystem]
bucket_url = "s3://your-bucket-name"

[destination.filesystem.credentials]
aws_access_key_id="XXX"
aws_secret_access_key="XXX"

Databricks and Azure Blob Storage

Refer to the Azure Blob Storage filesystem documentation for details on connecting your Azure Blob Storage container with the bucket_url and credentials.

To enable support for Azure Blob Storage with dlt, make sure to install the necessary dependencies by running:

pip install "dlt[az]"
note

Databricks requires that you use ABFS URLs in the following format: abfss://container_name@storage_account_name.dfs.core.windows.net/path. dlt can adapt the other representation (i.e., az://container-name/path), but we recommend that you use the abfss form.

Example to set up Databricks with Azure as a staging destination:

# secrets.toml
[destination.filesystem]
bucket_url = "abfss://container_name@storage_account_name.dfs.core.windows.net/path"

[destination.filesystem.credentials]
azure_storage_account_name="XXX"
azure_storage_account_key="XXX"

Databricks and Google Cloud Storage

To load data from a Google Cloud Storage staging location, you must set up the credentials via a named credential (see the section below). Databricks does not allow you to pass Google credentials explicitly in SQL statements.
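
A sketch of what this looks like in code, assuming a named credential called credential_x exists in Databricks and a GCS bucket is used for staging:

import dlt

# A sketch: GCS bucket as the staging location; Databricks reads from it using a
# named credential ("credential_x" and the bucket name are assumed values).
bricks = dlt.destinations.databricks(staging_credentials_name="credential_x")
gcs_stage = dlt.destinations.filesystem("gs://your-bucket-name")

pipeline = dlt.pipeline(
    pipeline_name="chess_pipeline",
    destination=bricks,
    staging=gcs_stage,
    dataset_name="chess_data",
)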

Use external locations and stored credentials

dlt forwards bucket credentials to the COPY INTO SQL command by default. You may prefer to instead use external locations or credentials stored on the Databricks side.

If you set up an external location for your staging path, you can tell dlt to use it:

[destination.databricks]
is_staging_external_location=true

If you set up Databricks credentials named, for example, credential_x, you can tell dlt to use them:

[destination.databricks]
staging_credentials_name="credential_x"

Both options are available from code:

import dlt

bricks = dlt.destinations.databricks(staging_credentials_name="credential_x")
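
The external location flag should be settable from code in the same way. A sketch, assuming the factory forwards this option to the destination configuration:

import dlt

# A sketch: mark the staging path as an external location defined in Databricks
# (assumes the factory accepts this option as a keyword argument).
bricks = dlt.destinations.databricks(is_staging_external_location=True)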

Additional destination capabilities

dbt support

This destination integrates with dbt via dbt-databricks.

Syncing of dlt state

This destination fully supports dlt state sync.

Databricks user agent

dlt identifies its connections to Databricks with a dedicated user agent. Databricks uses this identifier to better understand the usage patterns associated with the dlt integration. The connection identifier is dltHub_dlt.
