Microsoft Fabric Warehouse
Install dlt with Fabric
To install the dlt library with Fabric Warehouse dependencies, use:
pip install "dlt[fabric]"
This will install dlt with the mssql extra, which contains all the dependencies required by the SQL Server client that Fabric uses.
Destination capabilities
The following table shows the capabilities of the Fabric destination:
| Feature | Value | More |
|---|---|---|
| Preferred loader file format | insert_values | File formats |
| Supported loader file formats | insert_values, model | File formats |
| Preferred staging file format | parquet | File formats |
| Supported staging file formats | parquet | File formats |
| Has case sensitive identifiers | True | Naming convention |
| Supported merge strategies | delete-insert, scd2 | Merge strategy |
| Supported replace strategies | truncate-and-insert, insert-from-staging | Replace strategy |
| Sqlglot dialect | fabric | Dataset access |
| Supports tz aware datetime | True | Timestamps and Timezones |
| Supports naive datetime | True | Timestamps and Timezones |
Setup guide
Prerequisites
The Microsoft ODBC Driver for SQL Server must be installed to use this destination.
This cannot be included with dlt's Python dependencies, so you must install it separately on your system. You can find the official installation instructions here.
Supported driver versions:
- ODBC Driver 18 for SQL Server (recommended)
- ODBC Driver 17 for SQL Server
You can also configure the driver name explicitly.
Service Principal Authentication
Fabric Warehouse requires Azure Active Directory Service Principal authentication. You'll need:
- Tenant ID: Your Azure AD tenant ID (GUID)
- Client ID: Application (service principal) client ID (GUID)
- Client Secret: Application client secret
- Host: Your Fabric warehouse SQL endpoint
- Database: The database name within your warehouse
Finding your SQL endpoint:
- In the Fabric portal, go to your warehouse Settings
- Select SQL endpoint
- Copy the SQL connection string - it should be in the format:
  ```text
  <guid>.datawarehouse.fabric.microsoft.com
  ```
Create a pipeline
1. Initialize a project with a pipeline that loads to Fabric by running:

   ```sh
   dlt init chess fabric
   ```

2. Install the necessary dependencies for Fabric by running:

   ```sh
   pip install -r requirements.txt
   ```

   or run:

   ```sh
   pip install "dlt[fabric]"
   ```

3. Enter your credentials into `.dlt/secrets.toml`:

   ```toml
   [destination.fabric.credentials]
   host = "<your-warehouse-guid>.datawarehouse.fabric.microsoft.com"
   database = "mydb"
   azure_tenant_id = "your-azure-tenant-id"
   azure_client_id = "your-client-id"
   azure_client_secret = "your-client-secret"
   port = 1433
   connect_timeout = 30
   ```
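With the credentials in place, a minimal pipeline run might look like the following sketch; the pipeline name, table name, and sample rows are illustrative placeholders:

```py
import dlt

# A minimal sketch: load a few rows into Fabric Warehouse using the
# credentials from .dlt/secrets.toml. All names below are placeholders.
pipeline = dlt.pipeline(
    pipeline_name="fabric_demo",
    destination="fabric",
    dataset_name="my_dataset",
)

load_info = pipeline.run(
    [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}],
    table_name="users",
)
print(load_info)
```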
Write disposition
All write dispositions are supported.
If you set the replace strategy to `staging-optimized`, the destination tables will be dropped and recreated using an `ALTER SCHEMA ... TRANSFER` statement. The operation is atomic, as Fabric supports DDL transactions.
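As a sketch, a merge (upsert-style) load can be declared directly on a resource; the resource name, primary key, and rows below are illustrative:

```py
import dlt

# A sketch of a merge load: rows are upserted into the destination table
# based on the declared primary key.
@dlt.resource(primary_key="id", write_disposition="merge")
def users():
    yield [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]

pipeline = dlt.pipeline(destination="fabric", dataset_name="my_dataset")
print(pipeline.run(users()))
```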
Staging support
Fabric Warehouse supports staging data via OneLake Lakehouse or Azure Blob / Data Lake Storage using the COPY INTO command for efficient bulk loading. This is the recommended approach for large datasets.
Examples
```py
import dlt

pipeline = dlt.pipeline(
    destination="fabric",
    staging="filesystem",
    dataset_name="my_dataset",
)
```
`.dlt/secrets.toml` when using OneLake:

```toml
[destination.fabric.credentials]
# your fabric credentials

[destination.filesystem]
bucket_url = "abfss://<your-workspace-guid>@onelake.dfs.fabric.microsoft.com/<your-lakehouse-guid>/Files"

[destination.filesystem.credentials]
azure_storage_account_name = "onelake"
azure_account_host = "onelake.blob.fabric.microsoft.com"
# use the same Service Principal credentials as in [destination.fabric.credentials]
azure_tenant_id = "your-tenant-id"
azure_client_id = "your-client-id"
azure_client_secret = "your-client-secret"
```
Finding your GUIDs:
- Navigate to your Fabric workspace in the browser
- The workspace GUID is in the URL: `https://fabric.microsoft.com/groups/<workspace_guid>/...`
- Open your Lakehouse
- The lakehouse GUID is in the URL: `https://fabric.microsoft.com/.../lakehouses/<lakehouse_guid>`
`.dlt/secrets.toml` when using Azure Blob / Data Lake Storage:

```toml
[destination.fabric.credentials]
# your fabric credentials

[destination.filesystem]
bucket_url = "az://your-container-name"

[destination.filesystem.credentials]
azure_storage_account_name = "your-storage-account-name"
azure_storage_account_key = "your-storage-account-key"
```
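Once a staging filesystem is configured, a run can explicitly request parquet as the staging file format. A sketch with illustrative data and table names:

```py
import dlt

# A sketch: route the load through the configured filesystem staging and
# write parquet staging files, which Fabric then loads via COPY INTO.
pipeline = dlt.pipeline(
    destination="fabric",
    staging="filesystem",
    dataset_name="my_dataset",
)

load_info = pipeline.run(
    [{"id": 1, "payload": "x"}],
    table_name="staged_table",
    loader_file_format="parquet",
)
print(load_info)
```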
Data loading
Data is loaded via INSERT statements by default. Fabric Warehouse limits each INSERT statement to 1000 rows, so dlt inserts data in batches of at most 1000 rows.
Supported file formats
- `insert_values` is the default and currently the only supported format
Supported column hints
fabric can create unique indexes for all columns with the `unique` hint, but this behavior is disabled by default (see Additional destination options below for how to enable it).
Table and column identifiers
Fabric Warehouse (like SQL Server) uses case-insensitive identifiers but preserves the casing of identifiers stored in the INFORMATION_SCHEMA views. You can use a case-sensitive naming convention to keep the identifier casing, as sketched below. Note that you risk generating identifier collisions, which are detected by dlt and will fail the load process.
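A sketch of switching to a case-sensitive naming convention via configuration, assuming the `sql_cs_v1` convention shipped with recent dlt versions; the same setting can also live in `config.toml`:

```py
import os

import dlt

# A sketch: select a case-sensitive naming convention before the pipeline
# (and its schema) is created. "sql_cs_v1" is assumed to be available in
# your dlt version; the equivalent config.toml setting is [schema] naming.
os.environ["SCHEMA__NAMING"] = "sql_cs_v1"

pipeline = dlt.pipeline(destination="fabric", dataset_name="my_dataset")
```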
Syncing of dlt state
This destination fully supports dlt state sync.
Data types
Fabric Warehouse differs from standard SQL Server in several important ways:
VARCHAR vs NVARCHAR
Fabric Warehouse uses `varchar` for text columns instead of `nvarchar`. This destination automatically maps:
- `text` → `varchar(max)`
- `text` (with `unique` hint) → `varchar(900)` (limited for index support)
DATETIME2 vs DATETIMEOFFSET
Fabric uses `datetime2` for timestamps instead of `datetimeoffset`:
- `timestamp` → `datetime2(6)` (precision limited to 0-6, not 0-7)
- `time` → `time(6)` (explicit precision required)
JSON Storage
Fabric does not support native JSON columns. JSON objects are stored as varchar(max) columns.
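To see how these mappings apply, you can hint column types explicitly on a resource. A sketch with illustrative column names, assuming the `json` data type name used by recent dlt versions:

```py
from datetime import datetime, timezone

import dlt

# A sketch of explicit column hints; with the mappings above, `text` becomes
# varchar(max), `timestamp` becomes datetime2(6), and `json` is stored as varchar(max).
@dlt.resource(
    columns={
        "note": {"data_type": "text"},
        "created_at": {"data_type": "timestamp", "precision": 6},
        "attributes": {"data_type": "json"},
    }
)
def records():
    yield {
        "note": "hello",
        "created_at": datetime(2024, 1, 1, tzinfo=timezone.utc),
        "attributes": {"a": 1},
    }
```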
Collation Support
Fabric Warehouse supports UTF-8 collations. The destination automatically configures `LongAsMax=yes`, which is required for UTF-8 collations to work properly.
Default collation: `Latin1_General_100_BIN2_UTF8` (case-sensitive, UTF-8)
You can specify a different collation:
```toml
[destination.fabric]
collation = "Latin1_General_100_CI_AS_KS_WS_SC_UTF8"  # case-insensitive
```
Or in code:
```py
import dlt
from dlt.destinations import fabric

pipeline = dlt.pipeline(
    destination=fabric(
        credentials=my_credentials,  # credentials defined elsewhere
        collation="Latin1_General_100_CI_AS_KS_WS_SC_UTF8",
    )
)
```
Additional destination options
The fabric destination does not create UNIQUE indexes by default on columns with the unique hint (i.e., _dlt_id). To enable this behavior:
```toml
[destination.fabric]
create_indexes = true
```
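The same option can be set in code through the destination factory; a sketch, assuming the factory accepts `create_indexes` like the config option above:

```py
import dlt
from dlt.destinations import fabric  # assuming the factory is exposed here like other built-in destinations

# A sketch: enable UNIQUE index creation on columns with the `unique` hint.
pipeline = dlt.pipeline(
    destination=fabric(create_indexes=True),
    dataset_name="my_dataset",
)
```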
You can explicitly set the ODBC driver name:
```toml
[destination.fabric.credentials]
driver = "ODBC Driver 18 for SQL Server"
```
Differences from MSSQL Destination
While Fabric Warehouse is based on SQL Server, there are key differences:
- Authentication: Fabric requires Service Principal; username/password auth is not supported
- Type System: Uses `varchar` and `datetime2` instead of `nvarchar` and `datetimeoffset`
- Collation: Optimized for UTF-8 collations with automatic `LongAsMax` configuration
- SQL Dialect: Uses the `fabric` SQLglot dialect for proper SQL generation
Troubleshooting
ODBC Driver Not Found
If you see "No supported ODBC driver found", install the Microsoft ODBC Driver 18 for SQL Server:
```sh
# Ubuntu/Debian
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18
```
Authentication Failures
Ensure your Service Principal has:
- Proper permissions on the Fabric workspace
- Access to the target database/warehouse
- Correct tenant ID (your Azure AD tenant, not the workspace/capacity ID)
UTF-8 Character Issues
If you experience character encoding issues:
- Verify your warehouse uses a UTF-8 collation
- Check that `LongAsMax=yes` is present in the connection string (it is added automatically by this destination)
- Consider using a case-insensitive UTF-8 collation if needed