D3 Python API Docs | dltHub

Build a D3-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.

The D3 API is a partner API that provides access to D3's name-token marketplace and registry functionality (search, recommendations, token metadata, wallet tokens, payments, and orders). The REST API base URL is https://api-public.d3.app/v1, and all requests require an API key passed in a header (Api-Key / api-key).

dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading D3 data in under 10 minutes.


What data can I load from D3?

Here are some of the endpoints you can load from D3:

| Resource | Endpoint | Method | Data selector | Description |
| --- | --- | --- | --- | --- |
| partner_search | /v1/partner/search | GET | pageItems | Search name tokens with availability and pricing (paginated). |
| partner_recommendations | /v1/partner/recommendations | GET | (top-level array) | Get name recommendations for given SLD(s)/TLD(s). |
| partner_payment_options | /v1/partner/payment/options | GET | options | Get supported payment options for TLDs. |
| partner_token_status | /v1/partner/token/{sld}/{tld} | GET | (object) | Get metadata and registration status for a name token. |
| partner_token_by_id | /v1/partner/token/{chainId}/{contractAddress}/{tokenId} | GET | (object) | Get token metadata by on-chain token identifier. |
| partner_wallet_tokens | /v1/partner/tokens/{addressType}/{address} | GET | pageItems | Get name tokens registered to a wallet (paginated). |
| partner_tokens_batch | /v1/partner/tokens/{chainId}/{contractAddress} | POST | (top-level array) | Batch fetch token metadata by token IDs. |
| partner_order | /v1/partner/order | POST | (object) | Create an order to purchase/mint name tokens (returns order + voucher). |

How do I authenticate with the D3 API?

Provide your API key in the request header named Api-Key (docs also show lowercase 'api-key' in examples). Keep the key secret; some endpoints additionally require EIP-712 signatures in request bodies for wallet-mapping operations.
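To sanity-check a key before wiring up dlt, you can call an endpoint directly. The sketch below (an illustration, not part of the pipeline) uses the requests library against the partner search endpoint; limit and skip are the documented pagination parameters, and any search filters the endpoint expects would be passed in params as well:

import requests

API_KEY = "your_api_key_here"

response = requests.get(
    "https://api-public.d3.app/v1/partner/search",
    headers={"Api-Key": API_KEY},  # docs also show the lowercase "api-key" form
    params={"limit": 10, "skip": 0},  # add any search filters the endpoint requires
    timeout=30,
)
response.raise_for_status()
print(response.json())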

1. Get your credentials

  1. Sign up or log in to the D3 Developer Dashboard at https://developers.d3.app.
  2. Create a new application/integration and request partner API access.
  3. Generate an API key in the dashboard and copy it.
  4. Assign the required permissions (SEARCH, PURCHASE, NON_PREMIUM_MINT, etc.) to the key, depending on the endpoints you will call.

2. Add them to .dlt/secrets.toml

[sources.d3_api_source]
api_key = "your_api_key_here"

dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
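If you prefer environment variables (for example in CI), dlt resolves the same secret from an upper-case, double-underscore-separated variable name. A minimal sketch of that mapping:

import os

# equivalent to api_key under [sources.d3_api_source] in .dlt/secrets.toml;
# in practice you would export this in the shell or your orchestrator, not set it in code
os.environ["SOURCES__D3_API_SOURCE__API_KEY"] = "your_api_key_here"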


How do I set up and run the pipeline?

Set up a virtual environment and install dlt:

uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"

1. Install the dlt AI Workbench:

dlt ai init --agent <agent>  # <agent>: claude | cursor | codex

This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →

2. Install the rest-api-pipeline toolkit:

dlt ai toolkit rest-api-pipeline install

This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →

3. Start LLM-assisted coding:

Use /find-source to load data from the D3 API into DuckDB.

The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.

4. Run the pipeline:

python d3_api_pipeline.py

If everything is configured correctly, you'll see output like this:

Pipeline d3_api_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset d3_api_data
The duckdb destination used duckdb:/d3_api.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs

Inspect your pipeline and data:

dlt pipeline d3_api_pipeline show

This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.


Python pipeline example

This example loads partner_search and partner_token_status from the D3 API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:

import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def d3_api_source(api_key=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://api-public.d3.app/v1",
            "auth": {
                # D3 expects the key in the Api-Key request header
                "type": "api_key",
                "name": "Api-Key",
                "location": "header",
                "api_key": api_key,
            },
        },
        "resources": [
            # base_url already ends with /v1, so resource paths omit the version prefix
            {
                "name": "partner_search",
                "endpoint": {"path": "partner/search", "data_selector": "pageItems"},
            },
            {
                "name": "partner_token_status",
                # replace {sld}/{tld} with a concrete name to check,
                # or resolve the values from another resource
                "endpoint": {"path": "partner/token/{sld}/{tld}"},
            },
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="d3_api_pipeline",
        destination="duckdb",
        dataset_name="d3_api_data",
    )
    load_info = pipeline.run(d3_api_source())
    print(load_info)


if __name__ == "__main__":
    get_data()

To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
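For example, the payment options and recommendations endpoints from the table could be added as shown below. The paths drop the leading /v1 because the client base_url already ends in /v1, and the recommendations endpoint returns a top-level array, so it needs no data_selector (any required query parameters, such as the SLD/TLD to base recommendations on, would go under params):

"resources": [
    # ... existing resources ...
    {
        "name": "partner_payment_options",
        "endpoint": {"path": "partner/payment/options", "data_selector": "options"},
    },
    {
        "name": "partner_recommendations",
        # returns a top-level array, so no data_selector is needed;
        # add required query parameters under "params"
        "endpoint": {"path": "partner/recommendations"},
    },
],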


How do I query the loaded data?

Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.

Python (pandas DataFrame):

import dlt

data = dlt.pipeline("d3_api_pipeline").dataset()
search_df = data.partner_search.df()
print(search_df.head())

SQL (DuckDB example):

SELECT * FROM d3_api_data.partner_search LIMIT 10;
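The same query can be run directly against the DuckDB file the pipeline created (d3_api.duckdb, per the load output above) with the duckdb Python package:

import duckdb

# file name matches the duckdb:/d3_api.duckdb location reported by the pipeline
con = duckdb.connect("d3_api.duckdb")
print(con.sql("SELECT * FROM d3_api_data.partner_search LIMIT 10").df())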

In a marimo or Jupyter notebook:

import dlt

data = dlt.pipeline("d3_api_pipeline").dataset()
data.partner_search.df().head()

See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.


What destinations can I load D3 data to?

dlt supports loading into any of these destinations — only the destination parameter changes:

| Destination | Example value |
| --- | --- |
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |

Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
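For example, switching the pipeline above from DuckDB to Snowflake only changes the destination argument; the connection details live in .dlt/secrets.toml, not in code (a sketch, assuming the same pipeline and dataset names as before):

import dlt

# Snowflake credentials are read from .dlt/secrets.toml, e.g. under
# [destination.snowflake.credentials]: database, username, password, host, warehouse, role
pipeline = dlt.pipeline(
    pipeline_name="d3_api_pipeline",
    destination="snowflake",
    dataset_name="d3_api_data",
)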


Troubleshooting

Authentication failures

If the API key is missing or invalid, the API returns 401 Unauthorized. Ensure you send the Api-Key header (HTTP header names are case-insensitive, so the api-key form shown in some examples also works) with a valid key, and that the key has the required permissions for the endpoint (e.g. SEARCH, PURCHASE).

Signing / wallet mapping errors

Endpoints that modify wallet mappings or set primary names require an EIP-712 signature in the request body. If the signature is expired or invalid, the API returns 400 or 403; ensure that signatureExpiresAt is expressed in milliseconds and falls within the allowed window, and that the signature is generated with the documented EIP-712 domain and types.
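A common cause of expired-signature errors is sending signatureExpiresAt in seconds rather than milliseconds. A minimal sketch of computing a millisecond timestamp a few minutes in the future (the exact allowed window is defined by the D3 docs):

import time

# expiry ten minutes from now, in milliseconds since the Unix epoch
signature_expires_at = int((time.time() + 10 * 60) * 1000)
print(signature_expires_at)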

Pagination

Paginated endpoints (e.g. search and wallet tokens) return an object with a total count and a pageItems array. Use the limit and skip query parameters to page through results.
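In the dlt pipeline, this limit/skip scheme maps onto the REST API source's offset paginator. A sketch of what the partner_search endpoint configuration could look like with explicit pagination (the parameter names remap offset to skip, and total_path points at the total field in the response):

{
    "name": "partner_search",
    "endpoint": {
        "path": "partner/search",
        "data_selector": "pageItems",
        "paginator": {
            "type": "offset",
            "limit": 50,             # page size sent as the limit query parameter
            "offset_param": "skip",  # D3 uses skip instead of offset
            "limit_param": "limit",
            "total_path": "total",   # stop once skip reaches the reported total
        },
    },
},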

Common API errors:

  • 400 Bad Request — invalid parameters or signature
  • 401 Unauthorized — API key missing or invalid
  • 403 Forbidden — API key missing required permission or signature incorrect/expired
  • 404 Not Found — resource not found
  • 409 Conflict — resource already exists (e.g. mint conflict)

Most of these come down to either an invalid or under-permissioned API key (401/403) or a mistyped endpoint path or parameter (404).


Next steps

Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:

  • data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
  • dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.

dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
