Hedera Custodians Library Python API Docs | dltHub

Build a Hedera Custodians Library-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.


Hedera Custodians Library is a TypeScript utility that simplifies custodial wallet management and account operations for the Hedera network. The library itself is a client library, not a hosted REST API, so there is no single base URL or global authentication scheme. For REST integrations, use the underlying provider APIs (e.g., the Fireblocks API at https://api.fireblocks.io/v1) or the Hedera Mirror Node REST API (e.g., https://mainnet-public.mirrornode.hedera.com). Authentication is provider-specific: Fireblocks uses API keys with HMAC-signed requests, DFNS uses service tokens and keys, and public Hedera mirror nodes require no authentication for read endpoints.

dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dlthub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading Hedera Custodians Library data in under 10 minutes.


What data can I load from Hedera Custodians Library?

Here are some of the endpoints you can load from Hedera Custodians Library:

| Resource | Endpoint | Method | Data selector | Description |
| --- | --- | --- | --- | --- |
| custodians_library | (client library) | N/A | — | TypeScript client library; no REST endpoints exposed by the library itself |
| fireblocks_vault_assets | https://api.fireblocks.io/v1/vault/assets | GET | assets | Fireblocks vault assets list (provider API; used by the library when configured) |
| fireblocks_vault_accounts | https://api.fireblocks.io/v1/vault/accounts | GET | accounts | Fireblocks vault accounts list (provider API) |
| dfns_wallets | (DFNS customer API base) | GET | (provider-specific) | DFNS provides RESTful endpoints per customer for wallets/keys; exact paths and response keys depend on the customer and DFNS version |
| mirror_transactions | /api/v1/transactions | GET | transactions | Hedera Mirror Node REST API: list of transactions (historical) |
| mirror_accounts | /api/v1/accounts | GET | accounts | Hedera Mirror Node REST API: account list/search |
| mirror_balances | /api/v1/balances | GET | balances | Hedera Mirror Node REST API: account balance snapshots |
| mirror_tokens | /api/v1/tokens | GET | tokens | Hedera Mirror Node REST API: tokens list/search |

How do I authenticate with the Hedera Custodians Library API?

The Custodians Library itself uses provider-specific credentials (e.g., a Fireblocks API key and secret, or DFNS service token and credentials). The Hedera Mirror Node REST API is public for read endpoints (no auth required); provider endpoints require headers per the provider docs (e.g., Fireblocks uses an 'X-API-Key' header plus signed requests).

1. Get your credentials

  1. Fireblocks: sign up for a Fireblocks account, create an API key in the Fireblocks console and download the API private key (or copy the Base64 secret); record the Vault Account ID and Asset ID used in your integration.
  2. DFNS: register an account with DFNS, create a service/app in the DFNS dashboard to obtain service account credentials (authorization token, credential ID, and private key).
  3. Hedera Mirror Node: no credentials required for public REST read endpoints; simply use the network‑specific mirror node base URL.

2. Add them to .dlt/secrets.toml

```toml
[sources.hedera_custodians_library_source]
fireblocks_api_key = "your_fireblocks_api_key"
fireblocks_api_secret_key = "base64_or_pem_private_key_or_path"
fireblocks_base_url = "https://api.fireblocks.io/v1"
dfns_service_token = "your_dfns_service_token"
dfns_service_private_key = "base64_private_key_or_path"
mirror_base_url = "https://mainnet-public.mirrornode.hedera.com"
```

dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
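In deployed environments the same values can come from environment variables instead of secrets.toml: dlt maps the TOML section path to an uppercased variable name with double underscores as separators. A sketch, using the key names from the example secrets.toml above:

```shell
# dlt resolves [sources.hedera_custodians_library_source] keys from
# SOURCES__HEDERA_CUSTODIANS_LIBRARY_SOURCE__<KEY> environment variables.
export SOURCES__HEDERA_CUSTODIANS_LIBRARY_SOURCE__FIREBLOCKS_API_KEY="your_fireblocks_api_key"
export SOURCES__HEDERA_CUSTODIANS_LIBRARY_SOURCE__MIRROR_BASE_URL="https://mainnet-public.mirrornode.hedera.com"
```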


How do I set up and run the pipeline?

Set up a virtual environment and install dlt:

```shell
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```

1. Install the dlt AI Workbench:

```shell
dlt ai init --agent <your-agent>  # <agent>: claude | cursor | codex
```

This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →

2. Install the rest-api-pipeline toolkit:

```shell
dlt ai toolkit rest-api-pipeline install
```

This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →

3. Start LLM-assisted coding:

Use /find-source to load data from the Hedera Custodians Library API into DuckDB.

The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.

4. Run the pipeline:

```shell
python hedera_custodians_library_pipeline.py
```

If everything is configured correctly, you'll see output like this:

```text
Pipeline hedera_custodians_library_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset hedera_custodians_library_data
The duckdb destination used duckdb:/hedera_custodians_library.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```

Inspect your pipeline and data:

```shell
dlt pipeline hedera_custodians_library_pipeline show
```

This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.


Python pipeline example

This example loads mirror_transactions (from the Hedera Mirror Node REST API) and fireblocks_vault_accounts (from the Fireblocks provider API) into DuckDB. It mirrors the endpoint and data selector configuration from the table above:

```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def hedera_mirror_source():
    # Hedera Mirror Node REST API is public for read endpoints - no auth needed.
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://mainnet-public.mirrornode.hedera.com",
        },
        "resources": [
            {
                "name": "mirror_transactions",
                "endpoint": {"path": "api/v1/transactions", "data_selector": "transactions"},
            },
        ],
    }
    yield from rest_api_resources(config)


@dlt.source
def fireblocks_source(fireblocks_api_key=dlt.secrets.value):
    # NOTE: Fireblocks also requires a signed JWT on every request; the API key
    # header alone is not sufficient in production. Use the Fireblocks SDK or a
    # custom auth class for request signing.
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://api.fireblocks.io/v1",
            "auth": {
                "type": "api_key",
                "name": "X-API-Key",
                "api_key": fireblocks_api_key,
                "location": "header",
            },
        },
        "resources": [
            {
                "name": "fireblocks_vault_accounts",
                "endpoint": {"path": "vault/accounts", "data_selector": "accounts"},
            },
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="hedera_custodians_library_pipeline",
        destination="duckdb",
        dataset_name="hedera_custodians_library_data",
    )
    load_info = pipeline.run([hedera_mirror_source(), fireblocks_source()])
    print(load_info)


if __name__ == "__main__":
    get_data()
```

To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
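For instance, extending a Mirror Node config with mirror_balances is one more dict in the "resources" list. A sketch (the config shape follows dlt's rest_api source):

```python
config = {
    "client": {"base_url": "https://mainnet-public.mirrornode.hedera.com"},
    "resources": [
        {"name": "mirror_transactions",
         "endpoint": {"path": "api/v1/transactions", "data_selector": "transactions"}},
    ],
}

# Each extra endpoint from the resource table follows the same pattern:
# name, path, and data_selector copied from the corresponding table row.
config["resources"].append(
    {"name": "mirror_balances",
     "endpoint": {"path": "api/v1/balances", "data_selector": "balances"}}
)
```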


How do I query the loaded data?

Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.

Python (pandas DataFrame):

```python
import dlt

data = dlt.pipeline("hedera_custodians_library_pipeline").dataset()
transactions_df = data.mirror_transactions.df()
print(transactions_df.head())
```

SQL (DuckDB example):

```sql
SELECT * FROM hedera_custodians_library_data.mirror_transactions LIMIT 10;
```

In a marimo or Jupyter notebook:

```python
import dlt

data = dlt.pipeline("hedera_custodians_library_pipeline").dataset()
data.mirror_transactions.df().head()
```

See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.


What destinations can I load Hedera Custodians Library data to?

dlt supports loading into any of these destinations — only the destination parameter changes:

| Destination | Example value |
| --- | --- |
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |

Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
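For example, switching to PostgreSQL means passing destination="postgres" and adding a credentials section to .dlt/secrets.toml; the values below are placeholders:

```toml
[destination.postgres.credentials]
database = "dlt_data"
username = "loader"
password = "your_password"
host = "localhost"
port = 5432
```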


Troubleshooting

Authentication failures

If provider API credentials are incorrect or missing, you'll get 401/403 responses from the provider APIs (Fireblocks/DFNS). Verify the API key, the secret/private key encoding, and any required header names (e.g., Fireblocks' 'X-API-Key'). The Hedera Custodians Library itself expects the correct environment variables, such as FIREBLOCKS_API_KEY and FIREBLOCKS_API_SECRET_KEY or the DFNS_* variables, as shown in the repo tests.

Provider-specific rate limits and signed requests

Fireblocks imposes rate limits and requires signed requests; exceeding the limits returns 429 responses. Ensure your integration signs requests per the Fireblocks docs. DFNS may use tenant-specific endpoints and auth; consult the DFNS docs for throttling details.
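A common way to absorb 429 responses from either provider is to retry with exponential backoff and jitter. A minimal sketch, not tied to any particular HTTP client (the helper name is illustrative):

```python
import random


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay in seconds before retry number `attempt` (0-based):
    exponential growth, capped at `cap`, with full jitter so that
    concurrent clients do not retry in lockstep."""
    return random.uniform(0, min(cap, base * 2 ** attempt))


# e.g. time.sleep(backoff_delay(attempt)) after each 429 response
delays = [backoff_delay(n) for n in range(5)]
```

dlt's rest_api client already retries transient errors, so a helper like this is mainly relevant for custom request code.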

Mirror Node pagination and query limits

Mirror Node GET endpoints paginate with 'limit', 'timestamp', and 'order' query parameters; responses include arrays under keys such as 'transactions', 'accounts', and 'balances'. For large data pulls, use time-windowed queries to avoid timeouts.

Also verify endpoint paths and query parameters: a mistyped path returns 404 Not Found even when credentials are valid, while an invalid API key produces 401 Unauthorized.


Next steps

Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:

  • data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
  • dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
```shell
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```
