Mambu Python API Docs | dltHub

Build a Mambu-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.

Mambu is a cloud-native core banking platform exposing RESTful APIs for managing clients, accounts, transactions, and core banking configuration. The REST API base URL is https://{TENANT_NAME}.mambu.com/api, and all requests require HTTP Basic authentication (API consumer credentials) and an Accept versioning header.

dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dlthub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading Mambu data in under 10 minutes.


What data can I load from Mambu?

Here are some of the endpoints you can load from Mambu:

Resource | Endpoint | Method | Data selector | Description
clients | clients | GET | clients | Retrieve list of clients
client | clients/{clientId} | GET | (none) | Retrieve a single client (top-level object)
savings_accounts | savingsAccounts | GET | savingsAccounts | Retrieve list of savings accounts
savings_account | savingsAccounts/{savingsAccountId} | GET | (none) | Retrieve single savings account
loans | loans | GET | loans | Retrieve list of loan accounts
transactions | transactions | GET | transactions | Retrieve list of transactions
branches | branches | GET | branches | Retrieve list of branches
users | users | GET | users | Retrieve list of users
general_ledger_entries | generalLedgerEntries | GET | generalLedgerEntries | Retrieve ledger/journal entries
api_consumers | apiConsumers | GET | apiConsumers | Retrieve API consumers / keys

How do I authenticate with the Mambu API?

Mambu v2 requires an Accept header (e.g. Accept: application/vnd.mambu.v2+json) and HTTP Basic authentication using API consumer credentials (API key/secret). Include the Basic auth header on every request.
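
Before wiring up the pipeline, you can sanity-check the credentials with a single request. A minimal sketch using the requests library; the tenant name and credentials are placeholders:

import requests

# Standalone connectivity check, not part of the dlt pipeline.
# TENANT and the credentials are placeholders for your own values.
TENANT = "your_tenant"
resp = requests.get(
    f"https://{TENANT}.mambu.com/api/clients",
    auth=("your_api_consumer_username", "your_api_consumer_password"),
    headers={"Accept": "application/vnd.mambu.v2+json"},
    params={"limit": 1},
)
resp.raise_for_status()
print(resp.json())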

1. Get your credentials

  1. Log in to your Mambu tenant UI as an administrator.
  2. Navigate to Administration → API Consumers (or API Keys / Integrations).
  3. Create a new API consumer (name, role/permissions).
  4. Generate credentials (client id / secret or API key pair) for that consumer.
  5. Copy the credentials securely; use them as the Basic auth username/password when calling the API.

2. Add them to .dlt/secrets.toml

[sources.mambu_source]
username = "your_api_consumer_username"
password = "your_api_consumer_password"

dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
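
As one such option, dlt also resolves credentials from environment variables whose names mirror the secrets.toml section path. A minimal sketch, assuming the same [sources.mambu_source] keys as above:

import os

# Equivalent to the secrets.toml entries above: dlt maps
# SOURCES__<SECTION>__<KEY> environment variables to the same values.
os.environ["SOURCES__MAMBU_SOURCE__USERNAME"] = "your_api_consumer_username"
os.environ["SOURCES__MAMBU_SOURCE__PASSWORD"] = "your_api_consumer_password"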


How do I set up and run the pipeline?

Set up a virtual environment and install dlt:

uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"

1. Install the dlt AI Workbench:

dlt ai init --agent <your-agent> # <agent>: claude | cursor | codex

This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →

2. Install the rest-api-pipeline toolkit:

dlt ai toolkit rest-api-pipeline install

This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →

3. Start LLM-assisted coding:

Use /find-source to load data from the Mambu API into DuckDB.

The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.

4. Run the pipeline:

python mambu_pipeline.py

If everything is configured correctly, you'll see output like this:

Pipeline mambu_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset mambu_data
The duckdb destination used duckdb:/mambu.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs

Inspect your pipeline and data:

dlt pipeline mambu_pipeline show

This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.


Python pipeline example

This example loads clients and savingsAccounts from the Mambu API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:

import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def mambu_source(username=dlt.secrets.value, password=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            # Replace {TENANT_NAME} with your Mambu tenant name
            "base_url": "https://{TENANT_NAME}.mambu.com/api",
            "auth": {
                "type": "http_basic",
                "username": username,
                "password": password,
            },
        },
        "resources": [
            {"name": "clients", "endpoint": {"path": "clients", "data_selector": "clients"}},
            {"name": "savings_accounts", "endpoint": {"path": "savingsAccounts", "data_selector": "savingsAccounts"}},
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="mambu_pipeline",
        destination="duckdb",
        dataset_name="mambu_data",
    )
    load_info = pipeline.run(mambu_source())
    print(load_info)


if __name__ == "__main__":
    get_data()

To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
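
For example, extra entries for loans, transactions, and branches would look like this; the names, paths, and selectors come straight from the endpoint table:

# Additional resource entries taken from the endpoint table above;
# append these to the "resources" list inside mambu_source().
extra_resources = [
    {"name": "loans", "endpoint": {"path": "loans", "data_selector": "loans"}},
    {"name": "transactions", "endpoint": {"path": "transactions", "data_selector": "transactions"}},
    {"name": "branches", "endpoint": {"path": "branches", "data_selector": "branches"}},
]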


How do I query the loaded data?

Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.

Python (pandas DataFrame):

import dlt

data = dlt.pipeline("mambu_pipeline").dataset()
clients_df = data.clients.df()
print(clients_df.head())

SQL (DuckDB example):

SELECT * FROM mambu_data.clients LIMIT 10;
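
The same SQL can also be run from Python against the local DuckDB file. A small sketch, assuming the default duckdb:/mambu.duckdb location shown in the load output above:

import duckdb

# Open the DuckDB file the pipeline wrote and run the query above.
conn = duckdb.connect("mambu.duckdb")
print(conn.sql("SELECT * FROM mambu_data.clients LIMIT 10").df())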

In a marimo or Jupyter notebook:

import dlt

data = dlt.pipeline("mambu_pipeline").dataset()
data.clients.df().head()

See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.


What destinations can I load Mambu data to?

dlt supports loading into any of these destinations — only the destination parameter changes:

Destination | Example value
DuckDB (local, default) | "duckdb"
PostgreSQL | "postgres"
BigQuery | "bigquery"
Snowflake | "snowflake"
Redshift | "redshift"
Databricks | "databricks"
Filesystem (S3, GCS, Azure) | "filesystem"

Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
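
For instance, pointing the example pipeline at Snowflake changes a single argument. A sketch, assuming Snowflake credentials are already in .dlt/secrets.toml:

import dlt

# Identical to the DuckDB pipeline except for the destination value.
pipeline = dlt.pipeline(
    pipeline_name="mambu_pipeline",
    destination="snowflake",
    dataset_name="mambu_data",
)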


Troubleshooting

Authentication failures

If you receive 401 Unauthorized or 403 Forbidden responses, verify the Basic auth credentials and that the API consumer has sufficient permissions. Ensure the Accept header is present and correct (e.g. Accept: application/vnd.mambu.v2+json).

Required Accept header / versioning

All v2 requests require the Accept header set to a Mambu media type (for example application/vnd.mambu.v2+json). A missing or incorrect Accept header can produce 400 Bad Request or 406 Not Acceptable errors.
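
If the header needs to be pinned explicitly, the rest_api client config accepts a headers mapping. A minimal sketch of the relevant fragment; the tenant and credentials are placeholders:

# Client fragment for mambu_source(); the "headers" mapping is sent
# with every request the rest_api source makes.
client_config = {
    "base_url": "https://{TENANT_NAME}.mambu.com/api",
    "headers": {"Accept": "application/vnd.mambu.v2+json"},
    "auth": {"type": "http_basic", "username": "...", "password": "..."},
}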

Pagination

List endpoints return paged results. Use the query parameters (startIndex, limit, or page-based params documented per endpoint) and inspect response metadata (totalCount/startIndex/links) to iterate pages. Failing to page correctly may return partial data.
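
With dlt's rest_api source you can declare paging explicitly instead of relying on auto-detection. A sketch of an offset-style paginator for the client section of the config, assuming offset/limit query parameters; check the per-endpoint docs for the exact names:

# Paginator fragment for the "client" section of the rest_api config.
paginator_config = {
    "type": "offset",
    "limit": 50,               # page size sent via the limit param
    "offset_param": "offset",  # query param that advances the window
    "limit_param": "limit",
    "total_path": None,        # no total count in the body; stop on an empty page
}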

Rate limits and throttling

If you encounter 429 Too Many Requests, back off and retry according to standard exponential backoff. Check response headers for any provider-specific retry-after hints.
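
dlt's built-in REST client retries transient errors by default; if you are calling the API by hand, a generic backoff loop looks like this. A sketch, not Mambu-specific:

import time

import requests

def get_with_backoff(url: str, max_retries: int = 5, **kwargs):
    # Retry on 429, preferring the server's Retry-After hint and
    # falling back to exponential delays (1s, 2s, 4s, ...).
    resp = None
    for attempt in range(max_retries):
        resp = requests.get(url, **kwargs)
        if resp.status_code != 429:
            return resp
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    return resp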

Finally, confirm the API consumer credentials are valid to avoid 401 Unauthorized errors, and double-check endpoint paths and parameters to avoid 404 Not Found errors.


Next steps

Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:

  • data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
  • dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install

