Pennylane Python API Docs | dltHub
Build a Pennylane-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
Pennylane is a cloud accounting and financial management platform exposing a REST API to manage companies, invoices, customers, suppliers, ledger entries, and related accounting data. The REST API base URL is https://app.pennylane.com/api/external/v2, and all requests require a Bearer token for authentication.
dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading Pennylane data in under 10 minutes.
What data can I load from Pennylane?
Here are some of the endpoints you can load from Pennylane:
| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| me | me | GET | — | Retrieve authenticated user / environment (single object) |
| customers | companies/{company_id}/customers | GET | items | List customers for a company |
| suppliers | companies/{company_id}/suppliers | GET | items | List suppliers for a company |
| invoices | companies/{company_id}/customer_invoices | GET | items | List customer invoices for a company |
| ledger_entries | ledger_entries | GET | items | List ledger entries (cursor pagination) |
| journals | journals | GET | items | List journals |
| ledger_accounts | ledger_accounts | GET | items | List ledger accounts |
| fiscal_years | companies/{company_id}/fiscal_years | GET | items | List company's fiscal years |
| file_attachments | file_attachments | POST | — | Upload a file attachment (note: POST included for relevance) |
How do I authenticate with the Pennylane API?
The API accepts a Bearer token in the Authorization header. Example header: Authorization: Bearer <YOUR_TOKEN>.
1. Get your credentials
1. Sign in to your Pennylane account.
2. For Companies: go to Account Settings or API Tokens and generate a Company API token. For Firms: go to the Firm account settings and generate a Firm API token. For Integration Partners: request OAuth2 access via Pennylane Partnerships or follow the OAuth flow documented by Pennylane.
3. Copy the generated token and store it securely.
2. Add them to .dlt/secrets.toml
```toml
[sources.pennylane_finance_source]
api_key = "your_pennylane_api_token_here"
```
dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
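In practice, the Bearer scheme above amounts to a single request header. A minimal sketch of building it (the `auth_headers` helper name is ours for illustration, not part of dlt or the Pennylane API):

```python
def auth_headers(token: str) -> dict:
    """Build the Authorization header Pennylane expects on every request."""
    return {"Authorization": f"Bearer {token}"}


# Example: headers you would send with any request to the API
headers = auth_headers("your_pennylane_api_token_here")
print(headers["Authorization"])  # Bearer your_pennylane_api_token_here
```

When you use dlt's rest_api source as shown below, this header is constructed for you from the `"auth"` block in the client configuration.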
How do I set up and run the pipeline?
Set up a virtual environment and install dlt:
```sh
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```
1. Install the dlt AI Workbench:
```sh
dlt ai init --agent <your-agent>  # <agent>: claude | cursor | codex
```
This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →
2. Install the rest-api-pipeline toolkit:
```sh
dlt ai toolkit rest-api-pipeline install
```
This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →
3. Start LLM-assisted coding:
Use /find-source to load data from the Pennylane API into DuckDB.
The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. Run the pipeline:
```sh
python pennylane_finance_pipeline.py
```
If everything is configured correctly, you'll see output like this:
```
Pipeline pennylane_finance_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset pennylane_finance_data
The duckdb destination used duckdb:/pennylane_finance.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```
Inspect your pipeline and data:
```sh
dlt pipeline pennylane_finance_pipeline show
```
This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.
Python pipeline example
This example loads customers and invoices from the Pennylane API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:
```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def pennylane_finance_source(api_key=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://app.pennylane.com/api/external/v2",
            "auth": {
                "type": "bearer",
                "token": api_key,
            },
        },
        "resources": [
            # Replace {company_id} with your Pennylane company ID
            {
                "name": "customers",
                "endpoint": {
                    "path": "companies/{company_id}/customers",
                    "data_selector": "items",
                },
            },
            {
                "name": "invoices",
                "endpoint": {
                    "path": "companies/{company_id}/customer_invoices",
                    "data_selector": "items",
                },
            },
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="pennylane_finance_pipeline",
        destination="duckdb",
        dataset_name="pennylane_finance_data",
    )
    load_info = pipeline.run(pennylane_finance_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```
To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
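For example, the suppliers and ledger_accounts rows from the table translate into entries like these (a sketch following the same pattern as the pipeline example above):

```python
# Additional entries for the "resources" list, mirroring the endpoint table.
# Replace {company_id} with your Pennylane company ID where the path requires it.
extra_resources = [
    {
        "name": "suppliers",
        "endpoint": {
            "path": "companies/{company_id}/suppliers",
            "data_selector": "items",
        },
    },
    {
        "name": "ledger_accounts",
        "endpoint": {
            "path": "ledger_accounts",
            "data_selector": "items",
        },
    },
]
```

Each entry becomes its own table in the destination dataset, named after the resource.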
How do I query the loaded data?
Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.
Python (pandas DataFrame):
```python
import dlt

data = dlt.pipeline("pennylane_finance_pipeline").dataset()
customers_df = data.customers.df()
print(customers_df.head())
```
SQL (DuckDB example):
```sql
SELECT * FROM pennylane_finance_data.customers LIMIT 10;
```
In a marimo or Jupyter notebook:
```python
import dlt

data = dlt.pipeline("pennylane_finance_pipeline").dataset()
data.customers.df().head()
```
See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.
What destinations can I load Pennylane data to?
dlt supports loading into any of these destinations — only the destination parameter changes:
| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |
Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
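For example, switching the pipeline to PostgreSQL would mean passing destination="postgres" to dlt.pipeline() and adding a credentials block like this to .dlt/secrets.toml (all values below are placeholders to replace with your own):

```toml
[destination.postgres.credentials]
database = "dlt_data"
username = "loader"
password = "your_password"
host = "localhost"
port = 5432
```

The source configuration and resources stay unchanged; only the destination and its credentials differ.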
Troubleshooting
Authentication failures
If you receive 401 Unauthorized, check the Authorization header: Authorization: Bearer <YOUR_TOKEN>. Ensure the token is valid and has the correct scope (Company vs. Firm vs. OAuth scopes). 403 Forbidden indicates insufficient scopes; request the appropriate API scopes for ledger, file_attachments, etc.
Rate limits
Pennylane enforces rate limits: typically 4 requests/sec for most endpoints, while customer invoices may be limited to 2 requests/sec. Implement retry/backoff and respect X-RateLimit headers if present.
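One common way to respect such limits is exponential backoff with jitter, preferring the server's own delay hint when it sends one. A minimal sketch (Retry-After is a common HTTP convention; confirm which rate-limit headers Pennylane actually returns before relying on a specific name):

```python
import random


def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with jitter: ~0.5s, ~1s, ~2s, ... capped at `cap`."""
    delay = min(cap, base * (2 ** attempt))
    return delay * random.uniform(0.5, 1.0)


def retry_after(headers: dict, attempt: int) -> float:
    """Prefer the server's Retry-After header when present, else back off."""
    value = headers.get("Retry-After")
    if value is not None:
        return float(value)
    return backoff_delay(attempt)
```

Sleeping for `retry_after(response.headers, attempt)` between retries keeps the client under the published per-second limits without hammering the API after a 429.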
Pagination and migration notes
List endpoints return paginated results in an object with an "items" key containing the records array. Pennylane is migrating to cursor-based pagination: responses include has_more and next_cursor fields, and older offset fields (current_page, total_pages) may still be present during the migration. Use next_cursor for cursor pagination when available. Some ledger endpoints changed pagination and filtering behavior in 2026; follow the 2026 migration guide and include the X-Use-2026-API-Changes header during testing if instructed.
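dlt's rest_api source supports JSON-cursor pagination out of the box, so the next_cursor field described above can be handled declaratively. A sketch of a client-level paginator configuration (the query parameter name "cursor" is our assumption; confirm the exact name against Pennylane's documentation):

```python
# Client-level paginator for dlt's rest_api source (applies to all resources).
# "cursor_path" points at the response field holding the next cursor;
# "cursor_param" is the query parameter the API expects for the next page.
client_config = {
    "base_url": "https://app.pennylane.com/api/external/v2",
    "paginator": {
        "type": "cursor",
        "cursor_path": "next_cursor",
        "cursor_param": "cursor",
    },
}
```

This dict would replace the "client" block in the pipeline example above; individual resources can also override the paginator per endpoint if some endpoints still use offset pagination during the migration.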
Common 404 error format
When a resource is not found, the API returns a 404 with a JSON body like:
```json
{ "status": 404, "error": "Couldn't find with 'id'=123" }
```
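If occasional 404s are expected (for example, a company with no data for an endpoint), dlt's rest_api source can skip them instead of failing the run via response_actions. A sketch of one resource entry with this applied (adapt per endpoint as needed):

```python
# Endpoint entry that ignores 404 responses instead of failing the pipeline.
resource = {
    "name": "customers",
    "endpoint": {
        "path": "companies/{company_id}/customers",
        "data_selector": "items",
        "response_actions": [
            {"status_code": 404, "action": "ignore"},
        ],
    },
}
```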
Verify endpoint paths and path parameters (such as company_id) to avoid 404 Not Found errors; an invalid API key produces 401 Unauthorized instead.
Next steps
Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:
- data-exploration: Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- dlthub-runtime: Deploy, schedule, and monitor your pipeline in production.
```sh
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```