Gaviti Python API Docs | dltHub
Build a Gaviti-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
Gaviti is a platform for accounts receivable automation and invoice management, exposing a public REST API to access invoices, customers, payments, and related data. The REST API base URL is https://api.gaviti.com/v2, and all requests require a company-scoped API key for authentication.
dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with `uv pip install "dlt[workspace]"` and start loading Gaviti data in under 10 minutes.
What data can I load from Gaviti?
Here are some of the endpoints you can load from Gaviti:
| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| invoices | invoices | POST/GET* | data | Retrieve a paginated list of invoices matching filters. Example response contains top-level "data" array of invoice objects. |
| customers | customers | GET | data | Retrieve list of customers (returned in "data"). |
| payments | payments | GET | data | Retrieve payments and reconciliation records (returned in "data"). |
| contacts | contacts | GET | data | Retrieve customer contact records (returned in "data"). |
| users | users | GET | data | Retrieve Gaviti users and related metadata (returned in "data"). |
*The Gaviti docs show example requests where lists (invoices) are fetched by sending a JSON body including companyId, page, and perPage; check the Swagger docs for whether the list endpoint is POST or GET for each resource.
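If a list endpoint turns out to be POST, the rest_api source can send the body for you. A minimal sketch of the invoices resource entry, assuming a POST endpoint that takes companyId and perPage in the JSON body (the field names here are assumptions; verify them in the Swagger docs). It slots into the `resources` list of the pipeline example further below:

```python
# Hypothetical resource entry for a POST-based list endpoint; confirm
# the method and body fields against the Gaviti Swagger docs.
{
    "name": "invoices",
    "endpoint": {
        "path": "invoices",
        "method": "POST",
        "json": {"companyId": "your_company_id", "perPage": 100},
        "data_selector": "data",
    },
}
```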
How do I authenticate with the Gaviti API?
Gaviti issues a unique API key and a Company ID. The API key must be supplied via the Swagger "Authorize" button (or as a header) for every request, and the Company ID is often required as a request body parameter.
1. Get your credentials
- Contact Gaviti support to request access to the Public API.
- Provide your account details and request the unique API key and Company ID.
- Receive the correct public API base URL for your region (Europe or US).
- Open the provided public API docs URL (e.g. https://api.gaviti.com/v2/docs), click "Authorize" and paste the API key to test endpoints.
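You can also smoke-test the key from the command line. A minimal sketch, assuming the key is sent as a request header (the exact header name is shown in the Swagger docs, so treat `api-key` below as a placeholder):

```sh
# Placeholder header name; use the one shown in Gaviti's Swagger docs.
curl -H "api-key: $GAVITI_API_KEY" "https://api.gaviti.com/v2/customers"
```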
2. Add them to .dlt/secrets.toml
```toml
[sources.gaviti_source]
api_key = "your_gaviti_api_key"
```
dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
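For example, the same key can be supplied as an environment variable instead of secrets.toml; dlt maps the TOML section path to a double-underscore-delimited name:

```sh
export SOURCES__GAVITI_SOURCE__API_KEY="your_gaviti_api_key"
```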
How do I set up and run the pipeline?
Set up a virtual environment and install dlt:
```sh
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```
1. Install the dlt AI Workbench:
```sh
dlt ai init --agent <your-agent>  # <your-agent>: claude | cursor | codex
```
This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →
2. Install the rest-api-pipeline toolkit:
```sh
dlt ai toolkit rest-api-pipeline install
```
This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →
3. Start LLM-assisted coding:
```text
Use /find-source to load data from the Gaviti API into DuckDB.
```
The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. Run the pipeline:
```sh
python gaviti_pipeline.py
```
If everything is configured correctly, you'll see output like this:
```text
Pipeline gaviti_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset gaviti_data
The duckdb destination used duckdb:/gaviti.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```
Inspect your pipeline and data:
```sh
dlt pipeline gaviti_pipeline show
```
This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.
Python pipeline example
This example loads invoices and customers from the Gaviti API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:
```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def gaviti_source(api_key=dlt.secrets.value):
    # Declarative REST API configuration: one client block plus one
    # entry per resource, each selecting the top-level "data" array.
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://api.gaviti.com/v2",
            "auth": {
                "type": "api_key",
                "api_key": api_key,
            },
        },
        "resources": [
            {"name": "invoices", "endpoint": {"path": "invoices", "data_selector": "data"}},
            {"name": "customers", "endpoint": {"path": "customers", "data_selector": "data"}},
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="gaviti_pipeline",
        destination="duckdb",
        dataset_name="gaviti_data",
    )
    load_info = pipeline.run(gaviti_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```
To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
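For example, to also load payments, append this entry to the `resources` list:

```python
{"name": "payments", "endpoint": {"path": "payments", "data_selector": "data"}},
```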
How do I query the loaded data?
Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.
Python (pandas DataFrame):
```python
import dlt

# Access the loaded dataset and read the invoices table into a DataFrame
data = dlt.pipeline("gaviti_pipeline").dataset()
invoices_df = data.invoices.df()
print(invoices_df.head())
```
SQL (DuckDB example):
```sql
SELECT * FROM gaviti_data.invoices LIMIT 10;
```
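To run that query from Python, open the DuckDB file the pipeline created (`gaviti.duckdb`, per the load output above):

```python
import duckdb

# Query the invoices table in the database file created by the pipeline
conn = duckdb.connect("gaviti.duckdb")
print(conn.sql("SELECT * FROM gaviti_data.invoices LIMIT 10").df())
```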
In a marimo or Jupyter notebook:
```python
import dlt

data = dlt.pipeline("gaviti_pipeline").dataset()
data.invoices.df().head()
```
See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.
What destinations can I load Gaviti data to?
dlt supports loading into any of these destinations — only the destination parameter changes:
| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |
Change the destination in `dlt.pipeline(destination="snowflake")` and add credentials in `.dlt/secrets.toml`. See the full destinations list.
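For example, to target Snowflake instead of DuckDB, change one argument in the pipeline example above and put the warehouse credentials under `[destination.snowflake.credentials]` in `.dlt/secrets.toml`:

```python
pipeline = dlt.pipeline(
    pipeline_name="gaviti_pipeline",
    destination="snowflake",  # was "duckdb"
    dataset_name="gaviti_data",
)
```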
Troubleshooting
Authentication failures
If requests return authentication errors, verify you are using the API key provided by Gaviti and that you authorized it in the public API docs (the Swagger "Authorize" button). Ensure you are calling the correct regional base URL (Europe vs US) and include the companyId when required by the endpoint.
Pagination and filtering
List endpoints are paginated. Use the page and perPage parameters (or equivalent query/body fields) to navigate results. Large perPage values may be restricted by the API.
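With the rest_api source you can declare this pagination per endpoint. A sketch assuming page and perPage are query parameters and that an empty page marks the end of results (confirm both against the Swagger docs):

```python
# Hypothetical paginator settings; confirm the parameter names and the
# stop condition against Gaviti's actual responses.
"endpoint": {
    "path": "customers",
    "params": {"perPage": 100},
    "data_selector": "data",
    "paginator": {
        "type": "page_number",
        "base_page": 1,
        "page_param": "page",
        "stop_after_empty_page": True,
    },
},
```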
API errors and responses
The API returns JSON responses with top‑level fields such as "success" (boolean), "errorMsg" (nullable) and "data" (array for list endpoints). On errors, check "success": false and "errorMsg" for diagnostics. Contact Gaviti support if you receive unexpected 4xx/5xx HTTP statuses.
An invalid or missing API key typically returns 401 Unauthorized; a wrong endpoint path or missing required parameter typically returns 404 Not Found.
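If a specific resource intermittently 404s for your account, dlt's response_actions let you skip it instead of failing the whole load. A sketch for the contacts endpoint:

```python
"endpoint": {
    "path": "contacts",
    "data_selector": "data",
    # Skip 404 responses for this resource instead of aborting the load;
    # other status codes still raise as usual.
    "response_actions": [
        {"status_code": 404, "action": "ignore"},
    ],
},
```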
Next steps
Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:
- `data-exploration` — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- `dlthub-runtime` — Deploy, schedule, and monitor your pipeline in production.
```sh
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```
Need more dlt context for Gaviti?
Request dlt skills, commands, AGENT.md files, and AI-native context.