Clari Python API Docs | dltHub
Build a Clari-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
Clari is the heartbeat of your revenue organization, providing APIs to retrieve and push revenue-critical data. The REST API base URL is https://api.clari.com/v2. All requests require an API token passed in the `apikey` header; partner/ingest APIs also need a `partnerkey` header, while Copilot endpoints use `X-Api-Key` and `X-Api-Password` headers.
dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with `uv pip install "dlt[workspace]"` and start loading Clari data in under 10 minutes.
What data can I load from Clari?
Here are some of the endpoints you can load from Clari:
| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| audit_events | /audit/events | GET | activities | View audit events with pagination (items in activities array) |
| export_jobs | /export/jobs | GET | jobs | Manage and list bulk export jobs |
| export_job_results | /export/jobs/{jobId}/results | GET | (varies by export, e.g., activities or items) | Retrieve JSON output for a completed export job |
| ingest_job_status | /ingest/job/{jobId} | GET | (job object) | Check status of an ingest/bulk upload job |
| limits | /admin/limits | GET | (object) | View organization API limits and usage |
| calls | /calls | GET | calls | Copilot: list calls |
| call_details | /call-details | GET | call | Copilot: get full call details |
| users | /users | GET | users | Copilot: list users |
| topics | /topics | GET | topics | Copilot: list topics |
| scorecards | /scorecard | GET | scorecards | Copilot: list scorecards |
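The "Data selector" column names the JSON field that holds the list of records in each response. As a rough illustration (the payload shape below is assumed for the example, not copied from Clari's docs), applying a selector is just a key lookup into the response body:

```python
def select_items(payload: dict, selector: str) -> list:
    """Pull the list of records out of a response body using its data selector."""
    return payload.get(selector, [])


# Hypothetical /audit/events response shape, per the table above.
sample = {
    "activities": [{"id": 1, "action": "login"}],
    "pagination": {"limit": 100, "skip": 0},
}
events = select_items(sample, "activities")  # the records dlt would load
```

dlt's REST API source performs this extraction for you when you set `data_selector` on a resource, as shown in the pipeline example below.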
How do I authenticate with the Clari API?
Clari REST APIs use API tokens supplied in the 'apikey' header; partner ingestion APIs also require a 'partnerkey' header, while Copilot uses 'X-Api-Key' and 'X-Api-Password'.
1. Get your credentials
- Sign in to the Clari web app.
- Click your user avatar → Settings.
- Open the "API Token" tab and click "Generate New API Token".
- Provide a name, generate the token, and copy it securely (it cannot be viewed again).
- For partner/ingest integrations, request a partner key from your Clari contact or enable it in workspace settings.
2. Add them to .dlt/secrets.toml
```toml
[sources.clari_source]
apikey = "your_clari_apikey_here"
partnerkey = "your_partner_key_here"          # only for ingest/partner endpoints
x_api_password = "your_copilot_api_password"  # for Copilot (if used)
```
dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
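In production, the same values can be supplied as environment variables instead: dlt maps each TOML path to an upper-snake-case name with double underscores as separators, so the `secrets.toml` entries above become (values are placeholders):

```shell
export SOURCES__CLARI_SOURCE__APIKEY="your_clari_apikey_here"
export SOURCES__CLARI_SOURCE__PARTNERKEY="your_partner_key_here"
```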
How do I set up and run the pipeline?
Set up a virtual environment and install dlt:
```sh
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```
1. Install the dlt AI Workbench:
```sh
dlt ai init --agent <your-agent>  # <agent>: claude | cursor | codex
```
This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →
2. Install the rest-api-pipeline toolkit:
```sh
dlt ai toolkit rest-api-pipeline install
```
This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →
3. Start LLM-assisted coding:
Use `/find-source` to load data from the Clari API into DuckDB.
The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. Run the pipeline:
```sh
python clari_pipeline.py
```
If everything is configured correctly, you'll see output like this:
```text
Pipeline clari_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset clari_data
The duckdb destination used duckdb:/clari.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```
Inspect your pipeline and data:
```sh
dlt pipeline clari_pipeline show
```
This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.
Python pipeline example
This example loads calls and export_jobs from the Clari API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:
```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def clari_source(apikey: str = dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://api.clari.com/v2",
            "auth": {
                "type": "api_key",
                "name": "apikey",      # Clari expects the token in the 'apikey' header
                "api_key": apikey,
                "location": "header",
            },
        },
        "resources": [
            {"name": "calls", "endpoint": {"path": "calls", "data_selector": "calls"}},
            {"name": "export_jobs", "endpoint": {"path": "export/jobs", "data_selector": "jobs"}},
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="clari_pipeline",
        destination="duckdb",
        dataset_name="clari_data",
    )
    load_info = pipeline.run(clari_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```
To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
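For example, adding the Copilot users endpoint from the table is one more dict in the list (the name, path, and selector come straight from the resource table above):

```python
resources = [
    {"name": "calls", "endpoint": {"path": "calls", "data_selector": "calls"}},
    {"name": "export_jobs", "endpoint": {"path": "export/jobs", "data_selector": "jobs"}},
    # New entry: Copilot users, per the resource table.
    {"name": "users", "endpoint": {"path": "users", "data_selector": "users"}},
]
```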
How do I query the loaded data?
Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.
Python (pandas DataFrame):
```python
import dlt

data = dlt.pipeline("clari_pipeline").dataset()
calls_df = data.calls.df()
print(calls_df.head())
```
SQL (DuckDB example):
```sql
SELECT * FROM clari_data.calls LIMIT 10;
```
In a marimo or Jupyter notebook:
```python
import dlt

data = dlt.pipeline("clari_pipeline").dataset()
data.calls.df().head()
```
See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.
What destinations can I load Clari data to?
dlt supports loading into any of these destinations — only the destination parameter changes:
| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |
Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
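For example, switching to PostgreSQL only requires `destination="postgres"` in the pipeline definition and a credentials block like this in `.dlt/secrets.toml` (all values are placeholders for your own connection details):

```toml
[destination.postgres.credentials]
database = "dlt_data"
username = "loader"
password = "your_password_here"
host = "localhost"
port = 5432
```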
Troubleshooting
Authentication failures
If you receive 401 Unauthorized or messages like "Invalid authentication credentials", ensure the `apikey` header is present with a valid token. Copilot endpoints require `X-Api-Key` and `X-Api-Password`; ingest/partner endpoints also require `partnerkey`. Tokens generated in Settings cannot be viewed again — revoke and regenerate if lost.
Rate limits
Clari enforces rate limits (example: 100 requests/sec per API token for some ingest endpoints). If you receive 429 responses or "API rate limit exceeded", implement exponential backoff and respect pagination parameters (limit, skip, nextLink) to reduce request volume.
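A minimal backoff sketch, with the HTTP call abstracted behind a zero-argument `fetch` callable (a stand-in for your actual request, which should raise on a 429 response):

```python
import random
import time


def fetch_with_backoff(fetch, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `fetch` on rate-limit errors with exponential backoff and jitter.

    `fetch` is any zero-argument callable that raises RuntimeError when the
    API answers 429; adapt the exception type to your HTTP client.
    """
    for attempt in range(max_retries):
        try:
            return fetch()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # 1s, 2s, 4s, ... plus jitter to avoid synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
```

dlt's REST API client also retries transient errors on its own; a wrapper like this is mainly useful for direct `requests` calls against the export/ingest endpoints.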
Pagination and performance
List endpoints commonly return a pagination object and limit/skip or nextLink. Use 'limit' (1‑1000 depending on endpoint) and follow provided nextLink or skip/nextPageSkip to iterate. Some endpoints accept includePagination=false to improve performance (omits pagination metadata).
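A sketch of limit/skip paging, with the HTTP call abstracted behind a `fetch_page(limit, skip)` callable (a placeholder for your actual request); when you use dlt's REST API source, its paginator configuration handles this loop for you:

```python
def iter_pages(fetch_page, limit: int = 100):
    """Yield records from a limit/skip-paginated endpoint.

    `fetch_page(limit, skip)` must return the list of records for one page.
    Iteration stops when a page comes back shorter than `limit`.
    """
    skip = 0
    while True:
        page = fetch_page(limit=limit, skip=skip)
        yield from page
        if len(page) < limit:  # short page: nothing left to fetch
            break
        skip += limit
```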
Bulk export & ingest errors
Export/ingest endpoints return structured error arrays with codes (e.g., INVALID_INPUT_FORMAT, INVALID_PARTNER_KEY, INTERNAL_SERVER_ERROR). For ingest APIs, common errors include missing primary key, invalid field types, duplicate primary keys, invalid partner key, and job quota/execution limits. Check job status via /ingest/job/{jobId} and export job via /export/jobs/{jobId}.
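A polling sketch for those job-status endpoints, with the HTTP call abstracted behind a `get_status` callable; the terminal status names here are assumptions for illustration, not Clari's exact values:

```python
import time


def wait_for_job(get_status, timeout: float = 300.0, interval: float = 5.0) -> dict:
    """Poll a job-status callable until it reports a terminal state.

    `get_status()` stands in for a GET to /ingest/job/{jobId} or
    /export/jobs/{jobId} and must return a dict with a "status" field.
    "DONE" and "FAILED" are placeholder terminal states.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status()
        if job.get("status") in ("DONE", "FAILED"):
            return job
        time.sleep(interval)
    raise TimeoutError("job did not reach a terminal state in time")
```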
A 404 Not Found usually means the endpoint path or parameters are wrong; double-check them against the resource table above.
Next steps
Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:
- `data-exploration`: Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- `dlthub-runtime`: Deploy, schedule, and monitor your pipeline in production.
```sh
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```