Dimensions Analytics Python API Docs | dltHub
Build a Dimensions Analytics-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
Dimensions Analytics API is a subscription-based analytics API that provides a DSL (Dimensions Search Language) for querying and retrieving linked research data (publications, grants, patents, clinical trials, organizations, researchers, etc.). The REST API base URL is https://<your-domain>.dimensions.ai. All requests require a JWT query token, obtained by POSTing an API key (or username/password for legacy accounts) to /api/auth; the token is then sent with subsequent queries in an Authorization: JWT <token> header.
dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading Dimensions Analytics data in under 10 minutes.
What data can I load from Dimensions Analytics?
Here are some of the endpoints you can load from Dimensions Analytics:
| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| publications | api/dsl/v2 | POST (DSL query endpoint) | publications | Execute DSL queries; common resource returning publications records |
| researchers | api/dsl/v2 | POST (DSL query endpoint) | researchers | Execute DSL queries returning researchers |
| grants | api/dsl/v2 | POST (DSL query endpoint) | grants | Execute DSL queries returning grants |
| patents | api/dsl/v2 | POST (DSL query endpoint) | patents | Execute DSL queries returning patents |
| clinical_trials | api/dsl/v2 | POST (DSL query endpoint) | clinical_trials | Execute DSL queries returning clinical trial records |
| api_auth | api/auth | POST | token | Authentication endpoint; POST credentials to receive JWT token |
How do I authenticate with the Dimensions Analytics API?
Clients POST credentials (JSON with an API key, or username/password for legacy accounts) to https://<your-domain>.dimensions.ai/api/auth (or /api/auth.json) and receive a JWT token in the response JSON (token field). Subsequent DSL requests are POSTed to the DSL endpoint with an Authorization: JWT <token> header and the DSL query in the request body.
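A minimal sketch of this flow using the requests library; the domain, API key, and example DSL query are placeholders to replace with your own values:

```python
import requests

# Placeholder domain and key: replace with your Dimensions instance and API key.
BASE_URL = "https://app.dimensions.ai"
API_KEY = "your_api_key_here"

# 1. Exchange the API key for a short-lived JWT query token.
auth_resp = requests.post(f"{BASE_URL}/api/auth", json={"key": API_KEY})
auth_resp.raise_for_status()
jwt_token = auth_resp.json()["token"]

# 2. POST a DSL query with the token in the Authorization header.
headers = {"Authorization": f"JWT {jwt_token}"}
dsl_query = 'search publications for "machine learning" return publications[id+title+year] limit 10'
query_resp = requests.post(f"{BASE_URL}/api/dsl/v2", data=dsl_query, headers=headers)
query_resp.raise_for_status()

# Records come back under a resource-named key, here "publications".
print(query_resp.json().get("publications", []))
```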
1. Get your credentials
1. Ensure you have a Dimensions account with an Analytics API subscription.
2. Log into the Dimensions web application.
3. Open 'My Account' (or Account settings).
4. Generate or copy your API key from the My Account / API Key section.
5. If you have legacy credentials, you may use username and password instead.
6. Use that key in a POST to /api/auth to obtain a short-lived JWT token.
2. Add them to .dlt/secrets.toml
```toml
[sources.dimensions_analytics_source]
api_key = "your_api_key_here"
```
dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
How do I set up and run the pipeline?
Set up a virtual environment and install dlt:
```sh
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```
1. Install the dlt AI Workbench:
```sh
dlt ai init --agent <your-agent>  # <agent>: claude | cursor | codex
```
This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →
2. Install the rest-api-pipeline toolkit:
```sh
dlt ai toolkit rest-api-pipeline install
```
This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →
3. Start LLM-assisted coding:
Use /find-source to load data from the Dimensions Analytics API into DuckDB.
The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. Run the pipeline:
```sh
python dimensions_analytics_pipeline.py
```
If everything is configured correctly, you'll see output like this:
```text
Pipeline dimensions_analytics_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset dimensions_analytics_data
The duckdb destination used duckdb:/dimensions_analytics.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```
Inspect your pipeline and data:
```sh
dlt pipeline dimensions_analytics_pipeline show
```
This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.
Python pipeline example
This example loads publications and researchers from the Dimensions Analytics API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:
```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def dimensions_analytics_source(api_key=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://<your-domain>.dimensions.ai",
            "auth": {
                "type": "bearer",
                "token": api_key,
            },
        },
        "resources": [
            {"name": "publications", "endpoint": {"path": "api/dsl/v2", "data_selector": "publications"}},
            {"name": "researchers", "endpoint": {"path": "api/dsl/v2", "data_selector": "researchers"}},
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="dimensions_analytics_pipeline",
        destination="duckdb",
        dataset_name="dimensions_analytics_data",
    )
    load_info = pipeline.run(dimensions_analytics_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```
To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
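For instance, to also load grants, patents, and clinical trials, entries like the sketch below (names, paths, and data selectors taken from the resource table above) can be appended inside dimensions_analytics_source:

```python
# Inside dimensions_analytics_source, extend config["resources"] with additional
# entries from the resource table above.
extra_resources = [
    {"name": "grants", "endpoint": {"path": "api/dsl/v2", "data_selector": "grants"}},
    {"name": "patents", "endpoint": {"path": "api/dsl/v2", "data_selector": "patents"}},
    {"name": "clinical_trials", "endpoint": {"path": "api/dsl/v2", "data_selector": "clinical_trials"}},
]
config["resources"].extend(extra_resources)
```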
How do I query the loaded data?
Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.
Python (pandas DataFrame):
```python
import dlt

data = dlt.pipeline("dimensions_analytics_pipeline").dataset()
publications_df = data.publications.df()
print(publications_df.head())
```
SQL (DuckDB example):
```sql
SELECT * FROM dimensions_analytics_data.publications LIMIT 10;
```
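A small sketch of running that same query from Python with the duckdb client, assuming the default local database file shown in the load output above:

```python
import duckdb

# Open the local DuckDB file created by the pipeline and run the same SQL query,
# returning a pandas DataFrame.
con = duckdb.connect("dimensions_analytics.duckdb")
df = con.execute("SELECT * FROM dimensions_analytics_data.publications LIMIT 10").df()
print(df)
```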
In a marimo or Jupyter notebook:
```python
import dlt

data = dlt.pipeline("dimensions_analytics_pipeline").dataset()
data.publications.df().head()
```
See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.
What destinations can I load Dimensions Analytics data to?
dlt supports loading into any of these destinations — only the destination parameter changes:
| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |
Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
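For example, switching the pipeline above to Snowflake is a one-argument change (a sketch; the credential fields go in .dlt/secrets.toml as described in the destination docs):

```python
import dlt

# Same pipeline as before; only the destination argument changes. Snowflake
# credentials are read from .dlt/secrets.toml (see the dlt Snowflake destination docs).
pipeline = dlt.pipeline(
    pipeline_name="dimensions_analytics_pipeline",
    destination="snowflake",
    dataset_name="dimensions_analytics_data",
)
```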
Troubleshooting
Authentication failures
If POST to /api/auth returns a non-200 status or no token, verify the API key or username/password and ensure you are using the correct Dimensions domain (e.g. app.dimensions.ai or your tenant). The authentication response JSON contains the token under the key "token" when successful.
Rate limits and "Reasonable Use"
The Dimensions Analytics API is subject to reasonable use limits: typically 30 requests per IP address per minute. Hitting the rate limit will cause 429 responses; implement exponential backoff and token reuse (tokens are valid ~2 hours).
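A minimal exponential-backoff sketch around a raw DSL request, assuming the requests library and a JWT token obtained as described earlier:

```python
import time
import requests

def post_dsl_with_backoff(url, dsl_query, jwt_token, max_retries=5):
    """POST a DSL query, backing off exponentially on 429 responses."""
    headers = {"Authorization": f"JWT {jwt_token}"}
    for attempt in range(max_retries):
        resp = requests.post(url, data=dsl_query, headers=headers)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... then retry
    raise RuntimeError("Still rate limited after retries")
```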
Pagination and token lifetime
DSL responses embed the returned records under resource-named keys (for example publications) and often include an _stats object with result counts. The query token is valid for ~2 hours; when it expires you must re-authenticate.
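A sketch of handling token expiry by re-authenticating on a 401 and retrying once; the request and field names follow the auth flow described earlier, and the helper names are illustrative:

```python
import requests

def get_jwt(base_url, api_key):
    """Exchange the API key for a fresh JWT query token (valid for roughly 2 hours)."""
    resp = requests.post(f"{base_url}/api/auth", json={"key": api_key})
    resp.raise_for_status()
    return resp.json()["token"]

def run_dsl_query(base_url, api_key, dsl_query, jwt_token):
    """Run a DSL query; if the token has expired (401), re-authenticate and retry once."""
    headers = {"Authorization": f"JWT {jwt_token}"}
    resp = requests.post(f"{base_url}/api/dsl/v2", data=dsl_query, headers=headers)
    if resp.status_code == 401:
        jwt_token = get_jwt(base_url, api_key)
        resp = requests.post(
            f"{base_url}/api/dsl/v2",
            data=dsl_query,
            headers={"Authorization": f"JWT {jwt_token}"},
        )
    resp.raise_for_status()
    body = resp.json()
    # Records sit under the resource-named key; _stats (when present) holds counts.
    return body, jwt_token
```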
Common HTTP errors
- 401 Unauthorized — invalid or expired JWT token.
- 403 Forbidden — account lacks Analytics API subscription or insufficient privileges.
- 429 Too Many Requests — rate limit exceeded.
- 400 Bad Request — malformed DSL query.
- 500 Server Error — transient server error; retry with backoff.
Ensure that the API key is valid to avoid 401 Unauthorized errors. Also, verify endpoint paths and parameters to avoid 404 Not Found errors.
Next steps
Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:
- data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
```sh
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```