Jedox Python API Docs | dltHub
Build a Jedox-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
Jedox is an OLAP-based planning and analytics platform that exposes REST and HTTP APIs. The main base URLs are:

- Cloud Logs API: https://logs.{instance}.cloud.jedox.com/logs
- Cloud OLAP HTTP API: https://olap.{instance}.cloud.jedox.com/api
- On-premises OLAP HTTP API: http://<server-address>:<port>/api

The Logs API requires a Bearer token on every request. The OLAP HTTP API requires a valid OLAP session or admin credentials.
dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with `uv pip install "dlt[workspace]"` and start loading Jedox data in under 10 minutes.
What data can I load from Jedox?
Here are some of the endpoints you can load from Jedox:
| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| logs | https://logs.{instance}.cloud.jedox.com/logs | GET | rows | Query log records with filtering and pagination (max 100,000 rows per request). |
| olap_server_browser | https://olap.{instance}.cloud.jedox.com/api/... | GET | | Returns CSV-formatted data for OLAP queries or server object listings. |
| integrator_odata | https://odata.{instance}.cloud.jedox.com/ | GET | | OData Hub exposing cubes, dimensions, views, and cells (JSON, OData 4.0). |
| server_browser | https://browser.{instance}.cloud.jedox.com/ | GET | | Admin UI for inspecting databases, cubes, and dimensions (HTML UI). |
| olap_admin | http://<server-address>:<port>/api | GET | | On-premises OLAP HTTP API entry point (CSV responses). |
How do I authenticate with the Jedox API?
Logs API uses Personal Access Tokens sent as a Bearer token in the Authorization header. OLAP HTTP API authenticates via a valid OLAP session or admin username/password.
1. Get your credentials
- Open the Jedox Cloud Console at https://console.cloud.jedox.com/settings.
- Navigate to the "Personal Access Tokens" (or "API tokens") section.
- Click "Create new token", give it a name, select required scopes, and generate the token.
- Copy the generated token and store it securely (e.g., in secrets.toml).
2. Add them to .dlt/secrets.toml
```toml
[sources.jedox_source]
token = "your_personal_access_token_here"
```
dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
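Before wiring up dlt, you can sanity-check the token with a minimal sketch. This assumes the Logs API host from the table above (replace `{instance}` with your instance name); `auth_headers` and `token_is_valid` are illustrative helpers, not part of any Jedox SDK:

```python
from urllib import request, error

LOGS_BASE = "https://logs.{instance}.cloud.jedox.com"  # replace {instance}


def auth_headers(token: str) -> dict:
    # The Logs API expects the Personal Access Token as a Bearer token.
    return {"Authorization": f"Bearer {token}"}


def token_is_valid(token: str) -> bool:
    # A 401 response means the token is invalid or expired.
    req = request.Request(f"{LOGS_BASE}/logs", headers=auth_headers(token))
    try:
        request.urlopen(req)
        return True
    except error.HTTPError as exc:
        return exc.code != 401
```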
How do I set up and run the pipeline?
Set up a virtual environment and install dlt:
```shell
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```
1. Install the dlt AI Workbench:
dlt ai init --agent <your-agent> # <agent>: claude | cursor | codex
This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →
2. Install the rest-api-pipeline toolkit:
dlt ai toolkit rest-api-pipeline install
This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →
3. Start LLM-assisted coding:
Use /find-source to load data from the Jedox API into DuckDB.
The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. Run the pipeline:
python jedox_pipeline.py
If everything is configured correctly, you'll see output like this:
```
Pipeline jedox_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset jedox_data
The duckdb destination used duckdb:/jedox.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```
Inspect your pipeline and data:
dlt pipeline jedox_pipeline show
This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.
Python pipeline example
This example loads logs and olap_server from the Jedox API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:
```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def jedox_source(token=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            # Use your instance's Logs API host; replace {instance} with your
            # instance name.
            "base_url": "https://logs.{instance}.cloud.jedox.com",
            "auth": {
                "type": "bearer",
                "token": token,
            },
        },
        "resources": [
            {"name": "logs", "endpoint": {"path": "logs", "data_selector": "rows"}},
            # The OLAP HTTP API lives on a different host
            # (https://olap.{instance}.cloud.jedox.com), so this resource needs
            # a separate client pointed at that base URL in practice.
            {"name": "olap_server", "endpoint": {"path": "api"}},
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="jedox_pipeline",
        destination="duckdb",
        dataset_name="jedox_data",
    )
    load_info = pipeline.run(jedox_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```
To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
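For example, a hypothetical extra entry could look like the sketch below. The `params` filter is illustrative, not a documented Jedox parameter; only the `name`/`path`/`data_selector` shape mirrors the table above:

```python
# Hypothetical extra resource following the same pattern as "logs".
extra_resource = {
    "name": "logs_errors",                # becomes the table name in the destination
    "endpoint": {
        "path": "logs",                   # appended to the client's base_url
        "params": {"severity": "error"},  # illustrative filter, not a confirmed parameter
        "data_selector": "rows",          # JSONPath to the records in the response
    },
}
```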
How do I query the loaded data?
Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.
Python (pandas DataFrame):
```python
import dlt

data = dlt.pipeline("jedox_pipeline").dataset()
logs_df = data.logs.df()
print(logs_df.head())
```
SQL (DuckDB example):
SELECT * FROM jedox_data.logs LIMIT 10;
In a marimo or Jupyter notebook:
```python
import dlt

data = dlt.pipeline("jedox_pipeline").dataset()
data.logs.df().head()
```
See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.
What destinations can I load Jedox data to?
dlt supports loading into any of these destinations — only the destination parameter changes:
| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |
Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
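For example, switching to Snowflake means changing the destination string and adding a credentials block to `.dlt/secrets.toml`. A sketch follows; the values are placeholders and the exact keys should be checked against the dlt Snowflake destination docs:

```toml
[destination.snowflake.credentials]
database = "dlt_data"
username = "loader"
password = "<your-password>"
host = "<account_identifier>"
warehouse = "COMPUTE_WH"
role = "DLT_LOADER_ROLE"
```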
Troubleshooting
Authentication failures
If you receive 401 Unauthorized from the Logs API, verify the Authorization header: "Authorization: Bearer {PAT}" and ensure the token is valid and not expired. For OLAP API calls, ensure you use a valid OLAP session or correct user credentials; server browser requires an admin user.
Pagination and row limits (Logs API)
The Logs API limits results to a maximum of 100,000 rows per request. If a response contains exactly 100,000 rows, the result is incomplete; extract last_date from the response and re‑query with from=<last_date> until fewer than 100,000 rows are returned.
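That loop can be sketched as follows. Here `fetch_page` is a stand-in for the actual HTTP GET against the Logs API, assumed to return a dict with `rows` and `last_date` keys as described above:

```python
def fetch_all_logs(fetch_page, start_date=None, page_limit=100_000):
    """Page through the Logs API until a partial page signals the end."""
    all_rows = []
    from_date = start_date
    while True:
        page = fetch_page(from_date)   # e.g. GET /logs?from=<from_date>
        rows = page.get("rows", [])
        all_rows.extend(rows)
        if len(rows) < page_limit:
            break                      # fewer than the max: final page
        from_date = page["last_date"]  # resume from the last timestamp seen
    return all_rows
```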
Server browser and admin access
The server browser is protected and only accessible to users with the admin role; do not expose the admin interface publicly. On‑premises OLAP HTTP API ports are configured in palo.ini; ensure firewalls allow the configured ports.
If you receive 404 Not Found, double-check the endpoint path and query parameters against the resource table above; each cloud API (logs, olap, odata) uses a different host.
Next steps
Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:
- data-exploration: Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- dlthub-runtime: Deploy, schedule, and monitor your pipeline in production.
```shell
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```