Open Policy Agent Python API Docs | dltHub
Build an Open Policy Agent-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
Open Policy Agent (OPA) is a general-purpose policy engine that exposes a REST API to manage policies, query decisions, and read or write the data used during policy evaluation. The REST API base URL is http://<OPA_HOST>:8181. OPA supports optional TLS and pluggable HTTP authentication; by default no auth is required (local deployments). When enabled, HTTP auth (e.g., client TLS certificates or proxy-provided credentials) or custom authorizers protect the endpoints.
dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dlthub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading Open Policy Agent data in under 10 minutes.
What data can I load from Open Policy Agent?
Here are some of the endpoints you can load from Open Policy Agent:
| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| policy_modules | /v1/policies | GET | result | List all policy modules and metadata |
| policy_module | /v1/policies/{id} | GET | result | Get a single policy module |
| data_document | /v1/data/{path:.*} | GET | result | Read a document or decision at the specified data path |
| query | /v1/query | GET | result | Execute an ad-hoc query via the q parameter (GET returns a result array) |
| health | /health | GET | (top-level object) | Health/readiness endpoint |
| config | /v1/config | GET | result | Get active OPA configuration |
| status | /v1/status | GET | result | Get agent status and bundle activation info |
| bundles | /v1/bundles | GET | result | List bundles when configured (bundle management is often done via OPA config) |
| metrics_prometheus | /metrics | GET | (top-level text) | Prometheus metrics (text/plain) |
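Most of the rows above wrap their payload in a top-level "result" key, which is why the data selector is result. A minimal sketch of unwrapping that envelope, using an illustrative response shape rather than real OPA output:

```python
def select(resp_json: dict, selector: str = "result"):
    """Pull the payload from under the data selector key, mirroring
    what dlt's data_selector does for these endpoints."""
    return resp_json.get(selector)

# Illustrative response shape for GET /v1/policies (not real output):
sample = {"result": [{"id": "example", "raw": "package example\n"}]}
modules = select(sample)
```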
How do I authenticate with the Open Policy Agent API?
OPA itself does not require credentials by default; in production you must configure TLS and an authentication/authorization layer (e.g., reverse proxy or OPA config). When auth is present, include required headers (Authorization, TLS client certs) as configured.
1. Get your credentials
- Determine how your deployment enforces auth (reverse proxy, mTLS, OIDC).
- For proxy/OIDC: register an application with your identity provider to obtain a client ID/secret and follow the provider's steps to get access tokens.
- For mTLS: obtain a client certificate and key from your PKI.
- For service credentials (if using an external control plane): follow that provider's dashboard to create service credentials.

(OPA has no centralized credential UI.)
2. Add them to .dlt/secrets.toml
[sources.open_policy_agent_source]
auth_token = "your_bearer_token_here"
dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
How do I set up and run the pipeline?
Set up a virtual environment and install dlt:
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
1. Install the dlt AI Workbench:
dlt ai init --agent <your-agent> # <agent>: claude | cursor | codex
This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →
2. Install the rest-api-pipeline toolkit:
dlt ai toolkit rest-api-pipeline install
This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →
3. Start LLM-assisted coding:
Use /find-source to load data from the Open Policy Agent API into DuckDB.
The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. Run the pipeline:
python open_policy_agent_pipeline.py
If everything is configured correctly, you'll see output like this:
Pipeline open_policy_agent_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset open_policy_agent_data
The duckdb destination used duckdb:/open_policy_agent.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
Inspect your pipeline and data:
dlt pipeline open_policy_agent_pipeline show
This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.
Python pipeline example
This example loads the data_document and policy_modules resources from the Open Policy Agent API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def open_policy_agent_source(auth_token=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "http://<OPA_HOST>:8181",
            "auth": {
                "type": "bearer",
                "token": auth_token,
            },
        },
        "resources": [
            {"name": "data_document", "endpoint": {"path": "v1/data/{path}", "data_selector": "result"}},
            {"name": "policy_modules", "endpoint": {"path": "v1/policies", "data_selector": "result"}},
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="open_policy_agent_pipeline",
        destination="duckdb",
        dataset_name="open_policy_agent_data",
    )
    load_info = pipeline.run(open_policy_agent_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
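For example, adding the config and status rows could look like this, using a small hypothetical helper to keep the entries uniform:

```python
def opa_resource(name: str, path: str, data_selector: str = "result") -> dict:
    """Build one resource entry in the same shape as the table above."""
    return {"name": name, "endpoint": {"path": path, "data_selector": data_selector}}

# Entries to append to the "resources" list in the source config:
extra_resources = [
    opa_resource("config", "v1/config"),
    opa_resource("status", "v1/status"),
]
```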
How do I query the loaded data?
Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.
Python (pandas DataFrame):
import dlt

data = dlt.pipeline("open_policy_agent_pipeline").dataset()
data_document_df = data.data_document.df()
print(data_document_df.head())
SQL (DuckDB example):
SELECT * FROM open_policy_agent_data.data_document LIMIT 10;
In a marimo or Jupyter notebook:
import dlt

data = dlt.pipeline("open_policy_agent_pipeline").dataset()
data.data_document.df().head()
See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.
What destinations can I load Open Policy Agent data to?
dlt supports loading into any of these destinations — only the destination parameter changes:
| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |
Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
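As a sketch, Snowflake credentials in .dlt/secrets.toml might look like this (all values illustrative; use your own account details):

```toml
[destination.snowflake.credentials]
database = "dlt_data"
username = "loader"
password = "your_password_here"
host = "your_account_identifier"
warehouse = "COMPUTE_WH"
role = "DLT_LOADER_ROLE"
```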
Troubleshooting
Authentication failures
If OPA is fronted by a proxy or configured with TLS/client certificates, ensure the Authorization header or client certificate is sent. Expect 401/403 from the proxy, or 401/405 from OPA when authentication or the HTTP method is rejected. Check proxy logs and the OPA config for enabled authentication.
Missing result / undefined decisions
When querying /v1/data/, OPA returns HTTP 200 but omits the "result" key if the document/decision is undefined. Treat absent "result" as undefined rather than an error.
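A minimal sketch of that check in Python, assuming you already have the parsed JSON body from a /v1/data request:

```python
UNDEFINED = object()  # sentinel distinguishing "undefined" from a null/falsy result

def read_decision(resp_json: dict):
    """Return the decision value, or UNDEFINED if OPA omitted "result"."""
    return resp_json.get("result", UNDEFINED)

# HTTP 200 with no "result" key means the document/decision is undefined:
assert read_decision({}) is UNDEFINED
# A present result, even a falsy one, is a real value:
assert read_decision({"result": False}) is False
```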
Common HTTP errors
- 400 Bad Request: malformed JSON input or invalid module on PUT.
- 404 Not Found: resource or decision path missing (some webhook endpoints return 404 when document is missing).
- 405 Method Not Allowed: HTTP method not supported for endpoint.
- 500 Server Error: internal errors; response body contains JSON error with code/message and optional errors/location fields.
Ensure your bearer token or client certificate is valid to avoid 401 Unauthorized errors, and verify endpoint paths and parameters to avoid 404 Not Found errors.
Next steps
Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:
- data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
Need more dlt context for Open Policy Agent?
Request dlt skills, commands, AGENT.md files, and AI-native context.