PCI Vault Python API Docs | dltHub
Build a PCI Vault-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
PCI Vault's Enterprise API offers advanced features for Enterprise customers. The API enables tokenization and decryption of credit card data, supports multiple tokenization algorithms, and is PCI DSS compliant. The REST API base URL is https://api.pcivault.io/v1. All requests to protected endpoints require HTTP Basic Auth (the username is the key identifier, the password is the passphrase); some public endpoints, such as the hosted iframe, do not require authentication.
dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dlthub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading PCI Vault data in under 10 minutes.
What data can I load from PCI Vault?
Here are some of the endpoints you can load from PCI Vault:
| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| vault | /vault/ | GET | (response is an object whose keys are key identifiers; each key maps to an array of token objects) | Decrypt a token or list tokenized data (tree grouped by key) |
| bin_lookup | /bin | GET | issuer | BIN lookup; returns array under issuer |
| retrieve_list | /retrieve/ | GET | (top-level array or object; exact structure not fully documented) | List retrieval endpoints |
| retrieve_use | /retrieve/{unique_id} | GET | (when listing tokens: tokens list returned; when decrypting a token: decrypted data object) | Use retrieval endpoint to decrypt or list tokens; requires X-PCIVault-Retrieve-Secret header |
| rule_list | /rule/ | GET | (top-level array) | List available rules |
| rule_operations | /rule/operations | GET | (top-level array) | List available rule operations |
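Note that the vault listing response is a tree grouped by key identifier rather than a flat list. A minimal sketch of flattening such a response into row-shaped records (the sample payload is hypothetical, shaped like the /vault/ listing described in the table above):

```python
def flatten_vault_tree(response: dict) -> list[dict]:
    """Flatten a {key_id: [token, ...]} tree into one record per token."""
    records = []
    for key_id, tokens in response.items():
        for token in tokens:
            # Keep the grouping key on each record so it survives flattening
            records.append({"key_id": key_id, **token})
    return records

# Hypothetical sample payload mirroring the /vault/ response shape
sample = {
    "key-abc": [{"token": "tok_1"}, {"token": "tok_2"}],
    "key-def": [{"token": "tok_3"}],
}
rows = flatten_vault_tree(sample)
print(len(rows))  # 3
```

dlt's schema inference performs a similar normalization automatically when loading nested responses; the sketch only illustrates the shape of the data.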
How do I authenticate with the PCI Vault API?
Use HTTP Basic Authentication with the key identifier as the username and the passphrase as the password. Include X-PCIVault-Retrieve-Secret header for retrieval endpoints when using a retrieval endpoint secret.
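Under the hood, HTTP Basic Auth is just a base64-encoded `key_id:passphrase` pair in the Authorization header. A minimal sketch of the headers a protected request would carry (all credential values are placeholders):

```python
import base64

key_id = "YOUR_KEY_ID"          # Basic auth username: the key identifier
passphrase = "YOUR_PASSPHRASE"  # Basic auth password: the passphrase

credentials = base64.b64encode(f"{key_id}:{passphrase}".encode()).decode()
headers = {
    "Authorization": f"Basic {credentials}",
    # Only needed when calling /retrieve/{unique_id} with a retrieval secret:
    "X-PCIVault-Retrieve-Secret": "YOUR_RETRIEVE_SECRET",
}
print(headers["Authorization"])
```

You normally never build this header yourself: dlt's `http_basic` auth type does it for you from the configured username and password.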
1. Get your credentials
- Register at https://pcivault.io/register or contact PCI Vault to get an API key (key identifier) and passphrase.
- In your PCI Vault dashboard, create or note an existing encryption key identifier and passphrase.
- Use these as HTTP Basic Auth credentials when calling protected endpoints.
2. Add them to .dlt/secrets.toml
[sources.pci_vault_source]
username = "YOUR_KEY_ID"
password = "YOUR_PASSPHRASE"
dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
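In production, the same values can be supplied as environment variables using dlt's double-underscore naming convention instead of `secrets.toml` (the values shown are placeholders):

```shell
export SOURCES__PCI_VAULT_SOURCE__USERNAME="YOUR_KEY_ID"
export SOURCES__PCI_VAULT_SOURCE__PASSWORD="YOUR_PASSPHRASE"
```

dlt resolves environment variables before falling back to `secrets.toml`, so the same pipeline code runs unchanged in both environments.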
How do I set up and run the pipeline?
Set up a virtual environment and install dlt:
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
1. Install the dlt AI Workbench:
dlt ai init --agent <your-agent> # <agent>: claude | cursor | codex
This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →
2. Install the rest-api-pipeline toolkit:
dlt ai toolkit rest-api-pipeline install
This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →
3. Start LLM-assisted coding:
Use /find-source to load data from the PCI Vault API into DuckDB.
The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. Run the pipeline:
python pci_vault_pipeline.py
If everything is configured correctly, you'll see output like this:
Pipeline pci_vault_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset pci_vault_data
The duckdb destination used duckdb:/pci_vault.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
Inspect your pipeline and data:
dlt pipeline pci_vault_pipeline show
This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.
Python pipeline example
This example loads vault and retrieve from the PCI Vault API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def pci_vault_source(
    username=dlt.secrets.value,  # key identifier
    password=dlt.secrets.value,  # passphrase
):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://api.pcivault.io/v1",
            "auth": {
                "type": "http_basic",
                "username": username,
                "password": password,
            },
        },
        "resources": [
            {
                "name": "vault",
                # Response is an object keyed by key identifier; each key
                # maps to an array of token objects (see the table above).
                "endpoint": {"path": "vault/"},
            },
            {
                "name": "retrieve",
                # Returns a token list when listing, or a decrypted data
                # object when decrypting a single token. {unique_id} must
                # be supplied, e.g. via params or resolve configuration.
                "endpoint": {"path": "retrieve/{unique_id}"},
            },
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="pci_vault_pipeline",
        destination="duckdb",
        dataset_name="pci_vault_data",
    )
    load_info = pipeline.run(pci_vault_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
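For example, to also load BIN lookups, you could append the `bin_lookup` row from the table as a resource entry (a sketch; the `issuer` data selector is taken from the resource table above):

```python
# Resource entry for the /bin endpoint, mirroring the resource table
bin_lookup_resource = {
    "name": "bin_lookup",
    "endpoint": {
        "path": "bin",
        "data_selector": "issuer",  # BIN data is returned under "issuer"
    },
}

# Appended to the "resources" list inside the RESTAPIConfig, e.g.:
# config["resources"].append(bin_lookup_resource)
print(bin_lookup_resource["name"])
```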
How do I query the loaded data?
Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.
Python (pandas DataFrame):
import dlt

data = dlt.pipeline("pci_vault_pipeline").dataset()
vault_df = data.vault.df()
print(vault_df.head())
SQL (DuckDB example):
SELECT * FROM pci_vault_data.vault LIMIT 10;
In a marimo or Jupyter notebook:
import dlt

data = dlt.pipeline("pci_vault_pipeline").dataset()
data.vault.df().head()
See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.
What destinations can I load PCI Vault data to?
dlt supports loading into any of these destinations — only the destination parameter changes:
| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |
Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
Next steps
Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:
- data-exploration: Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- dlthub-runtime: Deploy, schedule, and monitor your pipeline in production.
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install