USA Spending Python API Docs | dltHub
Build a USA Spending-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
The USAspending API provides access to comprehensive U.S. government spending data through a range of endpoints covering awards, agencies, and federal accounts. For detailed documentation, visit the official USAspending API website. The REST API base URL is https://api.usaspending.gov, and no authentication is required for public GET endpoints (no API key or bearer token).
dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading USA Spending data in under 10 minutes.
What data can I load from USA Spending?
Here are some of the endpoints you can load from USA Spending:
| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| agency | /api/v2/agency/<TOPTIER_AGENCY_CODE>/ | GET | (object) | Returns agency overview information for Agency Details page |
| agency_awards | /api/v2/agency/<TOPTIER_AGENCY_CODE>/awards/ | GET | results | Returns agency summary info (number of transactions and obligations) |
| bulk_download_status | /api/v2/bulk_download/status/ | GET | (object) | Returns current status of a download job initiated via bulk download endpoints |
| federal_account | /api/v2/federal_accounts/<ACCOUNT_CODE>/ | GET | (object) | Returns a federal account by its account code |
| recipient_children | /api/v2/recipient/children/<DUNS_OR_UEI>/ | GET | (object) | Returns recipient details based on DUNS or UEI number |
| references_toptier_agencies | /api/v2/references/toptier_agencies/ | GET | results | Returns all toptier agencies and related data |
| references_total_budgetary_resources | /api/v2/references/total_budgetary_resources/ | GET | results | Returns total budgetary resources totaled by fiscal year and period |
| reporting_agencies_overview | /api/v2/reporting/agencies/overview/ | GET | results | Returns About the Data info about agencies with submissions for a fiscal year/period |
| references_data_dictionary | /api/v2/references/data_dictionary/ | GET | (object) | Returns the Schema team's Rosetta Crosswalk Data Dictionary |
| download_status | /api/v2/download/status/ | GET | (object) | Returns current status for a download job requested via /api/v2/download/awards/ or /api/v2/download/transaction/ |
Note: many endpoints return a single object at the top level; paginated list endpoints commonly nest the array of records under the "results" key.
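To make the data selector concrete, the sketch below extracts the record array from a hypothetical paginated response shaped like the toptier_agencies payload. The sample dict is illustrative, not real API output:

```python
# Hypothetical paginated response, shaped like a USAspending list endpoint.
sample_response = {
    "page_metadata": {"page": 1, "hasNext": False},
    "results": [
        {"agency_id": 1, "agency_name": "Department of Example"},
        {"agency_id": 2, "agency_name": "Example Commission"},
    ],
}

def select_records(response: dict, data_selector: str = "results") -> list:
    """Return the record list under the data selector, or wrap a bare object."""
    records = response.get(data_selector)
    # Single-record endpoints return an object at the top level instead.
    return records if isinstance(records, list) else [response]

records = select_records(sample_response)
print(len(records))  # 2
```

This is the same logic the "Data selector" column encodes: "results" for list endpoints, "(object)" for single-record endpoints.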
How do I authenticate with the USA Spending API?
USAspending public API endpoints are accessible without credentials for read-only GET requests; some endpoints accept POST filter bodies and still do not require auth. Include the typical headers: Accept: application/json and, for POST requests, Content-Type: application/json.
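As a minimal sketch, here is an unauthenticated request built with the standard library; the header matches the one mentioned above, and no network call happens until urlopen is invoked:

```python
import urllib.request

# Build an unauthenticated GET request against a public endpoint.
url = "https://api.usaspending.gov/api/v2/references/toptier_agencies/"
request = urllib.request.Request(url, headers={"Accept": "application/json"})

print(request.full_url)
print(request.get_header("Accept"))  # application/json
```

In practice you rarely need this: dlt's REST API source sends appropriate headers for you.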
1. Get your credentials
N/A — no credentials required for public GET usage. If any private/partner endpoints require auth, follow provider portal instructions (not documented in public docs).
2. Add them to .dlt/secrets.toml
[sources.usa_spending_source]
# No credentials are required for public endpoints, so this section can stay empty.
dlt reads this file automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
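If a private or partner endpoint ever did require a token, dlt can also pick credentials up from environment variables using its double-underscore naming convention. A hypothetical example (the api_token key is illustrative, not a real USAspending requirement):

```shell
# Hypothetical: only needed if an endpoint required a token.
export SOURCES__USA_SPENDING_SOURCE__API_TOKEN="..."
```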
How do I set up and run the pipeline?
Set up a virtual environment and install dlt:
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
1. Install the dlt AI Workbench:
dlt ai init --agent <your-agent> # <agent>: claude | cursor | codex
This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →
2. Install the rest-api-pipeline toolkit:
dlt ai toolkit rest-api-pipeline install
This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →
3. Start LLM-assisted coding:
Use /find-source to load data from the USA Spending API into DuckDB.
The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. Run the pipeline:
python usa_spending_pipeline.py
If everything is configured correctly, you'll see output like this:
Pipeline usa_spending_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset usa_spending_data
The duckdb destination used duckdb:/usa_spending.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
Inspect your pipeline and data:
dlt pipeline usa_spending_pipeline show
This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.
Python pipeline example
This example loads agency and references_toptier_agencies from the USA Spending API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def usa_spending_source():
    # Public endpoints need no authentication, so no secrets are passed in.
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://api.usaspending.gov",
        },
        "resources": [
            # Replace <TOPTIER_AGENCY_CODE> with a real agency code before running.
            {"name": "agency", "endpoint": {"path": "api/v2/agency/<TOPTIER_AGENCY_CODE>/"}},
            {
                "name": "references_toptier_agencies",
                "endpoint": {
                    "path": "api/v2/references/toptier_agencies/",
                    "data_selector": "results",
                },
            },
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="usa_spending_pipeline",
        destination="duckdb",
        dataset_name="usa_spending_data",
    )
    load_info = pipeline.run(usa_spending_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
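For example, here is a sketch of the entry you would append to load the federal_account endpoint from the table above. The <ACCOUNT_CODE> placeholder must be replaced with a real account code before running the pipeline:

```python
# Resource entry for the federal_account endpoint from the table above.
# Replace <ACCOUNT_CODE> with a real federal account code before running.
federal_account_resource = {
    "name": "federal_account",
    "endpoint": {"path": "api/v2/federal_accounts/<ACCOUNT_CODE>/"},
}

# Appended to the "resources" list inside the RESTAPIConfig dict:
resources = [
    {"name": "agency", "endpoint": {"path": "api/v2/agency/<TOPTIER_AGENCY_CODE>/"}},
]
resources.append(federal_account_resource)
print([r["name"] for r in resources])  # ['agency', 'federal_account']
```

Each resource becomes its own table in the destination, named after the "name" field.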
How do I query the loaded data?
Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.
Python (pandas DataFrame):
import dlt

data = dlt.pipeline("usa_spending_pipeline").dataset()
agency_df = data.agency.df()
print(agency_df.head())
SQL (DuckDB example):
SELECT * FROM usa_spending_data.agency LIMIT 10;
In a marimo or Jupyter notebook:
import dlt

data = dlt.pipeline("usa_spending_pipeline").dataset()
data.agency.df().head()
See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.
What destinations can I load USA Spending data to?
dlt supports loading into any of these destinations — only the destination parameter changes:
| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |
Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
Next steps
Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:
- data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install