Amazon selling partner Python API Docs | dltHub

Build an Amazon Selling Partner-to-database pipeline in Python using dlt, with AI Workbench support for Claude Code, Cursor, and Codex.

Amazon Selling Partner API (SP-API) is a REST API platform that provides programmatic access to Amazon seller and vendor data across multiple services such as Orders, Catalog, Reports, Sellers, and Tokens. The regional base URLs are https://sellingpartnerapi-na.amazon.com (North America), https://sellingpartnerapi-eu.amazon.com (Europe), and https://sellingpartnerapi-fe.amazon.com (Far East). Requests require AWS Signature Version 4 signing and a Login with Amazon (LWA) access token (or a Restricted Data Token for restricted operations).
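For reference, the regional endpoints above can be captured in a small lookup helper (a sketch; the `na`/`eu`/`fe` shorthand keys are our own convention, not an official SP-API constant):

```python
# Map region shorthand to the SP-API regional base URL.
SP_API_BASE_URLS = {
    "na": "https://sellingpartnerapi-na.amazon.com",  # North America
    "eu": "https://sellingpartnerapi-eu.amazon.com",  # Europe
    "fe": "https://sellingpartnerapi-fe.amazon.com",  # Far East
}


def sp_api_base_url(region: str) -> str:
    """Return the base URL for a region, failing loudly on unknown regions."""
    try:
        return SP_API_BASE_URLS[region.lower()]
    except KeyError:
        raise ValueError(f"unknown SP-API region: {region!r}") from None
```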

dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dlthub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading Amazon selling partner data in under 10 minutes.


What data can I load from Amazon selling partner?

Here are some of the endpoints you can load from Amazon selling partner:

| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| sellers | /sellers/v1/marketplaceParticipations | GET | payload (object) — marketplaces listed under marketplaceParticipations | Retrieve seller account and marketplace information |
| orders | /orders/v0/orders | GET | payload.orders (array) | Search for orders that meet filter criteria |
| orders_get | /orders/v0/orders/{orderId} | GET | payload (object) — order in payload | Get a specific order by orderId |
| catalog_items | /catalog/2022-04-01/items/{asin} | GET | payload (object) — item object | Retrieve catalog item metadata by ASIN |
| reports_create | /reports/2021-06-30/reports | POST | payload (object) — reportId/metadata | Create and manage reports; generated report documents are fetched via the Reports API document endpoints |
| tokens_getRDT | /tokens/2021-03-01/restrictedDataToken | POST | payload (object) — restrictedDataToken | Obtain a Restricted Data Token (RDT) for PII/restricted operations |
| feeds_get | /feeds/2021-06-30/feeds/{feedId} | GET | payload (object) | Get feed status and metadata |
| catalog_item_search | /catalog/2022-04-01/items | GET | payload.items (array) | Search catalog items by identifiers/filters |
| orders_getOrderItems | /orders/v0/orders/{orderId}/orderItems | GET | payload.orderItems (array) | Retrieve order items for a specified order |

How do I authenticate with the Amazon selling partner API?

Authentication requires obtaining an LWA access token (POST https://api.amazon.com/auth/o2/token) and signing each SP-API request with AWS SigV4 using your IAM role/credentials. Include the header x-amz-access-token: <LWA_token> along with the standard SigV4 Authorization and x-amz-date headers; for restricted operations, use a Restricted Data Token (RDT) in x-amz-access-token instead.

1. Get your credentials

  1. Register your application in Seller Central/Developer Console to obtain client_id and client_secret.
  2. Create or associate an AWS IAM user or role, generate an AWS access key ID and secret access key, and grant the role the IAM policies required for SP-API.
  3. In Seller Central, create and link your application to obtain Selling Partner app credentials, then authorize the app (Authorization Code Grant) to get the refresh_token.
  4. Exchange the refresh_token for an LWA access token via POST to https://api.amazon.com/auth/o2/token.
  5. Sign outgoing requests with AWS SigV4 (using your AWS credentials) and include the LWA access token in x-amz-access-token. For restricted endpoints, call the Tokens API to get an RDT and use it in x-amz-access-token.
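The refresh-token exchange in step 4 can be sketched with the standard library (the helper name is ours for illustration; in practice dlt or an SP-API SDK handles this for you):

```python
import urllib.parse
import urllib.request

LWA_TOKEN_URL = "https://api.amazon.com/auth/o2/token"


def build_lwa_token_request(client_id: str, client_secret: str,
                            refresh_token: str) -> urllib.request.Request:
    """Build the POST that exchanges an LWA refresh token for an access token."""
    body = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
    return urllib.request.Request(
        LWA_TOKEN_URL,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# Sending this request with urllib.request.urlopen returns a JSON body
# containing "access_token" and "expires_in".
```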

2. Add them to .dlt/secrets.toml

```toml
[sources.amazon_selling_partner_source]
lwa_client_id = "your_lwa_client_id"
lwa_client_secret = "your_lwa_client_secret"
refresh_token = "your_lwa_refresh_token"
aws_access_key_id = "YOUR_AWS_ACCESS_KEY_ID"
aws_secret_access_key = "YOUR_AWS_SECRET_ACCESS_KEY"
role_arn = "arn:aws:iam::123456789012:role/YourSPAPIRole"
region = "us-east-1"
```

dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.


How do I set up and run the pipeline?

Set up a virtual environment and install dlt:

```sh
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```

1. Install the dlt AI Workbench:

dlt ai init --agent <your-agent> # <agent>: claude | cursor | codex

This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →

2. Install the rest-api-pipeline toolkit:

dlt ai toolkit rest-api-pipeline install

This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →

3. Start LLM-assisted coding:

Use /find-source to load data from the Amazon selling partner API into DuckDB.

The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.

4. Run the pipeline:

python amazon_selling_partner_pipeline.py

If everything is configured correctly, you'll see output like this:

```
Pipeline amazon_selling_partner_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset amazon_selling_partner_data
The duckdb destination used duckdb:/amazon_selling_partner.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```

Inspect your pipeline and data:

dlt pipeline amazon_selling_partner_pipeline show

This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.


Python pipeline example

This example loads orders and sellers from the Amazon selling partner API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:

```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def amazon_selling_partner_source(lwa_access_token=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            # North America endpoint; use sellingpartnerapi-eu / sellingpartnerapi-fe
            # for the Europe and Far East regions.
            "base_url": "https://sellingpartnerapi-na.amazon.com",
            # SP-API expects the LWA access token in the x-amz-access-token header.
            # Note: requests must additionally be signed with AWS SigV4, which the
            # generic REST client does not do out of the box.
            "headers": {"x-amz-access-token": lwa_access_token},
        },
        "resources": [
            {
                "name": "orders",
                "endpoint": {"path": "orders/v0/orders", "data_selector": "payload.orders"},
            },
            {
                "name": "sellers",
                "endpoint": {
                    "path": "sellers/v1/marketplaceParticipations",
                    "data_selector": "payload",
                },
            },
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="amazon_selling_partner_pipeline",
        destination="duckdb",
        dataset_name="amazon_selling_partner_data",
    )
    load_info = pipeline.run(amazon_selling_partner_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```

To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
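For example, the order-items endpoint from the table could be added like this (a sketch; the resource name is your choice):

```python
# Additional resource entry for /orders/v0/orders/{orderId}/orderItems,
# following the same name/path/data_selector pattern as the table above.
order_items_resource = {
    "name": "orders_get_order_items",
    "endpoint": {
        "path": "orders/v0/orders/{orderId}/orderItems",
        "data_selector": "payload.orderItems",
    },
}
# Append it to the "resources" list inside amazon_selling_partner_source():
# config["resources"].append(order_items_resource)
```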


How do I query the loaded data?

Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.

Python (pandas DataFrame):

```python
import dlt

data = dlt.pipeline("amazon_selling_partner_pipeline").dataset()
orders_df = data.orders.df()
print(orders_df.head())
```

SQL (DuckDB example):

SELECT * FROM amazon_selling_partner_data.orders LIMIT 10;

In a marimo or Jupyter notebook:

```python
import dlt

data = dlt.pipeline("amazon_selling_partner_pipeline").dataset()
data.orders.df().head()
```

See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.


What destinations can I load Amazon selling partner data to?

dlt supports loading into any of these destinations — only the destination parameter changes:

| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |

Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.


Troubleshooting

Authentication failures

If you receive 401/403: verify LWA access token validity, ensure SigV4 signature matches request (correct host, x-amz-date, signed headers), and confirm your AWS credentials/role permissions and role ARN are correct. For restricted operations ensure you used a Restricted Data Token (RDT).

Rate limits and throttling

SP-API enforces per-operation rate limits; a 429 or 503 can be returned when throttled. Implement exponential backoff with jitter and respect x-amzn-RateLimit headers and Retry-After if present.
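A minimal retry loop with exponential backoff and full jitter might look like this (a sketch; `call` is a placeholder for your request function returning a status code and body):

```python
import random
import time


def with_retries(call, max_retries: int = 5, base: float = 1.0, cap: float = 60.0):
    """Retry a throttled SP-API call with exponential backoff and full jitter."""
    for attempt in range(max_retries):
        status, body = call()
        if status not in (429, 503):
            return status, body
        # Full jitter: sleep a random amount up to min(cap, base * 2**attempt).
        time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
    raise RuntimeError("still throttled after retries")
```

In production you would also read the x-amzn-RateLimit-Limit and Retry-After headers before choosing a delay.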

Pagination quirks

Many list GETs use nextToken (or NextToken) in the payload to retrieve subsequent pages. Pass nextToken as query parameter in the next GET. Some APIs return results under payload.* arrays (e.g., payload.orders) — always inspect the payload wrapper.
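The nextToken loop can be sketched as follows (`get_page` is a hypothetical helper that performs one signed GET and returns the decoded JSON response):

```python
def iter_pages(get_page, params, items_key="orders"):
    """Yield items from successive pages, following payload.nextToken."""
    while True:
        payload = get_page(params).get("payload", {})
        yield from payload.get(items_key, [])  # results sit under the payload wrapper
        token = payload.get("nextToken") or payload.get("NextToken")
        if not token:
            break
        # Subsequent pages are requested with the NextToken query parameter.
        params = {"NextToken": token}
```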

Common error response

Typical error JSON: {"errors":[{"code":"InvalidInput","message":"description","details":"..."}]} — check error code and details for remediation.
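A small helper can pull the code/message pairs out of that envelope for logging (a sketch):

```python
import json


def extract_errors(body: str) -> list:
    """Return (code, message) pairs from an SP-API error response body."""
    return [
        (err.get("code", ""), err.get("message", ""))
        for err in json.loads(body).get("errors", [])
    ]
```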

Also verify endpoint paths and path/query parameters to avoid 404 Not Found errors.


Next steps

Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:

  • data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
  • dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
```sh
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```

Need more dlt context for Amazon selling partner?

Request dlt skills, commands, AGENT.md files, and AI-native context.