RSSHub Python API Docs | dltHub
Build a RSSHub-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
RSSHub is an open-source RSS feed generator that converts web content into RSS feeds. It supports thousands of websites and platforms, and can be self-hosted on platforms like Railway and Hostinger. The REST API base URL is https://rsshub.app. There is no global API token by default: protected routes use HTTP Basic Auth, and routes that wrap third-party services use per-service credentials configured as environment variables.
dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dlthub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading RSSHub data in under 10 minutes.
What data can I load from RSSHub?
Here are some of the endpoints you can load from RSSHub:
| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| rss_feed | {base_url}/{route} | GET | (top-level XML) or items | Generate RSS XML for a given route (e.g. /youtube/channel/:id) |
| rss_feed_json | {base_url}/{route}.json | GET | items | Same feed as JSON; items contains the list of feed entries |
| routes | {base_url}/routes | GET | routes | List available routes (documentation/homepage route listing) |
| protected_routes | {base_url}/protected/rsshub/routes | GET | (top-level HTML/JSON) | Protected route listing (requires HTTP Basic Auth) |
| status | {base_url}/status | GET | (top-level) | Instance status / health endpoint |
| example_youtube_channel | {base_url}/youtube/channel/:id | GET | items | YouTube channel uploads feed (JSON via .json) |
| example_github_releases | {base_url}/github/release/:user/:repo | GET | items | GitHub releases feed |
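The `.json` feed endpoints above return a top-level object whose `items` key holds the feed entries, which is why the table lists `items` as the data selector. A minimal sketch of applying that selector, using a hand-written sample payload rather than a live request (the field names inside each item are illustrative assumptions, not a guaranteed schema):

```python
# Sample payload shaped like a {route}.json response from RSSHub.
sample_feed = {
    "version": "https://jsonfeed.org/version/1",
    "title": "Example releases",
    "items": [
        {"id": "1", "title": "v1.0.0", "url": "https://example.com/v1.0.0"},
        {"id": "2", "title": "v1.1.0", "url": "https://example.com/v1.1.0"},
    ],
}

def extract_items(feed: dict) -> list:
    """Apply the 'items' data selector from the resource table."""
    return feed.get("items", [])

titles = [item["title"] for item in extract_items(sample_feed)]
print(titles)  # ['v1.0.0', 'v1.1.0']
```

This is the same selection dlt's REST API source performs for you when `data_selector` is set to `items`.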
How do I authenticate with the RSSHub API?
RSSHub does not require API keys for public routes. Authentication is accomplished via instance configuration and environment variables. Protected routes may require HTTP Basic Auth (username:password in URL) or an ACCESS_KEY/access code; many routes that access third-party services require provider credentials set as environment variables (e.g., GITHUB_ACCESS_TOKEN, TWITTER_* tokens, YOUTUBE_KEY).
1. Get your credentials
- For per-service credentials (e.g., GitHub, YouTube, Twitter) create the required API key/token on the provider’s developer portal.
- On your RSSHub instance, set the corresponding environment variable(s) in the .env or container config (e.g., GITHUB_ACCESS_TOKEN=xxx, YOUTUBE_KEY=xxx, TWITTER_CONSUMER_KEY=xxx).
- Restart RSSHub so the environment variables take effect.
2. Add them to .dlt/secrets.toml
```toml
[sources.rsshub_source]
api_key = "your_service_api_key_here"
```
dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
How do I set up and run the pipeline?
Set up a virtual environment and install dlt:
```sh
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```
1. Install the dlt AI Workbench:
```sh
dlt ai init --agent <your-agent>  # <agent>: claude | cursor | codex
```
This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →
2. Install the rest-api-pipeline toolkit:
```sh
dlt ai toolkit rest-api-pipeline install
```
This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →
3. Start LLM-assisted coding:
Use /find-source to load data from the RSSHub API into DuckDB.
The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. Run the pipeline:
```sh
python rsshub_pipeline.py
```
If everything is configured correctly, you'll see output like this:
```text
Pipeline rsshub_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset rsshub_data
The duckdb destination used duckdb:/rsshub.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```
Inspect your pipeline and data:
```sh
dlt pipeline rsshub_pipeline show
```
This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.
Python pipeline example
This example loads rss_feed and rss_feed_json from the RSSHub API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:
```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def rsshub_source(api_key=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://rsshub.app",
            "auth": {
                "type": "api_key",
                "api_key": api_key,
            },
        },
        "resources": [
            {
                "name": "rss_feed",
                "endpoint": {
                    "path": "youtube/channel/:id",  # replace :id with a real channel ID
                    "data_selector": "items",
                },
            },
            {
                "name": "rss_feed_json",
                "endpoint": {
                    "path": "github/release/:user/:repo.json",  # replace :user/:repo
                    "data_selector": "items",
                },
            },
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="rsshub_pipeline",
        destination="duckdb",
        dataset_name="rsshub_data",
    )
    load_info = pipeline.run(rsshub_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```
To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
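For example, the routes and status endpoints from the table could be added as entries like these (a sketch of the config dicts only; the selectors follow the resource table above and may need adjusting against a live instance):

```python
# Extra resource entries mirroring the resource table; append these to the
# "resources" list in the RESTAPIConfig. The data_selector values are taken
# from the table, not from a verified schema.
extra_resources = [
    {"name": "routes", "endpoint": {"path": "routes", "data_selector": "routes"}},
    {"name": "status", "endpoint": {"path": "status"}},  # top-level payload, no selector
]

for resource in extra_resources:
    print(resource["name"], "->", resource["endpoint"]["path"])
```

Each entry becomes its own table in the destination dataset once the pipeline runs.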
How do I query the loaded data?
Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.
Python (pandas DataFrame):
```python
import dlt

data = dlt.pipeline("rsshub_pipeline").dataset()
rss_feed_df = data.rss_feed.df()
print(rss_feed_df.head())
```
SQL (DuckDB example):
```sql
SELECT * FROM rsshub_data.rss_feed LIMIT 10;
```
In a marimo or Jupyter notebook:
```python
import dlt

data = dlt.pipeline("rsshub_pipeline").dataset()
data.rss_feed.df().head()
```
See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.
What destinations can I load RSSHub data to?
dlt supports loading into any of these destinations — only the destination parameter changes:
| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |
Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
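Since only the destination argument changes, switching destinations can be parameterized. A small sketch (the helper name is hypothetical, and credentials for non-DuckDB destinations still belong in .dlt/secrets.toml):

```python
# Destination short names from the table above.
SUPPORTED_DESTINATIONS = {
    "duckdb", "postgres", "bigquery", "snowflake",
    "redshift", "databricks", "filesystem",
}

def pipeline_kwargs(destination: str = "duckdb") -> dict:
    """Build the keyword arguments passed to dlt.pipeline(...)."""
    if destination not in SUPPORTED_DESTINATIONS:
        raise ValueError(f"unsupported destination: {destination}")
    return {
        "pipeline_name": "rsshub_pipeline",
        "destination": destination,
        "dataset_name": "rsshub_data",
    }

print(pipeline_kwargs("snowflake")["destination"])  # snowflake
```

You would then call `dlt.pipeline(**pipeline_kwargs("snowflake"))` instead of hardcoding the destination string.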
Next steps
Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:
- data-exploration: Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- dlthub-runtime: Deploy, schedule, and monitor your pipeline in production.
```sh
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```