Load Rhythm Rolodex data in Python using dltHub

Build a Rhythm Rolodex-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.

In this guide, we'll set up a complete Rhythm Software Membership data pipeline, from API credentials to your first data load, in just 10 minutes. You'll end up with a fully declarative Python pipeline built on dlt's REST API connector, like the partial example below:

Example code
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def rhythm_rolodex_source(access_token=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://membership.api.rhythmsoftware.com/",
            "auth": {
                "type": "bearer",
                "token": access_token,
            },
        },
        "resources": [
            "fees/{tenantId}",
            "gifts/{tenantId}",
            "funds/{tenantId}",
        ],
    }
    [...]
    yield from rest_api_resources(config)


def get_data() -> None:
    # Connect to destination
    pipeline = dlt.pipeline(
        pipeline_name='rhythm_rolodex_pipeline',
        destination='duckdb',
        dataset_name='rhythm_rolodex_data',
    )

    # Load the data
    load_info = pipeline.run(rhythm_rolodex_source())
    print(load_info)

Why use dlt to generate Python pipelines?

  • Accelerate pipeline development with AI-native context
  • Debug pipelines, validate schemas and data with the integrated Pipeline Dashboard
  • Build Python notebooks for end users of your data
  • Low maintenance thanks to schema evolution with type inference, built-in resilience, and self-documenting REST API connectors; the shallow learning curve makes the pipeline easy for any team member to extend
  • dlt is the tool of choice for Pythonic Iceberg Lakehouses, bringing mature data loading to Iceberg with or without catalogs

What you’ll do

We’ll show you how to generate a readable and easily maintainable Python script that fetches data from Rhythm Rolodex's API and loads it into Iceberg, DataFrames, files, or a database of your choice (see the destination-swapping sketch after the endpoint list). Here are some of the endpoints you can load:

  • Fees: Manage fees related to tenants.
  • Gifts: Handle gift-related information for tenants.
  • Funds: Access and manage fund details for tenants.
  • Exams: Manage exam details for tenants.
  • Rooms: Access room information for tenants.
  • Carts: Handle shopping cart details for tenants.
  • Tasks: Manage tasks associated with tenants.
  • Types: Access various types related to tenants.
  • AddOns: Manage add-ons available for tenants.
  • Events: Handle events related to tenants.
  • Donors: Access donor information for tenants.
  • Awards: Manage awards related to tenants.
  • Venues: Access venue details for tenants.
  • Orders: Handle orders made by tenants.
  • Stores: Access store information for tenants.
  • Notices: Manage notices associated with tenants.
  • Pledges: Handle pledges made by tenants.
  • Credits: Access credit information for tenants.
  • Resumes: Manage resume submissions for tenants.
  • Coupons: Handle coupon information for tenants.
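
Switching destinations is a one-line change when creating the pipeline. Below is a minimal sketch, assuming the rhythm_rolodex_source() from the example above and a dlt version with Iceberg table format support on the filesystem destination; the bucket location is configured outside the code:

import dlt

# Minimal destination-swapping sketch, assuming the rhythm_rolodex_source()
# defined earlier. The filesystem bucket_url would live in .dlt/config.toml
# or secrets.toml rather than being hardcoded here.
pipeline = dlt.pipeline(
    pipeline_name='rhythm_rolodex_pipeline',
    destination='filesystem',  # or 'duckdb', 'postgres', 'bigquery', ...
    dataset_name='rhythm_rolodex_data',
)

# Write the same resources as Iceberg tables on object storage
load_info = pipeline.run(rhythm_rolodex_source(), table_format='iceberg')
print(load_info)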

You will then debug the Rhythm Software Membership pipeline using our Pipeline Dashboard tool to ensure it is copying the data correctly, before building a Notebook to explore your data and build reports.

Setup & steps to follow

💡

Before getting started, set up a virtual environment (instructions) and install the dlt workspace:

uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"

Now you're ready to get started!

  1. Install the dlt AI Workbench

    Configure the workbench for your coding assistant:

    dlt ai init --agent <your-agent> # <agent>: claude | cursor | codex

    This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent.

    Learn more about the dltHub AI Workbench and setup details for each assistant →

  2. Install the rest-api-pipeline toolkit

    The AI Workbench provides different toolkits for each phase of the data engineering lifecycle. To start you need to install the rest-api-pipeline toolkit:

    dlt ai toolkit rest-api-pipeline install

    This loads the dlt skills and context the agent uses to build the pipeline iteratively, efficiently, and safely. Importantly, the agent does not need to ask you for credentials directly: in dlt, API credentials are provided via a secrets.toml file (learn more about secrets management →), and the agent should use the MCP tools to inspect their shape and detect misconfigurations. It never needs to read the file directly.
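
    For orientation, here is a minimal sketch of what .dlt/secrets.toml might contain for this source. The section and key names are assumptions based on the access_token argument of rhythm_rolodex_source, and the value is a placeholder:

    # .dlt/secrets.toml (placeholder values; never commit this file)
    [sources.rhythm_rolodex]
    access_token = "<your-rhythm-api-token>"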

    Learn more about the rest-api-pipeline toolkit →

  3. Start LLM-assisted coding

    Here's a prompt to get you started:

    Prompt
    Use /find-source to load data from the Rhythm Rolodex API into DuckDB.

    The AI Workbench rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and then follows a structured workflow to scaffold, debug, and validate the pipeline step by step.

  4. View the result

    After the rest-api-pipeline workflow has finished, you will end up with a working REST API source with validated endpoints and a pipeline that writes data into a local dataset you have inspected and verified.

    > python rhythm_rolodex_pipeline.py
    Pipeline rhythm_rolodex load step completed in 0.26 seconds
    1 load package(s) were loaded to destination duckdb and into dataset rhythm_rolodex_data
    The duckdb destination used duckdb:/rhythm_rolodex.duckdb location to store data
    Load package 1749667187.541553 is LOADED and contains no failed jobs

    Launch the Pipeline Dashboard to inspect the pipeline and the loaded data:

    dlt pipeline rhythm_rolodex_pipeline show

    • Pipeline overview: state and load metrics
    • Data schema: tables, columns, types, and hints
    • Queries against the loaded data itself (a programmatic sketch follows below)
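
    To read the data programmatically, here is a minimal sketch using dlt's dataset access; "fees" is an assumed table name, and the actual names depend on the endpoints you loaded:

    import dlt

    # Attach to the existing pipeline by name and read back the loaded data.
    # "fees" is an assumed table name; check the dashboard (or the printed
    # schema below) for the tables your pipeline actually created.
    pipeline = dlt.pipeline(pipeline_name='rhythm_rolodex_pipeline')
    dataset = pipeline.dataset()

    print(list(dataset.schema.tables.keys()))  # tables dlt inferred
    df = dataset['fees'].df()                  # one table as a pandas DataFrame
    print(df.head())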

Running into errors?

Access tokens are valid for 24 hours, so cache them securely and reuse them rather than requesting a new one per call. Applications are limited to a maximum of 31 M2M access tokens per month: avoid creating a new token for each user login, and make sure token requests are not tied directly to user interactions. You may also need to throttle API calls to avoid request limit errors.
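
Here is a minimal token-caching sketch; fetch_new_token() is a hypothetical placeholder for however your identity provider issues M2M tokens, and the cache path is an assumption:

import json
import time
from pathlib import Path

CACHE = Path('.token_cache.json')  # assumed cache location
TOKEN_TTL = 24 * 60 * 60           # tokens are valid for 24 hours


def fetch_new_token() -> str:
    # Hypothetical placeholder: request an M2M access token from your
    # identity provider here.
    raise NotImplementedError


def get_access_token() -> str:
    # Reuse the cached token while it is still fresh, so the application
    # stays well under the monthly token limit.
    if CACHE.exists():
        cached = json.loads(CACHE.read_text())
        if time.time() - cached['issued_at'] < TOKEN_TTL:
            return cached['token']
    token = fetch_new_token()
    CACHE.write_text(json.dumps({'token': token, 'issued_at': time.time()}))
    return token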

Next steps

You can go to the next phases of your data engineering journey by handing over to other toolkits of the dltHub AI Workbench:

  • data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
  • dlthub-runtime — Deploy, schedule, and monitor your pipeline in production
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install

Need more dlt context for Rhythm Rolodex?

Request dlt skills, commands, AGENT.md files, and AI-native context.