Rootly Python API Docs | dltHub

Build a Rootly-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.

Rootly is an incident management and on‑call orchestration platform that exposes a JSON:API‑compliant REST API for programmatic access to incidents, alerts, schedules, users, services and related resources. The REST API base URL is https://api.rootly.com/v1 and all requests require a Bearer token for authentication.

dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading Rootly data in under 10 minutes.


What data can I load from Rootly?

Here are some of the endpoints you can load from Rootly:

| Resource | Endpoint | Method | Data selector | Description |
| --- | --- | --- | --- | --- |
| incidents | /v1/incidents | GET | data | List incidents (collection responses follow JSON:API; records are in data) |
| alerts | /v1/alerts | GET | data | List alerts attached to the organization (JSON:API collection) |
| users | /v1/users | GET | data | List users (JSON:API collection) |
| services | /v1/services | GET | data | List services (JSON:API collection) |
| schedules | /v1/schedules | GET | data | List on-call schedules (JSON:API collection) |
| teams | /v1/teams | GET | data | List teams (JSON:API collection) |
| heartbeats | /v1/heartbeats | GET | data | List heartbeats (JSON:API collection) |
| dashboards | /v1/dashboards | GET | data | List dashboards (JSON:API collection) |
| authorizations | /v1/authorizations | GET | data | List authorizations (JSON:API collection) |
| any single resource | /v1//:id | GET | data | Retrieve a single resource; top-level data contains the object |

How do I authenticate with the Rootly API?

Requests must include an Authorization header with a Bearer token (Authorization: Bearer <token>) and use the JSON:API media type Content-Type: application/vnd.api+json. All requests are made over HTTPS.
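For a quick smoke test outside dlt, the same header pair can be exercised with the requests library. This is a minimal sketch: the token value is a placeholder, and the page[size] parameter follows JSON:API conventions.

```python
import requests

ROOTLY_API = "https://api.rootly.com/v1"
TOKEN = "your_rootly_bearer_token_here"  # placeholder; use your real key

# Every Rootly request carries the Bearer token and the JSON:API media type.
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/vnd.api+json",
}

def list_incidents(page_size: int = 20) -> list:
    """Fetch the first page of incidents; records live in the top-level data array."""
    resp = requests.get(
        f"{ROOTLY_API}/incidents",
        headers=HEADERS,
        params={"page[size]": page_size},
    )
    resp.raise_for_status()
    return resp.json()["data"]
```

Calling list_incidents() returns the JSON:API data array on success and raises on a 401 if the token is invalid.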

1. Get your credentials

  1. In the Rootly web app, open the organization dropdown.
  2. Go to Organization Settings.
  3. Open API Keys.
  4. Click Generate New API Key and choose Global, Team, or Personal scope depending on required permissions.
  5. Copy the generated token and store it securely (it is presented once).

2. Add them to .dlt/secrets.toml

```toml
[sources.rootly_incident_management_source]
token = "your_rootly_bearer_token_here"
```

dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
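The same secret can also come from an environment variable: dlt uppercases section names and joins them with double underscores, so the TOML key above maps to:

```shell
# Equivalent to the secrets.toml entry; environment variables take
# precedence over TOML files in dlt's config resolution.
export SOURCES__ROOTLY_INCIDENT_MANAGEMENT_SOURCE__TOKEN="your_rootly_bearer_token_here"
```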


How do I set up and run the pipeline?

Set up a virtual environment and install dlt:

```shell
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```

1. Install the dlt AI Workbench:

```shell
dlt ai init --agent <your-agent>   # <agent>: claude | cursor | codex
```

This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →

2. Install the rest-api-pipeline toolkit:

```shell
dlt ai toolkit rest-api-pipeline install
```

This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →

3. Start LLM-assisted coding:

Use /find-source to load data from the Rootly API into DuckDB.

The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.

4. Run the pipeline:

```shell
python rootly_incident_management_pipeline.py
```

If everything is configured correctly, you'll see output like this:

```text
Pipeline rootly_incident_management_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset rootly_incident_management_data
The duckdb destination used duckdb:/rootly_incident_management.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```

Inspect your pipeline and data:

```shell
dlt pipeline rootly_incident_management_pipeline show
```

This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.


Python pipeline example

This example loads incidents and alerts from the Rootly API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:

```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def rootly_incident_management_source(token=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://api.rootly.com/v1",
            "auth": {
                "type": "bearer",
                "token": token,
            },
        },
        "resources": [
            {"name": "incidents", "endpoint": {"path": "incidents", "data_selector": "data"}},
            {"name": "alerts", "endpoint": {"path": "alerts", "data_selector": "data"}},
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="rootly_incident_management_pipeline",
        destination="duckdb",
        dataset_name="rootly_incident_management_data",
    )
    load_info = pipeline.run(rootly_incident_management_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```

To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
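For example, extending the source with three more endpoints from the table is just more entries in the same shape. A sketch; append these dictionaries to the "resources" list in the source above:

```python
# Each resource follows the same name / path / data_selector pattern
# as the incidents and alerts entries already in the config.
extra_resources = [
    {"name": "services", "endpoint": {"path": "services", "data_selector": "data"}},
    {"name": "schedules", "endpoint": {"path": "schedules", "data_selector": "data"}},
    {"name": "teams", "endpoint": {"path": "teams", "data_selector": "data"}},
]
```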


How do I query the loaded data?

Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.

Python (pandas DataFrame):

```python
import dlt

data = dlt.pipeline("rootly_incident_management_pipeline").dataset()
incidents_df = data.incidents.df()
print(incidents_df.head())
```

SQL (DuckDB example):

```sql
SELECT * FROM rootly_incident_management_data.incidents LIMIT 10;
```

In a marimo or Jupyter notebook:

```python
import dlt

data = dlt.pipeline("rootly_incident_management_pipeline").dataset()
data.incidents.df().head()
```

See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.


What destinations can I load Rootly data to?

dlt supports loading into any of these destinations — only the destination parameter changes:

| Destination | Example value |
| --- | --- |
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |

Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.


Troubleshooting

Authentication failures

If you receive 401 Unauthorized, verify your Authorization header is exactly: Authorization: Bearer <token> and the token has appropriate scope (Global/Team/Personal). Ensure Content-Type is application/vnd.api+json.

Rate limits

Rootly enforces rate limits (default: 3000 GET/HEAD/OPTIONS requests per API key per minute; POST/PUT/PATCH/DELETE also default to 3000/min, and alert creation is limited to 50/min). When a limit is exceeded, the API returns 429 Too Many Requests with the body {"error":"Rate limit exceeded. Try again later."}. Check the X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, and X-RateLimit-Reset response headers.
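When a 429 does occur, the reset header indicates how long to back off. A minimal helper, assuming X-RateLimit-Reset carries a Unix timestamp (verify this against Rootly's actual header semantics before relying on it):

```python
import time

def seconds_until_reset(headers: dict) -> float:
    """Seconds to wait before retrying a 429 response.

    Assumes X-RateLimit-Reset is a Unix timestamp; returns 0.0 when the
    header is missing or already in the past.
    """
    reset = float(headers.get("X-RateLimit-Reset", 0))
    return max(0.0, reset - time.time())
```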

Pagination

Collection endpoints are paginated using JSON:API page parameters (page[number] and page[size]). All collection responses return records in the top‑level data array. Use the page parameters and respect deterministic sorting when iterating pages.
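dlt's rest_api source can usually detect this pagination automatically, but it can also be pinned explicitly with a page_number paginator. A sketch; the total_path value is an assumption about Rootly's response metadata and should be checked against a real response:

```python
# Explicit JSON:API page-number pagination for dlt's rest_api client config.
client_config = {
    "base_url": "https://api.rootly.com/v1",
    "paginator": {
        "type": "page_number",             # dlt's PageNumberPaginator
        "base_page": 1,                    # JSON:API pages start at 1
        "page_param": "page[number]",      # query parameter Rootly expects
        "total_path": "meta.total_pages",  # assumption: verify the actual field
    },
}
```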

Common error responses

  • 400 Bad Request – malformed request or invalid parameters.
  • 401 Unauthorized – missing or invalid Bearer token.
  • 403 Forbidden – token lacks permission for the requested resource.
  • 404 Not Found – resource ID does not exist.
  • 422 Unprocessable Entity – invalid record data (validation errors).
  • 429 Too Many Requests – rate limit exceeded (see Rate limits section).
  • 5xx Server errors – transient server error; retry with backoff.

In practice, most failures come down to an invalid API key (401 Unauthorized) or a mistyped endpoint path or parameter (404 Not Found), so check those first.


Next steps

Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:

  • data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
  • dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
```shell
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```
