
Python Guide: Loading Data from Rest API to Redshift using dlt

About our rest_api verified source

This example demonstrates how to use the rest_api verified source to retrieve data from the GitHub REST API, but it will work with any HTTP REST API.

Need help deploying these pipelines, or figuring out how to run them in your data stack?

Join our Slack community or book a call with our support engineer Violetta.

This page provides technical documentation for using the dlt library to load data from any HTTP REST API into Amazon Redshift. Redshift is a cloud-based data warehouse service that can handle data volumes from a few hundred gigabytes to over a petabyte. The dlt library is an open-source Python tool designed to facilitate data loading. The rest_api verified source, supported by dlt, enables data extraction from any HTTP REST API. Detailed information about this source can be found in the dlt documentation for the rest_api source.
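
To give a sense of how the pieces fit together before walking through the setup, here is a minimal sketch: a rest_api source configuration is passed to a dlt pipeline that writes to Redshift. It assumes you have already run dlt init rest_api redshift (so the rest_api module is importable) and configured Redshift credentials in .dlt/secrets.toml; the pipeline and dataset names are arbitrary examples.

import dlt
from rest_api import rest_api_source  # available after `dlt init rest_api redshift`

# A single public endpoint, loaded into Redshift
source = rest_api_source(
    {
        "client": {"base_url": "https://pokeapi.co/api/v2/"},
        "resources": ["pokemon"],
    }
)

pipeline = dlt.pipeline(
    pipeline_name="rest_api_minimal",  # example name
    destination="redshift",
    dataset_name="rest_api_data",      # example dataset (schema) name
)
print(pipeline.run(source))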

dlt Key Features

  • Amazon Redshift: dlt provides seamless integration with Amazon Redshift, enabling easy setup and data loading. It also supports the initialization of a new dlt project with Redshift as the destination.

  • Governance Support: dlt pipelines offer robust governance support through three key mechanisms: pipeline metadata utilization, schema enforcement and curation, and schema change alerts.

  • Tutorial: A comprehensive guide on how to efficiently use dlt to build a data pipeline. The tutorial introduces foundational concepts of dlt and guides you through basic and advanced usage scenarios.

  • Snowflake Authentication Types: dlt supports multiple authentication types for Snowflake destination including password authentication, key pair authentication, and external authentication, providing flexible and secure options for your data pipeline.

  • Getting Started with dlt: Get started with dlt through a quick introductory guide, a Google Colab demo, a tutorial on building a data pipeline, and a collection of how-to guides for common use cases. Join the dlt community for further discussions and support.

Getting started with your pipeline locally

0. Prerequisites

dlt requires Python 3.8 or higher. Additionally, you need to have the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.

1. Install dlt

First you need to install the dlt library with the correct extras for Redshift:

pip install "dlt[redshift]"

The dlt cli has a useful command to get you started with any combination of source and destination. For this example, we want to load data from Rest API to Redshift. You can run the following commands to create a starting point for loading data from Rest API to Redshift:

# create a new directory
mkdir rest_api_pipeline
cd rest_api_pipeline
# initialize a new pipeline with your source and destination
dlt init rest_api redshift
# install the required dependencies
pip install -r requirements.txt

The last command will install the required dependencies for your pipeline. The dependencies are listed in the requirements.txt:

dlt[redshift]>=0.4.11

You now have the following folder structure in your project:

rest_api_pipeline/
├── .dlt/
│   ├── config.toml          # configs for your pipeline
│   └── secrets.toml         # secrets for your pipeline
├── rest_api/                # folder with source specific files
│   └── ...
├── rest_api_pipeline.py     # your main pipeline script
├── requirements.txt         # dependencies for your pipeline
└── .gitignore               # ignore files for git (not required)

2. Configuring your source and destination credentials

The dlt cli will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline. The automatically created versions of these files look like this:

generated config.toml

# put your configuration values here

[runtime]
log_level="WARNING" # the system log level of dlt
# use the dlthub_telemetry setting to enable/disable anonymous usage data reporting, see https://dlthub.com/docs/telemetry
dlthub_telemetry = true

generated secrets.toml

# put your secret values and credentials here. do not share this file and do not push it to github

[sources.rest_api]
github_token = "github_token" # please set me up!

[destination.redshift]
dataset_name = "dataset_name" # please set me up!

[destination.redshift.credentials]
database = "database" # please set me up!
password = "password" # please set me up!
username = "username" # please set me up!
host = "host" # please set me up!
port = 5439
connect_timeout = 15
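
If you prefer not to store credentials in secrets.toml (for example on CI), dlt can resolve the same values from environment variables: TOML sections become double-underscore-separated, upper-cased variable names. A sketch with placeholder values, set in Python before the pipeline runs (you can equally export them in your shell):

import os

# Placeholder values, equivalent to the secrets.toml entries above
os.environ["SOURCES__REST_API__GITHUB_TOKEN"] = "github_token"
os.environ["DESTINATION__REDSHIFT__CREDENTIALS__DATABASE"] = "database"
os.environ["DESTINATION__REDSHIFT__CREDENTIALS__USERNAME"] = "username"
os.environ["DESTINATION__REDSHIFT__CREDENTIALS__PASSWORD"] = "password"
os.environ["DESTINATION__REDSHIFT__CREDENTIALS__HOST"] = "host"
os.environ["DESTINATION__REDSHIFT__CREDENTIALS__PORT"] = "5439"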

2.1. Adjust the generated code to your use case

Further help setting up your source and destination:
  • Read more about setting up the Rest API source in our docs.
  • Read more about setting up the Redshift destination in our docs.
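
As an example of such an adjustment, the sketch below trims the generated GitHub configuration down to a single endpoint. The endpoint name releases and the replace write disposition are illustrative choices, not part of the generated code; any GitHub list endpoint and write disposition will work, and base_url can point at any repository.

from typing import Any

import dlt
from rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def my_github_source(github_token: str = dlt.secrets.value) -> Any:
    # Same shape as the generated config, trimmed to one endpoint
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://api.github.com/repos/dlt-hub/dlt/",
            "auth": {"type": "bearer", "token": github_token},
        },
        "resource_defaults": {"write_disposition": "replace"},
        "resources": [
            {"name": "releases", "endpoint": {"path": "releases"}},
        ],
    }
    yield from rest_api_resources(config)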

3. Running your pipeline for the first time

The dlt cli has also created a main pipeline script for you at rest_api_pipeline.py, as well as a folder rest_api that contains additional Python files for your source. These files are your local copies, which you can modify to fit your needs. In some cases you may only need to make small changes or add some configuration; in other cases they serve as a working starting point that you will need to adjust to do what you need.

The main pipeline script will look something like this:


from typing import Any

import dlt
from rest_api import (
    RESTAPIConfig,
    check_connection,
    rest_api_source,
    rest_api_resources,
)


@dlt.source
def github_source(github_token: str = dlt.secrets.value) -> Any:
    # Create a REST API configuration for the GitHub API
    # Use RESTAPIConfig to get autocompletion and type checking
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://api.github.com/repos/dlt-hub/dlt/",
            "auth": {
                "type": "bearer",
                "token": github_token,
            },
        },
        # The default configuration for all resources and their endpoints
        "resource_defaults": {
            "primary_key": "id",
            "write_disposition": "merge",
            "endpoint": {
                "params": {
                    "per_page": 100,
                },
            },
        },
        "resources": [
            # This is a simple resource definition,
            # that uses the endpoint path as a resource name:
            # "pulls",
            # Alternatively, you can define the endpoint as a dictionary
            # {
            #     "name": "pulls",     # <- Name of the resource
            #     "endpoint": "pulls", # <- This is the endpoint path
            # }
            # Or use a more detailed configuration:
            {
                "name": "issues",
                "endpoint": {
                    "path": "issues",
                    # Query parameters for the endpoint
                    "params": {
                        "sort": "updated",
                        "direction": "desc",
                        "state": "open",
                        # Define `since` as a special parameter
                        # to incrementally load data from the API.
                        # This works by getting the updated_at value
                        # from the previous response data and using this value
                        # for the `since` query parameter in the next request.
                        "since": {
                            "type": "incremental",
                            "cursor_path": "updated_at",
                            "initial_value": "2024-01-25T11:21:28Z",
                        },
                    },
                },
            },
            # The following is an example of a resource that uses
            # a parent resource (`issues`) to get the `issue_number`
            # and include it in the endpoint path:
            {
                "name": "issue_comments",
                "endpoint": {
                    # The placeholder {issue_number} will be resolved
                    # from the parent resource
                    "path": "issues/{issue_number}/comments",
                    "params": {
                        # The value of `issue_number` will be taken
                        # from the `number` field in the `issues` resource
                        "issue_number": {
                            "type": "resolve",
                            "resource": "issues",
                            "field": "number",
                        }
                    },
                },
                # Include data from `id` field of the parent resource
                # in the child data. The field name in the child data
                # will be called `_issues_id` (_{resource_name}_{field_name})
                "include_from_parent": ["id"],
            },
        ],
    }

    yield from rest_api_resources(config)


def load_github() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="rest_api_github",
        destination='redshift',
        dataset_name="rest_api_data",
    )

    load_info = pipeline.run(github_source())
    print(load_info)


def load_pokemon() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="rest_api_pokemon",
        destination='redshift',
        dataset_name="rest_api_data",
    )

    pokemon_source = rest_api_source(
        {
            "client": {
                "base_url": "https://pokeapi.co/api/v2/",
                # If you leave out the paginator, it will be inferred from the API:
                # paginator: "json_response",
            },
            "resource_defaults": {
                "endpoint": {
                    "params": {
                        "limit": 1000,
                    },
                },
            },
            "resources": [
                "pokemon",
                "berry",
                "location",
            ],
        }
    )

    def check_network_and_authentication() -> None:
        (can_connect, error_msg) = check_connection(
            pokemon_source,
            "not_existing_endpoint",
        )
        if not can_connect:
            pass  # do something with the error message

    check_network_and_authentication()

    load_info = pipeline.run(pokemon_source)
    print(load_info)


if __name__ == "__main__":
    load_github()
    load_pokemon()

Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:

python rest_api_pipeline.py

4. Inspecting your load result

You can now inspect the state of your pipeline with the dlt cli:

dlt pipeline rest_api_github info

You can also use Streamlit to inspect the contents of your Redshift destination (a Python alternative using the pipeline's SQL client is sketched after these commands):

# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline rest_api_github show
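
The following sketch queries the loaded data directly from Python through the pipeline's SQL client. It recreates the pipeline object with the same name, destination, and dataset used in the script above; the queried columns (title, number) are fields of the GitHub issues endpoint and will differ for other APIs.

import dlt

# Recreate the pipeline object; dlt restores its local state and credentials
pipeline = dlt.pipeline(
    pipeline_name="rest_api_github",
    destination="redshift",
    dataset_name="rest_api_data",
)

# Run a query against the tables created in Redshift
with pipeline.sql_client() as client:
    rows = client.execute_sql("SELECT title, number FROM issues LIMIT 5")
    for row in rows:
        print(row)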

5. Next steps to get your pipeline running in production

One of the beauties of dlt is that it is just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:

The Deploy section will show you how to deploy your pipeline:

  • Deploy with Github Actions: dlt provides an easy way to deploy your pipeline using Github Actions. This CI/CD runner is versatile and essentially free to use.
  • Deploy with Airflow: If you prefer using Airflow for deployment, dlt has you covered. Check out how to deploy a pipeline with Airflow and Google Composer here.
  • Deploy with Google Cloud Functions: dlt also supports deployment with Google Cloud Functions. Learn more about the process here.
  • Other Deployment Options: dlt offers a variety of other deployment options to suit your specific needs. Find more information about these options here.

The running in production section will teach you about:

  • Monitor Your Pipeline: dlt provides a comprehensive guide on how to monitor your data pipeline. This includes checking the status of your pipeline, inspecting the load info and trace, and saving the load info; a short sketch of this follows the list. Learn more from the Monitoring Guide.
  • Set Up Alerts: Stay informed about your pipeline's performance and any potential issues by setting up alerts. dlt offers a detailed guide on how to set up alerts, including setting up alerting on schema changes. Check out the Alerting Guide for more information.
  • Set Up Tracing: dlt offers a tracing feature that provides timing information on extract, normalize, and load steps. It also provides all the config and secret values with full information from where they were obtained. Learn how to set up tracing by visiting the Tracing Guide.
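
As a starting point for monitoring, this sketch runs the GitHub source defined in rest_api_pipeline.py, fails loudly if any load job failed, and prints the load info and the trace of the last run, which are the pieces you would typically feed into your own monitoring and alerting.

import dlt
from rest_api_pipeline import github_source

pipeline = dlt.pipeline(
    pipeline_name="rest_api_github",
    destination="redshift",
    dataset_name="rest_api_data",
)

load_info = pipeline.run(github_source())

# Abort (e.g. in a scheduled job) if any load package contains failed jobs
load_info.raise_on_failed_jobs()

# Load info and the last trace carry package status and step timings,
# the raw material for monitoring and alerting
print(load_info)
print(pipeline.last_trace)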

Available Sources and Resources

For this verified source, the following sources and resources are available:

Source github_source

"Rest API Source for GitHub, providing detailed data on issues and related comments."

Resource Name  | Write Disposition | Description
issue_comments | merge             | Contains information about the issue comments, including the author, body of the comment, created date, and user details, among other data.
issues         | merge             | Contains information about the issues, including the assignee details, author, body of the issue, comments, created date, and user details, among other data.

Additional pipeline guides

This demo works on Codespaces. Codespaces is a development environment available for free to anyone with a GitHub account. You'll be asked to fork the demo repository, and from there the README guides you through the further steps.
The demo uses the Continue VSCode extension.

Off to codespaces!
