
Load Data from CircleCI to Azure Synapse Using dlt in Python

Need help deploying these pipelines, or figuring out how to run them in your data stack?

Join our Slack community or book a call with our support engineer Violetta.

CircleCI is a leading continuous integration and continuous delivery (CI/CD) platform that automates the build, test, and deployment processes for software development. It helps teams deliver high-quality software faster by providing robust tools for automation, orchestration, and monitoring. CircleCI supports various programming languages and integrates with popular development tools, enabling seamless collaboration and efficient workflow management. Azure Synapse Analytics is a limitless analytics service that combines enterprise data warehousing and Big Data analytics. Using the open-source Python library dlt, you can efficiently load data from CircleCI to Azure Synapse. This documentation will guide you through the steps required to set up and manage this data pipeline. For further information on CircleCI, visit their website.

dlt Key Features

  • Fetching data from the GitHub API: Learn how to load data from the GitHub API into your pipeline. Read more
  • Data Loading Behaviors: Understand and manage data loading behaviors, including appending or replacing data. Read more
  • Incremental Loading: Incrementally load new data and deduplicate existing data to maintain efficiency (see the sketch after this list). Read more
  • Secure Handling of Secrets: Learn how to handle secrets securely within your data pipeline. Read more
  • Reusable Data Sources: Create configurable and reusable data sources to reduce code redundancy. Read more
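
If you want to see what incremental loading looks like before wiring it into this pipeline, the snippet below is a minimal, self-contained sketch using dlt's dlt.sources.incremental helper. The resource name events and the updated_at field are illustrative placeholders, not part of the generated CircleCI source.

import dlt

# Minimal sketch of incremental loading; "events" and "updated_at" are
# illustrative names, not resources from the CircleCI source.
@dlt.resource(primary_key="id", write_disposition="merge")
def events(
    updated_at=dlt.sources.incremental("updated_at", initial_value="2024-01-01T00:00:00Z")
):
    # In a real source you would fetch only records newer than
    # updated_at.last_value from the API; a static sample keeps this runnable.
    sample = [
        {"id": 1, "updated_at": "2024-02-01T00:00:00Z", "status": "success"},
        {"id": 2, "updated_at": "2024-03-01T00:00:00Z", "status": "failed"},
    ]
    yield from (r for r in sample if r["updated_at"] > updated_at.last_value)


if __name__ == "__main__":
    pipeline = dlt.pipeline(pipeline_name="incremental_demo", destination="duckdb", dataset_name="demo")
    print(pipeline.run(events()))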

Getting started with your pipeline locally

OpenAPI Source Generator dlt-init-openapi

This walkthrough makes use of the dlt-init-openapi generator CLI tool; you can read more about it here. The code generated by this tool uses the dlt rest_api verified source; the docs for it are here.

0. Prerequisites

dlt and dlt-init-openapi require Python 3.9 or higher. You also need the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.

1. Install dlt and dlt-init-openapi

First, install the dlt-init-openapi CLI tool.

pip install dlt-init-openapi

The dlt-init-openapi CLI is a generator that turns any OpenAPI spec into a dlt source for ingesting data from that API. The quality of the generated source depends on how well the API is designed and how accurate the OpenAPI spec is. You may need to tweak the generated code; you can learn more about this here.

# generate pipeline
# NOTE: add_limit adds a global limit, you can remove this later
# NOTE: you will need to select which endpoints to render, you
# can just hit Enter and all will be rendered.
dlt-init-openapi circleci --url https://raw.githubusercontent.com/dlt-hub/openapi-specs/main/open_api_specs/Business/circleci.yaml --global-limit 2
cd circleci_pipeline
# install generated requirements
pip install -r requirements.txt

The last command will install the required dependencies for your pipeline. The dependencies are listed in the requirements.txt:

dlt>=0.4.12

You now have the following folder structure in your project:

circleci_pipeline/
├── .dlt/
│   ├── config.toml       # configs for your pipeline
│   └── secrets.toml      # secrets for your pipeline
├── rest_api/             # the rest_api verified source
│   └── ...
├── circleci/
│   └── __init__.py       # TODO: possibly tweak this file
├── circleci_pipeline.py  # your main pipeline script
├── requirements.txt      # dependencies for your pipeline
└── .gitignore            # ignore files for git (not required)

1.1. Tweak circleci/__init__.py

This file contains the generated configuration of your rest_api source. You can continue with the next steps and leave it as is, but you might want to come back and adjust it if you need your rest_api source set up differently (a small example of such an adjustment follows the file). The generated file for the circleci source looks like this:


from typing import List

import dlt
from dlt.extract.source import DltResource
from rest_api import rest_api_source
from rest_api.typing import RESTAPIConfig


@dlt.source(name="circleci_source", max_table_nesting=2)
def circleci_source(
    api_key: str = dlt.secrets.value,
    base_url: str = dlt.config.value,
) -> List[DltResource]:

    # source configuration
    source_config: RESTAPIConfig = {
        "client": {
            "base_url": base_url,
            "auth": {
                "type": "api_key",
                "api_key": api_key,
                "name": "circle-token",
                "location": "query",
            },
            "paginator": {
                "type": "offset",
                "limit": 100,
                "offset_param": "offset",
                "limit_param": "limit",
                "total_path": "",
                "maximum_offset": 20,
            },
        },
        "resources": [
            # List the artifacts produced by a given build.
            {
                "name": "get_projectusernameprojectbuild_numartifacts",
                "table_name": "artifact",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/project/{username}/{project}/{build_num}/artifacts",
                    "params": {
                        "username": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "project": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "build_num": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                },
            },
            # Build summary for each of the last 30 builds for a single git repo.
            {
                "name": "get_projectusernameproject",
                "table_name": "build",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/project/{username}/{project}",
                    "params": {
                        "username": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "project": "FILL_ME_IN",  # TODO: fill in required path parameter
                        # the parameters below can optionally be configured
                        # "filter": "OPTIONAL_CONFIG",
                    },
                },
            },
            # Build summary for each of the last 30 recent builds, ordered by build_num.
            {
                "name": "get_recent_builds",
                "table_name": "build",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/recent-builds",
                },
            },
            # Full details for a single build. The response includes all of the fields from
            # the build summary. This is also the payload for the
            # [notification webhooks](/docs/configuration/#notify), in which case this object
            # is the value to a key named 'payload'.
            {
                "name": "get_projectusernameprojectbuild_num",
                "table_name": "build_detail",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/project/{username}/{project}/{build_num}",
                    "params": {
                        "username": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "project": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "build_num": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                },
            },
            # Lists the environment variables for :project
            {
                "name": "get_projectusernameprojectenvvar",
                "table_name": "envvar",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/project/{username}/{project}/envvar",
                    "params": {
                        "username": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "project": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                },
            },
            # Gets the hidden value of environment variable :name
            {
                "name": "get_projectusernameprojectenvvarname",
                "table_name": "envvar",
                "primary_key": "name",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/project/{username}/{project}/envvar/{name}",
                    "params": {
                        "name": {
                            "type": "resolve",
                            "resource": "get_projectusernameprojectenvvar",
                            "field": "name",
                        },
                        "username": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "project": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                },
            },
            # Lists checkout keys.
            {
                "name": "get_projectusernameprojectcheckout_key",
                "table_name": "key",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/project/{username}/{project}/checkout-key",
                    "params": {
                        "username": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "project": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                },
            },
            # Get a checkout key.
            {
                "name": "get_projectusernameprojectcheckout_keyfingerprint",
                "table_name": "key",
                "primary_key": "fingerprint",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/project/{username}/{project}/checkout-key/{fingerprint}",
                    "params": {
                        "fingerprint": {
                            "type": "resolve",
                            "resource": "get_projectusernameprojectcheckout_key",
                            "field": "fingerprint",
                        },
                        "username": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "project": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                },
            },
            # Provides information about the signed in user.
            {
                "name": "get_me",
                "table_name": "me",
                "endpoint": {
                    "data_selector": "all_emails",
                    "path": "/me",
                },
            },
            # List of all the projects you're following on CircleCI, with build information organized by branch.
            {
                "name": "get_projects",
                "table_name": "project",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/projects",
                },
            },
            # Provides test metadata for a build.
            # Note: [Learn how to set up your builds to collect test metadata](https://circleci.com/docs/test-metadata/)
            {
                "name": "get_projectusernameprojectbuild_numtests",
                "table_name": "test",
                "endpoint": {
                    "data_selector": "tests",
                    "path": "/project/{username}/{project}/{build_num}/tests",
                    "params": {
                        "username": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "project": "FILL_ME_IN",  # TODO: fill in required path parameter
                        "build_num": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                },
            },
        ],
    }

    return rest_api_source(source_config)
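
As an example of the kind of tweak you might make here, the sketch below keeps a single endpoint and supplies the required username and project path parameters from dlt config instead of hardcoding FILL_ME_IN values. The config keys username and project under [sources.circleci] are assumptions you would add to .dlt/config.toml yourself; this is not code produced by the generator.

from typing import List

import dlt
from dlt.extract.source import DltResource
from rest_api import rest_api_source
from rest_api.typing import RESTAPIConfig


@dlt.source(name="circleci_tweaked", max_table_nesting=2)
def circleci_tweaked(
    api_key: str = dlt.secrets.value,
    base_url: str = dlt.config.value,
    username: str = dlt.config.value,  # assumption: add username = "..." under [sources.circleci] in config.toml
    project: str = dlt.config.value,   # assumption: add project = "..." under [sources.circleci] in config.toml
) -> List[DltResource]:
    source_config: RESTAPIConfig = {
        "client": {
            "base_url": base_url,
            "auth": {"type": "api_key", "api_key": api_key, "name": "circle-token", "location": "query"},
        },
        "resources": [
            # Build summaries for one repo, with the path parameters taken from config.
            {
                "name": "get_projectusernameproject",
                "table_name": "build",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/project/{username}/{project}",
                    "params": {"username": username, "project": project},
                },
            },
        ],
    }
    return rest_api_source(source_config)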

2. Configuring your source and destination credentials

info

dlt-init-openapi will try to detect which authentication mechanism (if any) is used by the API in question and add a placeholder in your secrets.toml.

  • If you know your API needs authentication but none was detected, you can learn more about adding authentication to the rest_api here (a short sketch follows this note).
  • OAuth detection currently is not supported, but you can supply your own authentication mechanism as outlined here.
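
For example, if the detected query-string api_key scheme does not match your API, you could swap the generated auth block for one of the other rest_api authentication types. The snippet below is only a sketch of what a replacement "auth" dictionary in circleci/__init__.py could look like when using a bearer token; reusing the existing sources.circleci.api_key secret is an assumption made for illustration.

import dlt

# Sketch: an alternative "auth" block for the "client" section of the generated source,
# using the rest_api bearer-token scheme instead of the detected query-string api_key.
auth = {
    "type": "bearer",
    "token": dlt.secrets["sources.circleci.api_key"],  # assumption: reuse the same secret key
}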

The dlt cli will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline. The automatically created versions of these files look like this:

generated config.toml


[runtime]
log_level="INFO"

[sources.circleci]
# Base URL for the API
base_url = "https://circleci.com/api/v1"

generated secrets.toml


[sources.circleci]
# secrets for your circleci source
api_key = "FILL ME OUT" # TODO: fill in your credentials

2.1. Adjust the generated code to your use case

Further help setting up your source and destinations

At this time, the dlt-init-openapi cli tool will always create pipelines that load to a local duckdb instance. Switching to a different destination is trivial: all you need to do is change the destination parameter in circleci_pipeline.py to synapse and supply the credentials as outlined in the destination doc linked below (see the sketch after this list).

  • Read more about setting up the rest_api source in our docs.
  • Read more about setting up the Azure Synapse destination in our docs.
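
Concretely, pointing the pipeline at Azure Synapse is a one-line change to the dlt.pipeline() call in circleci_pipeline.py, as sketched below; the Synapse credentials themselves go into .dlt/secrets.toml as described in the destination documentation linked above.

import dlt

from circleci import circleci_source

# Sketch: the generated pipeline pointed at Azure Synapse instead of the default duckdb.
# Synapse credentials are expected in .dlt/secrets.toml (see the destination docs).
pipeline = dlt.pipeline(
    pipeline_name="circleci_pipeline",
    destination="synapse",
    dataset_name="circleci_data",
    progress="log",
)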

3. Running your pipeline for the first time

The dlt cli has also created a main pipeline script for you at circleci_pipeline.py, as well as a folder circleci that contains additional Python files for your source. These files are your local copies, which you can modify to fit your needs. In some cases you may only need to make small changes or add some configuration; in other cases the files serve as a working starting point that you will need to adjust further (a small example of such an adjustment follows the script below).

The main pipeline script will look something like this:


import dlt

from circleci import circleci_source


if __name__ == "__main__":
    pipeline = dlt.pipeline(
        pipeline_name="circleci_pipeline",
        destination='duckdb',
        dataset_name="circleci_data",
        progress="log",
        export_schema_path="schemas/export"
    )
    source = circleci_source()
    info = pipeline.run(source)
    print(info)
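
If you have not yet filled in the FILL_ME_IN path parameters in circleci/__init__.py, one small adjustment you can make is to run only the endpoints that need no extra input. The sketch below uses the source's with_resources helper with resource names taken from the generated file; it is a starting point, not the only way to do this.

import dlt

from circleci import circleci_source


if __name__ == "__main__":
    pipeline = dlt.pipeline(
        pipeline_name="circleci_pipeline",
        destination='duckdb',
        dataset_name="circleci_data",
    )
    # keep only the resources whose endpoints require no path parameters
    source = circleci_source().with_resources("get_recent_builds", "get_projects", "get_me")
    print(pipeline.run(source))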

Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:

python circleci_pipeline.py

4. Inspecting your load result

You can now inspect the state of your pipeline with the dlt cli:

dlt pipeline circleci_pipeline info

You can also use Streamlit to inspect the contents of your Azure Synapse destination:

# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline circleci_pipeline show
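
You can also query the loaded tables directly from Python. The sketch below attaches to the same pipeline and runs a query through its SQL client; it assumes you switched the destination to synapse and that the build table was loaded (for example from the recent-builds endpoint), so adjust the destination and table name to your setup.

import dlt

# Sketch: query the loaded data through the pipeline's SQL client.
pipeline = dlt.pipeline(
    pipeline_name="circleci_pipeline",
    destination="synapse",
    dataset_name="circleci_data",
)
with pipeline.sql_client() as client:
    with client.execute_query("SELECT count(*) FROM build") as cursor:
        print(cursor.fetchall())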

5. Next steps to get your pipeline running in production

One of the beauties of dlt is that it is just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:

The Deploy section will show you how to deploy your pipeline to:

  • Deploy with GitHub Actions: Learn how to deploy your dlt pipeline using GitHub Actions for automated workflows. Github Actions
  • Deploy with Airflow and Google Composer: Follow this guide to deploy your dlt pipeline with Airflow and Google Composer for managed workflow orchestration. Airflow
  • Deploy with Google Cloud Functions: Discover how to deploy your dlt pipeline using Google Cloud Functions for a serverless execution environment. Google cloud functions
  • Explore other deployment options: Check out additional guides and resources for deploying your dlt pipeline in various environments. and others...

The running in production section will teach you about:

  • How to Monitor your pipeline: Learn how to effectively monitor your dlt pipeline to ensure smooth operations and quickly identify any issues that may arise. How to Monitor your pipeline
  • Set up alerts: Configure alerts to get notified when something goes wrong with your dlt pipeline, allowing you to take immediate action. Set up alerts
  • Set up tracing: Implement tracing to get detailed insights into the execution of your dlt pipeline, including timing and configuration information. And set up tracing

Available Sources and Resources

For this verified source the following sources and resources are available

Source CircleCI

Loads CircleCI data on builds, environment variables, keys, artifacts, user info, and projects.

Resource Name | Write Disposition | Description
------------- | ----------------- | -----------
build | append | Represents a single build process, including status, duration, and outcome.
envvar | append | Environment variables used in the build process.
key | append | SSH keys associated with the project for secure access.
artifact | append | Files generated by the build process, such as logs or compiled binaries.
me | append | Information about the authenticated user.
build_detail | append | Detailed information about a specific build, including steps and logs.
project | append | Information about the projects configured in CircleCI, including settings and configurations.
test | append | Test results from the build process, including passed, failed, and skipped tests.

