Loading Data from CircleCI to Azure Cloud Storage with dlt in Python
Loading data from CircleCI to Azure Cloud Storage using dlt involves automating the transfer of build, test, and deployment data to Microsoft's cloud storage solution. CircleCI is a leading CI/CD platform that streamlines software development by automating build, test, and deployment processes. By integrating CircleCI with Azure Cloud Storage, you can create efficient data lakes using formats like JSONL, Parquet, or CSV. The open-source Python library dlt facilitates this integration, providing tools to manage and monitor the data transfer seamlessly. For more information on CircleCI, visit their website.
dlt Key Features
- Pipeline Metadata: dlt pipelines leverage metadata to provide governance capabilities, including load IDs for incremental transformations and data lineage. Learn more
- Schema Enforcement and Curation: Ensure data consistency and quality by enforcing and curating schemas that define the structure of normalized data. Learn more
- Schema Evolution: Get alerted to schema changes in source data, allowing proactive governance and impact analysis. Learn more
- Scaling and Finetuning: Utilize various configuration options to scale up and finetune pipelines, including parallel execution and memory buffer adjustments. Learn more
- Filesystem & Buckets: Store data in remote file systems and bucket storages like S3, Google Storage, or Azure Blob Storage, and build a data lake. Learn more
Getting started with your pipeline locally with dlt-init-openapi
0. Prerequisites
dlt and dlt-init-openapi require Python 3.9 or higher. Additionally, you need to have the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.
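For example, you can create and activate a virtual environment like this (shown for macOS/Linux; on Windows, activate with .venv\Scripts\activate):
# create and activate a virtual environment in the project folder
python -m venv .venv
source .venv/bin/activate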
1. Install dlt and dlt-init-openapi
First, you need to install the dlt-init-openapi CLI tool:
pip install dlt-init-openapi
The dlt-init-openapi CLI is a powerful generator that can turn any OpenAPI spec into a dlt source to ingest data from that API. The quality of the generated source depends on how well the API is designed and how accurate the OpenAPI spec is. You may need to make tweaks to the generated code; you can learn more about this here.
# generate pipeline
# NOTE: add_limit adds a global limit, you can remove this later
# NOTE: you will need to select which endpoints to render, you
# can just hit Enter and all will be rendered.
dlt-init-openapi circleci --url https://raw.githubusercontent.com/dlt-hub/openapi-specs/main/open_api_specs/Business/circleci.yaml --global-limit 2
cd circleci_pipeline
# install generated requirements
pip install -r requirements.txt
The last command will install the required dependencies for your pipeline. The dependencies are listed in requirements.txt:
dlt>=0.4.12
You now have the following folder structure in your project:
circleci_pipeline/
├── .dlt/
│ ├── config.toml # configs for your pipeline
│ └── secrets.toml # secrets for your pipeline
├── rest_api/ # The rest api verified source
│ └── ...
├── circleci/
│ └── __init__.py # TODO: possibly tweak this file
├── circleci_pipeline.py # your main pipeline script
├── requirements.txt # dependencies for your pipeline
└── .gitignore # ignore files for git (not required)
1.1. Tweak circleci/__init__.py
This file contains the generated configuration of your rest_api source. You can continue with the next steps and leave it as is, but you might want to come back here and make adjustments if you need your rest_api source set up in a different way. The generated file for the circleci source will look like this:
from typing import List
import dlt
from dlt.extract.source import DltResource
from rest_api import rest_api_source
from rest_api.typing import RESTAPIConfig
@dlt.source(name="circleci_source", max_table_nesting=2)
def circleci_source(
api_key: str = dlt.secrets.value,
base_url: str = dlt.config.value,
) -> List[DltResource]:
# source configuration
source_config: RESTAPIConfig = {
"client": {
"base_url": base_url,
"auth": {
"type": "api_key",
"api_key": api_key,
"name": "circle-token",
"location": "query"
},
"paginator": {
"type":
"offset",
"limit":
100,
"offset_param":
"offset",
"limit_param":
"limit",
"total_path":
"",
"maximum_offset":
20,
},
},
"resources":
[
# List the artifacts produced by a given build.
{
"name": "get_projectusernameprojectbuild_numartifacts",
"table_name": "artifact",
"endpoint": {
"data_selector": "$",
"path": "/project/{username}/{project}/{build_num}/artifacts",
"params": {
"username": "FILL_ME_IN", # TODO: fill in required path parameter
"project": "FILL_ME_IN", # TODO: fill in required path parameter
"build_num": "FILL_ME_IN", # TODO: fill in required path parameter
},
}
},
# Build summary for each of the last 30 builds for a single git repo.
{
"name": "get_projectusernameproject",
"table_name": "build",
"endpoint": {
"data_selector": "$",
"path": "/project/{username}/{project}",
"params": {
"username": "FILL_ME_IN", # TODO: fill in required path parameter
"project": "FILL_ME_IN", # TODO: fill in required path parameter
# the parameters below can optionally be configured
# "filter": "OPTIONAL_CONFIG",
},
}
},
# Build summary for each of the last 30 recent builds, ordered by build_num.
{
"name": "get_recent_builds",
"table_name": "build",
"endpoint": {
"data_selector": "$",
"path": "/recent-builds",
}
},
# Full details for a single build. The response includes all of the fields from the build summary. This is also the payload for the [notification webhooks](/docs/configuration/#notify), in which case this object is the value to a key named 'payload'.
{
"name": "get_projectusernameprojectbuild_num",
"table_name": "build_detail",
"endpoint": {
"data_selector": "$",
"path": "/project/{username}/{project}/{build_num}",
"params": {
"username": "FILL_ME_IN", # TODO: fill in required path parameter
"project": "FILL_ME_IN", # TODO: fill in required path parameter
"build_num": "FILL_ME_IN", # TODO: fill in required path parameter
},
}
},
# Lists the environment variables for :project
{
"name": "get_projectusernameprojectenvvar",
"table_name": "envvar",
"endpoint": {
"data_selector": "$",
"path": "/project/{username}/{project}/envvar",
"params": {
"username": "FILL_ME_IN", # TODO: fill in required path parameter
"project": "FILL_ME_IN", # TODO: fill in required path parameter
},
}
},
# Gets the hidden value of environment variable :name
{
"name": "get_projectusernameprojectenvvarname",
"table_name": "envvar",
"primary_key": "name",
"write_disposition": "merge",
"endpoint": {
"data_selector": "$",
"path": "/project/{username}/{project}/envvar/{name}",
"params": {
"name": {
"type": "resolve",
"resource": "get_projectusernameprojectenvvar",
"field": "name",
},
"username": "FILL_ME_IN", # TODO: fill in required path parameter
"project": "FILL_ME_IN", # TODO: fill in required path parameter
},
}
},
# Lists checkout keys.
{
"name": "get_projectusernameprojectcheckout_key",
"table_name": "key",
"endpoint": {
"data_selector": "$",
"path": "/project/{username}/{project}/checkout-key",
"params": {
"username": "FILL_ME_IN", # TODO: fill in required path parameter
"project": "FILL_ME_IN", # TODO: fill in required path parameter
},
}
},
# Get a checkout key.
{
"name": "get_projectusernameprojectcheckout_keyfingerprint",
"table_name": "key",
"primary_key": "fingerprint",
"write_disposition": "merge",
"endpoint": {
"data_selector": "$",
"path": "/project/{username}/{project}/checkout-key/{fingerprint}",
"params": {
"fingerprint": {
"type": "resolve",
"resource": "get_projectusernameprojectcheckout_key",
"field": "fingerprint",
},
"username": "FILL_ME_IN", # TODO: fill in required path parameter
"project": "FILL_ME_IN", # TODO: fill in required path parameter
},
}
},
# Provides information about the signed in user.
{
"name": "get_me",
"table_name": "me",
"endpoint": {
"data_selector": "all_emails",
"path": "/me",
}
},
# List of all the projects you're following on CircleCI, with build information organized by branch.
{
"name": "get_projects",
"table_name": "project",
"endpoint": {
"data_selector": "$",
"path": "/projects",
}
},
# Provides test metadata for a build Note: [Learn how to set up your builds to collect test metadata](https://circleci.com/docs/test-metadata/)
{
"name": "get_projectusernameprojectbuild_numtests",
"table_name": "test",
"endpoint": {
"data_selector": "tests",
"path": "/project/{username}/{project}/{build_num}/tests",
"params": {
"username": "FILL_ME_IN", # TODO: fill in required path parameter
"project": "FILL_ME_IN", # TODO: fill in required path parameter
"build_num": "FILL_ME_IN", # TODO: fill in required path parameter
},
}
},
]
}
return rest_api_source(source_config)
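Before filling in the path parameters, you may want to start with just the endpoints that need none. A minimal sketch, using dlt's standard with_resources selector and the resource names generated above:
# a sketch: select only the generated resources that require no path parameters
source = circleci_source().with_resources("get_recent_builds", "get_projects", "get_me")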
2. Configuring your source and destination credentials
dlt-init-openapi will try to detect which authentication mechanism (if any) is used by the API in question and add a placeholder in your secrets.toml.
The dlt CLI will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline. The automatically created versions of these files look like this:
generated config.toml
[runtime]
log_level="INFO"
[sources.circleci]
# Base URL for the API
base_url = "https://circleci.com/api/v1"
generated secrets.toml
[sources.circleci]
# secrets for your circleci source
api_key = "FILL ME OUT" # TODO: fill in your credentials
2.1. Adjust the generated code to your use case
At this time, the dlt-init-openapi CLI tool will always create pipelines that load to a local duckdb instance. Switching to a different destination is trivial: all you need to do is change the destination parameter in circleci_pipeline.py to filesystem and supply the credentials as outlined in the destination doc linked below.
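For illustration, the adjusted pipeline declaration in circleci_pipeline.py might look like this (a sketch mirroring the generated script shown in step 3, with only the destination changed):
pipeline = dlt.pipeline(
    pipeline_name="circleci_pipeline",
    destination="filesystem",  # switched from the default "duckdb"
    dataset_name="circleci_data",
    progress="log",
)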
The default filesystem destination is configured to connect to AWS S3. To load to Azure Cloud Storage, update the [destination.filesystem.credentials] section in your secrets.toml:
[destination.filesystem.credentials]
azure_storage_account_name="Please set me up!"
azure_storage_account_key="Please set me up!"
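The destination also needs to know which Azure blob container to write to, which is set via bucket_url. A minimal sketch, assuming a hypothetical container named dlt-data:
[destination.filesystem] # in .dlt/config.toml
bucket_url="az://dlt-data" # hypothetical container name; replace with your own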
By default, the filesystem destination will store your files as JSONL. You can tell your pipeline to choose a different format with the loader_file_format property, which you can set directly on the pipeline or via your config.toml. Available values are jsonl, parquet, and csv:
[pipeline] # in ./.dlt/config.toml
loader_file_format="parquet"
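Alternatively, you can pass the format directly when running the pipeline; a minimal sketch using the run method's loader_file_format parameter:
# set the file format for this load only, instead of via config.toml
info = pipeline.run(source, loader_file_format="parquet")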
3. Running your pipeline for the first time
The dlt CLI has also created a main pipeline script for you at circleci_pipeline.py, as well as a folder circleci that contains additional Python files for your source. These files are your local copies, which you can modify to fit your needs. In some cases you may find that you only need to make small changes to your pipeline or add some configuration; in other cases these files can serve as a working starting point for your code but will need to be adjusted to do what you need them to do.
The main pipeline script will look something like this:
import dlt

from circleci import circleci_source

if __name__ == "__main__":
    pipeline = dlt.pipeline(
        pipeline_name="circleci_pipeline",
        destination='duckdb',
        dataset_name="circleci_data",
        progress="log",
        export_schema_path="schemas/export"
    )
    source = circleci_source()
    info = pipeline.run(source)
    print(info)
Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:
python circleci_pipeline.py
4. Inspecting your load result
You can now inspect the state of your pipeline with the dlt CLI:
dlt pipeline circleci_pipeline info
You can also use Streamlit to inspect the contents of your Azure Cloud Storage destination:
# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline circleci_pipeline show
5. Next steps to get your pipeline running in production
One of the beauties of dlt is that it is just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:
The Deploy section will show you how to deploy your pipeline:
- Deploy with GitHub Actions: Learn how to set up and deploy your pipeline using GitHub Actions. Read more.
- Deploy with Airflow and Google Composer: Follow this guide to deploy your pipeline using Airflow and Google Composer. Read more.
- Deploy with Google Cloud Functions: Explore how to deploy your pipeline using Google Cloud Functions. Read more.
- Other Deployment Options: Discover more ways to deploy your pipeline with dlt. Read more.
The running in production section will teach you about:
- How to monitor your pipeline: Learn the best practices for monitoring your dlt pipeline in production to ensure smooth and efficient operation. How to Monitor your pipeline
- Set up alerts: Implement alerting mechanisms to stay informed about the status and health of your dlt pipeline. Set up alerts
- Set up tracing: Utilize tracing to get detailed insights into the performance and execution of your dlt pipeline. And set up tracing
Available Sources and Resources
For this verified source, the following sources and resources are available:
Source CircleCI
Loads CircleCI data on builds, environment variables, keys, artifacts, user info, and projects.
| Resource Name | Write Disposition | Description |
| --- | --- | --- |
| build | append | Represents a single build process, including status, duration, and outcome. |
| envvar | append | Environment variables used in the build process. |
| key | append | SSH keys associated with the project for secure access. |
| artifact | append | Files generated by the build process, such as logs or compiled binaries. |
| me | append | Information about the authenticated user. |
| build_detail | append | Detailed information about a specific build, including steps and logs. |
| project | append | Information about the projects configured in CircleCI, including settings and configurations. |
| test | append | Test results from the build process, including passed, failed, and skipped tests. |
Additional pipeline guides
- Load data from Box Platform API to Google Cloud Storage in python with dlt
- Load data from Looker to Azure Cloud Storage in python with dlt
- Load data from Slack to ClickHouse in python with dlt
- Load data from Slack to AlloyDB in python with dlt
- Load data from Cisco Meraki to Azure Synapse in python with dlt
- Load data from Chargebee to BigQuery in python with dlt
- Load data from Pipedrive to PostgreSQL in python with dlt
- Load data from Stripe to Azure Cloud Storage in python with dlt
- Load data from Trello to AlloyDB in python with dlt
- Load data from Klarna to Supabase in python with dlt