Loading Chess.com Data to Azure Storage with Python's dlt Library
This page provides technical documentation for using the open source Python library dlt to load data from Chess.com to Azure Cloud Storage. Chess.com is a comprehensive online platform catering to chess enthusiasts, offering online games, tournaments, lessons, and more. The data from Chess.com can be stored on Azure Cloud Storage, a filesystem destination by Microsoft Azure that supports creating data lakes. It allows data upload in formats like JSONL, Parquet, or CSV. This guide will help you leverage the capabilities of dlt to facilitate this data transfer. Detailed information about the source is available at https://www.chess.com/.
dlt Key Features
- Initialising a dlt project: The dlt library allows easy project initialization with a simple command. This prepares your pipeline for data transfer from source to destination.
- Governance Support in dlt Pipelines: dlt pipelines provide robust governance support through metadata utilization, schema enforcement and curation, and schema change alerts. These features promote data consistency, traceability, and control throughout the data processing lifecycle.
- Scaling and Fine-tuning: dlt offers several mechanisms and configuration options for scaling up and fine-tuning pipelines, including parallel execution, thread pools, async execution, and the ability to adjust memory buffers, intermediary file sizes, and compression options.
- Data Loading: dlt handles data loading efficiently by storing all files in a single folder. The file name contains essential metadata about the content, and you can change the file name format by providing the layout setting for the filesystem destination (see the sketch after this list).
- Supported File Formats: dlt supports various file formats including jsonl and parquet, so you can choose the file format that best suits your data processing needs.
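As an illustration of the layout setting mentioned above, the snippet below overrides the file name layout for the filesystem destination in config.toml. This is a minimal sketch: the placeholders are dlt's documented layout variables, but the exact path pattern is an assumption you should adapt to your own setup.
# in .dlt/config.toml
[destination.filesystem]
# group files by table instead of the default "{schema_name}/{table_name}/{load_id}.{file_id}.{ext}"
layout = "{table_name}/{load_id}.{file_id}.{ext}"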
Getting started with your pipeline locally
0. Prerequisites
dlt requires Python 3.8 or higher. Additionally, you need to have the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.
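For example, a typical virtual environment setup on macOS or Linux might look like this (the directory name venv is just a convention):
# create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate
# confirm the interpreter meets the version requirement
python --version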
1. Install dlt
First you need to install the dlt library with the correct extras for Azure Cloud Storage:
pip install "dlt[filesystem]"
The dlt cli has a useful command to get you started with any combination of source and destination. For this example, we want to load data from Chess.com to Azure Cloud Storage. You can run the following commands to create a starting point:
# create a new directory
mkdir chess_pipeline
cd chess_pipeline
# initialize a new pipeline with your source and destination
dlt init chess filesystem
# install the required dependencies
pip install -r requirements.txt
The last command will install the required dependencies for your pipeline. The dependencies are listed in requirements.txt:
dlt[filesystem]>=0.3.25
You now have the following folder structure in your project:
chess_pipeline/
├── .dlt/
│ ├── config.toml # configs for your pipeline
│ └── secrets.toml # secrets for your pipeline
├── chess/ # folder with source specific files
│ └── ...
├── chess_pipeline.py # your main pipeline script
├── requirements.txt # dependencies for your pipeline
└── .gitignore # ignore files for git (not required)
2. Configuring your source and destination credentials
The dlt cli will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline. The automatically created versions of these files look like this:
generated config.toml
# put your configuration values here
[runtime]
log_level="WARNING" # the system log level of dlt
# use the dlthub_telemetry setting to enable/disable anonymous usage data reporting, see https://dlthub.com/docs/telemetry
dlthub_telemetry = true
[sources.chess]
config_int = 0 # please set me up!
generated secrets.toml
# put your secret values and credentials here. do not share this file and do not push it to github
[sources.chess]
secret_str = "secret_str" # please set me up!
[sources.chess.secret_dict] # please set me up!
key = "value"
[destination.filesystem]
dataset_name = "dataset_name" # please set me up!
bucket_url = "bucket_url" # please set me up!
[destination.filesystem.credentials]
aws_access_key_id = "aws_access_key_id" # please set me up!
aws_secret_access_key = "aws_secret_access_key" # please set me up!
2.1. Adjust the generated code to your use case
The default filesystem destination is configured to connect to AWS S3. To load to Azure Cloud Storage, update the [destination.filesystem.credentials] section in your secrets.toml:
[destination.filesystem.credentials]
azure_storage_account_name="Please set me up!"
azure_storage_account_key="Please set me up!"
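You will also need to point bucket_url at your Azure container. Below is a minimal sketch of a complete Azure configuration; the container name and path are assumptions (dlt's filesystem destination accepts the az:// URL scheme for Azure):
# in .dlt/secrets.toml
[destination.filesystem]
bucket_url = "az://dlt-container/chess-data" # container and path are illustrative

[destination.filesystem.credentials]
azure_storage_account_name = "your_account_name"
azure_storage_account_key = "your_account_key"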
By default, the filesystem destination will store your files as JSONL. You can tell your pipeline to choose a different format with the loader_file_format property, which you can set directly on the pipeline or via your config.toml. Available values are jsonl, parquet and csv:
[pipeline] # in ./.dlt/config.toml
loader_file_format="parquet"
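Alternatively, you can set the format per run in Python. A minimal sketch (the player and month values are placeholders mirroring the generated script below):
import dlt
from chess import source

pipeline = dlt.pipeline(
    pipeline_name="chess_pipeline",
    destination="filesystem",
    dataset_name="chess_players_games_data",
)
# loader_file_format can be passed to run() instead of being set in config.toml
info = pipeline.run(
    source(["magnuscarlsen"], start_month="2022/11", end_month="2022/11"),
    loader_file_format="parquet",
)
print(info)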
3. Running your pipeline for the first time
The dlt cli has also created a main pipeline script for you at chess_pipeline.py, as well as a folder chess that contains additional Python files for your source. These files are your local copies, which you can modify to fit your needs. In some cases you may only need to make small changes to your pipeline or add some configuration; in other cases these files can serve as a working starting point for your code, but will need to be adjusted to do what you need them to do.
The main pipeline script will look something like this:
import dlt
from chess import source


def load_players_games_example(start_month: str, end_month: str) -> None:
    """Constructs a pipeline that will load chess games of specific players for a range of months."""
    # configure the pipeline: provide the destination and dataset name to which the data should go
    pipeline = dlt.pipeline(
        pipeline_name="chess_pipeline",
        destination='filesystem',
        dataset_name="chess_players_games_data",
    )
    # create the data source by providing a list of players and start/end month in YYYY/MM format
    data = source(
        ["magnuscarlsen", "vincentkeymer", "dommarajugukesh", "rpragchess"],
        start_month=start_month,
        end_month=end_month,
    )
    # load the "players_games" and "players_profiles" out of all the possible resources
    info = pipeline.run(data.with_resources("players_games", "players_profiles"))
    print(info)


def load_players_online_status() -> None:
    """Constructs a pipeline that will append online status of selected players"""
    pipeline = dlt.pipeline(
        pipeline_name="chess_pipeline",
        destination='filesystem',
        dataset_name="chess_players_games_data",
    )
    data = source(["magnuscarlsen", "vincentkeymer", "dommarajugukesh", "rpragchess"])
    info = pipeline.run(data.with_resources("players_online_status"))
    print(info)


def load_players_games_incrementally() -> None:
    """Pipeline will not load the same game archive twice"""
    # loads games for 11.2022
    load_players_games_example("2022/11", "2022/11")
    # second load skips games for 11.2022 but will load for 12.2022
    load_players_games_example("2022/11", "2022/12")


if __name__ == "__main__":
    # run our main example
    load_players_games_example("2022/11", "2022/12")
    load_players_online_status()
Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:
python chess_pipeline.py
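If you prefer not to keep secrets in secrets.toml (for example in CI), dlt can also read configuration from environment variables, where TOML section nesting is expressed with double underscores. A sketch, assuming the same illustrative Azure values as above:
# provide credentials via environment variables instead of secrets.toml
export DESTINATION__FILESYSTEM__BUCKET_URL="az://dlt-container/chess-data"
export DESTINATION__FILESYSTEM__CREDENTIALS__AZURE_STORAGE_ACCOUNT_NAME="your_account_name"
export DESTINATION__FILESYSTEM__CREDENTIALS__AZURE_STORAGE_ACCOUNT_KEY="your_account_key"
python chess_pipeline.py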
4. Inspecting your load result
You can now inspect the state of your pipeline with the dlt cli:
dlt pipeline chess_pipeline info
You can also use streamlit to inspect the contents of your Azure Cloud Storage destination:
# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline chess_pipeline show
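You can also attach to the pipeline from Python and inspect the most recent run programmatically. A minimal sketch using dlt.attach and the pipeline's last_trace attribute (verify these against the API reference for your installed dlt version):
import dlt

# attach to the already-run pipeline by name
pipeline = dlt.attach(pipeline_name="chess_pipeline")
# print the trace of the most recent run, including load package details
print(pipeline.last_trace)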
5. Next steps to get your pipeline running in production
One of the beauties of dlt is that it is just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:
The Deploy section will show you how to deploy your pipeline:
- Deploy with GitHub Actions: Utilize GitHub's CI/CD runner to automate your deployments. Follow the guide on how to deploy a pipeline with GitHub Actions.
- Deploy with Airflow and Google Composer: Leverage Google Composer, a managed Airflow environment, to deploy your pipelines. Check the instructions on how to deploy a pipeline with Airflow.
- Deploy with Google Cloud Functions: Use Google Cloud Functions for serverless deployment of your pipelines. Learn more from the guide on how to deploy a pipeline with Google Cloud Functions.
- Explore other deployment options: Discover additional methods and platforms for deploying your pipelines by visiting the deployment walkthroughs.
The running in production section will teach you about:
- How to Monitor your pipeline: Learn how to effectively monitor your dlt pipeline in production to ensure it runs smoothly and efficiently.
- Set up alerts: Configure alerts to stay informed about the status and performance of your dlt pipeline.
- Set up tracing: Implement tracing to gain deeper insights into the execution of your dlt pipeline, helping you debug and optimize.
Available Sources and Resources
For this verified source, the following sources and resources are available:
Source chess
The Chess.com source provides data on player profiles, online statuses, and historical game details.
| Resource Name | Write Disposition | Description |
| --- | --- | --- |
| players_games | append | This resource retrieves players' games that happened between a specified start and end month. It includes various details like accuracy, ratings, results, time control, tournament details, etc. for both the black and white players in each game. |
| players_online_status | append | This resource checks the current online status of multiple chess players. It retrieves their username, status, last login date, and check time. |
| players_profiles | replace | This resource retrieves player profiles for a list of player usernames. It includes details like the player's avatar, country, followers, streaming status, join date, last online time, league, location, name, player ID, status, title, URL, username, and verification status. |
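If you need a different write disposition than the defaults above, you can override it on a resource before running. A sketch using dlt's apply_hints resource method; the replace override here is purely illustrative:
import dlt
from chess import source

data = source(["magnuscarlsen"], start_month="2022/11", end_month="2022/11")
# override the default "append" disposition so each run replaces prior games
data.players_games.apply_hints(write_disposition="replace")

pipeline = dlt.pipeline(
    pipeline_name="chess_pipeline",
    destination="filesystem",
    dataset_name="chess_players_games_data",
)
print(pipeline.run(data.with_resources("players_games")))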
Additional pipeline guides
- Load data from Sentry to ClickHouse in python with dlt
- Load data from Microsoft SQL Server to YugabyteDB in python with dlt
- Load data from Stripe to AWS S3 in python with dlt
- Load data from Sentry to YugabyteDB in python with dlt
- Load data from Notion to Timescale in python with dlt
- Load data from Google Sheets to MotherDuck in python with dlt
- Load data from Pipedrive to AWS S3 in python with dlt
- Load data from Clubhouse to Google Cloud Storage in python with dlt
- Load data from Apple App-Store Connect to YugabyteDB in python with dlt