Loading Imgur Data to Microsoft SQL Server with dlt in Python
Imgur is an online image sharing and hosting service, widely known for hosting viral images and memes, especially those shared on Reddit. This guide walks you through loading data from Imgur into Microsoft SQL Server using the open-source Python library `dlt`. Microsoft SQL Server is a relational database management system (RDBMS) that lets applications and tools connect and communicate using Transact-SQL. By following this guide, you will learn how to extract data from Imgur and load it into Microsoft SQL Server efficiently. For more information about Imgur, visit Imgur's website.
`dlt` Key Features
- Extract, Normalize, Load: `dlt` simplifies the process of turning JSON from any source into a live dataset stored in your chosen destination. Learn more at How dlt works.
- Governance Support: `dlt` pipelines offer robust governance through pipeline metadata, schema enforcement, and schema change alerts. Read more about governance support.
- Scaling and Finetuning: `dlt` provides options to scale and fine-tune pipelines, ensuring efficient data processing. Discover more about performance.
- Secure Handling of Secrets: Manage your secrets securely within your data pipeline. Find out how in the tutorial.
- Incremental Data Loading: Load only new data and deduplicate existing data with incremental loading (see the sketch after this list). Learn more in the tutorial.
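To make the incremental loading idea concrete, here is a minimal, hypothetical sketch. The resource, its fields, and the `fetch_items` helper are illustrative stand-ins, not part of the Imgur source generated below:

```py
import dlt

# Illustrative stand-in for an API call; not part of the generated Imgur source.
def fetch_items(since: str):
    yield {"id": 1, "updated_at": "2024-01-01T00:00:00Z"}

@dlt.resource(primary_key="id", write_disposition="merge")
def items(updated_at=dlt.sources.incremental("updated_at", initial_value="1970-01-01T00:00:00Z")):
    # dlt persists updated_at.last_value in pipeline state between runs,
    # so only newer records are fetched and deduplicated by primary key.
    yield from fetch_items(since=updated_at.last_value)
```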
Getting started with your pipeline locally with `dlt-init-openapi`
0. Prerequisites
`dlt` and `dlt-init-openapi` require Python 3.9 or higher. Additionally, you need to have the `pip` package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.
1. Install dlt and dlt-init-openapi
First you need to install the `dlt-init-openapi` CLI tool:

```sh
pip install dlt-init-openapi
```

The `dlt-init-openapi` CLI is a powerful generator that turns any OpenAPI spec into a `dlt` source for ingesting data from that API. The quality of the generated source depends on how well the API is designed and how accurate the OpenAPI spec is. You may need to tweak the generated code; you can learn more about this here.
```sh
# generate pipeline
# NOTE: add_limit adds a global limit, you can remove this later
# NOTE: you will need to select which endpoints to render; you
# can just hit Enter and all will be rendered.
dlt-init-openapi imgur --url https://raw.githubusercontent.com/dlt-hub/openapi-specs/main/open_api_specs/Public/imgur.yaml --global-limit 2
cd imgur_pipeline

# install generated requirements
pip install -r requirements.txt
```
The last command installs the required dependencies for your pipeline. The dependencies are listed in `requirements.txt`:

```
dlt>=0.4.12
```
You now have the following folder structure in your project:
```
imgur_pipeline/
├── .dlt/
│   ├── config.toml        # configs for your pipeline
│   └── secrets.toml       # secrets for your pipeline
├── rest_api/              # the rest_api verified source
│   └── ...
├── imgur/
│   └── __init__.py        # TODO: possibly tweak this file
├── imgur_pipeline.py      # your main pipeline script
├── requirements.txt       # dependencies for your pipeline
└── .gitignore             # ignore files for git (not required)
```
1.1. Tweak `imgur/__init__.py`

This file contains the generated configuration of your `rest_api` source. You can continue with the next steps and leave it as is, but you might want to come back here and make adjustments if you need your `rest_api` source set up in a different way. The generated file for the imgur source (110 lines in full) will look like this:
```py
from typing import List

import dlt
from dlt.extract.source import DltResource
from rest_api import rest_api_source
from rest_api.typing import RESTAPIConfig


@dlt.source(name="imgur_source", max_table_nesting=2)
def imgur_source(
    api_key: str = dlt.secrets.value,
    base_url: str = dlt.config.value,
) -> List[DltResource]:

    # source configuration
    source_config: RESTAPIConfig = {
        "client": {
            "base_url": base_url,
            "auth": {
                "type": "api_key",
                "api_key": api_key,
                "name": "Authorization",
                "location": "header"
            },
        },
        "resources": [
            {
                "name": "get_account",
                "table_name": "account_response",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/3/account/{userName}",
                    "params": {
                        "userName": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                    "paginator": "auto",
                }
            },
            {
                "name": "get_account_images_count",
                "table_name": "basic_int_32_response",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/3/account/{userName}/images/count",
                    "params": {
                        "userName": {
                            "type": "resolve",
                            "resource": "get_account_images",
                            "field": "id",
                        },
                    },
                    "paginator": "auto",
                }
            },
            {
                "name": "get_account_images",
                "table_name": "image",
                "primary_key": "id",
                "write_disposition": "merge",
                "endpoint": {
                    "data_selector": "data",
                    "path": "/3/account/{userName}/images",
                    "params": {
                        "userName": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                    "paginator": "auto",
                }
            },
            {
                "name": "get_account_image",
                "table_name": "image_response",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/3/account/{userName}/images/{imageHash}",
                    "params": {
                        "imageHash": {
                            "type": "resolve",
                            "resource": "get_account_images",
                            "field": "id",
                        },
                        "userName": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                    "paginator": "auto",
                }
            },
            {
                "name": "get_image",
                "table_name": "image_response",
                "endpoint": {
                    "data_selector": "$",
                    "path": "/3/image/{imageHash}",
                    "params": {
                        "imageHash": "FILL_ME_IN",  # TODO: fill in required path parameter
                    },
                    "paginator": "auto",
                }
            },
        ]
    }

    return rest_api_source(source_config)
```
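While filling in the `FILL_ME_IN` placeholders, it can help to load one resource at a time. A small sketch using dlt's standard `with_resources` selector (the pipeline arguments mirror the script shown in step 3):

```py
import dlt
from imgur import imgur_source

# Load only the images resource while iterating on the generated config.
pipeline = dlt.pipeline(
    pipeline_name="imgur_pipeline",
    destination="duckdb",
    dataset_name="imgur_data",
)
info = pipeline.run(imgur_source().with_resources("get_account_images"))
print(info)
```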
2. Configuring your source and destination credentials
`dlt-init-openapi` will try to detect which authentication mechanism (if any) is used by the API in question and add a placeholder for it in your `secrets.toml`.

The `dlt` CLI will have created a `.dlt` directory in your project folder. This directory contains a `config.toml` file and a `secrets.toml` file that you can use to configure your pipeline. The automatically created versions of these files look like this:
generated `config.toml`:

```toml
[runtime]
log_level="INFO"

[sources.imgur]
# Base URL for the API
base_url = "https://api.imgur.com"
```
generated `secrets.toml`:

```toml
[sources.imgur]
# secrets for your imgur source
api_key = "FILL ME OUT" # TODO: fill in your credentials
```
2.1. Adjust the generated code to your use case

At this time, the `dlt-init-openapi` CLI tool always creates pipelines that load to a local `duckdb` instance. Switching to a different destination is trivial: all you need to do is change the `destination` parameter in `imgur_pipeline.py` to `mssql` and supply the credentials as outlined in the destination doc linked below.
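As a sketch, the switched pipeline declaration could look like this; the credential fields in the comment follow dlt's mssql destination docs, so adjust them to your server:

```py
import dlt
from imgur import imgur_source

# Same pipeline as in step 3, pointed at Microsoft SQL Server instead of duckdb.
pipeline = dlt.pipeline(
    pipeline_name="imgur_pipeline",
    destination="mssql",
    dataset_name="imgur_data",
)

# Supply credentials in .dlt/secrets.toml, e.g.:
# [destination.mssql.credentials]
# database = "dlt_data"
# username = "loader"
# password = "..."
# host = "localhost"
# port = 1433
```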
3. Running your pipeline for the first time
The `dlt` CLI has also created a main pipeline script for you at `imgur_pipeline.py`, as well as a folder `imgur` that contains additional Python files for your source. These files are your local copies, which you can modify to fit your needs. In some cases you may only need to make small changes to your pipeline or add some configuration; in other cases these files can serve as a working starting point for your code but will need to be adjusted to do what you need them to do.

The main pipeline script will look something like this:
```py
import dlt

from imgur import imgur_source


if __name__ == "__main__":
    pipeline = dlt.pipeline(
        pipeline_name="imgur_pipeline",
        destination='duckdb',
        dataset_name="imgur_data",
        progress="log",
        export_schema_path="schemas/export"
    )
    source = imgur_source()
    info = pipeline.run(source)
    print(info)
```
Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:

```sh
python imgur_pipeline.py
```
4. Inspecting your load result
You can now inspect the state of your pipeline with the `dlt` CLI:

```sh
dlt pipeline imgur_pipeline info
```

You can also use streamlit to inspect the contents of your Microsoft SQL Server destination:

```sh
# install streamlit
pip install streamlit

# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline imgur_pipeline show
```
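You can also query the loaded tables directly through the pipeline's SQL client. A small sketch (the `image` table name comes from the resources table further below; point the pipeline at your configured destination):

```py
import dlt

pipeline = dlt.pipeline(
    pipeline_name="imgur_pipeline",
    destination="mssql",
    dataset_name="imgur_data",
)

# Run an ad-hoc query against the destination via dlt's SQL client.
with pipeline.sql_client() as client:
    with client.execute_query("SELECT COUNT(*) FROM image") as cursor:
        print(cursor.fetchall())
```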
5. Next steps to get your pipeline running in production
One of the beauties of `dlt` is that it is a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:

The Deploy section will show you how to deploy your pipeline:
- Deploy with GitHub Actions: Learn how to deploy a pipeline using GitHub Actions. Follow the guide here.
- Deploy with Airflow: Discover how to deploy a pipeline with Airflow and Google Composer. Detailed instructions can be found here.
- Deploy with Google Cloud Functions: Check out how to deploy a pipeline using Google Cloud Functions by visiting this page.
- Explore other deployment methods: Find more ways to deploy your pipeline here.
The running in production section will teach you about:
- How to monitor your pipeline: Learn how to effectively monitor your `dlt` pipeline in production to ensure smooth operation and timely detection of issues. Read more here.
- Set up alerts: Set up alerts to get notified about critical events and potential issues in your `dlt` pipeline; a small building block is sketched after this list. Find out how here.
- Set up tracing: Implement tracing to gain insights into the execution flow and performance of your `dlt` pipeline. Learn the steps here.
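As one simple building block for alerting, the `LoadInfo` object returned by `pipeline.run` can be made to fail loudly so your scheduler surfaces the problem (a sketch using dlt's `raise_on_failed_jobs`):

```py
import dlt
from imgur import imgur_source

pipeline = dlt.pipeline(
    pipeline_name="imgur_pipeline",
    destination="mssql",
    dataset_name="imgur_data",
)
info = pipeline.run(imgur_source())

# Raises if any load package finished with failed jobs, so an orchestrator
# (Airflow, GitHub Actions, cron + mail, ...) can notify you of the failure.
info.raise_on_failed_jobs()
```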
Available Sources and Resources
For this verified source, the following sources and resources are available:
Source Imgur
Fetches image, account, and interaction data from Imgur.
| Resource Name | Write Disposition | Description |
| --- | --- | --- |
| image_response | append | Details about individual images hosted on Imgur, including metadata and image statistics. |
| account_response | append | Information about user accounts on Imgur, such as account settings and user activity. |
| basic_int_32_response | append | Basic response containing integer values, potentially used for counters or simple metrics. |
| image | merge | Core data about images uploaded to Imgur, including URLs, upload timestamps, and image properties. |
Additional pipeline guides
- Load data from Adobe Commerce (Magento) to Neon Serverless Postgres in python with dlt
- Load data from Jira to Dremio in python with dlt
- Load data from Shopify to Redshift in python with dlt
- Load data from Google Analytics to AWS Athena in python with dlt
- Load data from Stripe to Azure Synapse in python with dlt
- Load data from MySQL to AlloyDB in python with dlt
- Load data from Mux to Azure Cosmos DB in python with dlt
- Load data from Soundcloud to Dremio in python with dlt
- Load data from Zuora to AlloyDB in python with dlt
- Load data from Soundcloud to MotherDuck in python with dlt