Loading Data from Qualtrics to PostgreSQL Using dlt in Python
Qualtrics is a cloud-based survey platform that allows users to build surveys, distribute them, and analyze the results. PostgreSQL is a powerful, open-source object-relational database system that uses and extends the SQL language to handle complex data workloads. This documentation covers how to load data from Qualtrics to PostgreSQL using the open-source Python library dlt. dlt simplifies the process of extracting, transforming, and loading data, making it easier to manage and analyze survey data from Qualtrics within a PostgreSQL database. For more information about Qualtrics, visit their website.
dlt Key Features
- Automatic Data Normalization: dlt normalizes JSON data from any source into relational tables, making it ready to be loaded (see the sketch after this list). Learn more about how dlt works.
- PostgreSQL Integration: Easily load data into PostgreSQL with dlt. Follow the Postgres setup guide to get started.
- Governance Support: dlt pipelines offer robust governance through metadata utilization, schema enforcement, and change alerts. Explore more about governance in dlt.
- Scalable Data Extraction: dlt leverages iterators, chunking, and parallelization for efficient data extraction. Discover the scalability features.
- dbt Integration: dlt integrates with dbt for advanced data transformations. Read about dbt support in dlt.
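To make the normalization feature concrete, here is a minimal sketch, assuming a local duckdb destination and made-up survey data; dlt splits the nested answers list into a child table automatically:

import dlt

# made-up nested survey data, for illustration only
survey_responses = [
    {
        "id": 1,
        "status": "complete",
        "answers": [
            {"question": "q1", "value": "yes"},
            {"question": "q2", "value": "no"},
        ],
    },
]

pipeline = dlt.pipeline(
    pipeline_name="normalize_demo",
    destination="duckdb",
    dataset_name="demo",
)

# dlt infers the schema and creates a "responses" table plus a
# "responses__answers" child table for the nested list
info = pipeline.run(survey_responses, table_name="responses")
print(info)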
Getting started with your pipeline locally
0. Prerequisites
dlt and dlt-init-openapi require Python 3.9 or higher. Additionally, you need to have the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.
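For example, on macOS or Linux you might prepare an isolated environment like this (the directory name .venv is just a convention):

python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip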
1. Install dlt and dlt-init-openapi
First you need to install the dlt-init-openapi cli tool.
pip install dlt-init-openapi
The dlt-init-openapi cli is a powerful generator which you can use to turn any OpenAPI spec into a dlt source to ingest data from that API. The quality of the generated source depends on how well the API is designed and how accurate the OpenAPI spec you are using is. You may need to make tweaks to the generated code; you can learn more about this here.
# generate pipeline
# NOTE: add_limit adds a global limit, you can remove this later
# NOTE: you will need to select which endpoints to render, you
# can just hit Enter and all will be rendered.
dlt-init-openapi qualtrics --url https://raw.githubusercontent.com/dlt-hub/openapi-specs/main/open_api_specs/Business/qualtrics.yaml --global-limit 2
cd qualtrics_pipeline
# install generated requirements
pip install -r requirements.txt
The last command will install the required dependencies for your pipeline. The dependencies are listed in the requirements.txt:
dlt>=0.4.12
You now have the following folder structure in your project:
qualtrics_pipeline/
├── .dlt/
│ ├── config.toml # configs for your pipeline
│ └── secrets.toml # secrets for your pipeline
├── rest_api/ # The rest api verified source
│ └── ...
├── qualtrics/
│ └── __init__.py # TODO: possibly tweak this file
├── qualtrics_pipeline.py # your main pipeline script
├── requirements.txt # dependencies for your pipeline
└── .gitignore # ignore files for git (not required)
1.1. Tweak qualtrics/__init__.py
This file contains the generated configuration of your rest_api. You can continue with the next steps and leave it as is, but you might want to come back here and make adjustments if you need your rest_api source set up in a different way. The generated file for the qualtrics source will look like this:
from typing import List
import dlt
from dlt.extract.source import DltResource
from rest_api import rest_api_source
from rest_api.typing import RESTAPIConfig
@dlt.source(name="qualtrics_source", max_table_nesting=2)
def qualtrics_source(
api_key: str = dlt.secrets.value,
base_url: str = dlt.config.value,
) -> List[DltResource]:
# source configuration
source_config: RESTAPIConfig = {
"client": {
"base_url": base_url,
"auth": {
"type": "api_key",
"api_key": api_key,
"name": "X-API-TOKEN",
"location": "header"
},
},
"resources":
[
# Gets all distributions for a given survey
{
"name": "distribution",
"table_name": "distribution",
"primary_key": "id",
"write_disposition": "merge",
"endpoint": {
"data_selector": "result.elements",
"path": "/distributions",
"params": {
"surveyId": "FILL_ME_IN", # TODO: fill in required query parameter
},
"paginator": "auto",
}
},
# Get event subscriptions
{
"name": "event_subscriptions_response",
"table_name": "event_subscriptions_response",
"endpoint": {
"data_selector": "$",
"path": "/eventsubscriptions/{SubscriptionId}",
"params": {
"SubscriptionId": "FILL_ME_IN", # TODO: fill in required path parameter
},
"paginator": "auto",
}
},
# Retrieves all the individual links for a given distribution
{
"name": "link",
"table_name": "link",
"endpoint": {
"data_selector": "result.elements",
"path": "/distributions/{DistributionId}/links",
"params": {
"DistributionId": {
"type": "resolve",
"resource": "distribution",
"field": "id",
},
"surveyId": "FILL_ME_IN", # TODO: fill in required query parameter
},
"paginator": "auto",
}
},
            # Gets a single Qualtrics survey specified by its ID
{
"name": "survey_response",
"table_name": "survey_response",
"endpoint": {
"data_selector": "$",
"path": "/survey-definitions/{SurveyId}",
"params": {
"SurveyId": "FILL_ME_IN", # TODO: fill in required path parameter
},
"paginator": "auto",
}
},
]
}
return rest_api_source(source_config)
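Before running the pipeline, replace the FILL_ME_IN placeholders with real values from your Qualtrics account. As a hedged example for the distribution resource, the survey ID below is made up:

# in qualtrics/__init__.py, inside the "distribution" resource
"params": {
    "surveyId": "SV_0abc123def456gh",  # hypothetical survey ID; look yours up in the Qualtrics admin UI
},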
2. Configuring your source and destination credentials
dlt-init-openapi will try to detect which authentication mechanism (if any) is used by the API in question and add a placeholder in your secrets.toml.
The dlt cli will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline. The automatically created versions of these files look like this:
generated config.toml
[runtime]
log_level="INFO"
[sources.qualtrics]
# Base URL for the API
base_url = "https://fra1.qualtrics.com/API/v3"
generated secrets.toml
[sources.qualtrics]
# secrets for your qualtrics source
api_key = "FILL ME OUT" # TODO: fill in your credentials
2.1. Adjust the generated code to your use case
At this time, the dlt-init-openapi cli tool will always create pipelines that load to a local duckdb instance. Switching to a different destination is trivial: all you need to do is change the destination parameter in qualtrics_pipeline.py to postgres and supply the credentials as outlined in the destination doc linked below.
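As a sketch, assuming a local Postgres instance with placeholder credentials, the change in qualtrics_pipeline.py looks like this:

pipeline = dlt.pipeline(
    pipeline_name="qualtrics_pipeline",
    destination='postgres',  # changed from 'duckdb'
    dataset_name="qualtrics_data",
    progress="log",
    export_schema_path="schemas/export"
)

The matching credentials go into .dlt/secrets.toml; every value below is a placeholder to replace with your own:

[destination.postgres.credentials]
database = "dlt_data"
username = "loader"
password = "set-me-up"
host = "localhost"
port = 5432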
3. Running your pipeline for the first time
The dlt cli has also created a main pipeline script for you at qualtrics_pipeline.py, as well as a folder qualtrics that contains additional Python files for your source. These files are your local copies which you can modify to fit your needs. In some cases you may find that you only need to make small changes to your pipeline or add some configuration; in other cases these files can serve as a working starting point for your code, but will need to be adjusted to do what you need them to do.
The main pipeline script will look something like this:
import dlt

from qualtrics import qualtrics_source

if __name__ == "__main__":
    pipeline = dlt.pipeline(
        pipeline_name="qualtrics_pipeline",
        destination='duckdb',
        dataset_name="qualtrics_data",
        progress="log",
        export_schema_path="schemas/export"
    )
    source = qualtrics_source()
    info = pipeline.run(source)
    print(info)
Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:
python qualtrics_pipeline.py
4. Inspecting your load result
You can now inspect the state of your pipeline with the dlt cli:
dlt pipeline qualtrics_pipeline info
You can also use Streamlit to inspect the contents of your PostgreSQL destination:
# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline qualtrics_pipeline show
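If you would rather verify the data programmatically, you can query the destination through the pipeline's SQL client. A minimal sketch, assuming the distribution table has already been loaded:

import dlt

# attach to the pipeline that was run above and query its dataset
pipeline = dlt.pipeline(
    pipeline_name="qualtrics_pipeline",
    destination="postgres",
    dataset_name="qualtrics_data",
)
with pipeline.sql_client() as client:
    with client.execute_query("SELECT count(*) FROM distribution") as cursor:
        print(cursor.fetchall())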
5. Next steps to get your pipeline running in production
One of the beauties of dlt is that it is just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:
The Deploy section will show you how to deploy your pipeline:
- Deploy with GitHub Actions: Learn how to deploy your dlt pipeline using GitHub Actions for CI/CD automation.
- Deploy with Airflow: Follow this guide to deploy your dlt pipeline with Airflow and Google Composer.
- Deploy with Google Cloud Functions: Discover how to deploy your dlt pipeline using Google Cloud Functions for serverless execution.
- Explore other deployment options: Check out additional methods for deploying your dlt pipeline.
The running in production section will teach you about:
- How to monitor your pipeline: Learn how to effectively monitor your dlt pipeline in production to ensure smooth operation and quick issue resolution.
- Set up alerts: Set up alerts to stay informed about the status of your dlt pipeline and take immediate action when necessary.
- Set up tracing: Implement tracing to get detailed insights into the performance and behavior of your dlt pipeline (see the sketch after this list).
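As a small sketch of what those hooks can look like in plain Python (the failure handling below is one possible choice, not the only one):

import dlt

from qualtrics import qualtrics_source

pipeline = dlt.pipeline(
    pipeline_name="qualtrics_pipeline",
    destination="postgres",
    dataset_name="qualtrics_data",
)
info = pipeline.run(qualtrics_source())

# raise an exception if any load job failed, so a scheduler or
# alerting system can pick it up
info.raise_on_failed_jobs()

# the trace records timings for the extract, normalize, and load
# steps of the last run
print(pipeline.last_trace)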
Available Sources and Resources
For this verified source, the following sources and resources are available:
Source Qualtrics
Collects survey responses, distribution data, and event subscription details from Qualtrics.
Resource Name | Write Disposition | Description
---|---|---
distribution | merge | Details about the distribution of surveys to respondents
link | append | Information about the links generated for survey distribution
survey_response | append | Responses collected from the distributed surveys
event_subscriptions_response | append | Data related to event subscriptions and their responses
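If you need different loading behavior, for example a full refresh instead of appending, you can override the write disposition at run time; a minimal sketch:

import dlt

from qualtrics import qualtrics_source

pipeline = dlt.pipeline(
    pipeline_name="qualtrics_pipeline",
    destination="postgres",
    dataset_name="qualtrics_data",
)
# "replace" drops and recreates the tables on every run, overriding
# the per-resource dispositions listed in the table above
info = pipeline.run(qualtrics_source(), write_disposition="replace")
print(info)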
Additional pipeline guides
- Load data from Jira to Microsoft SQL Server in python with dlt
- Load data from X to Google Cloud Storage in python with dlt
- Load data from Klaviyo to PostgreSQL in python with dlt
- Load data from MySQL to EDB BigAnimal in python with dlt
- Load data from Qualtrics to Databricks in python with dlt
- Load data from Stripe to DuckDB in python with dlt
- Load data from Pipedrive to EDB BigAnimal in python with dlt
- Load data from Oracle Database to Redshift in python with dlt
- Load data from Sentry to Timescale in python with dlt
- Load data from Capsule CRM to The Local Filesystem in python with dlt