
Python Data Loading from MongoDB to PostgreSQL using the dlt Library

Need help deploying these pipelines, or figuring out how to run them in your data stack?

Join our Slack community or book a call with our support engineer Adrian.

This page provides technical documentation on how to load data with dlt, an open-source Python library. It details the process of loading data from MongoDB, a popular document-oriented NoSQL database, into PostgreSQL, a robust open-source object-relational database system that safely scales complex data workloads. By leveraging dlt, developers can streamline their data migration from MongoDB to PostgreSQL and get working pipelines running faster. Additional information on MongoDB can be found at MongoDB's official website.

dlt Key Features

  • Pipeline Metadata: dlt pipelines leverage metadata to provide governance capabilities. This metadata includes load IDs, which consist of a timestamp and pipeline name. Load IDs enable incremental transformations and data vaulting by tracking data loads and facilitating data lineage and traceability. A minimal sketch of reading these load IDs follows this list. Read more about lineage.
  • Schema Enforcement and Curation: dlt empowers users to enforce and curate schemas, ensuring data consistency and quality. Schemas define the structure of normalized data and guide the processing and loading of data. Read more: Adjust a schema docs.
  • Schema evolution: dlt enables proactive governance by alerting users to schema changes. When modifications occur in the source data’s schema, dlt notifies stakeholders, allowing them to take necessary actions. Read more about identifiers, data lineage, and schema lineage.
  • Scaling and fine-tuning: dlt offers several mechanisms and configuration options to scale up and fine-tune pipelines, including running extraction, normalization and load in parallel, writing sources and resources that run in parallel via thread pools and async execution, and fine-tuning the memory buffers, intermediary file sizes and compression options. Read more about performance.
  • Authentication and credentials: the PostgreSQL destination authenticates with standard database credentials (database name, username, password, host and port), which you configure in secrets.toml in the steps below. Read more in the PostgreSQL destination documentation.
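
For example, the load IDs mentioned in the first bullet are available on the LoadInfo object returned by pipeline.run(). A minimal sketch, assuming a PostgreSQL destination configured as in the steps below (the inline example row is a placeholder):

import dlt

# run a tiny load and inspect the metadata dlt attaches to it
pipeline = dlt.pipeline(
    pipeline_name="metadata_demo",
    destination="postgres",
    dataset_name="demo",
)
load_info = pipeline.run([{"id": 1}], table_name="example")

# each run produces one or more timestamp-based load IDs; the same IDs
# are stored in the destination's _dlt_loads table for lineage queries
print(load_info.loads_ids)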

Getting started with your pipeline locally

0. Prerequisites

dlt requires Python 3.8 or higher. Additionally, you need to have the pip package manager installed, and we recommend using a virtual environment to manage your dependencies. You can learn more about preparing your computer for dlt in our installation reference.

1. Install dlt

First you need to install the dlt library with the correct extras for PostgreSQL:

pip install "dlt[postgres]"
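
You can quickly verify the installation from Python (a sanity check; dlt exposes its version string):

import dlt

# confirm dlt is importable and print the installed version
print(dlt.__version__)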

The dlt cli has a useful command to get you started with any combination of source and destination. For this example, we want to load data from MongoDB to PostgreSQL. You can run the following commands to create a starting point for loading data from MongoDB to PostgreSQL:

# create a new directory
mkdir my-mongodb-pipeline
cd my-mongodb-pipeline
# initialize a new pipeline with your source and destination
dlt init mongodb postgres
# install the required dependencies
pip install -r requirements.txt

The last command will install the required dependencies for your pipeline. The dependencies are listed in the requirements.txt:

pymongo>=4.3.3
dlt[postgres]>=0.3.5

You now have the following folder structure in your project:

my-mongodb-pipeline/
├── .dlt/
│   ├── config.toml         # configs for your pipeline
│   └── secrets.toml        # secrets for your pipeline
├── mongodb/                # folder with source specific files
│   └── ...
├── mongodb_pipeline.py     # your main pipeline script
├── requirements.txt        # dependencies for your pipeline
└── .gitignore              # ignore files for git (not required)

2. Configuring your source and destination credentials

The dlt cli will have created a .dlt directory in your project folder. This directory contains a config.toml file and a secrets.toml file that you can use to configure your pipeline:

config.toml

# put your configuration values here

[runtime]
log_level="WARNING" # the system log level of dlt
# use the dlthub_telemetry setting to enable/disable anonymous usage data reporting, see https://dlthub.com/docs/telemetry
dlthub_telemetry = true

secrets.toml

# put your secret values and credentials here. do not share this file and do not push it to github

[sources.mongodb]
connection_url = "connection_url" # please set me up!

[destination.postgres.credentials]
database = "database" # please set me up!
password = "password" # please set me up!
username = "username" # please set me up!
host = "host" # please set me up!
port = 5432
connect_timeout = 15
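
dlt can also read every one of these values from environment variables: the variable name mirrors the TOML path, upper-cased and joined with double underscores. This is useful in CI/CD and serverless deployments where you cannot ship a secrets.toml file. A minimal sketch (all values are placeholders for your own credentials):

import os

# equivalent of the secrets.toml entries above, set via environment variables
os.environ["SOURCES__MONGODB__CONNECTION_URL"] = "mongodb+srv://user:password@cluster.example.net"
os.environ["DESTINATION__POSTGRES__CREDENTIALS__DATABASE"] = "database"
os.environ["DESTINATION__POSTGRES__CREDENTIALS__USERNAME"] = "username"
os.environ["DESTINATION__POSTGRES__CREDENTIALS__PASSWORD"] = "password"
os.environ["DESTINATION__POSTGRES__CREDENTIALS__HOST"] = "host"
os.environ["DESTINATION__POSTGRES__CREDENTIALS__PORT"] = "5432"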
Further help setting up your source and destination

Please consult the detailed setup instructions for the PostgreSQL destination in the dlt destinations documentation.

Likewise, you can find the setup instructions for the MongoDB source in the dlt verified sources documentation.

3. Running your pipeline for the first time

The dlt cli has also created a main pipeline script for you at mongodb_pipeline.py, as well as a folder mongodb that contains additional Python files for your source. These files are your local copies, which you can modify to fit your needs. In some cases you may only need to make small changes to your pipeline or add some configuration; in other cases these files can serve as a working starting point for your code, but will need to be adjusted to do what you need them to do.

The main pipeline script will look something like this:

from typing import List

import dlt
from dlt.common import pendulum
from dlt.common.pipeline import LoadInfo
from dlt.common.typing import TDataItems
from dlt.pipeline.pipeline import Pipeline

# As this pipeline can be run as a standalone script or as part of the tests,
# we need to handle the import differently.
try:
    from .mongodb import mongodb, mongodb_collection  # type: ignore
except ImportError:
    from mongodb import mongodb, mongodb_collection


def load_select_collection_db(pipeline: Pipeline = None) -> LoadInfo:
    """Use the mongodb source to reflect an entire database schema and load select tables from it.

    This example sources data from the sample mongo database [mongodb-sample-dataset](https://github.com/neelabalan/mongodb-sample-dataset).
    """
    if pipeline is None:
        # Create a pipeline
        pipeline = dlt.pipeline(
            pipeline_name="local_mongo",
            destination="postgres",
            dataset_name="mongo_select",
        )

    # Configure the source to load a few select collections incrementally
    mflix = mongodb(incremental=dlt.sources.incremental("date")).with_resources(
        "comments"
    )

    # Run the pipeline. The merge write disposition merges existing rows in the destination by primary key
    info = pipeline.run(mflix, write_disposition="merge")

    return info


def load_select_collection_db_items(parallel: bool = False) -> TDataItems:
    """Get the items from a mongo collection in parallel or not and return a list of records"""
    comments = mongodb(
        incremental=dlt.sources.incremental("date"), parallel=parallel
    ).with_resources("comments")
    return list(comments)


def load_select_collection_db_filtered(pipeline: Pipeline = None) -> LoadInfo:
    """Use the mongodb source to load a single collection incrementally, starting from a given initial date.

    This example sources data from the sample mongo database [mongodb-sample-dataset](https://github.com/neelabalan/mongodb-sample-dataset).
    """
    if pipeline is None:
        # Create a pipeline
        pipeline = dlt.pipeline(
            pipeline_name="local_mongo",
            destination="postgres",
            dataset_name="mongo_select_incremental",
        )

    # Configure the source to load the movies collection incrementally,
    # starting at the given initial_value
    movies = mongodb_collection(
        collection="movies",
        incremental=dlt.sources.incremental(
            "lastupdated", initial_value=pendulum.DateTime(2016, 1, 1, 0, 0, 0)
        ),
    )

    # Run the pipeline. The merge write disposition merges existing rows in the destination by primary key
    info = pipeline.run(movies, write_disposition="merge")

    return info


def load_select_collection_hint_db(pipeline: Pipeline = None) -> LoadInfo:
    """Use the mongodb source to load a collection, applying an incremental hint at runtime.

    This example sources data from the sample mongo database [mongodb-sample-dataset](https://github.com/neelabalan/mongodb-sample-dataset).
    """
    if pipeline is None:
        # Create a pipeline
        pipeline = dlt.pipeline(
            pipeline_name="local_mongo",
            destination="postgres",
            dataset_name="mongo_select_hint",
        )

    # Load a table incrementally with append write disposition
    # this is good when a table only has new rows inserted, but not updated
    airbnb = mongodb().with_resources("listingsAndReviews")
    airbnb.listingsAndReviews.apply_hints(
        incremental=dlt.sources.incremental("last_scraped")
    )

    info = pipeline.run(airbnb, write_disposition="append")

    return info


def load_entire_database(pipeline: Pipeline = None) -> LoadInfo:
    """Use the mongo source to completely load all collections in a database"""
    if pipeline is None:
        # Create a pipeline
        pipeline = dlt.pipeline(
            pipeline_name="local_mongo",
            destination="postgres",
            dataset_name="mongo_database",
        )

    # By default the mongo source reflects all collections in the database
    source = mongodb()

    # Run the pipeline. For a large db this may take a while
    info = pipeline.run(source, write_disposition="replace")

    return info


if __name__ == "__main__":
    # Load selected tables with different settings
    print(load_select_collection_db())
    # print(load_select_collection_db_filtered())

    # Load all tables from the database.
    # Warning: The sample database is large
    # print(load_entire_database())

Provided you have set up your credentials, you can run your pipeline like a regular Python script with the following command:

python mongodb_pipeline.py

4. Inspecting your load result

You can now inspect the state of your pipeline with the dlt cli:

dlt pipeline local_mongo info
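
You can also attach to the pipeline from Python and query the loaded tables directly. A minimal sketch using dlt's sql client (the table name assumes you loaded the comments collection as in the script above):

import dlt

# attach to the pipeline by name to inspect its state
pipeline = dlt.attach(pipeline_name="local_mongo")
print(pipeline.dataset_name)

# query the data loaded into PostgreSQL through dlt's sql client
with pipeline.sql_client() as client:
    rows = client.execute_sql("SELECT count(*) FROM comments")
    print(rows)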

You can also use streamlit to inspect the contents of your PostgreSQL destination:

# install streamlit
pip install streamlit
# run the streamlit app for your pipeline with the dlt cli:
dlt pipeline local_mongo show

5. Next steps to get your pipeline running in production

One of the beauties of dlt is that it is just a plain Python library, so you can run your pipeline in any environment that supports Python >= 3.8. We have a couple of helpers and guides in our docs to get you there:

The Deploy section will show you how to deploy your pipeline:

  • Deploy with GitHub Actions: dlt can be deployed using GitHub Actions. This is a CI/CD runner that is basically free to use. You need to specify when the GitHub Action should run using a cron schedule expression. The command also takes additional flags: --run-on-push (default is False) and --run-manually (default is True). Learn more here
  • Deploy with Airflow: dlt can be deployed with Airflow. This process involves creating an Airflow DAG for your pipeline script that you should customize. The DAG uses the dlt Airflow wrapper to make this process trivial. Learn more here
  • Deploy with Google Cloud Functions: dlt can be deployed with Google Cloud Functions. This is a serverless execution environment for building and connecting cloud services. With Cloud Functions you write simple, single-purpose functions that are attached to events emitted from your cloud infrastructure and services. Learn more here
  • Other Deployment Options: There are other ways to deploy dlt as well. These include deploying with AWS Lambda, Google Cloud Run, and more. Learn more here

The running in production section will teach you about:

  • Monitor Your Pipeline: dlt provides a comprehensive set of tools for monitoring your data pipeline. You can inspect and save load info, trace runtime, and even alert on schema changes; a minimal sketch follows this list. For more details, visit How to Monitor your pipeline.
  • Set Up Alerts: With dlt, you can set up alerts to notify you of any changes or issues in your data pipeline. This feature allows you to stay on top of your pipeline's health and address any problems promptly. Learn more on Set up alerts.
  • Set Up Tracing: dlt allows you to trace the runtime of your data pipeline. This feature provides valuable insights into the performance of your pipeline and can help you identify areas for optimization. Check out Set up tracing for more information.
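
As referenced in the first bullet above, here is a minimal sketch of inspecting the load info and run trace after a run (the _load_info table name is just an example):

import dlt

pipeline = dlt.pipeline(
    pipeline_name="local_mongo", destination="postgres", dataset_name="mongo_select"
)
load_info = pipeline.run([{"id": 1}], table_name="example")

# fail loudly in schedulers or CI if any load job did not complete
load_info.raise_on_failed_jobs()

# the trace records timings for the extract, normalize and load steps
print(pipeline.last_trace)

# optionally persist the load info in the destination itself for auditing
pipeline.run([load_info], table_name="_load_info")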

