
What is dlt+ Project?

  • Adrian Brudaru, Co-Founder & CDO

Why It Matters

Data teams face three major challenges when building and maintaining pipelines:

  1. Dev-to-prod inconsistencies – Environments behave differently, leading to unexpected failures.
  2. Onboarding friction – New developers spend weeks ramping up due to unclear documentation and scattered configurations.
  3. Collaboration bottlenecks – Without a standardized approach, pipelines evolve chaotically, making teamwork inefficient.

dlt+ Project solves these problems by making pipeline development declarative, transparent, and team-friendly. It enables teams to work faster, smarter, and with less friction by providing a single, authoritative manifest that unifies pipeline definition, deployment, and orchestration.

What dlt+ Project Is For

dlt+ Project is built for teams managing data pipelines at scale. It’s ideal for:

  • Data engineers who need structured, repeatable workflows.
  • Analysts & non-Python developers who want to collaborate without deep coding expertise.
  • Organizations requiring seamless multi-environment support for dev-to-prod workflows with engine-agnostic execution.
  • Teams creating modular, reusable data products that can be packaged and shared effortlessly.

What dlt+ Project Does

At its core, dlt+ Project provides a declarative configuration layer through a single manifest file, dlt.yml. This file acts as the single source of truth for your entire pipeline:

  • Declarative YAML Configuration: Define your pipelines, sources, destinations, and transformation logic in a clear and consistent format. For example:
dlt.yml
project:
  name: sales_pipeline
  version: 1.0.0

profiles:
  dev:
    destination: duckdb
    sources: [google_ads, stripe]
  prod:
    destination: bigquery
    sources: [google_ads, stripe]

pipelines:
  sales_reporting:
    source: google_ads
    destination: bigquery
    dataset_name: sales_analytics
    write_disposition: append

This structure makes your pipelines self-documenting and easy to understand at a glance.

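For readers who want the imperative equivalent, here is roughly the same pipeline in plain dlt Python code (a minimal sketch: the google_ads resource below is a stand-in that yields one sample row, not the real Google Ads source):
python
import dlt

@dlt.resource(write_disposition="append")
def google_ads():
    # Stand-in for a real Google Ads source: yields a single sample row.
    yield {"campaign_id": 1, "clicks": 42}

pipeline = dlt.pipeline(
    pipeline_name="sales_reporting",
    destination="bigquery",
    dataset_name="sales_analytics",
)
load_info = pipeline.run(google_ads())
print(load_info)

Each YAML key above (pipeline name, destination, dataset_name, write_disposition) maps onto one of these Python arguments; dlt+ Project lifts them into configuration.
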
  • Single Source of Truth: With every configuration, transformation, and environment setting centralised in dlt.yml, teams never lose track of what’s deployed where. This alignment drastically reduces dev-to-prod discrepancies and onboarding delays.
  • Enhanced Collaboration: Because pipelines are defined in YAML, not just Python, everyone on the team, from data engineers to business analysts, can contribute. This democratises pipeline development and fosters cross-functional teamwork.
  • Seamless Dev-to-Prod Transitions: dlt+ Project integrates effortlessly with dlt+ Cache, letting you run local tests and then switch profiles to execute in production. For example:
shell
dlt pipeline run --profile dev
dlt pipeline run --profile prod

With no manual reconfiguration required, you get consistent execution across environments.
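
Conceptually, switching profiles re-runs the same pipeline code with different settings. A rough approximation in plain dlt Python, using an environment variable in place of the dlt+ profile mechanism (PIPELINE_PROFILE is illustrative, not a dlt+ feature):
python
import os
import dlt

# Illustrative stand-in for dlt+ profiles: pick the destination per environment.
profile = os.getenv("PIPELINE_PROFILE", "dev")
destination = "duckdb" if profile == "dev" else "bigquery"

pipeline = dlt.pipeline(
    pipeline_name="sales_reporting",
    destination=destination,
    dataset_name="sales_analytics",
)
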
How It Works: From Dev to Prod

Imagine this simple flow:

  1. Define Your Pipeline: Write your pipeline’s configuration in dlt.yml.
  2. Local Testing & Development: Use dlt+ Cache to run local transformations and validate your pipeline without incurring cloud costs.
  3. Seamless Deployment: Switch profiles (dev vs. prod) and deploy your pipeline consistently across environments.

This unified approach means your pipeline is always documented, version-controlled, and ready for collaboration.
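
To make step 2 concrete, a minimal local smoke test might look like this (a sketch: the inline rows stand in for a real source, and the assertion only checks that the load succeeded):
python
import dlt

def test_sales_reporting_locally():
    # Run the pipeline against local DuckDB before switching to a cloud destination.
    pipeline = dlt.pipeline(
        pipeline_name="sales_reporting_test",
        destination="duckdb",
        dataset_name="sales_analytics",
    )
    # Inline rows stand in for the real google_ads source.
    info = pipeline.run(
        [{"campaign_id": 1, "clicks": 42}],
        table_name="google_ads",
    )
    assert not info.has_failed_jobs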

The Bigger Picture

Beyond defining pipelines, dlt+ Project serves as a manifest for packaging, deployment, and orchestration. It adheres to a standard Python project layout, making it easy to package, distribute, and manage your data workflows. Whether you’re distributing via PyPI or managing your code in Git, dlt+ Project provides the structure and consistency that modern data teams need.
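
One plausible layout, with directory names that are illustrative rather than prescribed by dlt+:
text
sales_pipeline/
├── dlt.yml            # the project manifest
├── pyproject.toml     # standard Python packaging metadata
├── sources/           # custom source code
└── tests/             # local validation, e.g. against DuckDB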

The project can be packaged as a pip-installable Python package, enabling, for example, an analyst to access it through a local catalog in notebooks. The broader team can then develop in local environments as well, with metadata such as compliance and security rules governed end to end.
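
For instance, once the package is installed and a pipeline has run, an analyst could inspect the loaded data from a notebook using plain dlt (assuming a recent dlt version where pipeline.dataset() is available):
python
import dlt

# Attach to the existing pipeline by name and read from its destination.
pipeline = dlt.pipeline(pipeline_name="sales_reporting")
dataset = pipeline.dataset()    # read-only view of the loaded dataset
df = dataset.google_ads.df()    # materialise the google_ads table as a DataFrame
print(df.head())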

Use Cases: How Teams Benefit

  • Faster Onboarding: New team members can start contributing immediately with a clear, unified pipeline configuration.
  • Effortless Scaling Across Environments: Develop locally and then push to production without worrying about vendor lock-in.

Next Steps

Ready to streamline your data team’s development and collaboration?