From Singer to simplicity: Why data teams choose dlt
- Adrian Brudaru, Co-Founder & CDO
Most data teams just want pipelines that work, not frameworks that fight back. Singer was Stitch's clever growth hack. Meltano patched the gaps… but never changed the core abstraction. dlt skipped the complexity and made data pipelines feel like Python again.
Singer was Stitch's approach to creating an open data extraction standard. Meltano completed what Stitch never intended to fully open source. dlt learned from both and built the right abstraction for data teams from day one.
The real story: Singer was never meant to be that free
Stitch's difficult-to-run open source (2017)
Singer established an important standard for data extraction, but Stitch's open-source strategy was deliberate: release the specification, let the community build connectors, and keep the operational complexity proprietary. Stitch never released the production tooling: no orchestration, state management, deployment, scaling, or monitoring. The plan was clear: the community builds connectors, then pays Stitch to run them. In their Fivetran vs Stitch comparison, Stitch even highlighted as a competitive advantage that you could hire anyone to build connectors for you, not just the Stitch or Fivetran teams. And so an ecosystem of agencies developed.
However, for the business intelligence managers and data engineers of the time, Singer was intellectually interesting but practically useless without significant engineering investment.
Meltano completed what Stitch wouldn't (2018-2024)
Meltano deserves huge credit here. GitLab saw Singer's potential and built Meltano to add the missing pieces: orchestration, state management, deployment tooling. They essentially completed what Stitch should have open-sourced in the first place, making Singer proper OSS and usable by all.
But Meltano kept Singer's core design: it was built by software engineers for software engineers. Data teams still had to learn framework patterns, project structures, and configuration management to solve basic data problems.
dlt: Doing it right from the start (2023)
We built dlt around how data people actually work: Python libraries, not object-oriented frameworks. Import it, use it, and get production-ready pipelines without learning new paradigms. The concept was clear: make the data person a first-class citizen, empowering them to contribute without requiring them to become someone else.
The intent of a data engineer looking for pipeline tools
The fundamental issue comes from the intent of a user finding Singer or Meltano. User intent means understanding the user's underlying objective when they find a product. For a data person, that intent looks like:
- I am looking for source X to destination Y
- If I cannot find it, I will build or buy it.
- If I build it with a tool, it should be easier than doing it from scratch.
Now, to validate that a Singer tap and target work, a user would have to run them both, which in essence means:
- A user has to understand how Singer taps work, review whether the tap actually works (check its Git issues), and confirm it covers their needs (endpoints).
- A user has to understand how to run a tap and target together. These components usually cannot be run together on their own and require Meltano.
- Now the user has to understand how to run Meltano plus the tap and target.
- Finally, the user probably has a Python orchestrator they want to run the code in, so now they have to figure out how to run Meltano under their orchestrator. (The moving parts are sketched below.)
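To make that meta-work concrete, here is a minimal sketch of what "just running" a tap and target involves. The connector names are illustrative; the stdout-to-stdin piping and the --config/--catalog flags follow the Singer spec, and persisting state for incremental runs is left entirely to you:

import subprocess

# illustrative names: any Singer tap/target pair works the same way
tap = subprocess.Popen(
    ["tap-example", "--config", "tap_config.json", "--catalog", "catalog.json"],
    stdout=subprocess.PIPE,
)
target = subprocess.Popen(
    ["target-example", "--config", "target_config.json"],
    stdin=tap.stdout,
    stdout=subprocess.PIPE,
)
tap.stdout.close()  # let the tap get SIGPIPE if the target dies

# per the Singer spec, the target emits final state on stdout;
# storing it so the next run can resume is your problem
state = target.communicate()[0]
with open("state.json", "wb") as f:
    f.write(state)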
Rolling your own source with Meltano? Again, this is harder than vanilla Python (a boilerplate sketch follows the list):
- Learn the concepts
- Write an extractor
- Write tap framework code around the extractor: schemas, configs, etc.
- Package the extractor into a Tap.
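For contrast, a hedged sketch of that framework code using singer-sdk (the SDK behind modern Meltano taps). The stream here is trivial, yet it still needs a declared schema and two framework classes before anything runs; names are illustrative and real taps need considerably more:

from singer_sdk import Stream, Tap

class UsersStream(Stream):
    name = "users"
    # a JSON schema must be declared up front, even for a trivial stream
    schema = {"type": "object", "properties": {"id": {"type": "integer"}, "name": {"type": "string"}}}

    def get_records(self, context):
        # the actual extractor: two lines, buried in framework code
        yield {"id": 1, "name": "alice"}
        yield {"id": 2, "name": "bob"}

class TapUsers(Tap):
    name = "tap-users"

    def discover_streams(self):
        return [UsersStream(self)]

if __name__ == "__main__":
    TapUsers.cli()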
That's a lot of "meta-work" just to see if this code works for my case! At this point, the data person has to decide: do I want to invest time in understanding a framework that may or may not solve my problem? My original intent was to get data from X to Y, so I probably don't want to do all this other stuff before even knowing whether it will help.
Now, apply the same journey to a user finding dlt:
If I find a dlt source, I can just run it (it's Python) and see if it works. I can load it locally or print the data. Roll my own? This is easier than building with vanilla Python:
- Write an extractor.
- Pass it to a loading function: pipeline.run(extractor()) and go.
- Need to do more? Easy: add config arguments with a Python decorator. See the sketch below.
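A minimal sketch of that journey, loading to a local duckdb file (names are illustrative; dlt with the duckdb extra is assumed installed):

import dlt

@dlt.resource(table_name="users")
def users():
    # yield plain dicts (or pages of them); dlt infers and evolves the schema
    yield [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]

# duckdb keeps the demo local; swap the destination string for production
pipeline = dlt.pipeline(pipeline_name="demo", destination="duckdb", dataset_name="demo_data")
info = pipeline.run(users())
print(info)  # row counts, load ids, destination info

# need configuration? add arguments with injected defaults, e.g.
#   def users(api_key: str = dlt.secrets.value): ...
# and dlt resolves them from secrets.toml or environment variables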
The comparison that matters
Finally, here's a comparison table on the criteria that matter.

Singer loses on every dimension that matters to data teams. For someone who thinks in terms of Python, pandas, and SQL, dlt feels natural because it works the way pandas works: you import it, call its functions, and it handles the complex stuff behind the scenes without changing how you approach the problem.
Why we built dlt
We didn’t build dlt to compete with Meltano. We built it because Singer wasn’t the right fit for data teams, and Meltano couldn’t change that base abstraction.
Data integration for data people should feel like using pandas: import a library, call functions, get production-ready results. No frameworks, no project structures, no configuration-management overhead.
dlt gives you production-ready data pipelines using the Python you already know, plugged into the project structures you already use. No learning curve, no framework lock-in, no operational overhead.
If you can write pd.read_csv(), you can build production data pipelines with dlt. If you can debug Python, you can debug dlt. If you can deploy Python, you can scale dlt.
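A hedged sketch of exactly that claim, with illustrative file, pipeline, and table names:

import dlt
import pandas as pd

# if you can write this line, the rest is the whole pipeline
df = pd.read_csv("users.csv")

pipeline = dlt.pipeline(pipeline_name="csv_demo", destination="duckdb", dataset_name="raw")
# to_dict(orient="records") hands dlt an iterable of plain dicts
info = pipeline.run(df.to_dict(orient="records"), table_name="users")
print(info)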
Skip the framework: Start building better pipelines today
Get started in 5 minutes
Try dlt instantly: Open our Colab notebook and build a working pipeline without installing anything.
Build a real pipeline: Follow our quickstart using your actual data sources.
The 30-minute test
Build the same integration with both tools. Start with dlt (15 minutes), then try Meltano. Ask yourself:
- Which would you rather debug at 3 AM?
- Which would be easier to hand off to a new team member?
- Which approach scales better with your team's growth?
Most teams know the answer immediately.
Join 3,000+ data teams using dlt
Companies are building better data products faster with dlt's library approach: 10x faster development cycles, easier maintenance, better performance.
Try dlt now and see why we built the tool data teams actually need.
Stop fighting frameworks. Start solving problems.