Learn how our customers transform legacy systems, cut costs and empower their teams to achieve data democracy.

Data Consultancy
Tasman Analytics, a ~20-person data analytics consultancy, uses dltHub Pro to prototype client connectors in real-time — scoping in minutes instead of weeks — and shift from time-and-materials to fixed-price projects.
Read the case study
Renewable energy
Vandebron, a Dutch green-energy provider, rebuilt its complex ingestion stack in just one week using dlt, cutting costs, code, and runtime dramatically.
Read the case study
Retail
A solo data team upskills into more advanced data engineering and finds a robust, reliable solution to data ingestion, building an “enterprise-grade” data operation.
Read the case study

“dlt has enabled me to completely rewrite all of our core SaaS service pipelines in 2 weeks and have data pipelines in production with full confidence. We also achieved data democracy for our data platform. Our product, business, and operation teams can independently satisfy a majority of their data needs through no-code self-service. The teams built multi-touch attribution for how Harness acquires customers, and models for how Harness customers utilize licenses. If the teams want to build anything else to push the company forward, they don't need to wait for permission or data access to do it.”

Alex Butler
Senior Data Engineer at Harness

AdTech
Learn how Remerge moved away from manual spreadsheets by centralizing their data, creating a reliable single source of truth.
Read the case study
Fine Art
Artsy transforms their 10-year-old legacy system into a streamlined, customizable solution, dramatically reducing data extraction times.
Read the case study
“It was easy for our data and application teams to adopt dlt. It follows good engineering practices, has open source code, and its documentation allows our engineers to self-serve and solve most problems. With the help of dlt, we were able to reduce the costs of some of our pipelines by up to 50%.”
Florian Stefan
Flatiron Health

Healthcare and Life Science
Learn how Flatiron Health cut the cost of their ingestion and transformation pipelines by 50% using dlt (data load tool).
Read the case study
Health Insurance
Dentolo transforms its data ingestion process, empowers the team with a composable data stack and democratizes data access across the organization.
Read the case study
Software Development
PostHog builds a scalable, customizable data warehouse that seamlessly handles large datasets, and empowers their team to deliver a flexible and high-performing solution for users.
Read the case study

“Data democracy for our product, business, and operation teams means that they can independently satisfy a majority of their data needs through no-code self-service. If the teams want to build anything else to push the company forward, they don't need to wait for permission or data access to do it. All kinds of new reporting is being done that wasn't possible before.”
Alex Butler
Harness

Software Development
Harness chooses dlt (data load tool) + sqlmesh to create an end-to-end next generation data platform.
Read the case study
Financial
How Taktile uses dlt (data load tool) + Snowflake for custom data needs and empowers all software engineers.
Read the case study
“We at Untitled Data Company create new data pipelines for our customers all the time, and we are now using dlt's new REST API toolkit in consulting projects. The toolkit allows us to build data pipelines in very little time and with very little code. This is great for us as well as our customers, since anyone on their end who knows Python can easily maintain the pipelines.”

Willi Müller
Co-Founder at Untitled Data Company
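The REST API toolkit mentioned above is declarative: a pipeline is described as a Python config dict rather than hand-written extraction code. A minimal sketch of what such a config might look like is below — the base URL, endpoint names, and dataset name are illustrative assumptions, not details from the case study.

```python
# Hypothetical declarative config in the style of dlt's REST API source.
# All endpoint names and URLs here are made up for illustration.
config = {
    "client": {
        "base_url": "https://api.example.com/v1/",
    },
    "resources": [
        # A simple resource: fetches GET /customers with defaults.
        "customers",
        # A resource with explicit endpoint settings and query params.
        {
            "name": "invoices",
            "endpoint": {
                "path": "invoices",
                "params": {"status": "open"},
            },
        },
    ],
}

# With dlt installed (`pip install "dlt[duckdb]"`), this config becomes
# a runnable pipeline along these lines:
#
#   import dlt
#   from dlt.sources.rest_api import rest_api_source
#
#   pipeline = dlt.pipeline(destination="duckdb", dataset_name="example_data")
#   pipeline.run(rest_api_source(config))
```

Because the pipeline is just a dict plus a few lines of Python, maintaining it requires only basic Python knowledge — which is the point the quote makes about customers maintaining pipelines themselves.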
“The current machine learning revolution has been enabled by the Cambrian explosion of Python open-source tools that have become so accessible that a wide range of practitioners can use them. As a simple-to-use Python library, dlt is the first tool that this new wave of people can use. By leveraging this library, we can extend the machine learning revolution into enterprise data.”

Julien Chaumond
CTO/Co-Founder at Hugging Face
“Python and machine learning under security constraints are key to our success. We found that our cloud ETL provider could not meet our needs. dlt is a lightweight yet powerful open source tool we can run together with Snowflake. Our event streaming and batch data loading performs at scale and low cost. Now anyone who knows Python can self-serve to fulfil their data needs.”

Maximilian Eber
CPTO & Co-Founder at Taktile

“Our tests have proven that dlt meets our requirements regarding performance, customization, and data privacy. It fits easily into our existing hosting and security infrastructure, making the production rollout cost-effective. The team behind dlt has a culture of supporting enterprise customers, and we were able to get help and advice quickly.”
Erling Brandvik
Sparebank 1 SR-Bank
“I am not a data engineer. But with dltHub, I was able to add Continue's top GitHub contributors to our website. The whole path from creating a dlt pipeline with an MCP, fixing data quality issues, and getting it to run was so seamless that I didn't need any help from infra colleagues.”

Brian Douglas
Head of Developer Experience at Continue, Former Director of Developer Advocacy at GitHub