Blog
- •
- Tutorials
Turn your documentation into a queryable knowledge graph for high retrieval accuracy and low hallucinations
- Hiba Jamal,
Working Student
Using dlt + Cognee, we take API docs from Slack, PayPal, and TicketMaster and build a knowledge graph.
- •
- Tutorials,
- Community
How I went from “I’ll never build a pipeline” to doing it in an hour with Cursor
- Roshni Melwani,
Working Student
A developer takes Alena’s dlt course, then uses AI to build a WHOOP sleep-data pipeline that saves the data to Parquet, demonstrating that beginners can master pipelines quickly.
- •
- Product,
- Industry
We've been using LanceDB to make AI development smoother
- Adrian Brudaru,
Co-Founder & CDO
We've been using LanceDB for months at dltHub to build AI systems more quickly. The same setup works locally and in the cloud, and it handles structured and vector data in one place.
- •
- Industry,
- Product
Building Engine-Agnostic Data Stacks
- Adrian Brudaru,
Co-Founder & CDO
Mixing Spark, DuckDB, and Snowflake? Iceberg decouples data, Ibis decouples logic: run your analytics anywhere, without rewrites or vendor lock-in.
- •
- Industry,
- Product
Iceberg-First Ingestion: How Taktile cut 70% of costs
- Adrian Brudaru,
Co-Founder & CDO
Taktile cut 70% of data loading costs by shifting ingestion to Iceberg via Lambda + dlt, keeping Snowflake for analytics. Smart layers, big savings.
- •
- Industry,
- Product
From Singer to simplicity: Why Data Teams choose dlt.
- Adrian Brudaru,
Co-Founder & CDO
Singer was Stitch's incomplete competitive response to Fivetran. Meltano completed what Stitch never intended to fully open source. dlt learned from both and built a fitting abstraction for Pythonic data teams.
- •
- Industry,
- Product
Fivetran vs dlt: Quickstart vs Endgame
- Adrian Brudaru,
Co-Founder & CDO
A side-by-side look at Fivetran and dlt, covering cost models, customization, and how each approach affects team workflows as your data needs evolve.
- •
- Tutorials
The cost of REST API integrations: How AI + dlt is finally making it bearable
- Aman Gupta,
Jr. Data Engineer
REST API integrations come with hidden costs: pagination, schema drift, rate limits. With dlt + Cursor, you skip the boilerplate and build pipelines in minutes, not days. Less code. Less chaos. More time to build.
- •
- Community
Materializing Multi-Asset REST API Sources with dlt, Dagster, and DuckDB
- Jairus Martinez,
Analytics Engineer at Brooklyn Data Company
A hands-on guide to combining dlt and Dagster for orchestrating multi-endpoint API ingestion pipelines, with assets materialized into DuckDB. Three patterns. One powerful workflow. Plus, a peek at the new CLI and DuckDB UI.
- •
- Product
Breaking free from SQL: A Normie's guide to portable data pipelines
- Adrian Brudaru,
Co-Founder & CDO
Data engineering shouldn't require rewriting the same logic multiple times for different environments. dlt's dataset interface gives you one consistent way to work with your data, regardless of where it lives.
- •
- Updates
What’s new in dlt for Databricks: built-in staging, zero-config notebooks, no headaches
- Aman Gupta,
Jr. Data Engineer
Ingesting to Databricks should be simple. With dlt, it finally is. No config files, no staging, just Python and go.
- •
- Product
Vibe Coding: Why Building Data Pipelines with LLMs Actually Works
- Adrian Brudaru,
Co-Founder & CDO
Vibe coding so clean, it will make your old code look bad.
- •
- Community
Julian Alves and dlt: when expertise meets simplicity
- Adrian Brudaru,
Co-Founder & CDO
Julian Alves builds reliable, simple data infrastructure. He partners with dlt to help companies create systems that deliver value, not burden.
- •
- Updates
Celebrating our 3,000th OSS dlt customer as dlt’s momentum accelerates
- Matthaus Krzykowski,
Co-Founder & CEO
dlt has grown from 1,000 to over 3,000 open-source users in just six months, with monthly downloads surpassing 1.4 million. This momentum reflects a growing demand for Python-native, modular, and AI-ready data tools — and dlt is building exactly that.
- •
- Updates
What’s next for dlt in 2025: a simpler solution for solving complex problems
- Marcin Rudolf,
Co-Founder & CTO
dlt started as a tool for handling JSON documents. It was meant for the average Python user who does not want to deal with creating and evolving schemas, SQL models, backends, and the data engineers who control them.
- •
- Engineering
The future's Re-Composable: Converting Connectors Between Solutions with LLMs
- Adrian Brudaru,
Co-Founder & CDO
Let's stop reinventing connectors in isolation. Use LLMs to transform scattered integrations into shared, reusable solutions.
- •
- Community
Fabric + dlt, Course and Explorations
- Adrian Brudaru,
Co-Founder & CDO
As Rakesh was exploring Fabric, dlt kept showing up in Rakesh's stack. Not by design, but because it just worked. Different projects, same ingestion layer, quietly doing its job.
- •
- Tutorials
AI built the Pipeline, I plugged the leaks
- Adrian Brudaru,
Co-Founder & CDO
I tried Vibe-coding a Singer tap (Pipedrive) into dlt and it worked, but it needed some user intervention.
- •
- Community
How to run dlt with Airflow (Or any other Python thing)
- Francesco Mucio,
Untitled Data Company
Explore four ways to run dlt with Apache Airflow, from PythonOperators to KubernetesPods, and learn which setup scales best for clean, reliable pipelines.
- •
- Tutorials
Towards a Benchmark for AI-Generated Data Pipelines
- Adrian Brudaru,
Co-Founder & CDO
Building pipelines with AI isn’t one task, it's many. In this post, we explore how to split and test them individually, so failures are easier to diagnose and fix.
- •
- Engineering
Are you moving the right data? Write. Audit. Publish. (WAP)
- Aman Gupta,
Jr. Data Engineer
The Write. Audit. Publish. (WAP) framework brings discipline from software engineering: write in isolation, audit for correctness, quality, and compliance, publish with confidence. But can data engineering really follow suit? Let's discuss.
- •
- Tutorials
Erase tech debt from data loading with dlt + Cursor + LLMs
- Adrian Brudaru,
Co-Founder & CDO
Modernisation at its finest: from trash to cutting edge in seconds. It works amazingly well, just give it a try and stop paying for tech debt.
- •
- Tutorials
From Airbyte YAML to Scalable Python Pipelines with dlt (dltHub), Cursor and LLMs
- Adrian Brudaru,
Co-Founder & CDO
In this microblog + video we explore generating Python pipelines (dlt REST API) from an Airbyte low-code YAML spec. tl;dr: it works well.
- •
- Tutorials
Query your API’s data with SQL, no data warehouse needed
- Aman Gupta,
Jr. Data Engineer
Want to run SELECT * on your API data? dlt datasets let you query API data using SQL without setting up a database or data warehouse. They follow the Write-Audit-Publish (WAP) pattern, enabling direct SQL queries while keeping workflows efficient.
- •
- Community
Why Iceberg + Python is the Future of Open Data Lakes
- Adrian Brudaru,
Co-Founder & CDO
Data lakes are broken. Python + Iceberg fixes them. No lock-in. No silos. Just open, AI-ready data. Read on for why and how to switch.
- •
- Product
Deep dive: Our initial assistants and model context protocol (MCP) on the Continue Hub
- Matthaus Krzykowski,
Co-Founder & CEO
We do a deep dive on the initial assistants and model context protocol (MCP) that we published on the Continue Hub.
- •
- Product
Integrating CI/CD Practices into Data Engineering
- Adrian Brudaru,
Co-Founder & CDO
Software Engineering Has CI/CD, Data Engineering Has YOLO – Until Now
- •
- Updates
An early path to making compound AI systems work for data engineering
- Matthaus Krzykowski,
Co-Founder & CEO
Today we announce our partnership with Continue and the release of our initial two assistants, including one that lets developers chat with the dlt documentation from the IDE and pass it to the LLM to help them write dlt code. Developers can also access building blocks that allow them to build their own custom assistants. In this post we want to talk about:
- why we think SaaS connector catalog black-box solutions have been a dead end for LLMs
- what we have been doing so far to build compound AI systems for data engineering
- our vision for a dlt+ data infrastructure that generates trusted data and will unlock additional data engineering assistants and building blocks in the future
- •
- Product
Data Engineers, stop testing in production!
- Adrian Brudaru,
Co-Founder & CDO
Software engineers don’t test in production. Why are data engineers still doing it? ELT made loading easy, but debugging in the warehouse is a nightmare. dlt+ Staging fixes that.
- •
- Updates
We are releasing dlt+ Project & Cache in early access
- Matthaus Krzykowski,
Co-Founder & CEO
We take the next step in our recent journey from dlt to dlt+ by releasing the initial two features of dlt+, our developer framework for running dlt pipelines in production and at scale:
- dlt+ Project: A declarative YAML collaboration point for your team.
- dlt+ Cache: A database-like portable compute layer for developing, testing, and running transformations before loading.
- •
- Product
What is dlt+ Cache?
- Adrian Brudaru,
Co-Founder & CDO
Discover how dlt+ Cache gives data engineers a lightning-fast staging environment to test, validate, and debug transformations before they hit production!
- •
- Product
What is dlt+ Project?
- Adrian Brudaru,
Co-Founder & CDO
Introducing dlt+ Project – the declarative, YAML-powered manifest that transforms data pipeline development!
- •
- Community,
- Tutorials
Stop writing SQL models and let your data pipeline do it automatically
- Aman Gupta,
Jr. Data Engineer
This post discusses the `sqlmesh init -t dlt` command, which integrates dlt’s metadata with SQLMesh’s modeling capabilities. It automatically generates SQL models that correctly handle incremental processing and schema changes. Inspired by David SJ's post, the approach is demonstrated using the Bluesky API, transforming raw data into structured tables without writing any SQL.
- •
- Tutorials
Real-time Data Replication with Debezium and Python
- Ismail Simsek,
Senior Data Engineer
When it comes to replicating operational data for analytics, Change Data Capture (CDC) is the gold standard. It offers scalability, near real-time performance, and captures all data modifications, ensuring your analytical datasets are always up-to-date.
- •
- Product
From Commoditization to Democratization: Building the Data Platforms of Tomorrow
- Adrian Brudaru,
Co-Founder & CDO
Moving data isn’t hard because engineers lack skill. It’s hard because commoditised systems bog us down with complexity disguised as simplicity.
- •
- Product
Unified Data Access with dlt, from local to cloud
- Adrian Brudaru,
Co-Founder & CDO
Tired of juggling multiple tools and formats? Discover how a single interface can simplify how you access, transform, and share your data, no matter where it lives.
- •
- Engineering
Somewhere Between Data Democracy and Data Anarchy ⚔️
- Hiba Jamal,
Working Student
Data democracy is a beautiful thing. People are more empowered, less dependent, and unblocked in terms of data curiosity... However, what breaks this utopian dream is when big curious ideas, several undocumented pipelines (perhaps with the same data), and conflicting dashboards cause confusion and indecision.
- •
- Community
How dltHub consulting partner Mooncoon speeds up complex dlt pipeline development 2x with Cursor
- Marcel Coetzee,
Data and AI Plumber at Mooncoon, an analytics and data agency.
AI + dlt = 2x faster pipelines. Mooncoon 🦝 shares how Cursor IDE transforms pipeline dev. AI handles boilerplate; you ship faster. Practical workflows & a live demo inside.
- •
- Community
2024 dlt Recap: Moments, Mentions, and Milestones
- Adrian Brudaru,
Co-Founder & CDO - Aman Gupta,
Jr. Data Engineer
2024 was a remarkable year for dltHub. Together with our users and partners, we streamlined workflows, introduced powerful capabilities, and laid a stronger foundation for the future.
- •
- Updates
Our main benefits from the silver consulting partnership with dltHub: Help with complex integrations, co-development of tools we use, increased revenue
- Adrian Brudaru,
Co-Founder & CDO
If you are a data engineering consultant or run a data-focused consultancy and want to do more with less, consider joining our partner program.
- •
- Updates
Announcing Our Consulting Partnerships Program
- Adrian Brudaru,
Co-Founder & CDO
dltHub is community-driven in partnerships too, featuring an everybody-wins model that optimises client satisfaction.
If you're excited about being part of a collaborative ecosystem that amplifies everyone's strengths while delivering exceptional value to clients, we want to hear from you.
- •
- Community
10x data engineer with dlt+ and Tower: A Taktile Case Study
- Adrian Brudaru,
Co-Founder & CDO
With dlt+ and Tower, anyone who writes a bit of Python can ship production data pipelines in under an hour. Fast, open, and headache-free, this is the future of data engineering.
- •
- Engineering
Cross-Organisational data mesh as a requirement in decentralised energy infrastructure
- Adrian Brudaru,
Co-Founder & CDO
Europe’s Energiewende data challenge: decentralised, cross-organisational data mesh and environment portability as baseline requirements.
- •
- Community
Shift YOURSELF Left
- Adrian Brudaru,
Co-Founder & CDO
Data changes, let's just accept that. So how do you get in the change loop when the "left" department just won't add you in? Simple: Get yourself in the loop. Instead of shifting the responsibility to the team on the left, shift your ownership left.
- •
- Tutorials
Self-hosted tools benchmarking
- Aman Gupta,
Jr. Data Engineer
SQL is key in data analysis, especially where production databases are used. We benchmarked Meltano, Airbyte, dlt, and Sling.
- •
- Community
cognee: Scalable Memory Layer for AI Applications
- Vasilije Markovic,
Founder Topoteretes
cognee is an open-source, scalable semantic layer for AI applications. You can now use modular ECL pipelines to connect data and reduce hallucinations.
- •
- Engineering
SQL Benchmarking: comparing data pipeline tools
- Aman Gupta,
Jr. Data Engineer
Transferring data from SQL databases to data warehouses like BigQuery, Redshift, and Snowflake is an important part of modern data workflows. With various tools available, how do you choose the right one for your needs? We conducted a detailed benchmark test to answer this question, comparing popular tools like Fivetran, Stitch, Airbyte, and the data load tool (dlt).
- •
- Product
Semantic data contracts
- Adrian Brudaru,
Co-Founder & CDO
Data mesh or governance is simplified when using a semantic data contract instead of a governance API.
- •
- Product
Portability principle: The path to vendor-agnostic Data Platforms
- Adrian Brudaru,
Co-Founder & CDO
The ecosystem’s current progress towards breaking vendor lock-in is best described as “incomplete”. By creating a portable data lake as a framework whose components are vendor-agnostic, we can take advantage of new developments quickly.
- •
- Product,
- Community
Harness builds an end to end data platform with dlt + SQLMesh
How Harness chose dlt + SQLMesh to create an end-to-end, next-generation data platform.
- •
- Product
Metadata as Glue: A dlt-dbt generator
- Adrian Brudaru,
Co-Founder & CDO
Imagine you go to a burger place and order a cheeseburger. They hand you a paper bag containing the following items:
A package of ready-to-bake flour (just add water). A raw beef patty. A slice of cheese. A head of lettuce, a tomato, and an onion. A packet of ketchup and mustard.
Technically, you have everything needed to make a cheeseburger. This scenario mirrors the current state of the modern data stack.
- •
- Product,
- Community
dlt-SQLMesh generator: A case of metadata handover
- Adrian Brudaru,
Co-Founder & CDO
Most tools interact only through the database layer, treating it as a universal translator. However, this approach is limited because it doesn't capture the rich metadata that could enable more intelligent data processing and integration. Without end-to-end metadata flow, the promise of a cohesive data pipeline remains unfulfilled.
- •
- Product
Portable data lake: A development environment for data lakes
- Adrian Brudaru,
Co-Founder & CDO
What if we had a portable data lake? A pip install data platform...
- •
- Tutorials
A guide on how to migrate your Hubspot data pipeline from Fivetran to dlt
- Aman Gupta,
Jr. Data Engineer
This guide details the migration of HubSpot data pipelines from Fivetran to the open-source dlt, highlighting dlt's cost-efficiency, speed, and customization capabilities. It provides a step-by-step transition process and strategies for unifying data sources, empowering organizations to optimize their data infrastructure for better control and scalability.
- •
- Updates
Celebrating 1,000 dlt OSS customers in production
- Matthaus Krzykowski,
Co-Founder & CEO
Earlier today, Marcin announced the release of dlt version 1.0.0, marking a significant milestone in its evolution into a stable, production-ready library for data movement.
- •
- Product,
- Updates
Introducing dlt 1.0.0: A Production-Ready Python Library for Data Movement
- Marcin Rudolf,
Co-Founder & CTO
We are excited to announce the release of dlt version 1.0.0, a major milestone marking the library’s maturity and readiness for production use. After months of hard work and development, this update integrates key use cases directly into the core library (code for database syncs, files, the REST API toolkit and an SQLAlchemy destination), making dlt more powerful than ever.
- •
- Tutorials
Migrate your SQL data pipeline from Airbyte to dlt
- Aman Gupta,
Jr. Data Engineer
In this post, we explore how to migrate your SQL data pipeline from Airbyte to dlt, an open-source solution that offers greater control, speed, and cost-efficiency. If you're ready to take your data strategy to the next level, this guide will show you how to make the switch. Dive in and start your journey.
- •
- Tutorials
Migrate your SQL data pipeline from Stitch Data to dlt
- Aman Gupta,
Jr. Data Engineer
In this post, we explore how to migrate your SQL data pipeline from Stitch Data to dlt, an open-source solution that offers greater control, speed, and cost-efficiency. If you're ready to take your data strategy to the next level, this guide will show you how to make the switch. Dive in and start your journey.
- •
- Tutorials
Migrate your SQL data pipeline from Fivetran to dlt
- Aman Gupta,
Jr. Data Engineer
In this post, we explore how to migrate your SQL data pipeline from Fivetran to dlt, an open-source solution that offers greater control, speed, and cost-efficiency. If you're ready to take your data strategy to the next level, this guide will show you how to make the switch. Dive in and start your journey.
- •
- Community
RAG playground: Build your own RAG bot
- Adrian Brudaru,
Co-Founder & CDO
We recently held a workshop at Data Talks Club - LLM Zoomcamp on building a Retrieval-Augmented Generation (RAG) system. The session covered loading data from Notion into LanceDB, creating a RAG Bot with Ollama, and interacting with it through practical examples. Below is a summary of the key resources, tools, and steps from the workshop.
- •
- Product
Standardizing Ingestion and its metadata for compliant Data Platforms
- Adrian Brudaru,
Co-Founder & CDO
💡 What if we had some magic documentation about our data sources, before even filling any tables? What if we could profile source data based on source info, or in flight before loading? And what if we had a single way of doing it that’s not dependent on the storage solution?
Read on to find out how.
- •
- Engineering
Data Platform Engineers: The Game-Changers of data teams
- Adrian Brudaru,
Co-Founder & CDO
Imagine a workplace where data governance and access, documentation, and infra are seamlessly integrated, empowering every decision. This is the norm for companies with Data Platform Engineers.
Read on to understand who they are, where they come from, and how they can help you.
- •
- Community
How dlt uses Apache Arrow
- Jorrit Sandbrink,
Open Source Software Engineer
- •
- Tutorials
Syncing Google Forms data with Notion using dlt
- Aman Gupta,
Jr. Data Engineer
Hello, I'm Aman, and I assist the dltHub team with various data-related tasks. In a recent project, the Operations team needed to gather information through Google Forms and integrate it into a Notion database. Initially, they tried using the Zapier connector as a quick and cost-effective solution, but it didn’t work as expected.
- •
- Tutorials
Slowly Changing Dimension Type 2: Explanation and code
- Aman Gupta,
Jr. Data Engineer
- •
- Product
From Pandas to Production: How we built dlt as the right ELT tool for Normies
- Adrian Brudaru,
Co-Founder & CDO
- •
- Product,
- Updates
Instant pipelines with dlt-init-openapi
- Adrian Brudaru,
Co-Founder & CDO
- •
- Tutorials
How I contributed my first data pipeline to open source
- Aman Gupta,
Jr. Data Engineer
- •
- Updates,
- Tutorials,
- Engineering
Announcing: REST API Source toolkit from dltHub - A Python-only high level approach to pipelines
- Adrian Brudaru,
Co-Founder & CDO
- •
- Tutorials
On Orchestrators: You Are All Right, But You Are All Wrong Too
- Anuun Chinbat,
Junior Software Engineer
- •
- Tutorials,
- Updates
Replacing SaaS ETL with Python dlt: A painless experience for Yummy.eu
- Adrian Brudaru,
Co-Founder & CDO
- •
- Product
Portable, embeddable ETL - what if pipelines could run anywhere?
- Adrian Brudaru,
Co-Founder & CDO
- •
- Product
The Second Data Warehouse, aka the "disaster recovery" project
- Adrian Brudaru,
Co-Founder & CDO
- •
- Tutorials,
- Updates
Shift Left Data Democracy: the link between democracy, governance, data contracts and data mesh.
- Adrian Brudaru,
Co-Founder & CDO
- •
- Product
Yes code ELT: dlt makes easy things easy, and hard things possible
- Adrian Brudaru,
Co-Founder & CDO
- •
- Tutorials
What is so smart about smart dashboarding tools?
- Hiba Jamal,
Working Student
- •
- Tutorials,
- Updates,
- Product
dlt adds Reverse ETL - build a custom destination in minutes
- Adrian Brudaru,
Co-Founder & CDO
- •
- Product
Coding data pipelines is faster than renting connector catalogs
- Matthaus Krzykowski,
Co-Founder & CEO
- •
- Tutorials
Moving away from Segment to a cost-effective do-it-yourself event streaming pipeline with Cloud Pub/Sub and dlt.
- Zaeem Athar,
Jr. Data Engineer
- •
- Tutorials,
- Product
Saving 75% of work for a Chargebee Custom Source via pipeline code generation with dlt
- Adrian Brudaru,
Co-Founder & CDO - Violetta Mishechkina,
Head of Solutions Engineering
- •
- Updates
PyAirbyte - what it is and what it’s not
- Adrian Brudaru,
Co-Founder & CDO
- •
- Product
Single pane of glass for pipelines running on various orchestrators
- Adrian Brudaru,
Co-Founder & CDO
- •
- Tutorials
API playground: Free APIs for personal data projects
- Adrian Brudaru,
Co-Founder & CDO
- •
- Tutorials,
- Updates
dlt & dbt in Semantic Modelling
- Hiba Jamal,
Working Student
- •
- Tutorials
Comparing dbt-core and the dlt-dbt runner on Google Cloud Functions
- Aman Gupta,
Jr. Data Engineer
- •
- Tutorials
The Modern Data Stack with dlt & Mode
- Hiba Jamal,
Working Student
- •
- Community,
- Tutorials
Streaming Pub/Sub JSON to Cloud SQL PostgreSQL on GCP
- William Laroche,
GCP Cloud Architect, Backend and Data Engineer
- •
- Community,
- Tutorials
Why Taktile runs dlt on AWS Lambda to process millions of daily tracking events
- Simon Bumm,
Data and Analytics Lead at Taktile
- •
- Tutorials,
- Updates
From Inbox to Insights: AI-enhanced email analysis with dlt and Kestra
- Anuun Chinbat,
Junior Software Engineer
- •
- Tutorials
Exploring data replication of SAP HANA to Snowflake using dlt
- Rahul Joshi,
Jr. Solutions Engineer
- •
- Tutorials,
- Updates
Data Lineage using dlt and dbt
- Zaeem Athar,
Jr. Data Engineer
- •
- Tutorials
Deploy Google Cloud Functions as webhooks to capture event-based data from GitHub, Slack, or Hubspot
- Aman Gupta,
Jr. Data Engineer
- •
- Product
Solving data ingestion for Python coders
- Adrian Brudaru,
Co-Founder & CDO
- •
- Tutorials
Orchestrating an unstructured data pipeline with Dagster and dlt
- Zaeem Athar,
Jr. Data Engineer
- •
- Tutorials
Semantic Modeling Capabilities of Power BI, GoodData & Metabase: A Comparison
- Hiba Jamal,
Working Student
- •
- Community
Building resilient pipelines in minutes with dlt + Prefect
- •
- Tutorials
dlt & Deepnote in women's wellness and violence trends: A Visual Analysis
- Hiba Jamal,
Working Student
- •
- Engineering
Get 30x speedups when reading databases with ConnectorX + Arrow + dlt
- Marcin Rudolf,
Co-Founder & CTO