
Michael, dlt, and the art of unbreakable API pipelines

  • Adrian Brudaru,
    Co-Founder & CDO

Built another pipeline just to keep a dashboard alive? Then it broke because the API changed?

Michael Shoemaker, a seasoned data analyst and the creative mind behind Data Slinger on YouTube, focuses on practical, hands-on tutorials that cover a wide range of data engineering topics. In two tutorials, he shows how to build API pipelines with dlt that handle schema changes automatically, no weekly patching, no drama.

Why does Michael like dlt?

He likes it because it turns schema changes into zero-touch upkeep: point it at your API, it infers tables, merges by primary key, and quietly absorbs new fields as they appear. No migrations, no downtime, no panic.

Here’s what Michael actually did, step-by-step.

Tutorial 1: Daily weather data pipeline

Michael builds a pipeline that fetches daily weather data and loads it into BigQuery.

What he used:

  • OpenWeather API for daily data
  • dlt for extraction and loading
  • GCP Cloud Functions for serverless orchestration
  • BigQuery as the warehouse

A Cloud Scheduler job kicks off the function each morning. dlt fetches fresh data, infers the schema, merges new columns automatically, and loads the results into BigQuery.

💡 dlt even auto-creates your BigQuery dataset and keeps your API keys safe with Secret Manager.

If you want a clean, production-ready pattern for ingesting API data daily with minimal config, this tutorial is a solid starting point.

Tutorial 2: BigQuery schema evolution without tears

Then the OpenWeather API adds new fields (wind direction, weather description, you know, the usual chaos). Instead of a manual migration mess, Michael:

  • updates his transform function
  • redeploys the Cloud Function
  • lets dlt add the new columns and merge updates on the primary key (dt), with zero duplication

No errors, no breakage: the pipeline keeps running smoothly as the schema evolves.

Bonus: If you ever forget to push your Cloud Function code to GitHub, he shows how to pull it back from GCS without missing a beat.

Why this matters

These aren’t one-off hacks. They’re pipelines you actually need: pulling API data, handling schema changes, and deploying in the cloud.

Michael shows how to build them cleanly, without getting stuck writing boilerplate. Even when the schema shifts or a data type suddenly isn’t what you expected, dlt does the heavy lifting. No surprise migrations. No duct tape.

If data pipelines ever land on your desk, these videos will save you time and headaches.

Ready to build?

“Zero-touch pipeline? You’ll have it running before your coffee’s half gone.”

👉 Watch Michael’s setup and schema evolution videos. Made it this far? You’ll like the dltHub educational newsletter: it’s where the good stuff keeps coming.