dltHub
Blog /

I tracked the Iran-USA conflict, oil prices, and Bitcoin — without a data team

  • Roshni Melwani,
    Working Student

The Strait of Hormuz was all over the news. Blockade threats, ceasefire talks, oil prices swinging. I’m not a finance person — but I was already playing with the dltHub AI Workbench, a toolkit that lets you build data pipelines by just describing what you want to an AI agent.

News volume, oil prices, and Bitcoin: a random combo, I know. But the whole point of the workbench is that it costs you an afternoon, not a data team. So if that's what I was curious about, why not?

So I opened Claude Code, pointed it at the workbench README, and started describing what I wanted.

What you need

The workbench works with Claude Code, Cursor, and Codex. For Claude Code, type claude in your terminal to get started. If that’s new to you, check the Claude Code docs or watch a quick tutorial first. For Cursor or Codex, check their own docs.

Then paste this link: dltHub AI Workbench — your agent reads the README and walks you through setup from there.

What I wanted to build

I knew I wanted three data streams: news volume, oil prices, bitcoin prices. I’d already figured out which APIs to use by asking Claude in the normal chat app. It pointed me to GDELT for news and Alpha Vantage for market data.
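GDELT's DOC 2.0 API is just an HTTP endpoint, which is part of why it's a good fit for this kind of project. As a rough sketch of what a query looks like (the search term and parameter values here are my illustration, not the exact query the agent generated):

```python
from urllib.parse import urlencode

# GDELT DOC 2.0 API: full-text news search over a rolling window.
# Query term and timespan below are illustrative.
BASE = "https://api.gdeltproject.org/api/v2/doc/doc"

def gdelt_article_url(query: str, timespan: str = "90d", max_records: int = 250) -> str:
    """Build a GDELT article-list request URL."""
    params = {
        "query": query,
        "mode": "ArtList",        # individual articles, not timeline aggregates
        "format": "json",
        "timespan": timespan,     # how far back to search
        "maxrecords": max_records,
    }
    return f"{BASE}?{urlencode(params)}"

url = gdelt_article_url('"Strait of Hormuz"')
print(url)
```

Fetching that URL returns a JSON list of matching articles; no API key required, which is why the only key in this project is Alpha Vantage's.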

Then I brought that plan to the workbench agent:

The dashboard

Here’s the marimo notebook as it stood on Apr 18, 2026:

View dashboard (Apr 18, 2026)

The clearest signal occurs on Apr 8: the ceasefire announcement collapses the 'war premium' on oil by 16%, and Bitcoin briefly dips in a 'sell-the-news' reaction before decoupling and beginning a steady rise.

The news volume spike tells the rest of the story: on Apr 12–13, the ceasefire collapses and a Hormuz blockade is threatened. 45 articles a day, the highest in the 90-day window.

Put all three on one timeline and the picture gets clearer.

Over the full 90 days, WTI is up ~90% and Bitcoin is up ~65%.
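Oil trades in the tens of dollars and Bitcoin in the tens of thousands, so to overlay them the dashboard has to put both on a comparable scale. A common way to do that is to index each series to 100 at the start of the window; this is my sketch of the idea, not necessarily what the generated notebook does, and the numbers are invented:

```python
def rebase(prices: list[float], base: float = 100.0) -> list[float]:
    """Index a price series to `base` at its first observation,
    so series with very different scales can share one axis."""
    first = prices[0]
    return [base * p / first for p in prices]

wti = [60.0, 75.0, 114.0]  # invented sample, not real quotes
print(rebase(wti))          # ends at 190.0, i.e. ~+90% over the window
```

Once every series starts at 100, "up ~90%" and "up ~65%" read directly off the chart as the gap above the baseline.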

Worth noting: I’m on Alpha Vantage’s free tier so the last few days of price data are missing.

The session

After I pointed Claude at the README, it understood the workbench setup and walked me through it. And then I said yes a lot :)

Claude laid out the three sources, confirmed the plan, and started building. It ran the find-source skill and looked up what GDELT’s API actually looked like, what endpoints existed, what the response structure was. It then scaffolded the pipeline.

The first real decision: “For GDELT, do you want article list or timeline sentiment?” I picked article list — individual articles with titles and dates, more granular.
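In article-list mode, GDELT nests results under an `articles` key, with fields like `title`, `url`, `seendate`, and `domain` on each record. A small normalizer for that shape might look like this (the sample record below is invented for illustration):

```python
# Flatten GDELT ArtList JSON into rows ready for loading.
def extract_articles(response: dict) -> list[dict]:
    rows = []
    for art in response.get("articles", []):
        rows.append({
            "title": art.get("title"),
            "url": art.get("url"),
            "seendate": art.get("seendate"),  # e.g. "20260412T090000Z"
            "domain": art.get("domain"),
        })
    return rows

sample = {"articles": [{"title": "Hormuz blockade threatened",
                        "url": "https://example.com/a",
                        "seendate": "20260412T090000Z",
                        "domain": "example.com"}]}
rows = extract_articles(sample)
print(rows[0]["title"])
```

The granularity is the point: per-article rows are what let the dashboard count articles per day later.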

For Alpha Vantage it needed an API key. It told me where to get one; I grabbed it, and then it used the workspace MCP server to set up the secrets file. It specifically told me not to paste the key in chat, just to drop it into the file, which was the right call. Nice one, Claude ;)
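dlt reads credentials from a `.dlt/secrets.toml` file that stays out of version control, which is why dropping the key there beats pasting it in chat. The section name below is my guess at how the generated pipeline scoped its config, not the exact file the agent wrote:

```toml
# .dlt/secrets.toml — kept local, never committed or pasted in chat.
# Section name is illustrative; it depends on how the source is named.
[sources.alpha_vantage]
api_key = "YOUR_ALPHA_VANTAGE_KEY"
```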

When GDELT returned an error (timespan too short), Claude handed off to the debug-pipeline skill automatically. It read the error, tested the right minimum timespan against the live API, fixed the code, re-ran. I didn’t debug anything.

When the pipelines finished I checked the summary it showed me: GDELT only had 3 articles. I flagged it; Claude explained the 1-hour rolling window, suggested two fixes, re-ran. 252 articles.

That kind of catch-and-fix happened naturally, like pair programming with someone who actually reads the output.

Then came the dashboard. Claude kicked off the explore-data skill. It profiled all three tables, came back with a plan. It proposed 4 charts. I pushed back: “not sure this is meaningful, give me more options.”

It came back with 10, grouped by what question they answer. I picked all of them. At the end I asked if it had any other ideas. It queried the data via the dlt MCP server and suggested 4 more. I added all of them.

One thing to watch: GDELT’s coverage is uneven. I almost kept the “coverage by country” chart that made India look uninterested in the Iran-USA conflict. Claude explained: GDELT just doesn’t index Indian regional press well. Good reminder to understand your source before you trust the chart. I removed it.

Once I confirmed the charts, Claude ran build-notebook — assembled the marimo dashboard, validated it, installed dependencies, and launched it.

What the data shows and what surprised me about the process

In the triple overlay, as news volume surged around the Hormuz blockade, Bitcoin climbed alongside it while oil fell. They're not moving together.

The part that actually surprised me was deployment. Data engineers know how long it takes to get from local to production — different environments, secrets that don't match, Airflow DAGs, cloud setup. Here I just asked Claude if it could schedule the pipelines to run daily.

It used the dlthub-runtime toolkit from the workbench and walked me through it in the same session. It also flagged something I hadn't thought about: DuckDB is local, so a scheduled pipeline running in the cloud can't write to it. It asked if I had MotherDuck and walked me through getting a token.
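The DuckDB-to-MotherDuck switch is mostly a config change in dlt: the pipeline's destination becomes `motherduck` and the token goes into the secrets file. This is a sketch based on my reading of dlt's MotherDuck destination docs, not the exact config the agent produced:

```toml
# .dlt/secrets.toml — pointing the scheduled pipeline at MotherDuck
# instead of a local DuckDB file. Database name is illustrative;
# the MotherDuck service token goes in the password field.
[destination.motherduck.credentials]
database = "dlt_data"
password = "YOUR_MOTHERDUCK_TOKEN"
```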

The only thing I had to do was click the link it gave me to log in to the dltHub Runtime. Ten minutes later the pipelines were deployed too, which is pretty damn impressive.

The whole thing: two pipelines, 13 charts, a working dashboard, deployed — in one afternoon.

Try it yourself

Open Claude Code. Give it the workbench README. It tells you exactly what to do next.

What question would you answer if deploying a pipeline took an afternoon?