
4 posts tagged with "full code etl"


· 9 min read
Adrian Brudaru

What is the REST API Source toolkit?

tip

tl;dr: You are probably familiar with REST APIs.

  • Our new REST API Source is a short, declarative configuration driven way of creating sources.
  • Our new REST API Client is a collection of Python helpers used by the above source, which you can also use as a standalone, config-free, imperative high-level abstraction for building pipelines.

Want to skip to docs? Links at the bottom of the post.

Why a REST configuration pipeline? Obviously, we need one!

But of course! Why repeatedly write all this code for requests and loading, when we could write it once and re-use it with different APIs and different configs?

Once you have built a few pipelines from REST APIs, you recognise that, instead of writing code, we could simply write configuration.

We can call such an obvious next step in ETL tools a “focal point” of “convergent evolution”.

And if you’ve been in a few larger, more mature companies, you will have seen a variety of home-grown solutions that look similar. You might also have seen such solutions as commercial products or offerings.

But ours will be better…

So far we have seen many REST API configurators and products — they suffer from predictable flaws:

  • Local homebrewed flavors are local for a reason: they aren’t suitable for a broad audience. And if you ask the users and beneficiaries of these frameworks, they will sometimes argue that they aren’t suitable for anyone at all.
  • Commercial products are yet another data product that doesn’t plug into your stack, brings black boxes and removes autonomy, so they simply aren’t an acceptable solution in many cases.

So how can dlt do better?

Because it can keep the best of both worlds: the autonomy of a library, the quality of a commercial product.

As you will see further on, we created not just a standalone “configuration-based source builder”; we also expose the underlying REST API client, enabling its use directly in code.

Hey community, you made us do it!

The push for this is coming from you, the community. While we had considered the concept before, there were many things dlt needed before creating a new way to build pipelines. A declarative extractor, after all, would not make dlt easier to adopt, because a declarative approach requires more upfront knowledge.

Credits:

  • So, thank you Alex Butler for building a first version of this and donating it to us back in August ‘23: https://github.com/dlt-hub/dlt-init-openapi/pull/2.
  • And thank you Francesco Mucio and Willi Müller for re-opening the topic, and creating video tutorials.
  • And last but not least, thank you to the dlt team’s Anton Burnashev (also known for the gspread library) for building it out!

The outcome? Two Python-only interfaces, one declarative, one imperative.

  • dlt’s REST API Source is a Python-dictionary-first declarative source builder. It offers enhanced flexibility, supports passing callables, validates configs natively via Python dictionaries, and composes directly in your scripts. It can generate sources dynamically at runtime, enabling straightforward manual or automated workflows for adapting sources to changes.
  • dlt’s REST API Client is the low-level abstraction that powers the REST API Source. You can use it in your imperative code for more automation and brevity, if you do not wish to use the higher level declarative interface.

Useful for those who frequently build new pipelines

If you are on a team with 2-3 pipelines that never change much, you likely won’t see much benefit from our latest tool. What we observe from early feedback is that a declarative extractor shines at enabling easier work at scale. We heard excitement about the REST API Source from:

  • companies with many pipelines that frequently create new pipelines,
  • data platform teams,
  • freelancers and agencies,
  • folks who want to generate pipelines with LLMs and need a simple interface.

How to use the REST API Source?

Since this is a declarative interface, we can’t make things up as we go along; instead, we need to understand upfront what we want to do and declare it.

In some cases, we might not have the information upfront, so we will show you how to get that info during your development workflow.

Depending on how you learn best, you can either watch the videos that our community members made, or follow the walkthrough below.

Video walkthroughs

In these videos, you will learn at a leisurely pace how to use the new interface. Playlist link.

Workflow walkthrough: Step by step

If you prefer to go at your own pace, try the walkthrough below, which shows the workflow of using the declarative interface step by step.

In the example below, we will show how to create an API integration with 2 endpoints. One of these is a child resource, using the data from the parent endpoint to make a new request.

Configuration Checklist: Before getting started

In the following, we will use the GitHub API as an example.

We will also provide links to examples from this Google Colab tutorial.

  1. Collect your API URL and endpoints, Colab example:

    • A URL is the base of the request, for example: https://api.github.com/.
    • An endpoint is the path of an individual resource such as:
      • /repos/{OWNER}/{REPO}/issues;
      • or /repos/{OWNER}/{REPO}/issues/{issue_number}/comments which would require the issue number from the above endpoint;
      • or /users/{username}/starred etc.
  2. Identify the authentication methods, Colab example:

  3. Identify whether you have any dependent request patterns, such as: first get IDs in a list, then use each ID to request details. For GitHub, we might do the below or any other dependent requests. Colab example:

    1. Get all repos of an org https://api.github.com/orgs/{org}/repos.
    2. Then get all contributors https://api.github.com/repos/{owner}/{repo}/contributors.
  4. How does pagination work? Is there any? Do we know the exact pattern? Colab example.

    • On GitHub, we have consistent pagination between endpoints that looks like this: link_header = response.headers.get('Link', None).
  5. Identify the necessary information for incremental loading, Colab example:

    • Will any endpoints be loaded incrementally?
    • What columns will you use for incremental extraction and loading?
    • GitHub example: We can extract new issues by requesting issues after a particular time: https://api.github.com/repos/{repo_owner}/{repo_name}/issues?since={since}.

Configuration Checklist: Checking responses during development

  1. Data path: where in the response the records you want actually live.
  2. Unless you had full documentation at point 4 (which we did), you likely still need to figure out some details of how pagination works.
    1. To do that, we suggest using curl or a second Python script to make a request and inspect the response. This gives you the flexibility to try anything (see the sketch below). Colab example.
    2. Or you could print the source as above - but if there is metadata in the headers etc., you might miss it.
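For instance, here is a minimal sketch of such a test request against the GitHub API (plain requests, no dlt involved), inspecting both the response body and the Link header used for pagination:

import requests

# Make a single test request to inspect the response shape and pagination metadata.
response = requests.get(
    "https://api.github.com/repos/dlt-hub/dlt/issues",
    params={"per_page": 2, "state": "open"},
)
response.raise_for_status()

# Where does the data live? For GitHub, it is the top-level JSON list.
issues = response.json()
print(type(issues), len(issues))
if issues:
    print(issues[0].keys())

# How does pagination work? GitHub exposes it via the Link header.
link_header = response.headers.get("Link", None)
print(link_header)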

Applying the configuration

Here’s what a configured example could look like:

  1. Base URL and endpoints.
  2. Authentication.
  3. Pagination.
  4. Incremental configuration.
  5. Dependent resource (child) configuration.

If you are using a narrow screen, scroll the snippet below to look for the numbers designating each component (n).

# This source has 2 resources:
# - issues: Parent resource, retrieves issues incl. issue number
# - issue_comments: Child resource which needs the issue number from the parent.

import os
from rest_api import RESTAPIConfig

github_config: RESTAPIConfig = {
    "client": {
        "base_url": "https://api.github.com/repos/dlt-hub/dlt/",  #(1)
        # Optional auth for improving rate limits
        # "auth": {  #(2)
        #     "token": os.environ.get('GITHUB_TOKEN'),
        # },
    },
    # The paginator is autodetected, but we can pass it explicitly  #(3)
    # "paginator": {
    #     "type": "header_link",
    #     "next_url_path": "paging.link",
    # }
    # We can declare generic settings in one place
    # Our data is stateful so we load it incrementally by merging on id
    "resource_defaults": {
        "primary_key": "id",  #(4)
        "write_disposition": "merge",  #(4)
        # these are request params specific to GitHub
        "endpoint": {
            "params": {
                "per_page": 10,
            },
        },
    },
    "resources": [
        # This is the first resource - issues
        {
            "name": "issues",
            "endpoint": {
                "path": "issues",  #(1)
                "params": {
                    "sort": "updated",
                    "direction": "desc",
                    "state": "open",
                    "since": {
                        "type": "incremental",  #(4)
                        "cursor_path": "updated_at",  #(4)
                        "initial_value": "2024-01-25T11:21:28Z",  #(4)
                    },
                },
            },
        },
        # Configuration for fetching comments on issues  #(5)
        # This is a child resource - as in, it needs something from another resource
        {
            "name": "issue_comments",
            "endpoint": {
                "path": "issues/{issue_number}/comments",  #(1)
                # For child resources, you can use values from the parent resource for params.
                "params": {
                    "issue_number": {
                        # Use type "resolve" to resolve this param from the parent resource
                        "type": "resolve",
                        # Parent endpoint
                        "resource": "issues",
                        # The specific field in the issues resource to use for resolution
                        "field": "number",
                    },
                },
            },
            # A list of fields, from the parent resource, which will be included in the child resource output.
            "include_from_parent": ["id"],
        },
    ],
}
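To actually run this configuration, the config dictionary is passed to the source constructor and handed to a regular dlt pipeline. A minimal sketch, assuming the rest_api module also exposes a rest_api_source() constructor (as shown in the dlt docs) and that github_config is defined as above:

import dlt
from rest_api import rest_api_source

# Build the source from the declarative config and load it into DuckDB.
github_source = rest_api_source(github_config)

pipeline = dlt.pipeline(
    pipeline_name="github_rest_api",
    destination="duckdb",
    dataset_name="github_data",
)
load_info = pipeline.run(github_source)
print(load_info)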

And that’s a wrap — what else should you know?

  • As we mentioned, there’s also a REST Client - an imperative way to use the same abstractions, for example, the auto-paginator - check out this runnable snippet:

    from dlt.sources.helpers.rest_client import RESTClient

    # Initialize the RESTClient with the Pokémon API base URL
    client = RESTClient(base_url="https://pokeapi.co/api/v2")

    # Using the paginate method to automatically handle pagination
    for page in client.paginate("/pokemon"):
        print(page)
  • We are going to generate a bunch of sources from OpenAPI specs — stay tuned for an update in a couple of weeks!

Next steps

· 3 min read
Adrian Brudaru

About Yummy.eu

Yummy is a lean-ops meal-kit company that streamlines the entire food preparation process for customers in emerging markets by providing personalized recipes, nutritional guidance, and even shopping services. Their innovative approach ensures a hassle-free, nutritionally optimized meal experience, making daily cooking convenient and enjoyable.

Yummy is a food box business. At the intersection of gastronomy and logistics, this market is very competitive. To make it in this market, Yummy needs to be fast and informed in their operations.

Pipelines are not yet a commodity.

At Yummy, efficiency and timeliness are paramount. Initially, Martin, Yummy’s CTO, chose to purchase data pipelining tools for their operational and analytical needs, aiming to maximize time efficiency. However, the real-world performance of these purchased solutions did not meet expectations, which led to a reassessment of their approach.

What’s important: velocity, reliability, speed, time. Money is secondary.

Martin was initially satisfied with the ease of setup provided by the SaaS services.

The tipping point came when an update to Yummy’s database introduced a new log table, leading to unexpectedly high fees due to the vendor’s default settings that automatically replicated new tables fully on every refresh. This situation highlighted the need for greater control over data management processes and prompted a shift towards more transparent and cost-effective solutions.

10x faster, 182x cheaper with dlt + async + modal

Motivated to find a solution that balanced cost with performance, Martin explored using dlt, a tool known for its simplicity in building data pipelines. By combining dlt with asynchronous operations and using Modal for managed execution, the improvements were substantial:

  • Data processing speed increased tenfold.
  • Cost reduced by 182 times compared to the traditional SaaS tool.
  • The new system supports extracting data once and writing to multiple destinations without additional costs.

For a peek into how Martin implemented this solution, please see Martin's async Postgres source on GitHub.
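For illustration only (this is not Martin's code), here is a minimal sketch of the general pattern: a dlt resource that fetches pages concurrently with asyncio and aiohttp, then yields rows for dlt to consume in micro-batches. The endpoint URL and page layout are hypothetical.

import asyncio
import aiohttp
import dlt

# Hypothetical API endpoint used purely for illustration.
BASE_URL = "https://example.com/api/orders"

@dlt.resource(name="orders", write_disposition="merge", primary_key="id")
def orders(pages: int = 10):
    async def fetch_all():
        async with aiohttp.ClientSession() as session:
            async def fetch_page(page: int):
                async with session.get(BASE_URL, params={"page": page}) as resp:
                    resp.raise_for_status()
                    return await resp.json()
            # Fire off all page requests concurrently.
            return await asyncio.gather(*(fetch_page(p) for p in range(pages)))

    # Yield rows page by page; dlt buffers and loads them in micro-batches.
    for page_rows in asyncio.run(fetch_all()):
        yield from page_rows

pipeline = dlt.pipeline(pipeline_name="async_demo", destination="duckdb", dataset_name="orders")
print(pipeline.run(orders()))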


Taking back control with open source has never been easier

Taking control of your data stack is more accessible than ever with the broad array of open-source tools available. SQL copy pipelines, often seen as a basic utility in data management, do not generally differ significantly between platforms. They perform similar transformations and schema management, making them a commodity available at minimal cost.

SQL to SQL copy pipelines are widespread, yet many service providers charge exorbitant fees for these simple tasks. In contrast, these pipelines can often be set up and run at a fraction of the cost—sometimes just the price of a few coffees.

At dltHub, we advocate for leveraging straightforward, freely available resources to regain control over your data processes and budget effectively.

Setting up a SQL pipeline can take just a few minutes with the right tools. Explore these resources to enhance your data operations:

For additional support or to connect with fellow data professionals, join our community.

· 8 min read
Adrian Brudaru


The versatility that enables "one way to rule them all"... requires a devtool

A unified approach to ETL processes centers around standardization without compromising flexibility. To achieve this, we need to be able to build and run custom code, but also have helpers that let us standardise and simplify our work.

In the data space, we have a few custom code options, some of which are portable. But what is needed to achieve universality and portability is more than just a code standard.

So what do we expect from such a tool?

  • It should be created for our developers.
  • It should be easily pluggable into existing tools and workflows.
  • It should perform across a variety of hardware and environments.

Data teams don't speak Object Oriented Programming (OOP)

Connectors are nice, but when they don't exist or break, what do we do? We need to be able to build and maintain those connectors simply, alongside the rest of our scripts.

The data person has a very mixed spectrum of activities and responsibilities, and programming is often a minor one. Thus, across a data team, while some members can read or even speak OOP, the team will not be able to do so without sacrificing other capabilities.

This means that in order to cater to a data team as a dev team, we need to acknowledge that a different abstraction is needed.

Goodbye OOP, hello @decorators!

Data teams often navigate complex systems and workflows that prioritize functional clarity over object-oriented programming (OOP) principles. They require tools that simplify process definition, enabling quick, readable, and maintainable data transformation and movement. Decorators serve this purpose well, providing a straightforward way to extend functionality without the overhead of class hierarchies and inheritance.

Decorators in Python allow data teams to annotate functions with metadata and operational characteristics, effectively wrapping additional behavior around core logic. This approach aligns with the procedural mindset commonly found in data workflows, where the emphasis is on the transformation steps and data flow rather than the objects that encapsulate them.

By leveraging decorators, data engineers can focus on defining what each part of the ETL process does—extract, transform, load—without delving into the complexities of OOP. This simplification makes the code more accessible to professionals who may not be OOP experts but are deeply involved in the practicalities of data handling and analysis.
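As a concrete, if simplified, illustration of this decorator-first style (a minimal sketch using dlt's own decorator; the names and data are made up): a plain generator function becomes a loadable resource with a single annotation, no classes or inheritance required.

import dlt

# No class hierarchies: a plain generator plus a decorator.
@dlt.resource(name="events", write_disposition="append")
def events():
    # In a real pipeline this would yield rows pulled from an API or a file.
    yield {"id": 1, "type": "signup"}
    yield {"id": 2, "type": "login"}

pipeline = dlt.pipeline(pipeline_name="decorator_demo", destination="duckdb", dataset_name="events_data")
print(pipeline.run(events()))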

The ability to run embedded is more than just scalability

Most traditional ETL frameworks are architected with the assumption of relatively abundant computational resources. This makes sense given the resource-intensive nature of ETL tasks when dealing with massive datasets.

However, this assumption often overlooks the potential for running these processes on smaller, more constrained infrastructures, such as directly embedded within an orchestrator or on edge devices.

The perspective that ETL processes necessarily require large-scale infrastructure is ripe for challenge. In fact, there is a compelling argument to be made for the efficiency and simplicity of executing ETL tasks, particularly web requests for data integration, on smaller systems. This approach can offer significant cost savings and agility, especially when dealing with less intensive data loads or when seeking to maintain a smaller digital footprint.

Small infrastructure ETL runs can be particularly efficient in situations where real-time data processing is not required, or where data volumes are modest. By utilizing the orchestrator's inherent scheduling and management capabilities, one can execute ETL jobs in a leaner, more cost-effective manner. This can be an excellent fit for organizations that have variable data processing needs, where the infrastructure can scale down to match lower demands, thereby avoiding the costs associated with maintaining larger, underutilized systems.

Running on small workers is easier than spinning up infra

Running ETL processes directly on an orchestrator can simplify architecture by reducing the number of moving parts and dependencies. It allows data teams to quickly integrate new data sources and destinations with minimal overhead. This methodology promotes a more agile and responsive data architecture, enabling businesses to adapt more swiftly to changing data requirements.

It's important to recognize that this lean approach won't be suitable for all scenarios, particularly where data volumes are large or where the complexity of transformations requires the robust computational capabilities of larger systems. Nevertheless, for a significant subset of ETL tasks, particularly those involving straightforward data integrations via web requests, running on smaller infrastructures presents an appealing alternative that is both cost-effective and simplifies the overall data processing landscape.

Dealing with spiky loads is easier on highly parallel infras like serverless functions

Serverless functions are particularly adept at managing spiky data loads due to their highly parallel and elastic nature. These platforms automatically scale up to handle bursts of data requests and scale down immediately after processing, ensuring that resources are utilized only when necessary. This dynamic scaling not only improves resource efficiency but also reduces costs, as billing is based on actual usage rather than reserved capacity.

The stateless design of serverless functions allows them to process multiple, independent tasks concurrently. This capability is crucial for handling simultaneous data streams during peak times, facilitating rapid data processing that aligns with sudden increases in load. Each function operates in isolation, mitigating the risk of one process impacting another, which enhances overall system reliability and performance.

Moreover, serverless architectures eliminate the need for ongoing server management and capacity planning. Data engineers can focus solely on the development of ETL logic without concerning themselves with underlying infrastructure issues. This shift away from operational overhead to pure development accelerates deployment cycles and fosters innovation.

Some examples of embedded portability with dlt

Dagster's embedded ETL now supports dlt - enabling devs to do what they love - build.

The "Stop Reinventing Orchestration: Embedded ELT in the Orchestrator" blog post by Pedram from Dagster Labs, introduces the concept of Embedded ELT within an orchestration framework, highlighting the transition in data engineering from bulky, complex systems towards more streamlined, embedded solutions that simplify data ingestion and management. This evolution is seen in the move away from heavy tools like Airbyte or Meltano towards utilizing lightweight, performant libraries which integrate seamlessly into existing orchestration platforms, reducing deployment complexity and operational overhead. This approach leverages the inherent capabilities of orchestration systems to handle concerns typical to data ingestion, such as state management, error handling, and observability, thereby enhancing efficiency and developer experience.

dlt was built for just such a scenario and we are happy to be adopted into it. Besides adding connectors, dlt adds a simple way to build custom pipelines.

Read more about it in the Dagster blog post on dlt.

Dagworks' dlt + duckdb + ibis + Hamilton demo

The DAGWorks Substack post introduces a highly portable, all-library pipeline, leveraging a blend of open-source Python libraries: dlt, Ibis, and Hamilton. This integration exemplifies the trend towards modular, decentralized data systems, where each component specializes in a segment of the data handling process—dlt for extraction and loading, Ibis for transformation, and Hamilton for orchestrating complex data flows. These technologies are not just tools but represent a paradigm shift in data engineering, promoting agility, scalability, and cost-efficiency in deploying serverless microservices.

The post not only highlights the technical prowess of combining these libraries to solve practical problems like message retention and thread summarization on Slack but also delves into the meta aspects of such integrations. It reflects on the broader implications of adopting a lightweight stack that can operate within diverse infrastructures, from cloud environments to embedded systems, underscoring the shift towards interoperability and backend agnosticism in data engineering practices. This approach illustrates a shift in the data landscape, moving from monolithic systems to flexible, adaptive solutions that can meet specific organizational needs without heavy dependencies or extensive infrastructure.

Read more about it in the DAGWorks blog post on dlt.

Closing thoughts

The concepts discussed here—portability, simplicity, and scalability—are central to modern data engineering practices. They reflect a shift towards tools that not only perform well but also integrate seamlessly across different environments, from high-powered servers to minimal infrastructures like edge devices. This shift emphasizes the importance of adaptability in tools used by data teams, catering to a broad spectrum of deployment scenarios without sacrificing performance.

In this landscape, dlt exemplifies the type of tool that embodies these principles. It's not just about being another platform; it's about providing a framework that supports the diverse needs of developers and engineers. dlt's design allows it to be embedded directly within various architectures, enabling teams to implement robust data processes with minimal overhead. This approach reduces complexity and fosters an environment where innovation is not hindered by the constraints of traditional data platforms.

We invite the community to engage with these concepts through dlt, contributing to its evolution and refinement. By participating in this collaborative effort, you can help ensure that the tool remains at the forefront of data engineering technology, providing effective solutions that address the real-world challenges of data management and integration.

Join the conversation and share your insights in our Slack community or contribute directly to the growing list of projects using us. Your expertise can drive the continuous improvement of dlt, shaping it into a tool that not only meets current demands but also anticipates future needs in the data engineering field.

· 8 min read
Adrian Brudaru

The concept of simplicity and automation in a programming language is not new. The Perl scripting language had the motto "Perl makes easy things easy and hard things possible".

The reason for this motto was the difficulty of working with C, which requires more manual handling of resources and also a compilation step.

Perl scripts could be written and executed rapidly, making it ideal for tasks that needed quick development cycles. This ease of use and ability to handle complex tasks without cumbersome syntax made Perl incredibly popular in its heyday.

Perl was introduced as a scripting language that emphasized getting things done. It was created as a practical extraction and reporting tool, which quickly found its place in system administration, web development, and network programming.

History repeats, Python is a language for humans


Python took the philosophy of making programming more accessible and human-friendly even further. Guido van Rossum created Python with the goal of removing the drudgery from coding, choosing to prioritize readability and simplicity. This design philosophy makes Python an intuitive language not just for seasoned developers but for beginners as well. Its syntax is clean and expressive, allowing developers to write fewer lines of code for tasks that would require more in Perl or other languages. Python's extensive standard library, along with its powerful data structures, contribute to its ability to handle complex applications with ease.

Python's widespread adoption across various domains, from web development to data science and machine learning, is largely attributed to its accessibility.

Its simple syntax resembles natural language, which lowers the barrier to entry for programming. Compared to Perl, Python offers an even more organized and readable approach to coding, making it an ideal teaching language that prepares new developers for future challenges in software development.

And just like Perl, it's used for data extraction and visualisation - but now it's done by normie humans, not sysadmins or devs.

dlt makes easy things fast, and hard things accessible

Following the principles of Perl and Python, dlt aimed to simplify the data engineering process. dlt focuses on making the extraction and loading of data as straightforward as possible.

dlt makes easy things fast

Starting from a simple abstraction like pipeline.run(data, table_name="table"), where data can be any iterable such as a generator or dataframe, dlt enables robust loading. Here is what the above function does, so you don't have to.

  • It will (optionally) unpack nested lists into separate tables with generated join keys, and flatten nested dictionaries into the main row.
  • If given a generator, it will consume it via microbatching, buffering to disk or external drives, never running out of memory (customisable).
  • It will create "extract packages" of extracted data, so if the downstream steps fail, it can resume/retry later.
  • It will normalise the data into a shape that naturally fits the database (customisable).
  • It will create "load packages" of normalised data, so if the downstream steps fail, it can retry later.
  • It infers and loads data with the correct data types, for example from ISO timestamp strings (configurable).
  • It accepts different write dispositions declaratively, such as 'append', 'merge' and 'replace'.
  • It will evolve the schema if we later load data with new columns, and it can alert on schema changes.
  • It will even create type variant columns if data types change (and alert if desired).
  • Or you can stop the schema from evolving and use the inferred schema, or a modified one, as a data contract.
  • It will report load packages associated with new columns, enabling column-level lineage to be passed downstream.

That's a lot of development and maintenance pain solved, and that's just dlt at its simplest. You could say the dlt loader doesn't break, as long as it encounters common data types. If an obscure type is in your data, it would need to be added to dlt or converted beforehand.
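For example, here is a minimal sketch of that single abstraction in action, with nested data that dlt unpacks and types for us (the destination and names are just for illustration):

import dlt

# Nested lists become child tables with generated join keys;
# nested dictionaries are flattened into columns of the parent row.
data = [
    {
        "id": 1,
        "title": "first issue",
        "author": {"name": "alice"},
        "created_at": "2024-01-25T11:21:28Z",  # inferred and loaded as a timestamp, not a string
        "comments": [{"body": "+1"}, {"body": "same here"}],
    }
]

pipeline = dlt.pipeline(pipeline_name="quickstart", destination="duckdb", dataset_name="demo")
load_info = pipeline.run(data, table_name="issues", write_disposition="merge", primary_key="id")
print(load_info)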

From robust loading to robust extraction

Building on the simple loading abstraction, dlt is more than a tool for simple things.

The next step in dlt usage is to leverage it for extraction. dlt offers the concepts of 'source' and 'resource'. A resource is the equivalent of a single data source, while a source is the group we put resources into to bundle them for usage.

For example, an API extractor from a single API with multiple endpoints would be built as a source with multiple resources.

Resources enable you to easily configure how the data in that resource is loaded. You can create a resource by decorating a function with the '@resource' decorator, or you can generate resources dynamically.

Examples of dynamic resources

  • If we have an API with multiple endpoints, we can put the endpoints in a list and iterate over it to generate resources.
  • If we have an endpoint that gives us datapoints with different schemas, we could split them by a column in the data.
  • Similarly, if we have a webhook that listens to multiple types of events, it can dispatch each event type to its own table based on the data.
  • Or, if we want to shard a data stream into day-shards, we could append a date suffix to the resource name dynamically.

Once we group resources into a source, we can run them together (or we could still run the resources independently); a short sketch of both patterns follows the list below.

Examples of reasons to group resources into sources:

  • We want to run (load) them together on the same schedule.
  • We want to configure them together or keep their schemas together.
  • They represent a single API and we want to publish them in a coherent, easy-to-use way.
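Here is a minimal sketch of both patterns together: resources generated dynamically from a list of endpoints, then bundled into one source. The API URL and endpoint names are made up for illustration.

import dlt
import requests

# Hypothetical API used purely for illustration.
BASE_URL = "https://example.com/api"

def make_resource(endpoint: str):
    # Each endpoint becomes its own resource and, therefore, its own table.
    @dlt.resource(name=endpoint, write_disposition="replace")
    def _resource():
        response = requests.get(f"{BASE_URL}/{endpoint}")
        response.raise_for_status()
        yield response.json()
    return _resource

@dlt.source
def my_api_source():
    # Generate one resource per endpoint and publish them as a single source.
    for endpoint in ["customers", "orders", "products"]:
        yield make_resource(endpoint)

pipeline = dlt.pipeline(pipeline_name="dynamic_demo", destination="duckdb", dataset_name="my_api")
print(pipeline.run(my_api_source()))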

So what are the efforts you spare when using dlt here?

  • A source can function similarly to a class, but simpler, encouraging code reuse and simplicity.
  • Resources offer more granular configuration options.
  • Resources can also be transformers, passing data between them in a microbatched way, enabling patterns like enrichments or list/detail endpoints (see the sketch after this list).
  • Source schemas can be configured with various options, such as pushing down top-level columns into nested structures.
  • dlt’s requests replacement has built-in retries for non-permanent error codes. This safeguards the progress of long extraction jobs that could otherwise break over and over (if retried as a whole) due to network or source API issues.
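To make the transformer point concrete, here is a minimal list/detail sketch (the API and endpoints are hypothetical): the first resource yields lightweight records with IDs, and the transformer fetches details for each batch it receives from its parent.

import dlt
import requests

# Hypothetical API used purely for illustration.
BASE_URL = "https://example.com/api"

@dlt.resource(name="item_ids")
def item_ids():
    # The "list" endpoint: yields lightweight records containing IDs.
    response = requests.get(f"{BASE_URL}/items")
    response.raise_for_status()
    yield response.json()

@dlt.transformer(data_from=item_ids, name="item_details")
def item_details(items):
    # The "detail" endpoint: called per micro-batch passed in from the parent resource.
    for item in items:
        detail = requests.get(f"{BASE_URL}/items/{item['id']}")
        detail.raise_for_status()
        yield detail.json()

pipeline = dlt.pipeline(pipeline_name="transformer_demo", destination="duckdb", dataset_name="items")
# The transformer pulls its input from item_ids automatically.
print(pipeline.run(item_details))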

What else does dlt bring to the table?

Beyond the ease of data extraction and loading, dlt introduces several advanced features that further simplify data engineering tasks:

Asynchronous operations: dlt harnesses the power of asynchronous programming to manage I/O-bound and network operations efficiently. This means faster data processing, better resource utilization, and more responsive applications, especially when dealing with high volumes of data or remote data sources.

Flexible destinations and reverse ETL: dlt isn't just about pulling data in; it's about sending it where it needs to go. Whether it's a SQL database, a data lake, a cloud-based storage solution, or a custom reverse ETL destination, dlt provides the flexibility to integrate with various destinations.

Optional T in ETL: With dlt, transformations are not an afterthought but a core feature. You can define transformations as part of your data pipelines, ensuring that the data is not just moved but refined, enriched, and shaped to fit your analytical needs. This capability allows for more sophisticated data modeling and preparation tasks to be streamlined within your ELT processes.

Data quality and observability: dlt places a strong emphasis on data quality and observability. It includes features for schema evolution tracking, data type validation, error handling, and data contracts, which are critical for maintaining the integrity of your data ecosystem. Observability tools integrated within dlt help monitor the health and performance of your pipelines, providing insights into data flows, bottlenecks, and potential issues before they escalate.

Community and ecosystem: One of the most significant advantages of dlt is its growing community and ecosystem. Similar to Python, dlt benefits from contributions that extend its capabilities, including connectors, plugins, and integrations. This collaborative environment ensures that dlt remains at the forefront of data engineering innovation, adapting to new challenges and opportunities.

In essence, dlt is not just a tool but a comprehensive one-stop shop that addresses the end-to-end needs of modern data ingestion. By combining the simplicity of Python with the robustness of enterprise-grade tools, dlt democratizes data engineering, making it accessible to a broader audience. Whether you're a data scientist, analyst, or engineer, dlt empowers you to focus on what matters most: deriving insights and value from your data.

Conclusion

As Perl and Python have made programming more accessible, dlt is set to transform data engineering by making sophisticated data operations accessible to all. This marks a significant shift towards the democratization of technology, enabling more individuals to contribute to and benefit from the digital landscape. dlt isn't just about making easy things fast and hard things accessible; it's about preparing a future where data engineering becomes an integral part of every data professional's toolkit.

