Load Dell PowerScale OneFS data in Python using dltHub
Build a Dell PowerScale OneFS-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
In this guide, we'll set up a complete Dell PowerScale OneFS data pipeline, from API credentials to your first data load, in just 10 minutes. You'll end up with a fully declarative Python pipeline based on dlt's REST API connector, like the partial example code below:
Example code
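Below is a minimal sketch of the kind of declarative pipeline the workflow produces. The base URL, endpoint paths, resource names, and secrets layout are illustrative assumptions, not generated output; check your cluster's OneFS API version for the exact paths.

```python
import dlt
from dlt.sources.rest_api import rest_api_source

# Illustrative only: the base URL, the API version in the paths, and the
# secrets keys are assumptions; adjust them to your cluster.
source = rest_api_source({
    "client": {
        "base_url": "https://onefs.example.internal:8080/",  # platform API commonly listens on 8080
        "auth": {
            "type": "http_basic",
            "username": dlt.secrets["sources.dell_powerscale_onefs.username"],
            "password": dlt.secrets["sources.dell_powerscale_onefs.password"],
        },
    },
    "resources": [
        {"name": "smb_shares", "endpoint": {"path": "platform/12/protocols/smb/shares"}},
        {"name": "nfs_exports", "endpoint": {"path": "platform/12/protocols/nfs/exports"}},
    ],
})

pipeline = dlt.pipeline(
    pipeline_name="dell_powerscale_onefs",
    destination="duckdb",
    dataset_name="dell_powerscale_onefs_data",
)

load_info = pipeline.run(source)
print(load_info)
```

A generated pipeline will typically add more endpoints, pagination, and incremental loading hints on top of this skeleton.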
Why use dlt to generate Python pipelines?
- Accelerate pipeline development with AI-native context
- Debug pipelines, validate schemas and data with the integrated Pipeline Dashboard
- Build Python notebooks for end users of your data
- Low maintenance thanks to schema evolution with type inference, resilience and self-documenting REST API connectors. A shallow learning curve makes the pipeline easy to extend by any team member
dlt is the tool of choice for Pythonic Iceberg Lakehouses, bringing mature data loading to Iceberg with or without catalogs.
What you’ll do
We’ll show you how to generate a readable and easily maintainable Python script that fetches data from Dell PowerScale OneFS's API and loads it into Iceberg, DataFrames, files, or a database of your choice. Here are some of the endpoints you can load:
- Session: Manage user sessions and authentication.
- Hardware: Access hardware-related information and configuration.
- Instances: Manage and retrieve information about instances.
- License API: Handle licensing-related operations.
- S3 Buckets: Manage S3 bucket configurations and settings.
- SMB Shares: Configure and manage SMB shares.
- File Pools: Manage file pools and their settings.
- NFS Exports: Handle NFS exports and configurations.
- Access Zones: Manage access zones for users.
- Alerts: Configure and retrieve global alerts.
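The walkthrough below loads into DuckDB, but as mentioned above the destination is swappable. As a rough sketch, the same source could be written as Iceberg tables via the filesystem destination. This assumes a recent dlt release with Iceberg table format support and its extras installed; the source configuration repeats the illustrative one from the example code above.

```python
import dlt
from dlt.sources.rest_api import rest_api_source

# Sketch only: same illustrative OneFS source as above, written as Iceberg
# tables on the filesystem destination instead of a DuckDB database. The
# destination location (local path or object store) is configured via
# bucket_url in .dlt/config.toml or environment variables.
source = rest_api_source({
    "client": {
        "base_url": "https://onefs.example.internal:8080/",
        "auth": {
            "type": "http_basic",
            "username": dlt.secrets["sources.dell_powerscale_onefs.username"],
            "password": dlt.secrets["sources.dell_powerscale_onefs.password"],
        },
    },
    "resources": [
        {"name": "smb_shares", "endpoint": {"path": "platform/12/protocols/smb/shares"}},
    ],
})

pipeline = dlt.pipeline(
    pipeline_name="dell_powerscale_onefs_iceberg",
    destination="filesystem",
    dataset_name="dell_powerscale_onefs_data",
)

# table_format="iceberg" is available in recent dlt releases (extra dependencies required)
load_info = pipeline.run(source, table_format="iceberg")
print(load_info)
```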
You will then debug the Dell PowerScale OneFS pipeline using our Pipeline Dashboard tool to ensure it is copying the data correctly, before building a notebook to explore your data and build reports.
Setup & steps to follow
💡 Before getting started, set up a virtual environment (instructions) and install the dlt workspace:

uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
Now you're ready to get started!
1. Install the dlt AI Workbench

   Configure the workbench for your coding assistant:

   dlt ai init --agent <your-agent>   # <agent>: claude | cursor | codex

   This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more about the dltHub AI Workbench and setup details for each assistant →
2. Install the rest-api-pipeline toolkit

   The AI Workbench provides different toolkits for each phase of the data engineering lifecycle. To start, install the rest-api-pipeline toolkit:

   dlt ai toolkit rest-api-pipeline install

   This loads the skills and context about dlt that the agent uses to build the pipeline iteratively, efficiently, and safely. Importantly, the agent does not need to ask you for credentials directly. In dlt, API credentials are provided via a secrets.toml file (learn more about secrets management →), and the agent should use the MCP tools to see their shape and detect misconfigurations. It never needs to access the file directly.
3. Start LLM-assisted coding

   Here's a prompt to get you started:

   Use /find-source to load data from the Dell PowerScale OneFS API into DuckDB.

   The AI Workbench rest-api-pipeline toolkit takes over from here: it reads the relevant API documentation, presents you with options for which endpoints to load, and then follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. View the result

   After the rest-api-pipeline workflow has finished, you will end up with a working REST API source with validated endpoints and a pipeline that writes data into a local dataset you have inspected and verified.

   > python dell_powerscale_onefs_pipeline.py
   Pipeline dell_powerscale_onefs load step completed in 0.26 seconds
   1 load package(s) were loaded to destination duckdb and into dataset dell_powerscale_onefs_data
   The duckdb destination used duckdb:/dell_powerscale_onefs.duckdb location to store data
   Load package 1749667187.541553 is LOADED and contains no failed jobs

   By launching the Pipeline Dashboard, you can see various information about the pipeline and the loaded data:
   - Pipeline overview: state, load metrics
   - Data schema: tables, columns, types, and hints
   - The data itself, which you can query directly (see the Python sketch after these steps)

   dlt pipeline dell_powerscale_onefs_pipeline show
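The dashboard is the quickest way to browse, but you can also read the loaded tables back in Python. A minimal sketch, assuming a recent dlt version with dataset access and that a resource named smb_shares was loaded:

```python
import dlt

# Attach to the existing pipeline by name; its state and destination are
# picked up from the local working directory.
pipeline = dlt.pipeline(pipeline_name="dell_powerscale_onefs")

# Access the loaded dataset and read one table back as a pandas DataFrame
# (table names follow the resource names, e.g. "smb_shares").
dataset = pipeline.dataset()
df = dataset["smb_shares"].df()
print(df.head())
```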
Running into errors?
- Use root credentials when testing API operations, and make sure SSL/TLS is enabled when using basic authentication.
- The API may return nulls in deeply nested fields, and pagination keywords are case-sensitive.
- Some operations may not be supported, particularly in ECS S3 APIs.
- Watch out for rate limits and double-check request syntax.
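If a load fails early, it can help to rule out connectivity and credential problems outside of dlt first. A hedged check with plain requests; the host, port, endpoint path, and certificate bundle below are placeholders for your cluster:

```python
import requests

# Placeholders: the OneFS platform API commonly listens on port 8080, and the
# path below is only an illustrative endpoint; adjust both to your cluster.
ONEFS_URL = "https://onefs.example.internal:8080/platform/1/zones"

response = requests.get(
    ONEFS_URL,
    auth=("root", "<password>"),      # root credentials, as recommended above
    verify="/path/to/onefs-ca.pem",   # keep TLS verification on; point at your CA bundle
    timeout=30,
)
response.raise_for_status()
print(response.json())
```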
Next steps
You can go to the next phases of your data engineering journey by handing over to other toolkits of the dltHub AI Workbench:
- data-exploration: build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- dlthub-runtime: deploy, schedule, and monitor your pipeline in production.
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
Or explore the following resources for more information: