Rhino Python API Docs | dltHub
Build a Rhino-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.
Rhino Compute is a stateless REST API that exposes Rhino and Grasshopper SDK functions for creating and manipulating 2D/3D geometry remotely. The public REST API base URL is https://compute.rhino3d.com/. Authentication is an optional API key or service token; local/self-hosted instances typically run without external authentication.
dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading Rhino data in under 10 minutes.
What data can I load from Rhino?
Here are some of the endpoints you can load from Rhino:
| Resource | Endpoint | Method | Data selector | Description |
|---|---|---|---|---|
| sdk | sdk | GET | | Returns the SDK/endpoint index for the running Compute server (list of available endpoints). |
| solutions | sdk/grasshopper/solutions | GET | | Returns available Grasshopper solutions exposed by the server. |
| plugins | plugins | GET | | Lists installed Compute plugins and available operations. |
| history | history | GET | | Returns server operation history (if enabled). |
| ping | ping | GET | | Health/ping endpoint returning server status. |
| evaluate | rhino/geometry/evaluate | POST/GET | result | Geometry operations return result objects (responses are JSON objects; keys vary by operation). |
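The GET endpoints above translate directly into dlt REST API resource entries. A minimal sketch of that mapping (pure configuration, no network calls; the helper name `make_resources` is ours, not part of dlt):

```python
# Build dlt rest_api resource entries from the endpoint table above.
# This only constructs dicts; it never contacts a Compute server.
ENDPOINTS = {
    "sdk": "sdk",
    "solutions": "sdk/grasshopper/solutions",
    "plugins": "plugins",
    "history": "history",
    "ping": "ping",
}

def make_resources(endpoints: dict) -> list:
    """Turn a name -> path mapping into rest_api resource dicts."""
    return [{"name": name, "endpoint": {"path": path}} for name, path in endpoints.items()]

resources = make_resources(ENDPOINTS)
print(resources[0])  # {'name': 'sdk', 'endpoint': {'path': 'sdk'}}
```

Each entry in the resulting list can be dropped into the "resources" section of the pipeline configuration shown later on this page.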
How do I authenticate with the Rhino API?
Rhino Compute can be run locally without authentication or hosted with an API key/token sent in request headers as configured on the server.
1. Get your credentials
- Deploy or access a Compute server (cloud or self-hosted).
- For cloud/managed installs, configure your API key/token in the Compute server settings or Rhino account console (see the Compute deployment guide).
- For self-hosted, start Compute (default port 6500) and use local endpoints; if you enable auth on the server, create API tokens in the server configuration.
- Use the configured token in requests per the server documentation.
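Before wiring anything into dlt, you can verify your credentials against the server directly. A sketch using only the standard library; the localhost URL and the Authorization bearer header are assumptions, since your Compute deployment decides both:

```python
import urllib.request

# Sketch: attach a token to a Compute request. The header name and the
# localhost URL are assumptions -- your server configuration determines both.
def build_request(base_url: str, path: str, token: str = "") -> urllib.request.Request:
    req = urllib.request.Request(f"{base_url.rstrip('/')}/{path}")
    if token:
        req.add_header("Authorization", f"Bearer {token}")
    return req

req = build_request("http://localhost:6500", "ping", token="YOUR_COMPUTE_API_KEY")
print(req.full_url)                       # http://localhost:6500/ping
print(req.get_header("Authorization"))    # Bearer YOUR_COMPUTE_API_KEY
```

Sending the request with `urllib.request.urlopen(req)` against the ping endpoint is a quick smoke test that your token and server address line up.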
2. Add them to .dlt/secrets.toml
```toml
[sources.rhino_source]
api_key = "YOUR_COMPUTE_API_KEY"
```
dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
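For environment-variable based setups, dlt maps TOML paths to environment variable names by replacing dots with double underscores and upper-casing. A small sketch of that convention:

```python
def toml_key_to_env(path: str) -> str:
    """Map a secrets.toml path like 'sources.rhino_source.api_key'
    to dlt's environment-variable form (dots -> '__', upper-cased)."""
    return path.replace(".", "__").upper()

print(toml_key_to_env("sources.rhino_source.api_key"))
# SOURCES__RHINO_SOURCE__API_KEY
```

Exporting SOURCES__RHINO_SOURCE__API_KEY in your shell or CI environment has the same effect as the secrets.toml entry above, which keeps tokens out of files entirely.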
How do I set up and run the pipeline?
Set up a virtual environment and install dlt:
```shell
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```
1. Install the dlt AI Workbench:
```shell
dlt ai init --agent <your-agent>  # <agent>: claude | cursor | codex
```
This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →
2. Install the rest-api-pipeline toolkit:
```shell
dlt ai toolkit rest-api-pipeline install
```
This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →
3. Start LLM-assisted coding:
Use /find-source to load data from the Rhino API into DuckDB.
The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.
4. Run the pipeline:
```shell
python rhino_pipeline.py
```
If everything is configured correctly, you'll see output like this:
```shell
Pipeline rhino_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset rhino_data
The duckdb destination used duckdb:/rhino.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```
Inspect your pipeline and data:
```shell
dlt pipeline rhino_pipeline show
```
This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.
Python pipeline example
This example loads sdk and ping from the Rhino API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:
```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def rhino_source(api_key=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://compute.rhino3d.com/",
            "auth": {
                "type": "api_key",
                "api_key": api_key,
            },
        },
        "resources": [
            {"name": "sdk", "endpoint": {"path": "sdk"}},
            {"name": "ping", "endpoint": {"path": "ping"}},
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="rhino_pipeline",
        destination="duckdb",
        dataset_name="rhino_data",
    )
    load_info = pipeline.run(rhino_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```
To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
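For example, appending the plugins resource, plus an evaluate-style resource with a data_selector. The "result" selector here is illustrative; inspect the actual response to find the right key for your operation:

```python
# Start from the resources used in the pipeline example above.
config = {
    "client": {"base_url": "https://compute.rhino3d.com/"},
    "resources": [
        {"name": "sdk", "endpoint": {"path": "sdk"}},
        {"name": "ping", "endpoint": {"path": "ping"}},
    ],
}

# Append another GET resource straight from the table.
config["resources"].append({"name": "plugins", "endpoint": {"path": "plugins"}})

# For operation endpoints, a data_selector picks the records out of the
# response object. "result" is an example key, not a guarantee.
config["resources"].append(
    {
        "name": "evaluate_results",
        "endpoint": {"path": "rhino/geometry/evaluate", "data_selector": "result"},
    }
)

print([r["name"] for r in config["resources"]])
# ['sdk', 'ping', 'plugins', 'evaluate_results']
```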
How do I query the loaded data?
Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.
Python (pandas DataFrame):
```python
import dlt

data = dlt.pipeline("rhino_pipeline").dataset()
sdk_df = data.sdk.df()
print(sdk_df.head())
```
SQL (DuckDB example):
```sql
SELECT * FROM rhino_data.sdk LIMIT 10;
```
In a marimo or Jupyter notebook:
```python
import dlt

data = dlt.pipeline("rhino_pipeline").dataset()
data.sdk.df().head()
```
See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.
What destinations can I load Rhino data to?
dlt supports loading into any of these destinations — only the destination parameter changes:
| Destination | Example value |
|---|---|
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |
Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
Troubleshooting
Authentication failures
If your Compute server is configured to require an API key or token, requests without valid credentials will be rejected by the server. Check your Compute server configuration for the expected header (server admin‑configured header or Authorization bearer token) and confirm the token in your secrets.toml matches the server config.
Rate limits and quotas
Compute itself does not impose public, documented rate limits — rate limiting is determined by the hosting environment or custom server configuration. If you receive 429 responses, check the hosting proxy/load balancer for rate‑limit policies.
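If you do hit 429s, retrying with exponential backoff and jitter is the usual remedy. A sketch of the delay schedule only (the base, cap, and retry count are illustrative, not a Compute requirement):

```python
import random

# Sketch: full-jitter exponential backoff delays for retrying 429 responses.
# Parameters are illustrative defaults, not values mandated by Compute.
def backoff_delays(retries: int, base: float = 1.0, cap: float = 30.0) -> list:
    """delay_i is drawn uniformly from [0, min(cap, base * 2**i)]."""
    return [random.uniform(0, min(cap, base * 2 ** i)) for i in range(retries)]

delays = backoff_delays(5)
print(delays)  # five delays, growing on average, each capped at 30 seconds
```

In a real loop you would sleep for each delay before re-issuing the request, and give up after the final retry.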
Pagination and response shapes
Most Compute endpoints are RPC‑style and return operation‑specific JSON objects rather than paginated lists. When an endpoint returns multiple items, the structure and top‑level key are endpoint‑specific (consult the server /sdk endpoint to discover the exact response shapes). For programmatic selectors, call GET /sdk on the target server and inspect the returned JSON to find the key that contains the records you intend to extract.
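A quick way to spot candidate selectors is to list which top-level keys of a response hold arrays. A sketch with an invented sample response (the keys shown are for illustration only):

```python
# Sketch: find which top-level keys of an RPC-style JSON response hold
# lists of records. The sample response is invented for illustration.
def find_list_keys(response: dict) -> list:
    return [key for key, value in response.items() if isinstance(value, list)]

sample = {"version": "8.0", "result": [{"x": 1.0}, {"x": 2.0}], "warnings": []}
print(find_list_keys(sample))  # ['result', 'warnings']
```

Whichever key holds the records you want becomes the data_selector for that resource.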
Finally, confirm that your API key is valid to avoid 401 Unauthorized errors, and double-check endpoint paths and parameters to avoid 404 Not Found errors.
Next steps
Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:
- data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
- dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
```shell
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```