Policeroleplay Community Python API Docs | dltHub

Build a Policeroleplay Community-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.

The Policeroleplay Community API is the PRC Private Server API that exposes ER:LC private server data (server status, players, logs, vehicles, staff, bans, etc.). The REST API base URL is https://api.policeroleplay.community. All requests require a Server-Key header for authentication (server-specific); global keys are sent via the Authorization header instead.

dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading Policeroleplay Community data in under 10 minutes.


What data can I load from Policeroleplay Community?

Here are some of the endpoints you can load from Policeroleplay Community:

| Resource | Endpoint | Method | Data selector | Description |
| --- | --- | --- | --- | --- |
| server | /v1/server or /v2/server | GET | (top-level object) | Fetch server status and metadata |
| server_players | /v1/server/players | GET | (top-level array) | List of players currently in the server |
| server_joinlogs | /v1/server/joinlogs | GET | (top-level array) | Player join/leave logs |
| server_queue | /v1/server/queue | GET | (top-level array) | Queue of Roblox user IDs |
| server_killlogs | /v1/server/killlogs | GET | (top-level array) | Kill events log |
| server_commandlogs | /v1/server/commandlogs | GET | (top-level array) | Command usage logs |
| server_modcalls | /v1/server/modcalls | GET | (top-level array) | Moderator call logs |
| server_bans | /v1/server/bans | GET | (top-level object) | Current bans (object mapping) |
| server_vehicles | /v1/server/vehicles | GET | (top-level array) | Vehicles spawned in the server |
| server_staff | /v1/server/staff | GET | (top-level object) | Server staff (admins/mods/helpers) |
| server_command | /v1/server/command | POST | n/a | Execute a server command (requires Server-Key) |
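The one POST endpoint in the table executes an in-game command. As a hedged sketch, the request body can be built like this — the payload field name "command" is an assumption based on common PRC examples, not confirmed above:

```python
def command_payload(command: str) -> dict:
    """Build the JSON body for POST /v1/server/command.

    The "command" field name is assumed; verify against the PRC API reference.
    """
    return {"command": command}


payload = command_payload(":h Server restarting soon")
```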

How do I authenticate with the Policeroleplay Community API?

Include the server API key in the Server-Key header on every request. Global authorization keys, when used, are provided via the Authorization header. Example header:

```
Server-Key: YOUR_API_KEY
```

1. Get your credentials

  1. Purchase/enable the ER:LC API server pack for your private server.
  2. Join the private server and open the ER:LC settings UI.
  3. Locate "API key" under the ER:LC API header and copy the Server Key.
  4. For global keys, obtain one from your PRC account/dashboard if available; global keys are sent via the Authorization header.

2. Add them to .dlt/secrets.toml

```toml
[sources.policeroleplay_community_source]
server_key = "your_server_key_here"
```

dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
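For environment-variable deployment, the same secret can be supplied using dlt's section-based naming convention, where double underscores map to the TOML sections above:

```shell
export SOURCES__POLICEROLEPLAY_COMMUNITY_SOURCE__SERVER_KEY="your_server_key_here"
```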


How do I set up and run the pipeline?

Set up a virtual environment and install dlt:

```sh
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```

1. Install the dlt AI Workbench:

```sh
dlt ai init --agent <your-agent>  # <agent>: claude | cursor | codex
```

This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →

2. Install the rest-api-pipeline toolkit:

```sh
dlt ai toolkit rest-api-pipeline install
```

This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →

3. Start LLM-assisted coding:

```
Use /find-source to load data from the Policeroleplay Community API into DuckDB.
```

The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.

4. Run the pipeline:

```sh
python policeroleplay_community_pipeline.py
```

If everything is configured correctly, you'll see output like this:

```
Pipeline policeroleplay_community_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset policeroleplay_community_data
The duckdb destination used duckdb:/policeroleplay_community.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```

Inspect your pipeline and data:

```sh
dlt pipeline policeroleplay_community_pipeline show
```

This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.


Python pipeline example

This example loads server and server_players from the Policeroleplay Community API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:

```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def policeroleplay_community_source(server_key=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://api.policeroleplay.community",
            "auth": {
                "type": "api_key",
                "name": "Server-Key",
                "api_key": server_key,
                "location": "header",
            },
        },
        "resources": [
            {"name": "server", "endpoint": {"path": "v2/server"}},
            {"name": "server_players", "endpoint": {"path": "v1/server/players"}},
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="policeroleplay_community_pipeline",
        destination="duckdb",
        dataset_name="policeroleplay_community_data",
    )
    load_info = pipeline.run(policeroleplay_community_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```

To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
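For example, appending three more entries from the table (paths copied from above; no data_selector is needed when the endpoint returns a top-level array or object):

```python
# Additional resource entries mirroring the table above; append these to the
# "resources" list in the RESTAPIConfig.
extra_resources = [
    {"name": "server_joinlogs", "endpoint": {"path": "v1/server/joinlogs"}},
    {"name": "server_bans", "endpoint": {"path": "v1/server/bans"}},
    {"name": "server_vehicles", "endpoint": {"path": "v1/server/vehicles"}},
]
```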


How do I query the loaded data?

Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.

Python (pandas DataFrame):

```python
import dlt

data = dlt.pipeline("policeroleplay_community_pipeline").dataset()
server_df = data.server.df()
print(server_df.head())
```

SQL (DuckDB example):

```sql
SELECT * FROM policeroleplay_community_data.server LIMIT 10;
```

In a marimo or Jupyter notebook:

```python
import dlt

data = dlt.pipeline("policeroleplay_community_pipeline").dataset()
data.server.df().head()
```

See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.


What destinations can I load Policeroleplay Community data to?

dlt supports loading into any of these destinations — only the destination parameter changes:

| Destination | Example value |
| --- | --- |
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |

Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.


Troubleshooting

Authentication failures

If you receive 403 Unauthorized, confirm you are sending the Server-Key header (case-insensitive) with a valid server-specific key. Global keys use the Authorization header; server-scoped endpoints require Server-Key.

Rate limits and bans

Respect rate limit headers; excessive requests may result in temporary or permanent API bans. POST endpoints may have higher limits; consult the Rate Limits page.

Pagination and array responses

Most list endpoints return top-level arrays (e.g., /v1/server/players returns an array). /v2/server returns a top-level object that may include arrays under keys like Players, JoinLogs, KillLogs when requested via query params.
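When pulling one of those nested arrays out of the /v2/server object, a data_selector can be set on the resource. This is a hedged sketch: the resource name is ours, and the exact query params that make /v2/server include these arrays are not reproduced here:

```python
# Hypothetical resource entry selecting the Players array from the /v2/server
# object via data_selector (query params omitted; see the PRC API reference).
v2_server_players = {
    "name": "v2_server_players",
    "endpoint": {
        "path": "v2/server",
        "data_selector": "Players",
    },
}
```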

Common API errors

  • 400 Bad Request: malformed payload or invalid query
  • 403 Unauthorized: missing/invalid Server-Key or Authorization
  • 422 Unprocessable Entity: domain-specific (e.g., no players when executing command)
  • 500 Server Error: problem communicating with Roblox or internal error
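For logging or debugging, the documented status codes above can be mapped to their meanings with a small helper (a sketch; the mapping mirrors the list above):

```python
# Documented PRC status codes mapped to their meanings, for log messages.
ERROR_MEANINGS = {
    400: "Bad Request: malformed payload or invalid query",
    403: "Unauthorized: missing/invalid Server-Key or Authorization",
    422: "Unprocessable Entity: domain-specific condition",
    500: "Server Error: problem communicating with Roblox or internal error",
}


def describe_status(status: int) -> str:
    return ERROR_MEANINGS.get(status, f"Unexpected status {status}")
```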

Notes on exact data selectors

Documentation examples show that list endpoints return top-level arrays for players, joinlogs, killlogs, commandlogs, modcalls, vehicles, and queue (an integer array). /v1/server/bans and /v1/server/staff return objects. /v2/server returns a top-level object with optional array fields (Players, JoinLogs, KillLogs, CommandLogs, ModCalls, Vehicles) when query params are used.

Ensure that the API key is valid to avoid 403 Unauthorized errors. Also, verify endpoint paths and parameters to avoid 404 Not Found errors.


Next steps

Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:

  • data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
  • dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
```sh
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```

