Google Drive Python API Docs | dltHub

Build a Google Drive-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.


Google Drive API is a REST API that lets applications access and manage files, permissions, drives, and Drive metadata stored in Google Drive. The REST API base URL is https://www.googleapis.com/drive/v3 and all requests require OAuth 2.0 Bearer tokens (access token).

dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with `uv pip install "dlt[workspace]"` and start loading Google Drive data in under 10 minutes.


What data can I load from Google Drive?

Here are some of the endpoints you can load from Google Drive:

| Resource | Endpoint | Method | Data selector | Description |
| --- | --- | --- | --- | --- |
| files | /drive/v3/files | GET | files | Lists files in the user's Drive (supports q, pageSize, pageToken, fields) |
| file | /drive/v3/files/{fileId} | GET | — | Gets a file's metadata or content by ID |
| revisions | /drive/v3/files/{fileId}/revisions | GET | revisions | Lists revisions for a file |
| permissions | /drive/v3/files/{fileId}/permissions | GET | permissions | Lists a file's permissions |
| drives | /drive/v3/drives | GET | drives | Lists the user's shared drives |
| changes | /drive/v3/changes | GET | changes | Lists changes for a user or shared drive (use startPageToken) |
| comments | /drive/v3/files/{fileId}/comments | GET | comments | Lists comments on a file |
| about | /drive/v3/about | GET | — | Gets metadata about the user's Drive and capabilities |
| files_export | /drive/v3/files/{fileId}/export | GET | — | Exports Google Docs content to a requested MIME type (returns bytes) |
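The files endpoint's q filter and fields projection are worth knowing even when dlt drives the requests. A small illustration of building those parameters (the specific query string is only an example):

```python
from urllib.parse import urlencode

# Illustrative files.list parameters: untrashed PDFs, 100 per page,
# with the response trimmed to only the fields the pipeline needs.
params = {
    "q": "mimeType='application/pdf' and trashed=false",
    "pageSize": 100,
    "fields": "nextPageToken, files(id, name, mimeType, modifiedTime)",
}
query_string = urlencode(params)
```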

How do I authenticate with the Google Drive API?

Google Drive uses OAuth 2.0 for authorization. Every request must include an Authorization header: `Authorization: Bearer <ACCESS_TOKEN>`. Service accounts (JWT) are supported for server-to-server flows, and OAuth client access/refresh tokens for user-consent flows.

1. Get your credentials

  1. Open Google Cloud Console (APIs & Services).
  2. Create or select a project.
  3. Enable the "Google Drive API".
  4. Under Credentials, create an OAuth 2.0 Client ID (or a Service Account for server flows).
  5. For an OAuth client: configure the consent screen and add redirect URIs. For a service account: generate a JSON key.
  6. Exchange the authorization code (or service account JWT) for an access token and send it as `Authorization: Bearer <ACCESS_TOKEN>`.
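The token exchange in step 6 boils down to one POST to Google's token endpoint. The sketch below only builds the form-encoded request body; the credential values are placeholders, and `refresh_token_body` is an illustrative helper:

```python
from urllib.parse import urlencode

TOKEN_URL = "https://oauth2.googleapis.com/token"

def refresh_token_body(client_id: str, client_secret: str, refresh_token: str) -> str:
    """Form-encoded body for POSTing to TOKEN_URL.

    The JSON response contains 'access_token' and 'expires_in';
    use the access_token as the Bearer credential.
    """
    return urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",
    })

body = refresh_token_body("<CLIENT_ID>", "<CLIENT_SECRET>", "<REFRESH_TOKEN>")
```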

2. Add them to .dlt/secrets.toml

```toml
[sources.google_drive_source]
access_token = "your_oauth2_access_token_here"
```

dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.


How do I set up and run the pipeline?

Set up a virtual environment and install dlt:

```sh
uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"
```

1. Install the dlt AI Workbench:

```sh
dlt ai init --agent <your-agent>  # <your-agent>: claude | cursor | codex
```

This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →

2. Install the rest-api-pipeline toolkit:

```sh
dlt ai toolkit rest-api-pipeline install
```

This loads the skills and context about dlt that the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →

3. Start LLM-assisted coding:

Use /find-source to load data from the Google Drive API into DuckDB.

The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.

4. Run the pipeline:

```sh
python google_drive_pipeline.py
```

If everything is configured correctly, you'll see output like this:

```
Pipeline google_drive_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset google_drive_data
The duckdb destination used duckdb:/google_drive.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs
```

Inspect your pipeline and data:

```sh
dlt pipeline google_drive_pipeline show
```

This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.


Python pipeline example

This example loads files and drives from the Google Drive API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:

```python
import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def google_drive_source(access_token=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://www.googleapis.com/drive/v3",
            "auth": {
                "type": "bearer",
                "token": access_token,
            },
        },
        "resources": [
            {"name": "files", "endpoint": {"path": "files", "data_selector": "files"}},
            {"name": "drives", "endpoint": {"path": "drives", "data_selector": "drives"}},
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="google_drive_pipeline",
        destination="duckdb",
        dataset_name="google_drive_data",
    )
    load_info = pipeline.run(google_drive_source())
    print(load_info)


if __name__ == "__main__":
    get_data()
```

To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
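Per-file endpoints such as permissions additionally need a fileId taken from each row of the parent files resource; dlt's rest_api source expresses this with a resolve parameter. A sketch of one such entry (the resolve convention follows dlt's rest_api docs; verify against the version you have installed):

```python
# Child resource: list permissions for every file loaded by the "files" resource.
# The "resolve" param fills {file_id} in the path from each parent row's "id" field.
permissions_resource = {
    "name": "permissions",
    "endpoint": {
        "path": "files/{file_id}/permissions",
        "data_selector": "permissions",
        "params": {
            "file_id": {
                "type": "resolve",
                "resource": "files",
                "field": "id",
            },
        },
    },
}
```

Appending this dict to the "resources" list in the example above adds a permissions table that is populated once per loaded file.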


How do I query the loaded data?

Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.

Python (pandas DataFrame):

```python
import dlt

data = dlt.pipeline("google_drive_pipeline").dataset()
files_df = data.files.df()
print(files_df.head())
```

SQL (DuckDB example):

```sql
SELECT * FROM google_drive_data.files LIMIT 10;
```

In a marimo or Jupyter notebook:

```python
import dlt

data = dlt.pipeline("google_drive_pipeline").dataset()
data.files.df().head()
```

See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.


What destinations can I load Google Drive data to?

dlt supports loading into any of these destinations — only the destination parameter changes:

| Destination | Example value |
| --- | --- |
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |

Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.
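For example, switching to Snowflake means setting `destination="snowflake"` and adding a credentials section to .dlt/secrets.toml along these lines (key names follow dlt's Snowflake destination docs; all values below are placeholders):

```toml
[destination.snowflake.credentials]
database = "MY_DATABASE"
username = "LOADER"
password = "<YOUR_PASSWORD>"
host = "my_account_identifier"
warehouse = "COMPUTE_WH"
role = "DLT_LOADER_ROLE"
```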


Troubleshooting

Authentication failures (401 / invalid_grant)

If you receive 401 Unauthorized or invalid_grant: ensure the access token is valid and not expired, include header Authorization: Bearer <ACCESS_TOKEN>, and verify scopes include required Drive scopes (e.g., https://www.googleapis.com/auth/drive.readonly). For service accounts, ensure domain-wide delegation is configured if accessing user data.

Permission denied / insufficient permissions (403)

A 403 may indicate scopes are insufficient or the authenticated user lacks access to the resource. Confirm requested scopes, and that file/shared drive permissions allow the operation.

Rate limits and quota errors (429 / 403: user rate limit exceeded)

Respect Retry-After headers, implement exponential backoff for 429 and 5xx errors, and monitor quotas in Google Cloud Console. Consider batching or reducing pageSize.
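A backoff schedule can be as simple as doubling a base delay up to a cap. A minimal sketch (production code should also add jitter and honor any Retry-After header):

```python
def backoff_delays(retries: int = 5, base: float = 1.0, cap: float = 32.0) -> list[float]:
    """Exponential backoff schedule, in seconds, for retrying 429/5xx responses."""
    return [min(cap, base * 2 ** attempt) for attempt in range(retries)]

print(backoff_delays())  # -> [1.0, 2.0, 4.0, 8.0, 16.0]
```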

Pagination (nextPageToken)

List endpoints return a nextPageToken when more results exist (files.list uses 'nextPageToken' and the list of items is in the 'files' field). Use pageToken to paginate until no token is returned.
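The loop looks like this — a sketch in which `fetch_page` stands in for whatever HTTP call you use; dlt's rest_api source performs this pagination for you:

```python
def list_all_files(fetch_page) -> list:
    """Drain a paginated files.list endpoint.

    fetch_page(page_token) must return one parsed files.list response:
    a dict with a 'files' list and an optional 'nextPageToken'.
    """
    items, token = [], None
    while True:
        page = fetch_page(token)
        items.extend(page.get("files", []))
        token = page.get("nextPageToken")
        if not token:  # last page: no token returned
            return items
```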



Next steps

Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:

  • data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
  • dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
```sh
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
```
