Findymail Python API Docs | dltHub

Build a Findymail-to-database pipeline in Python using dlt with AI Workbench support for Claude Code, Cursor, and Codex.

Findymail is a REST API platform for finding and verifying B2B professional emails and related contact/company enrichment. The REST API base URL is https://app.findymail.com and all requests require a Bearer token in the Authorization header.

dlt is an open-source Python library that handles authentication, pagination, and schema evolution automatically. dltHub provides AI context files that enable code assistants to generate production-ready pipelines. Install with uv pip install "dlt[workspace]" and start loading Findymail data in under 10 minutes.


What data can I load from Findymail?

Here are some of the endpoints you can load from Findymail:

| Resource | Endpoint | Method | Data selector | Description |
| --- | --- | --- | --- | --- |
| email_verifier | /api/verify | POST | | Verify an email address (returns object with email, verified, provider) |
| lists | /api/lists | GET | lists | Get contact lists |
| contact | /api/contacts/get/{id} | GET | | Get single contact by id (response is object) |
| email_finder_name | /api/search/name | POST | contact | Find verified email by name + domain (response: {"contact": {...}}) |
| email_finder_domain | /api/search/domain | POST | | Find emails by domain (response documented as object/array) |
| intellimatch_search | /api/intellimatch/search | POST | data | Lead-finder / company search (response: {"data": [...]}) |
| intellimatch_status | /api/intellimatch/status | GET | | Get Intellimatch search status (response object) |
| intellimatch_data | /api/intellimatch/data | GET | data | Retrieve Intellimatch search results (response: {"data": [...]}) |
| usage_credits | /api/credits | GET | | Get remaining credits and usage summary (response object) |
| search_employees | /api/search/employees | POST | | Find people at a company (response may be top-level array) |

How do I authenticate with the Findymail API?

The API uses simple Bearer authentication. Include header: Authorization: Bearer YOUR_API_KEY and Content-Type: application/json for JSON requests.

1. Get your credentials

  1. Sign up or log in at https://app.findymail.com/register or https://app.findymail.com
  2. Open the dashboard and navigate to API / Integrations (or "Get API Key")
  3. Copy the provided API key (secret token) and store it securely; use it as the Bearer token in requests.
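As a quick sanity check outside dlt, you can call the email verifier directly with a Bearer header. This is a sketch: the "email" payload field is our assumption based on the verifier endpoint's description, and API_KEY is a placeholder.

```python
import requests

API_KEY = "your_api_key_here"  # placeholder; use the key from your dashboard


def verify_email(email: str) -> dict:
    """POST /api/verify with Bearer auth and return the parsed JSON body."""
    resp = requests.post(
        "https://app.findymail.com/api/verify",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        # Payload key "email" is an assumption from the endpoint description.
        json={"email": email},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

A 200 response with a JSON body confirms the key works; a 401 means the token is wrong or inactive.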

2. Add them to .dlt/secrets.toml

[sources.findymail_source]
api_key = "your_api_key_here"

dlt reads this automatically at runtime — never hardcode tokens in your pipeline script. For production environments, see setting up credentials with dlt for environment variable and vault-based options.
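For example, the same secret can be supplied as an environment variable instead of secrets.toml, using dlt's naming convention (section path uppercased, segments joined with double underscores):

```shell
# Equivalent to api_key under [sources.findymail_source] in .dlt/secrets.toml
export SOURCES__FINDYMAIL_SOURCE__API_KEY="your_api_key_here"
```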


How do I set up and run the pipeline?

Set up a virtual environment and install dlt:

uv venv && source .venv/bin/activate
uv pip install "dlt[workspace]"

1. Install the dlt AI Workbench:

dlt ai init --agent <agent>  # <agent>: claude | cursor | codex

This installs project rules, a secrets management skill, appropriate ignore files, and configures the dlt MCP server for your agent. Learn more →

2. Install the rest-api-pipeline toolkit:

dlt ai toolkit rest-api-pipeline install

This loads the skills and context about dlt the agent uses to build the pipeline iteratively, efficiently, and safely. The agent uses MCP tools to inspect credentials — it never needs to read your secrets.toml directly. Learn more →

3. Start LLM-assisted coding:

Use /find-source to load data from the Findymail API into DuckDB.

The rest-api-pipeline toolkit takes over from here — it reads relevant API documentation, presents you with options for which endpoints to load, and follows a structured workflow to scaffold, debug, and validate the pipeline step by step.

4. Run the pipeline:

python findymail_pipeline.py

If everything is configured correctly, you'll see output like this:

Pipeline findymail_pipeline load step completed in 0.26 seconds
1 load package(s) were loaded to destination duckdb and into dataset findymail_data
The duckdb destination used duckdb:/findymail.duckdb location to store data
Load package 1749667187.541553 is LOADED and contains no failed jobs

Inspect your pipeline and data:

dlt pipeline findymail_pipeline show

This opens the Pipeline Dashboard where you can verify pipeline state, load metrics, schema (tables, columns, types), and query the loaded data directly.


Python pipeline example

This example loads email_finder_name and intellimatch_data from the Findymail API into DuckDB. It mirrors the endpoint and data selector configuration from the table above:

import dlt
from dlt.sources.rest_api import RESTAPIConfig, rest_api_resources


@dlt.source
def findymail_source(api_key=dlt.secrets.value):
    config: RESTAPIConfig = {
        "client": {
            "base_url": "https://app.findymail.com",
            "auth": {
                "type": "bearer",
                "token": api_key,
            },
        },
        "resources": [
            {
                "name": "email_finder_name",
                "endpoint": {"path": "api/search/name", "data_selector": "contact"},
            },
            {
                "name": "intellimatch_data",
                "endpoint": {"path": "api/intellimatch/data", "data_selector": "data"},
            },
        ],
    }
    yield from rest_api_resources(config)


def get_data() -> None:
    pipeline = dlt.pipeline(
        pipeline_name="findymail_pipeline",
        destination="duckdb",
        dataset_name="findymail_data",
    )
    load_info = pipeline.run(findymail_source())
    print(load_info)


if __name__ == "__main__":
    get_data()

To add more endpoints, append entries from the resource table to the "resources" list using the same name, path, and data_selector pattern.
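For instance, the usage_credits and lists rows from the table above would become these entries (the names, paths, and selectors mirror that table):

```python
# Additional entries for the "resources" list in the RESTAPIConfig above.
extra_resources = [
    # /api/credits returns a plain object, so no data_selector is needed.
    {"name": "usage_credits", "endpoint": {"path": "api/credits"}},
    # /api/lists nests results under the "lists" key.
    {"name": "lists", "endpoint": {"path": "api/lists", "data_selector": "lists"}},
]
```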


How do I query the loaded data?

Once the pipeline runs, dlt creates one table per resource. You can query with Python or SQL.

Python (pandas DataFrame):

import dlt

data = dlt.pipeline("findymail_pipeline").dataset()
contacts_df = data.email_finder_name.df()
print(contacts_df.head())

SQL (DuckDB example):

SELECT * FROM findymail_data.email_finder_name LIMIT 10;

In a marimo or Jupyter notebook:

import dlt

data = dlt.pipeline("findymail_pipeline").dataset()
data.email_finder_name.df().head()

See how to explore your data in marimo Notebooks and how to query your data in Python with dataset.


What destinations can I load Findymail data to?

dlt supports loading into any of these destinations — only the destination parameter changes:

| Destination | Example value |
| --- | --- |
| DuckDB (local, default) | "duckdb" |
| PostgreSQL | "postgres" |
| BigQuery | "bigquery" |
| Snowflake | "snowflake" |
| Redshift | "redshift" |
| Databricks | "databricks" |
| Filesystem (S3, GCS, Azure) | "filesystem" |

Change the destination in dlt.pipeline(destination="snowflake") and add credentials in .dlt/secrets.toml. See the full destinations list.


Troubleshooting

Authentication failures

If you receive 401/403, verify your Authorization header uses the Bearer token: Authorization: Bearer YOUR_API_KEY. Ensure the key is active in the dashboard and not rotated.

Rate limits and concurrency

Findymail enforces a concurrent rate limit of 300 simultaneous requests by default. Reduce concurrency, or add retries with backoff when you see 429 responses or connection throttling.
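A minimal retry-with-backoff sketch in plain Python (the helper name is ours, not part of dlt or the Findymail API):

```python
import time


def request_with_backoff(send, max_retries=5, base_delay=1.0):
    """Call send() and retry on HTTP 429 with exponential backoff.

    `send` is any zero-argument callable returning a response object with a
    `status_code` attribute, e.g. a functools.partial around requests.post.
    """
    resp = send()
    for attempt in range(max_retries):
        if resp.status_code != 429:
            break
        # Double the wait on each retry: base, 2*base, 4*base, ...
        time.sleep(base_delay * (2 ** attempt))
        resp = send()
    return resp
```

dlt's REST API source also retries transient errors on its own; this pattern is mainly useful for ad-hoc scripts outside the pipeline.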

Quotas / credits

Many endpoints (email finder, phone finder, intellimatch) consume credits when successful; 402 responses indicate not enough credits. Check /api/credits for balance.
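Checking your balance before a large run might look like this sketch (the response is a plain object per the table above; FINDYMAIL_API_KEY is a hypothetical environment variable name):

```python
import os

import requests


def get_credits(api_key: str) -> dict:
    """GET /api/credits with Bearer auth and return the parsed JSON body."""
    resp = requests.get(
        "https://app.findymail.com/api/credits",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(get_credits(os.environ["FINDYMAIL_API_KEY"]))
```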

Pagination / data selectors

Some endpoints (intellimatch/data, lists) return results under the "data" or "lists" keys. For email finder by name the response places the record under "contact". Verify the JSON key in responses when mapping selectors; example responses in docs show these exact keys.
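As a quick illustration with made-up response bodies matching those documented shapes:

```python
# Hypothetical example responses mirroring the documented key names.
name_response = {"contact": {"email": "jane@example.com", "verified": True}}
intellimatch_response = {"data": [{"company": "Acme"}, {"company": "Globex"}]}

# data_selector="contact" picks out the single contact object:
record = name_response["contact"]

# data_selector="data" yields each item in the nested list:
rows = intellimatch_response["data"]
```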

Also verify endpoint paths and parameters (for example the {id} in /api/contacts/get/{id}) to avoid 404 Not Found errors.


Next steps

Continue your data engineering journey with the other toolkits of the dltHub AI Workbench:

  • data-exploration — Build custom notebooks, charts, and dashboards for deeper analysis with marimo notebooks.
  • dlthub-runtime — Deploy, schedule, and monitor your pipeline in production.
dlt ai toolkit data-exploration install
dlt ai toolkit dlthub-runtime install
