

The Cookbook is a collection of end-to-end recipes that show how to recreate each chart, KPI, and table you see in the Profound app using only public API calls. Every recipe is self-contained: it lists the exact endpoints used, explains the request shape, and shows runnable Python and curl you can copy into your own code. If you just need the endpoint reference (every parameter, every metric), see the REST API tab. The Cookbook is the layer above that — common things people want to build with the API.

Give your AI assistant context

Building with Claude, ChatGPT, Cursor, or another AI coding assistant? Paste this URL — it’s a compact index of every page in these docs that your assistant can fetch on demand:
https://docs.tryprofound.com/llms.txt

What you’ll need

1. An API key

Generate one in Settings → API Keys in the Profound app. See Authentication if you don’t have one yet.

2. A category ID

Every report query is scoped to a category. See Find your category ID to look one up by name.

3. The Python SDK (optional)

The recipes default to the Python SDK because it’s the most common path. Install with pip install profound. Every recipe also has a curl tab if you’d rather hit the REST API directly.

One-time setup

Every recipe starts by constructing a client like this — you only need it once per script, even if you’re chaining several recipes:
import os
from profound import Profound

client = Profound(api_key=os.environ["PROFOUND_API_KEY"])
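If you chain several recipes in one script, failing fast on a missing key beats a bare KeyError deep inside the first call. A minimal sketch (the require_api_key helper is illustrative, not part of the SDK):

```python
import os

def require_api_key(env: dict) -> str:
    # Look up the key and fail with a readable message instead of a
    # KeyError when it is missing.
    key = env.get("PROFOUND_API_KEY")
    if not key:
        raise RuntimeError(
            "Set PROFOUND_API_KEY (Settings → API Keys in the Profound app)."
        )
    return key

# In a real script you would pass os.environ:
# client = Profound(api_key=require_api_key(os.environ))
print(require_api_key({"PROFOUND_API_KEY": "pk_test"}))  # pk_test
```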

Conventions used in every recipe

Three things to internalize before you start — these come up in every recipe and trip up most new users:

1. Exclusive end dates. The API parses end_date at the start of day, so it excludes the date you send. To include all of 2026-05-10, send end_date="2026-05-11". Display the inclusive value to your users; send +1 day to the API.

2. Echoed ordering. Every response includes info.query.metrics and info.query.dimensions, which echo back the exact order the API used when packing each row’s metrics and dimensions arrays:

order = res.info.query["metrics"]   # e.g. ["visibility_score"]
i = order.index("visibility_score")
score = res.data[0].metrics[i]

Don’t hardcode positions. The order may not match the order you requested.

3. Client-side deltas. Period-over-period changes (the green/red +1.0 pp you see on every KPI tile) are computed client-side. Run the same call twice — once for the current window, once for the previous window of equal length — and diff the two scores in your code.
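The date arithmetic and array unpacking in these conventions can be sketched with the standard library (the helper names here are illustrative, not part of the SDK):

```python
from datetime import date, timedelta

def api_end_date(inclusive_end: str) -> str:
    # Exclusive end dates: add one day so the displayed (inclusive)
    # date is actually included in results.
    return (date.fromisoformat(inclusive_end) + timedelta(days=1)).isoformat()

def row_as_dict(metric_order: list, row_metrics: list) -> dict:
    # Echoed ordering: pair each value with its name from
    # info.query["metrics"] instead of hardcoding positions.
    return dict(zip(metric_order, row_metrics))

def previous_window(start: str, inclusive_end: str) -> tuple[str, str]:
    # Client-side deltas: shift the window back by its own length to
    # get the equal-length comparison period.
    s, e = date.fromisoformat(start), date.fromisoformat(inclusive_end)
    length = timedelta(days=(e - s).days + 1)
    return ((s - length).isoformat(), (e - length).isoformat())

print(api_end_date("2026-05-10"))                   # 2026-05-11
print(row_as_dict(["visibility_score"], [41.3]))    # {'visibility_score': 41.3}
print(previous_window("2026-05-04", "2026-05-10"))  # ('2026-04-27', '2026-05-03')
```

The same helpers work whether you call the Python SDK or curl the REST API directly; only the date strings and metric arrays matter.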

Before you build

Conventions & gotchas

Rate limits, exclusive end-dates, info.query ordering, errors, pagination. Read this once, paste into your AI assistant.

Data model

How Categories, Topics, Prompts, Tags, Assets, and Personas relate.

Endpoints at a glance

Every endpoint, its metrics/dimensions, and what it’s for — one scannable page.

Recipes

Setup

Find your category ID

List the categories your key can see and pick one programmatically.

List your owned assets

Get every asset in a category with its is_owned flag and domains.

Visibility

Visibility Score for one asset

An asset’s score for the window plus the change vs the prior window.

Visibility over time

Build the daily / weekly / monthly line chart for an asset.

Headline and daily, together

Fetch both at once. Understand why they’re different calls.

Top-N leaderboard

Rank every asset in a category by any visibility metric.

Compare to competitors

Multi-line chart of any hand-picked set of assets.

Segment by model / region / persona

Break a single asset’s score down by the AI surface that answered.

Citations

Citation Share + delta

Owned-domain share of all citations, with the period-over-period change.

Citation Share and volume over time

Daily owned-domain share line and daily total citation volume, both from one call.

Citation Rank by domain

Every cited domain ranked by share, with the per-row pp delta column.