
How to Set Up pgpulse

Get your Postgres instance monitored in under 5 minutes. This guide walks you through both setup modes from start to finish.


Before you begin

You'll need:

  • A PostgreSQL instance on version 12 or later
  • Ability to run SQL on that instance (to create a monitoring user)
  • A pgpulse account — sign up free at app.pgpulse.io

Step 1 — Pick your setup mode

Choose based on how your database is hosted:

                   Agentless                                   Self-Hosted Collector
How it works       pgpulse connects directly to your DB        You run a lightweight binary on your infra
Install required   None                                        Docker image or single binary
Best for           Cloud-managed DBs (Supabase, RDS, Neon…)    Private networks, VPS, bare metal
Setup time         ~2 minutes                                  ~5 minutes
Not sure which to pick?

If your database has a publicly reachable hostname (like a Supabase project URL or RDS endpoint), go with Agentless. If your DB is only accessible inside a private network, use Self-Hosted.


Path A — Agentless

No binary to download. pgpulse connects to your Postgres using credentials you provide.

Step 1 — Create a monitoring user

Run this on your database before adding the instance:

CREATE USER pgpulse_monitor WITH PASSWORD 'strong-password';
GRANT pg_monitor TO pgpulse_monitor;
Note: pgpulse only runs read-only queries. Your credentials are stored encrypted at rest and never used for writes.
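To confirm the grant took effect, you can run a quick sanity check before moving on (a sketch; `pg_has_role` is standard Postgres, and the user name assumes the command above):

```sql
-- Should return true if pg_monitor was granted to the monitoring user
SELECT pg_has_role('pgpulse_monitor', 'pg_monitor', 'member');
```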

Step 2 — Add your instance in the dashboard

  1. Go to app.pgpulse.io and sign in.
  2. Click Add Instance → select Agentless.
  3. Fill in your connection details:
Field      Example
Host       db.abcdefgh.supabase.co
Port       5432
Database   postgres
User       pgpulse_monitor
Password   strong-password
SSL mode   require (recommended)
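Before saving, you can optionally smoke-test the same details from your own machine. A hedged sketch that assembles them into a connection URI (the host and password are the example values above; the `psql` line is commented out so you can run it when ready):

```shell
# Assemble the dashboard fields into a single Postgres connection URI.
DB_HOST=db.abcdefgh.supabase.co
DB_PORT=5432
DB_NAME=postgres
DB_USER=pgpulse_monitor
DB_PASS=strong-password
URI="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=require"
echo "$URI"
# psql "$URI" -c 'SELECT 1;'   # uncomment to verify the credentials work
```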

Step 3 — Save and go live

Click Save. pgpulse runs a connection test immediately. If it passes, data starts flowing within 1–2 minutes.

You'll see your Pulse score and Golden Signals appear on the dashboard automatically.

Provider-specific notes
  • Supabase: Use the Direct connection host from Project Settings → Database. Set SSL to require.
  • Amazon RDS: Add pgpulse's IP ranges to your security group inbound rules (port 5432).
  • Neon: Use the direct connection, not the pooled endpoint — pooled connections may not expose all system statistics.

→ Full Agentless reference & IP allowlist


Path B — Self-Hosted Collector

Run the collector binary on your own infrastructure. Ideal for private networks and VPS setups.

Step 1 — Create a monitoring user

Same as Agentless — run this on your Postgres instance:

CREATE USER pgpulse_monitor WITH PASSWORD 'strong-password';
GRANT pg_monitor TO pgpulse_monitor;

Step 2 — Add an instance and get your API key

  1. Go to app.pgpulse.io and sign in.
  2. Click Add Instance → select Self-Hosted.
  3. Navigate to Settings → API Keys and copy your pk_live_... key.
Keep your API key safe

Your pk_live_ key authenticates the collector to pgpulse. Do not commit it to source control — use an environment variable or a secrets manager.
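One way to follow this advice with the Docker command below: keep the key in a file readable only by you and pass it via `--env-file` (a sketch; the filename `pgpulse.env` is arbitrary):

```shell
# Create an env file with user-only permissions, then add the key to it.
install -m 600 /dev/null pgpulse.env
echo "PGPULSE_API_KEY=pk_live_your_api_key" >> pgpulse.env
# Pass it to the collector without exposing the key on the command line:
#   docker run --env-file pgpulse.env ... pgpulse/collector:latest
```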

Step 3 — Deploy the collector

Docker is the fastest way to get running. Mount the buffer volume so data survives restarts.

docker run -d \
  --name pgpulse-collector \
  --restart unless-stopped \
  -e PGPULSE_API_KEY=pk_live_your_api_key \
  -e PGPULSE_POSTGRES_HOST=your-db-host \
  -e PGPULSE_POSTGRES_USER=pgpulse_monitor \
  -e PGPULSE_POSTGRES_PASSWORD=strong-password \
  -e PGPULSE_POSTGRES_DATABASE=postgres \
  -e PGPULSE_POSTGRES_SSLMODE=require \
  -e PGPULSE_COLLECTOR_INSTANCE_NAME=prod-primary \
  -v pgpulse-buffer:/var/lib/pgpulse/buffer \
  pgpulse/collector:latest
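If you prefer Compose, the same command translates roughly as follows (a sketch assuming the identical environment variables; the service name is arbitrary, and the API key is read from the host environment rather than hard-coded):

```yaml
# compose.yaml — hypothetical equivalent of the docker run command above
services:
  pgpulse-collector:
    image: pgpulse/collector:latest
    restart: unless-stopped
    environment:
      PGPULSE_API_KEY: ${PGPULSE_API_KEY}   # supplied by the host environment
      PGPULSE_POSTGRES_HOST: your-db-host
      PGPULSE_POSTGRES_USER: pgpulse_monitor
      PGPULSE_POSTGRES_PASSWORD: strong-password
      PGPULSE_POSTGRES_DATABASE: postgres
      PGPULSE_POSTGRES_SSLMODE: require
      PGPULSE_COLLECTOR_INSTANCE_NAME: prod-primary
    volumes:
      - pgpulse-buffer:/var/lib/pgpulse/buffer
volumes:
  pgpulse-buffer:
```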

Step 4 — Validate the connection

Before checking the dashboard, confirm everything is wired up correctly. If you're running the standalone binary, use its built-in validation:

./pgpulse-collector --config collector.yaml --validate

This command checks your config, confirms the Postgres host is reachable, opens a connection, and verifies your API key — then exits cleanly.

Expected output:

✓ Config loaded
✓ Postgres reachable (db.host:5432)
✓ Connection opened (pgpulse_monitor@postgres)
✓ API key verified
All checks passed. Ready to collect.

Exit code 0 = you're good to go. Any non-zero exit code means something needs fixing — the error message will tell you exactly what.
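In a deploy script you can gate startup on that exit code. A sketch, using a stand-in command so it runs anywhere (replace `VALIDATE_CMD` with the real `./pgpulse-collector --config collector.yaml --validate` invocation):

```shell
# Stand-in for: ./pgpulse-collector --config collector.yaml --validate
VALIDATE_CMD=${VALIDATE_CMD:-true}

if $VALIDATE_CMD; then
  STATUS=ok
  echo "All checks passed. Starting collector."
  # exec ./pgpulse-collector --config collector.yaml
else
  STATUS=failed
  echo "Validation failed; not starting the collector." >&2
  exit 1
fi
```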

→ Full Self-Hosted reference & multi-DB config


Optional — Enable richer metrics

Two Postgres extensions unlock additional dashboard data. Install them if you want full coverage:

-- Query throughput metrics (strongly recommended)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Memory / buffer cache metrics
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

The collector skips both gracefully if they're not present — you won't see errors, just fewer metrics.
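To see which of the two are active, query `pg_extension` (standard Postgres; note that on self-managed instances `pg_stat_statements` also requires `shared_preload_libraries` and a server restart):

```sql
-- Lists whichever of the optional extensions are installed in this database
SELECT extname, extversion
FROM pg_extension
WHERE extname IN ('pg_stat_statements', 'pg_buffercache');
```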


What you'll see in the dashboard

Once data is flowing, the dashboard shows:

Section          What it tells you
Pulse Score      A single 0–100 health score for your instance, updated every minute
Golden Signals   CPU, memory, connections, and IO trends over time
Query Insights   Slow queries, sequential scans, and lock contention
Advisory         Specific recommendations: index suggestions, config tuning, vacuum gaps

Initial data takes 60–120 seconds to appear on a fresh instance. Refresh the dashboard after that window.


Enable alerts

Navigate to the Alerts tab to set up notifications for critical events like:

  • Connection pool saturation
  • CPU load crossing a threshold
  • Long-running queries or lock waits

Supported channels: Slack, Email, and webhooks.

→ Setting up Alerts


Troubleshooting

Symptom                              Likely cause                   Fix
No data after 5 minutes              User missing pg_monitor        Run GRANT pg_monitor TO pgpulse_monitor;
Connection test fails                Firewall blocking pgpulse      Allow the pgpulse IP ranges on port 5432 (see Agentless IP Allowlist)
FATAL: PGPULSE_API_KEY is required   No API key configured          Add api.api_key to your YAML or set PGPULSE_API_KEY
preflight TCP check failed           Postgres host unreachable      Check the host, port, and firewall rules
401 at startup                       Invalid API key                Re-copy the key from Settings → API Keys
403 at startup                       Host mismatch                  For the Supabase pooler, embed the project ref in the username: postgres.<ref>
Dashboard empty after first load     Normal on initial collection   Wait 60–120 seconds and refresh