How to Set Up pgpulse
Get your Postgres instance monitored in under 5 minutes. This guide walks you through both setup modes from start to finish.
Before you begin
You'll need:
- A PostgreSQL instance on version 12 or later
- Ability to run SQL on that instance (to create a monitoring user)
- A pgpulse account — sign up free at app.pgpulse.io
Step 1 — Pick your setup mode
Choose based on how your database is hosted:
| | Agentless | Self-Hosted Collector |
|---|---|---|
| How it works | pgpulse connects directly to your DB | You run a lightweight binary on your infra |
| Install required | None | Docker image or single binary |
| Best for | Cloud-managed DBs (Supabase, RDS, Neon…) | Private networks, VPS, bare metal |
| Setup time | ~2 minutes | ~5 minutes |
If your database has a publicly reachable hostname (like a Supabase project URL or RDS endpoint), go with Agentless. If your DB is only accessible inside a private network, use Self-Hosted.
Path A — Agentless
No binary to download. pgpulse connects to your Postgres using credentials you provide.
Step 1 — Create a monitoring user
Run this on your database before adding the instance:
CREATE USER pgpulse_monitor WITH PASSWORD 'strong-password';
GRANT pg_monitor TO pgpulse_monitor;
pgpulse only runs read-only queries. Your credentials are stored encrypted at rest and never used for writes.
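If you manage database setup with scripts, the two statements above can be generated and piped into psql. A minimal sketch — `MONITOR_PASSWORD` and `DATABASE_URL` are placeholder names, not pgpulse conventions; pull the real password from a secrets manager:

```shell
#!/bin/sh
# Sketch: build the monitoring-user SQL so it can be piped into psql.
# MONITOR_PASSWORD is a placeholder -- source the real value from a secrets manager.
MONITOR_PASSWORD='strong-password'
SQL="CREATE USER pgpulse_monitor WITH PASSWORD '${MONITOR_PASSWORD}';
GRANT pg_monitor TO pgpulse_monitor;"
printf '%s\n' "$SQL"
# To apply it (hypothetical DATABASE_URL pointing at your instance):
#   printf '%s\n' "$SQL" | psql "$DATABASE_URL"
```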
Step 2 — Add your instance in the dashboard
- Go to app.pgpulse.io and sign in.
- Click Add Instance → select Agentless.
- Fill in your connection details:
| Field | Example |
|---|---|
| Host | db.abcdefgh.supabase.co |
| Port | 5432 |
| Database | postgres |
| User | pgpulse_monitor |
| Password | strong-password |
| SSL mode | require (recommended) |
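The fields above map directly onto a libpq connection string, so you can sanity-check them with psql before saving. A sketch using the example values from the table — substitute your own host and password:

```shell
#!/bin/sh
# Sketch: assemble a libpq connection string from the dashboard fields
# (example values only) and test it with psql before saving.
HOST=db.abcdefgh.supabase.co
PORT=5432
DBNAME=postgres
DBUSER=pgpulse_monitor
SSLMODE=require
CONNINFO="host=${HOST} port=${PORT} dbname=${DBNAME} user=${DBUSER} sslmode=${SSLMODE}"
echo "$CONNINFO"
# To actually connect (requires psql and network access to the DB):
#   PGPASSWORD='strong-password' psql "$CONNINFO" -c 'SELECT 1;'
```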
Step 3 — Save and go live
Click Save. pgpulse runs a connection test immediately. If it passes, data starts flowing within 1–2 minutes.
You'll see your Pulse score and Golden Signals appear on the dashboard automatically.
Provider notes:
- Supabase: Use the Direct connection host from Project Settings → Database. Set SSL to require.
- Amazon RDS: Add pgpulse's IP ranges to your security group inbound rules (port 5432).
- Neon: Use the direct connection, not the pooled endpoint — pooled connections may not expose all system statistics.
→ Full Agentless reference & IP allowlist
Path B — Self-Hosted Collector
Run the collector binary on your own infrastructure. Ideal for private networks and VPS setups.
Step 1 — Create a monitoring user
Same as Agentless — run this on your Postgres instance:
CREATE USER pgpulse_monitor WITH PASSWORD 'strong-password';
GRANT pg_monitor TO pgpulse_monitor;
Step 2 — Add an instance and get your API key
- Go to app.pgpulse.io and sign in.
- Click Add Instance → select Self-Hosted.
- Navigate to Settings → API Keys and copy your pk_live_... key.
Your pk_live_ key authenticates the collector to pgpulse. Do not commit it to source control — use an environment variable or a secrets manager.
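One common pattern for keeping the key out of git is an untracked env file. A sketch — the `.pgpulse.env` filename is just an illustration, not a pgpulse convention:

```shell
#!/bin/sh
# Sketch: load the API key from an untracked env file instead of hardcoding it.
# The .pgpulse.env filename is illustrative, not a pgpulse convention.
cat > .pgpulse.env <<'EOF'
PGPULSE_API_KEY=pk_live_your_api_key
EOF
echo ".pgpulse.env" >> .gitignore   # make sure the file is never committed
set -a; . ./.pgpulse.env; set +a    # export every variable in the file
printf '%.8s...\n' "$PGPULSE_API_KEY"   # log only a redacted prefix, never the key
```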
Step 3 — Deploy the collector
Pick whichever deployment option fits your infrastructure — each is covered below:
- Docker (recommended)
- systemd (Linux)
- Environment Variables
- Supabase
The fastest way to get running. Mount the buffer volume so data survives restarts.
docker run -d \
--name pgpulse-collector \
--restart unless-stopped \
-e PGPULSE_API_KEY=pk_live_your_api_key \
-e PGPULSE_POSTGRES_HOST=your-db-host \
-e PGPULSE_POSTGRES_USER=pgpulse_monitor \
-e PGPULSE_POSTGRES_PASSWORD=strong-password \
-e PGPULSE_POSTGRES_DATABASE=postgres \
-e PGPULSE_POSTGRES_SSLMODE=require \
-e PGPULSE_COLLECTOR_INSTANCE_NAME=prod-primary \
-v pgpulse-buffer:/var/lib/pgpulse/buffer \
pgpulse/collector:latest
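If you prefer Docker Compose, the flags above translate directly into a service definition. A sketch — the service and volume names are illustrative:

```yaml
# docker-compose.yml sketch -- mirrors the docker run command above.
services:
  pgpulse-collector:
    image: pgpulse/collector:latest
    restart: unless-stopped
    environment:
      PGPULSE_API_KEY: pk_live_your_api_key
      PGPULSE_POSTGRES_HOST: your-db-host
      PGPULSE_POSTGRES_USER: pgpulse_monitor
      PGPULSE_POSTGRES_PASSWORD: strong-password
      PGPULSE_POSTGRES_DATABASE: postgres
      PGPULSE_POSTGRES_SSLMODE: require
      PGPULSE_COLLECTOR_INSTANCE_NAME: prod-primary
    volumes:
      - pgpulse-buffer:/var/lib/pgpulse/buffer
volumes:
  pgpulse-buffer:
```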
Use systemd to run the collector as a managed Linux service.
# 1. Download the binary
curl -L https://releases.pgpulse.io/collector/latest/pgpulse-collector-linux-amd64 \
-o /usr/local/bin/pgpulse-collector
chmod +x /usr/local/bin/pgpulse-collector
# 2. Create a system user
useradd --system --no-create-home --shell /bin/false pgpulse
# 3. Create config and buffer directories
mkdir -p /etc/pgpulse /var/lib/pgpulse/buffer
chown pgpulse:pgpulse /var/lib/pgpulse/buffer
Create your config at /etc/pgpulse/collector.yaml:
postgres:
host: "your-db-host"
port: 5432
user: "pgpulse_monitor"
password: "strong-password"
database: "postgres"
sslmode: "require"
api:
endpoint: "https://api.pgpulse.io/v1/collect"
api_key: "pk_live_your_api_key_here"
collector:
instance_name: "prod-primary"
# Secure the config file
chown root:pgpulse /etc/pgpulse/collector.yaml
chmod 640 /etc/pgpulse/collector.yaml
Create /etc/systemd/system/pgpulse-collector.service:
[Unit]
Description=pgpulse Postgres Collector
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=pgpulse
ExecStart=/usr/local/bin/pgpulse-collector --config /etc/pgpulse/collector.yaml
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
# Enable and start
systemctl daemon-reload
systemctl enable --now pgpulse-collector
# Check logs
journalctl -u pgpulse-collector -f
No config file needed. Useful for PaaS platforms like Heroku, Render, and Railway.
export PGPULSE_API_KEY=pk_live_your_api_key
export PGPULSE_POSTGRES_HOST=your-db-host
export PGPULSE_POSTGRES_PORT=5432
export PGPULSE_POSTGRES_USER=pgpulse_monitor
export PGPULSE_POSTGRES_PASSWORD=strong-password
export PGPULSE_POSTGRES_DATABASE=postgres
export PGPULSE_POSTGRES_SSLMODE=require
export PGPULSE_COLLECTOR_INSTANCE_NAME=prod-primary
./pgpulse-collector
All available variables:
| Variable | Required | Description |
|---|---|---|
| PGPULSE_API_KEY | Yes | Your pk_live_ API key |
| PGPULSE_POSTGRES_CONNECTION_STRING | No | Full DSN — overrides individual fields |
| PGPULSE_POSTGRES_HOST | Yes* | Postgres host |
| PGPULSE_POSTGRES_PORT | No | Postgres port (default 5432) |
| PGPULSE_POSTGRES_USER | Yes* | Postgres user |
| PGPULSE_POSTGRES_PASSWORD | Yes* | Postgres password |
| PGPULSE_POSTGRES_SSLMODE | No | disable · prefer · require · verify-full |
| PGPULSE_COLLECTOR_INSTANCE_NAME | No | Label shown on the dashboard |
| PGPULSE_COLLECTOR_COLLECT_ALL_DATABASES | No | true to enable multi-database mode |
| PGPULSE_LOGGING_LEVEL | No | debug · info · warn · error (default info) |
*Not required if PGPULSE_POSTGRES_CONNECTION_STRING is set.
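In DSN mode, the single connection string carries the host, credentials, database, and SSL mode. A sketch with example values — substitute your real host and password:

```shell
#!/bin/sh
# Sketch: DSN mode -- one variable replaces the individual PGPULSE_POSTGRES_* fields.
# Example values only; substitute your real host and password.
export PGPULSE_POSTGRES_CONNECTION_STRING="postgres://pgpulse_monitor:strong-password@your-db-host:5432/postgres?sslmode=require"
echo "$PGPULSE_POSTGRES_CONNECTION_STRING"
# ./pgpulse-collector   # then start the collector as usual
```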
Connecting a Supabase project? Use one of these configs depending on your connection type.
Direct connection (recommended):
postgres:
host: "db.<your-project-ref>.supabase.co"
port: 5432
user: "postgres"
password: "<your-db-password>"
database: "postgres"
sslmode: "require"
api:
endpoint: "https://api.pgpulse.io/v1/collect"
api_key: "pk_live_your_api_key_here"
collector:
instance_name: "supabase-prod"
Pooler connection (session mode only):
postgres:
host: "aws-<zone-id>-<region>.pooler.supabase.com"
port: 5432
user: "postgres.<your-project-ref>" # ← project ref in the username
password: "<your-db-password>"
database: "postgres"
sslmode: "require"
api:
endpoint: "https://api.pgpulse.io/v1/collect"
api_key: "pk_live_your_api_key_here"
collector:
instance_name: "supabase-prod"
The Supabase pooler in transaction mode is not supported. Use session mode or the direct connection.
Step 4 — Validate the connection
Before checking the dashboard, confirm everything is wired up correctly:
./pgpulse-collector --config collector.yaml --validate
This command checks your config, confirms the Postgres host is reachable, opens a connection, and verifies your API key — then exits cleanly.
Expected output:
✓ Config loaded
✓ Postgres reachable (db.host:5432)
✓ Connection opened (pgpulse_monitor@postgres)
✓ API key verified
All checks passed. Ready to collect.
Exit code 0 = you're good to go. Any non-zero exit code means something needs fixing — the error message will tell you exactly what.
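Because the validator signals success purely through its exit code, it slots neatly into a deploy script or CI step. A sketch — the `validate` function below is a stand-in for the real `./pgpulse-collector --config collector.yaml --validate` invocation so the pattern is self-contained:

```shell
#!/bin/sh
# Sketch: gate a deploy on the validation step via its exit code.
# 'validate' is a stub standing in for:
#   ./pgpulse-collector --config collector.yaml --validate
validate() { return 0; }
if validate; then
  status="passed"
  echo "validation ${status} -- starting collector"
else
  status="failed"
  echo "validation ${status} -- aborting deploy" >&2
  exit 1
fi
```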
→ Full Self-Hosted reference & multi-DB config
Optional — Enable richer metrics
Two Postgres extensions unlock additional dashboard data. Install them if you want full coverage:
-- Query throughput metrics (strongly recommended). Note that pg_stat_statements
-- must also appear in shared_preload_libraries to collect data; most managed
-- providers preload it by default.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Memory / buffer cache metrics
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
The collector skips both gracefully if they're not present — you won't see errors, just fewer metrics.
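To see which of the two extensions are already installed, query the standard pg_extension catalog. A sketch — it prints the query, and the commented psql line (which assumes a `DATABASE_URL` environment variable) runs it against your instance:

```shell
#!/bin/sh
# Sketch: check which of the two extensions are installed via the
# standard pg_extension catalog.
QUERY="SELECT extname FROM pg_extension WHERE extname IN ('pg_stat_statements','pg_buffercache');"
echo "$QUERY"
# To run it (requires psql and a DATABASE_URL pointing at your instance):
#   psql "$DATABASE_URL" -c "$QUERY"
```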
What you'll see in the dashboard
Once data is flowing, the dashboard shows:
| Section | What it tells you |
|---|---|
| Pulse Score | A single 0–100 health score for your instance, updated every minute |
| Golden Signals | CPU, memory, connections, and IO trends over time |
| Query Insights | Slow queries, sequential scans, and lock contention |
| Advisory | Specific recommendations: index suggestions, config tuning, vacuum gaps |
Initial data takes 60–120 seconds to appear on a fresh instance. Refresh the dashboard after that window.
Enable alerts
Navigate to the Alerts tab to set up notifications for critical events like:
- Connection pool saturation
- CPU load crossing a threshold
- Long-running queries or lock waits
Supported channels: Slack, Email, and webhooks.
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| No data after 5 minutes | User missing pg_monitor | Run GRANT pg_monitor TO pgpulse_monitor; |
| Connection test fails | Firewall blocking pgpulse | Allow the pgpulse IP ranges on port 5432 — see Agentless IP Allowlist |
| FATAL: PGPULSE_API_KEY is required | No API key configured | Add api.api_key to your YAML or set PGPULSE_API_KEY |
| preflight TCP check failed | Postgres host unreachable | Check the host, port, and firewall rules |
| 401 at startup | Invalid API key | Re-copy the key from Settings → API Keys |
| 403 at startup | Host mismatch | For Supabase pooler, embed the project ref in the username: postgres.<ref> |
| Dashboard empty after first load | Normal on initial collection | Wait 60–120 seconds and refresh |