PostgreSQL plan forensics
Turn a raw EXPLAIN plan into an incident narrative: what the planner believed, where execution diverged, which node created the blast radius, and what evidence to collect before changing production.
Review Question
Which part of this plan is actually responsible for the incident, and what evidence would change the next production decision?
At A Glance
The page is designed to answer production questions like the one above without losing the operational context.
How To Use It
This page should feel like a structured point-in-time review, not a dashboard mashup. First give it the raw plan text. Then read the verdict, the key signals, and the starter checks that move the investigation forward.
01
Use EXPLAIN ANALYZE output with buffers when possible. The better the evidence, the better the review.
02
See whether this looks like an incident signature, a risky plan, or just something worth reviewing.
03
Use the starter SQL to inspect the query family and planner stats before you change production.
Paste raw EXPLAIN output here first. This is the input the rest of the page reads.
Runs locally
Everything happens in the browser. The page does not connect to your database.
Best input
Prefer EXPLAIN (ANALYZE, BUFFERS, VERBOSE) or FORMAT JSON so loops, reads, and spills are visible.
Main goal
Separate planner shape from runtime evidence before changing indexes, stats, or memory settings.
This plan already shows a production-grade failure signature. Fixes should be guided by evidence, not guesswork or generic tuning.
Execution time
185.1 ms
Prefer EXPLAIN ANALYZE with Execution Time.
Worst estimate gap
18,544x
Planner rows compared with actual rows across loops.
Largest read node
12,811 blocks
Largest hit signal: 842 blocks.
Rows filtered late
812,456
Rows removed by filter, join filter, or index recheck.
These are the signals the page extracted from the plan text.
Nodes executing 1,000+ loops.
Disk sort/hash spill reported by the plan.
Counts of sequential scans versus index-backed scans.
Hash batches above 1 indicate memory pressure.
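If hash batches or spills keep appearing, the per-database temp-file counters show whether memory pressure is chronic rather than a one-off. A minimal check, assuming cumulative statistics have not been reset recently:

-- Cumulative temp-file activity per database since the last stats reset.
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;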
Risk-ranked interpretations of what the plan is showing.
Loop amplification: Index Scan using customers_pkey on customers executed 18,544 loops.
High loop counts turn small per-row costs into the dominant runtime. This is where nested-loop plans often become production incidents.
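To see why the loop count dominates, multiply the per-loop time by the loop count. The node line below is illustrative only; the per-loop timings are hypothetical, not taken from this plan:

->  Index Scan using customers_pkey on customers (actual time=0.005..0.009 rows=1 loops=18544)

At roughly 0.009 ms per loop, 18,544 loops account for about 167 ms of a 185 ms query.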
Estimate gap: Index Scan using customers_pkey on customers expected 1 row and produced 18,544. Worst ratio: 18,544x.
When estimates are this wrong, PostgreSQL can choose the wrong join order, the wrong access path, or a nested loop that only looks cheap on paper.
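Refreshing or deepening planner statistics is usually the cheapest experiment before touching the query or its indexes. A minimal sketch, assuming the misestimated column is orders.customer_id (hypothetical names):

-- Raise the per-column statistics target, then refresh stats.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;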
Disk spill: the plan reports approximately 128 MB written by disk sort/hash operations.
Disk-backed sorts and hashes add latency variance and usually get worse under concurrency.
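Before changing server-wide memory settings, a session-scoped experiment can confirm whether the spill is actually memory-bound. A sketch only; the value below is an experiment, not a recommendation:

-- Session-only change; other connections are unaffected.
SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;
RESET work_mem;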
Use these queries to continue the investigation. They are starter checks, not a replacement for historical monitoring.
-- 1. Find out whether this plan belongs to a recurring query family.
SELECT queryid, calls, total_exec_time, mean_exec_time,
       rows, shared_blks_hit, shared_blks_read,
       temp_blks_read, temp_blks_written
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
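-- 1b. Optional variant, assuming pg_stat_statements is installed:
--     narrow the list to statements that actually wrote temp blocks.
SELECT queryid, calls, temp_blks_written
FROM pg_stat_statements
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 20;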
-- 2. Re-check planner statistics before changing access paths.
SELECT schemaname, tablename, attname,
       n_distinct, correlation, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'your_table'
ORDER BY attname;
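-- 2b. If two filter columns move together, per-column stats cannot see it;
--     extended statistics may close the estimate gap. Names are hypothetical.
CREATE STATISTICS your_table_dep_stats (dependencies, ndistinct)
ON col_a, col_b FROM your_table;
ANALYZE your_table;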
-- 3. Re-run the same query with enough evidence to compare nodes.
EXPLAIN (ANALYZE, BUFFERS, VERBOSE, SETTINGS)
SELECT ...;
-- 4. If your tooling prefers JSON, keep the same evidence in machine-readable form.
EXPLAIN (ANALYZE, BUFFERS, VERBOSE, SETTINGS, FORMAT JSON)
SELECT ...;

What This Review Should Clarify
First collect the plan evidence. Then interpret the verdict. Only after that should you decide whether the fix belongs in statistics, query shape, indexes, or memory settings.
Understand planner decisions, statistics, and plan regressions.
Find which query plans deserve attention first.
Track the query families that deserve plan review over time.
Benchmark 48 HNSW configurations against your real pgvector data in 10 minutes. Get the optimal m, ef_construction, and ef_search plus zero-downtime migration SQL.
Model a proposed PostgreSQL index as a production rollout: DDL shape, lock level, WAL pressure, write amplification, replica lag risk, validation SQL, and rollback criteria.
This page should help with a real decision right now. It should not pretend to replace the historical evidence that production teams need after a deploy, during an incident, or across a longer rollout window.