pg_stat_statements — The Foundation
Required for everything in MonPG. How it works, how to verify it's loaded, what we read from it.
pg_stat_statements is the only PostgreSQL extension MonPG installs automatically during hosted onboarding. Without it, the Queries, Query Advisor, Compare, and Index Advisor pages have no data — the entire query-level half of the product depends on it. Every other extension on the list is optional and offers something incremental.
How it works
It's a contrib module that hooks the executor and accumulates per-query statistics in a shared-memory hash table. Each unique normalized query (constants stripped — your SELECT * FROM users WHERE id = 47 and SELECT * FROM users WHERE id = 99 become the same row) gets one entry. The counters are cumulative since server start or the last pg_stat_statements_reset(); we compute deltas by snapshotting and subtracting.
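The snapshot-and-subtract step can be sketched in SQL. Everything except the view itself is illustrative: stmt_snapshot is a hypothetical staging table, not MonPG's actual schema, and the column names use the PG 13+ spellings (total_exec_time was total_time before PG 13).

```sql
-- Snapshot the cumulative counters at the start of an interval.
-- (stmt_snapshot is a hypothetical staging table.)
CREATE TABLE stmt_snapshot AS
SELECT queryid, calls, total_exec_time
FROM pg_stat_statements;

-- Later: activity during the interval is current minus snapshot.
SELECT cur.queryid,
       cur.calls - prev.calls                     AS calls_in_interval,
       cur.total_exec_time - prev.total_exec_time AS ms_in_interval
FROM pg_stat_statements cur
JOIN stmt_snapshot prev USING (queryid);
```

A real collector also has to handle entries that are new since the last snapshot (an outer join rather than the inner join shown here) and counters that go backwards after a pg_stat_statements_reset() or an eviction.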
What we read from it
The queryid + query pair gives the fingerprint shown in the UI. Counters: calls, total_time, mean_time, min_time, max_time, stddev_time (on PG 13+ these are named total_exec_time, mean_exec_time, and so on). Row counts via rows. Buffer activity from shared_blks_hit/read/written/dirtied, plus local_blks_* and temp_blks_* for temp/local table activity. blk_read_time and blk_write_time come through only if you have track_io_timing = on (that flag being off is the most common reason MonPG's I/O latency reports zero). On PG 13+, we also pull WAL pressure per query from wal_records, wal_fpi, and wal_bytes.
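Put together, a read in roughly this shape pulls everything listed above. Column names are the PG 13+ spellings; on older servers, swap in total_time and mean_time.

```sql
SELECT queryid,
       left(query, 60)  AS fingerprint,
       calls,
       total_exec_time,
       mean_exec_time,
       rows,
       shared_blks_hit,
       shared_blks_read,
       blk_read_time,   -- stays 0 unless track_io_timing = on
       wal_bytes        -- PG 13+ only
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```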
Verifying it's actually loaded
SELECT * FROM pg_extension WHERE extname = 'pg_stat_statements';
One row means the extension is installed. Then check the view itself has data:
SELECT count(*) FROM pg_stat_statements;
If that's greater than 0 after a few queries have run, you're set. If instead the query fails with "pg_stat_statements must be loaded via shared_preload_libraries", the extension's catalog objects exist but the .so was never loaded into shared memory: add pg_stat_statements to shared_preload_libraries and actually restart PG, since the module allocates its hash table at server start and a config reload is not enough.
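To confirm the preload side specifically, check the setting from a live session:

```sql
SHOW shared_preload_libraries;
-- pg_stat_statements should appear in the comma-separated value.
```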
Tuning
The settings worth touching:
pg_stat_statements.max = 10000. The default of 5000 evicts long-tail queries on high-QPS clusters, and the long tail is exactly where weird queries hide.
pg_stat_statements.track = top. Top-level only is the default; switch to all only if you specifically need statements nested inside functions in the catalog.
pg_stat_statements.track_utility = on. Captures VACUUM, CREATE, etc. — useful for the maintenance view.
pg_stat_statements.save = on. Persists stats across restarts so you don't reset the catalog every time PG bounces.
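As a postgresql.conf fragment, those recommendations look like this. Note that shared_preload_libraries and pg_stat_statements.max only take effect after a full restart, since both size shared memory at startup; track_io_timing is included because the I/O timing columns stay at zero without it.

```
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
track_io_timing = on                  # needed for blk_read_time / blk_write_time
pg_stat_statements.max = 10000        # restart required to resize
pg_stat_statements.track = top
pg_stat_statements.track_utility = on
pg_stat_statements.save = on
```

On PG 14+, SELECT dealloc FROM pg_stat_statements_info reports how many entries have been evicted to make room; a steadily climbing number means max is still too low.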
Cost
Negligible. Roughly a microsecond of hash-table bookkeeping per query, similar to enabling log_min_duration_statement. Memory cost is about 2KB per tracked query times the max value: roughly 10MB of shared memory at the default 5000 cap, 20MB at the recommended 10000. You don't need to plan capacity around this.