Slow Inserts in Postgres: It's Almost Always One of Three Things

When inserts get slow, the cause is reliably one of three things: too many indexes, foreign key check overhead, or vacuum starvation. Here is how to tell which.

When the application starts complaining that inserts are slow, knowing these categories and the order to check them saves time.

The symptom: an insert that used to take 2ms now takes 50ms. The application's write throughput drops. The database itself is not obviously stressed.
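
A quick way to confirm the symptom in isolation is to time a single insert inside a throwaway transaction. The column names below (customer_id, total) are placeholders; substitute whatever the table actually has:

-- EXPLAIN ANALYZE actually executes the insert, so wrap it in a rollback
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
  INSERT INTO orders (customer_id, total) VALUES (42, 99.50);
ROLLBACK;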

Here is the diagnostic order I use.

1. Too many indexes

Every index has a write cost. An insert into a table with one index pays for the heap write plus one B-tree update. An insert into a table with eight indexes pays for the heap write plus eight B-tree updates. The cost scales linearly with the index count.

The diagnostic:

SELECT
  i.schemaname, i.tablename,
  count(*) AS index_count,
  pg_size_pretty(sum(pg_relation_size(c.oid))) AS total_index_size
FROM pg_indexes i
JOIN pg_namespace n ON n.nspname = i.schemaname
JOIN pg_class c ON c.relname = i.indexname AND c.relnamespace = n.oid
GROUP BY i.schemaname, i.tablename
ORDER BY index_count DESC;

A table with 10+ indexes is a likely candidate. Each insert touches all of them.

The fix is to identify and drop unused indexes:

SELECT
  schemaname, relname AS tablename, indexrelname AS indexname,
  idx_scan, idx_tup_read, idx_tup_fetch,
  pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;

Indexes with idx_scan = 0 have not been used since the last stats reset. If that window is long enough (a month at minimum, ideally a quarter), the index is genuinely unused. Exclude indexes that back primary key or unique constraints: they can show idx_scan = 0 and still be enforcing uniqueness.
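
How long is that window, exactly? pg_stat_database records when the statistics were last reset:

SELECT stats_reset
FROM pg_stat_database
WHERE datname = current_database();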

Dropping unused indexes (CONCURRENTLY avoids blocking writes, but cannot run inside a transaction block):

DROP INDEX CONCURRENTLY old_unused_idx;

One team I worked with had a table with 14 indexes, of which 4 had been added during exploration and never used in production. Dropping them cut insert latency by 30%.

2. Foreign key checks

Foreign key validation runs on every insert that touches an FK column. Postgres has to look up the parent row to confirm it exists. This is fast when the parent's primary key is a single integer, slower when it is a UUID or a composite key.

The diagnostic:

SELECT
  conname, conrelid::regclass AS table_name, confrelid::regclass AS references_table
FROM pg_constraint
WHERE contype = 'f'
  AND conrelid = 'orders'::regclass;

More foreign keys = more lookups per insert. For high-throughput tables, this adds up.
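
To see which parent-side index each FK check hits, and how big it is, pg_constraint's conindid points at the index backing the constraint (again using orders as the example table):

SELECT
  fk.conname,
  c.relname AS parent_index,
  pg_size_pretty(pg_relation_size(c.oid)) AS parent_index_size
FROM pg_constraint fk
JOIN pg_class c ON c.oid = fk.conindid
WHERE fk.contype = 'f'
  AND fk.conrelid = 'orders'::regclass;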

The fix is rarely "drop the FK" — referential integrity matters. Instead:

  • Make sure the parent's PK lookup is fast: a compact, unbloated index on the parent's primary key. If the parent is a heavily-updated table with a bloated PK index, every FK check pays for that bloat.
  • For very high write throughput, consider deferring FK checks. DEFERRABLE INITIALLY DEFERRED moves FK validation to COMMIT instead of the end of each INSERT. Useful for batch inserts; a sketch follows below.
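
A minimal sketch of the deferrable approach. The constraint, column, and parent table names here (orders_customer_id_fkey, customer_id, customers) are hypothetical; substitute your own:

-- Recreate the FK as deferrable. On a large table, prefer ADD CONSTRAINT ... NOT VALID
-- followed by VALIDATE CONSTRAINT to avoid holding a lock while existing rows are checked.
ALTER TABLE orders
  DROP CONSTRAINT orders_customer_id_fkey,
  ADD CONSTRAINT orders_customer_id_fkey
    FOREIGN KEY (customer_id) REFERENCES customers (id)
    DEFERRABLE INITIALLY DEFERRED;

With that in place, a batch insert inside a transaction defers all of its FK checks to COMMIT instead of running them as each row goes in.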

3. Vacuum starvation and bloat

If the table is bloated, every insert has to hunt for free space among pages full of dead tuples. Bloated indexes are worse: their pages split more often, and every split is extra write and WAL work.

The diagnostic:

SELECT
  relname,
  n_live_tup, n_dead_tup,
  round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct,
  last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'orders';

If dead_pct > 20% and last_autovacuum is older than expected, vacuum is not keeping up.
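
Also worth a glance: whether a vacuum is running on the table right now, and how far along it is:

SELECT pid, relid::regclass AS table_name, phase,
  heap_blks_scanned, heap_blks_total
FROM pg_stat_progress_vacuum;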

The fix:

  • Tune autovacuum to be more aggressive on this table.
    ALTER TABLE orders SET (
      autovacuum_vacuum_scale_factor = 0.05,
      autovacuum_analyze_scale_factor = 0.02
    );
    
  • Manual vacuum if currently bloated: VACUUM (ANALYZE) orders; (this marks dead space reusable but does not shrink the file).
  • pg_repack to rewrite severely bloated tables and their indexes without a long exclusive lock.

4. (Bonus) WAL pressure

If WAL and checkpoint writes are saturating the disks, every commit waits. The diagnostic:

SELECT * FROM pg_stat_bgwriter;

If buffers_backend is much larger than buffers_clean, backends are doing the writes that bgwriter should be doing. This usually correlates with checkpoint pressure.

Fix: tune max_wal_size, bgwriter_lru_maxpages, checkpoint_timeout. See the checkpoint tuning post.
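
A sketch of what that tuning can look like. The numbers are illustrative starting points, not recommendations; the right values depend on write volume and disk throughput:

-- Illustrative values only; adjust for your workload
ALTER SYSTEM SET max_wal_size = '8GB';
ALTER SYSTEM SET checkpoint_timeout = '15min';
ALTER SYSTEM SET bgwriter_lru_maxpages = 400;
SELECT pg_reload_conf();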

A real diagnostic

Real scenario: a high-write events table where inserts went from 3ms to 40ms over six months.

Diagnostic sequence:

-- Index count: 9 (high, but several were genuinely used)
-- Unused indexes: 2 (added during exploration, now unused)
-- FK count: 3 (reasonable)
-- Bloat: dead_pct = 35%, last_autovacuum = 2 weeks ago

The primary issue was vacuum starvation. The table had outgrown the default autovacuum thresholds: with autovacuum_vacuum_scale_factor = 0.2, autovacuum waited until roughly 20% of the table was dead before running, by which time the table was huge, each vacuum pass took forever, and it tied up workers that other tables needed.
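
The trigger point is autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples. A quick way to see what that works out to for a table, assuming the default threshold of 50 has not been overridden:

SELECT relname,
  reltuples::bigint AS approx_rows,
  (50 + 0.2 * reltuples)::bigint AS dead_tuples_to_trigger_default,
  (50 + 0.02 * reltuples)::bigint AS dead_tuples_to_trigger_tuned
FROM pg_class
WHERE relname = 'events';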

Fix:

ALTER TABLE events SET (
  autovacuum_vacuum_scale_factor = 0.02,
  autovacuum_analyze_scale_factor = 0.01
);
VACUUM (ANALYZE) events;

And dropped the 2 unused indexes.

Insert latency back to 4-5ms within 24 hours.

What I check first

For any "inserts got slow" investigation:

  1. Index count on the table. If high, run the unused-index query.
  2. Bloat percentage. If high, vacuum settings need attention.
  3. FK count and parent table state. Less common cause but check.
  4. WAL pressure (checkpoint stats). For very high-throughput tables.

80% of slow-insert problems land on indexes or vacuum. The other 20% are interesting and require more investigation, but running the obvious checks first usually closes the case.