The DELETE statement is fast. Marking a tuple dead in Postgres takes a few microseconds. When DELETE feels slow, the slowness is almost always something other than the delete itself.
The categories I check first.
1. Foreign key cascade work
The most common cause. A DELETE on a parent table cascades to child tables, which may cascade further.
DELETE FROM customers WHERE id = 47;
-- Postgres has to find and delete:
-- - All orders with customer_id = 47
-- - All payments tied to those orders
-- - All audit_log entries tied to those orders
-- - And so on
For each cascade, Postgres needs an index on the FK column. Without it, the cascade is a sequential scan.
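Concretely, the cascade chain comes from FK definitions like these (the schema here is illustrative, not the real one):

```sql
CREATE TABLE customers (id bigint PRIMARY KEY);

CREATE TABLE orders (
    id          bigint PRIMARY KEY,
    customer_id bigint NOT NULL
        REFERENCES customers (id) ON DELETE CASCADE
);

CREATE TABLE payments (
    id       bigint PRIMARY KEY,
    order_id bigint NOT NULL
        REFERENCES orders (id) ON DELETE CASCADE
);

-- Deleting one customer forces Postgres to find that customer's orders,
-- then each order's payments. Without indexes on customer_id and
-- order_id, each level of the cascade is a sequential scan.
```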
Diagnostic:
SELECT
    c.conrelid::regclass AS child_table,
    string_agg(a.attname, ',' ORDER BY array_position(c.conkey, a.attnum)) AS fk_columns
FROM pg_constraint c
JOIN pg_attribute a
    ON a.attrelid = c.conrelid AND a.attnum = ANY (c.conkey)
WHERE c.contype = 'f'
  AND c.confrelid = 'customers'::regclass
  AND NOT EXISTS (
      SELECT 1 FROM pg_index i
      WHERE i.indrelid = c.conrelid
        AND (c.conkey::int[]) <@ (i.indkey::int2[]::int[])
  )
GROUP BY c.conrelid, c.conname, c.conkey;
This lists FKs pointing to customers whose child tables do not have a matching index. Each one is a candidate for slow cascade.
Fix: add the missing indexes.
CREATE INDEX CONCURRENTLY orders_customer_id_idx ON orders (customer_id);
For a multi-million-row child table without an FK index, this is the single biggest delete performance fix available.
2. Trigger overhead
Triggers on the deleted table or its children fire for every deleted row. A trigger that does a write (audit log, recompute aggregates) multiplies the work.
Diagnostic:
SELECT
    trigger_name,
    event_manipulation,
    action_statement
FROM information_schema.triggers
WHERE event_object_table = 'orders';
For each trigger, ask: "Is this trigger doing significant work?" If yes, deleting many rows is the trigger's work multiplied.
Fix: depending on the trigger, options include:
- Disable triggers for bulk deletes: ALTER TABLE orders DISABLE TRIGGER ALL; before the delete, re-enable after. Risky: triggers exist for reasons.
- Batch the delete: smaller batches mean less peak trigger load.
- Rewrite the trigger: sometimes the trigger does more than it needs to.
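For a bulk delete, the disable/re-enable pattern looks like this. A sketch; note that DISABLE TRIGGER ALL requires superuser and also disables the internal FK triggers, while DISABLE TRIGGER USER leaves constraint enforcement intact. The ALTER TABLE also takes an ACCESS EXCLUSIVE lock, so keep the transaction short:

```sql
BEGIN;
-- USER disables only user-defined triggers; FK integrity stays enforced.
ALTER TABLE orders DISABLE TRIGGER USER;
DELETE FROM orders WHERE created_at < '2020-01-01';
ALTER TABLE orders ENABLE TRIGGER USER;
COMMIT;
```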
3. Index maintenance
Strictly speaking, a Postgres DELETE does not update indexes at all: it marks the heap tuple dead, and the index entries are removed later by vacuum. But every extra index still adds cost around the delete: the follow-up vacuum has more B-trees to clean, and index bloat accumulates in the meantime. Tables with many large indexes pay for their deletes indirectly.
Diagnostic:
SELECT count(*), pg_size_pretty(sum(pg_relation_size(c.oid)))
FROM pg_indexes i
JOIN pg_class c ON c.relname = i.indexname
WHERE i.tablename = 'orders';
If the table has 10+ indexes, the vacuum that follows the delete has 10+ B-trees to clean, and until it runs, every one of them carries dead entries.
Fix: drop unused indexes (see the slow-inserts post for the methodology).
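To find candidates, pg_stat_user_indexes tracks how often each index has been scanned since the last statistics reset:

```sql
SELECT
    indexrelname AS index_name,
    idx_scan,
    pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE relname = 'orders'
  AND idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
-- idx_scan = 0 since the last stats reset marks a drop candidate,
-- but check the index is not backing a unique or FK constraint first.
```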
4. Bloat in the table or its indexes
A bloated table has dead tuples scattered across its pages, so finding the rows to delete means reading far more pages than the live data requires. Bloated, low-density indexes slow down any index scan in the plan.
Diagnostic:
SELECT
    relname,
    n_live_tup, n_dead_tup,
    round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
FROM pg_stat_user_tables
WHERE relname = 'orders';
If dead_pct > 30%, the table is bloated. Vacuum it.
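A plain VACUUM reclaims the dead tuples in place; it does not shrink the file, but it makes the space reusable:

```sql
VACUUM (VERBOSE, ANALYZE) orders;
-- VACUUM FULL would rewrite the table and return disk space to the OS,
-- but it takes an ACCESS EXCLUSIVE lock for the duration.
```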
5. The DELETE itself is finding rows slowly
The least common cause but real: the WHERE clause does not have an index.
DELETE FROM orders WHERE created_at < '2020-01-01';
Without an index on created_at, this seq-scans the table to find matching rows, then deletes them.
Diagnostic:
EXPLAIN DELETE FROM orders WHERE created_at < '2020-01-01';
Look for Index Scan (good) or Seq Scan (the issue).
Fix: index the WHERE column.
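For the date-range delete above (the index name is illustrative):

```sql
CREATE INDEX CONCURRENTLY orders_created_at_idx ON orders (created_at);
```

Re-run the EXPLAIN afterwards to confirm the plan switched to an index scan.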
A worked example
Real scenario: a DELETE FROM customers WHERE id = X was taking 30 seconds.
Diagnostic:
- customers had only 100,000 rows. The delete itself should be milliseconds.
- orders had 50M rows.
- orders had no index on customer_id.
- The cascade had to seq-scan orders to find rows with customer_id = X.
Fix:
CREATE INDEX CONCURRENTLY orders_customer_id_idx ON orders (customer_id);
Delete time: 30 seconds → 200 milliseconds.
The fix was one index. The diagnosis took longer than the fix.
Bulk deletes
Deleting many rows in one statement is different from deleting many rows one at a time. Two patterns:
Single bulk delete:
DELETE FROM orders WHERE created_at < '2020-01-01';
-- One transaction, one set of locks, one cascade pass
Chunked deletes:
WITH deleted AS (
    DELETE FROM orders
    WHERE id IN (
        SELECT id FROM orders
        WHERE created_at < '2020-01-01'
        LIMIT 1000
    )
    RETURNING 1
)
SELECT count(*) FROM deleted;
-- Repeat until count is 0
For very large bulk deletes, chunking is usually better — it does not hold locks for a long time, vacuum can keep up, the application can run alongside.
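Since PostgreSQL 11, the repeat-until-zero loop can also live server-side in a DO block, with a COMMIT per batch so locks are released and vacuum can reclaim dead tuples as the loop runs. A sketch; it must be run outside an explicit transaction block for the in-loop COMMIT to be allowed:

```sql
DO $$
DECLARE
    n bigint;
BEGIN
    LOOP
        DELETE FROM orders
        WHERE id IN (
            SELECT id FROM orders
            WHERE created_at < '2020-01-01'
            LIMIT 1000
        );
        GET DIAGNOSTICS n = ROW_COUNT;
        EXIT WHEN n = 0;
        COMMIT;  -- end the batch; allowed in DO blocks since PG 11
    END LOOP;
END $$;
```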
Soft delete instead
For large datasets where DELETE is structurally expensive, soft delete is the alternative:
UPDATE orders SET deleted_at = now() WHERE id = X;
UPDATE is faster than DELETE in many cases: an UPDATE fires no FK cascades unless the referenced key column itself changes, which a soft delete never does. Application queries filter WHERE deleted_at IS NULL to exclude soft-deleted rows.
A periodic hard-delete job removes soft-deleted rows past retention.
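The supporting pieces, sketched (the column, index name, and retention window are illustrative):

```sql
ALTER TABLE orders ADD COLUMN deleted_at timestamptz;

-- A partial index keeps queries on live rows fast; soft-deleted rows
-- drop out of the index entirely.
CREATE INDEX CONCURRENTLY orders_live_customer_idx
    ON orders (customer_id)
    WHERE deleted_at IS NULL;

-- Periodic hard-delete job, run in batches off-peak:
DELETE FROM orders
WHERE deleted_at < now() - interval '90 days';
```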
What I check first
For any "DELETE is slow" investigation:
- FK indexes on child tables. Run the missing-FK-index query.
- Triggers on the table. List them; estimate their work.
- Number of indexes on the table. Drop unused ones.
- Bloat percentage. Vacuum if high.
- Index on the WHERE column. Add if missing.
Most cases resolve at step 1. The fix is usually adding one index that makes a cascade go from seq-scan to index-scan.