A query that needs sorted output either gets it from an index — for free — or computes it with a sort step. The sort step's cost depends almost entirely on whether the data fits in work_mem. In-memory sort is fast. Disk-spill sort is much slower. The factor between them can be 10x for the same data.
Most "why is my query slow" investigations on sorted output come down to this distinction. Here is how to read the signal and what to do.
How sorts execute
In EXPLAIN ANALYZE, a sort node tells you exactly what method it used:
Sort (actual time=2000.000..2500.000 rows=500000 loops=1)
  Sort Key: created_at DESC
  Sort Method: external merge  Disk: 51200kB
Sort Methods:
- quicksort — fits in work_mem. Fast.
- top-N heapsort — for ORDER BY ... LIMIT N with small N. Postgres maintains a heap of the top N rows during the scan. Very fast, low memory.
- external merge — does not fit in work_mem. Spills to disk, sorts in chunks, merges the sorted runs. Much slower.
- external sort — an older disk-based method, similar to external merge.
If you see external merge or Disk: in the sort node, the sort spilled. Either work_mem is too low or the data really is too big to fit in any reasonable work_mem.
The free fix: an index that already provides the sort
For a query like ORDER BY created_at DESC LIMIT 10, the right index removes the sort entirely:
CREATE INDEX events_created_at_idx ON events (created_at DESC);
Now the query plan reads from the index in pre-sorted order, takes 10, done. No sort step. Memory usage is constant regardless of table size.
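With the index in place, the plan loses its sort node entirely. An illustrative shape (node names are real Postgres plan nodes; the timings are hypothetical):

```sql
-- EXPLAIN ANALYZE SELECT * FROM events ORDER BY created_at DESC LIMIT 10;
--
-- Limit (actual time=0.020..0.050 rows=10 loops=1)
--   -> Index Scan using events_created_at_idx on events
--        (actual time=0.018..0.045 rows=10 loops=1)
```

The absence of a Sort node is the signal to look for.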
For compound sorts, the index column order matters:
-- Query: ORDER BY tenant_id, created_at DESC
CREATE INDEX events_tenant_created
ON events (tenant_id, created_at DESC);
The leading column is tenant_id (matches the first sort key), the trailing column is created_at DESC (matches the second). Postgres can scan in this order without sorting.
If the query is ORDER BY created_at DESC, tenant_id, you need a different index:
CREATE INDEX events_created_tenant
ON events (created_at DESC, tenant_id);
The index column order has to match the ORDER BY column order. There is no general-purpose "sort index" — each one supports specific orderings.
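One nuance: Postgres can scan any btree index backwards, which satisfies the exact mirror image of its ordering for free. A sketch of what the compound index above does and does not cover:

```sql
-- Index: (tenant_id, created_at DESC)
-- Forward scan satisfies:  ORDER BY tenant_id,      created_at DESC
-- Backward scan satisfies: ORDER BY tenant_id DESC, created_at ASC
-- Neither satisfies:       ORDER BY tenant_id,      created_at ASC
SELECT * FROM events
ORDER BY tenant_id DESC, created_at ASC;  -- still no sort step
```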
When the sort is unavoidable, raise work_mem
For analytical queries that genuinely need to sort large data:
SET work_mem = '256MB';
SELECT ... ORDER BY ...;
RESET work_mem;
Scoping this to the session is critical. work_mem is a per-operation limit; a sort, a hash join, and a hash aggregate in one query can each allocate up to that much. A globally set 256MB on a 100-connection server can add up to 25GB+ of RAM.
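The arithmetic behind that warning can be sketched directly in SQL (the 100 connections and one operation per query are illustrative assumptions):

```sql
SELECT pg_size_pretty(
  100                          -- concurrent connections (assumption)
  * 1                          -- sorts/hashes per query (often more)
  * 256::bigint * 1024 * 1024  -- work_mem of 256MB, in bytes
) AS worst_case_memory;        -- roughly 25 GB
```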
For known heavy queries, a transaction-scoped setting is cleaner:
BEGIN;
SET LOCAL work_mem = '256MB';
SELECT ... ;
COMMIT;
SET LOCAL reverts at transaction end, so no RESET is needed. Note that it only has effect inside an explicit transaction block.
How big does work_mem need to be?
The rule of thumb: if the sort is going to handle N rows of W bytes each, you want roughly 2 * N * W of work_mem. Postgres needs space for the data plus overhead.
For 1 million rows of 200 bytes each, that is 400MB of work_mem. If you set 256MB, the sort spills.
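The same estimate can be computed in SQL (the row count and 200-byte width are the example numbers from above; check real widths via the width= field in EXPLAIN output):

```sql
SELECT pg_size_pretty(
  2 * 1000000::bigint * 200  -- 2 * N rows * W bytes per row
) AS suggested_work_mem;     -- about 381 MB, so round up to 400MB
```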
For ORDER BY ... LIMIT N with small N, top-N heapsort kicks in and uses only enough memory for N rows. The full table size does not matter:
-- Sorts 1M rows but only keeps 10 in memory
SELECT * FROM events ORDER BY created_at DESC LIMIT 10;
This is one of the most underused Postgres optimizations. If you only need the top N, the sort cost is bounded.
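The bounded sort is visible in the plan's method line. An illustrative EXPLAIN ANALYZE shape for the query above, without an index on created_at (timings hypothetical):

```sql
-- Limit (actual time=180.000..180.010 rows=10 loops=1)
--   -> Sort (actual time=179.998..180.000 rows=10 loops=1)
--        Sort Key: created_at DESC
--        Sort Method: top-N heapsort  Memory: 26kB
--        -> Seq Scan on events (actual rows=1000000 loops=1)
```

A million rows scanned, but only ten rows' worth of sort memory.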
When to materialize the sort
For very large analytics that always need the same sort, materialize it:
CREATE MATERIALIZED VIEW recent_events AS
SELECT * FROM events
ORDER BY created_at DESC
LIMIT 10000;
CREATE INDEX recent_events_created ON recent_events (created_at DESC);
The MV is sorted at refresh time. Queries against it return pre-sorted results without any per-query work. The trade is staleness (set a refresh schedule that fits the use case).
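The refresh itself can run on whatever schedule fits the staleness budget. With a unique index, the view stays queryable during refresh (this sketch assumes events has a unique id column):

```sql
-- CONCURRENTLY requires a unique index on the materialized view
CREATE UNIQUE INDEX recent_events_id ON recent_events (id);

-- Run on a schedule (cron, pg_cron, etc.)
REFRESH MATERIALIZED VIEW CONCURRENTLY recent_events;
```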
A practical decision tree
When a sort is showing up as expensive:
- Can an index provide the order? If yes, create it. The sort step disappears.
- Is it ORDER BY ... LIMIT N? Make sure the plan uses top-N heapsort. If it does not, the LIMIT may not be reaching the sort node — for example, when it sits above a subquery or CTE boundary.
- Is the sort method external merge? Raise work_mem for the session, re-run, and check whether it switches to quicksort.
- Is the data legitimately too big to sort in memory? Materialize a sorted snapshot.
- Is the application even using the sorted result? Sometimes the answer is to remove the ORDER BY because nobody needed it.
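To find which queries are spilling in the first place, pg_stat_statements (if the extension is installed) tracks temporary block writes per statement:

```sql
-- Queries that wrote the most temporary file data: likely sort/hash spills
SELECT left(query, 60) AS query, temp_blks_written
FROM pg_stat_statements
ORDER BY temp_blks_written DESC
LIMIT 10;
```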
Sorts are one of the cheapest performance wins available in Postgres because they often have a clear, single fix. The trick is reading the EXPLAIN output to see which fix applies.