Most PostgreSQL memory tuning guides tell you to set shared_buffers to 25% of RAM and call it a day. That's not wrong, exactly. It's just incomplete in ways that will bite you.
I've tuned memory on everything from tiny cloud instances to 512GB RAM servers. The principles are the same, but the tradeoffs are different. Here's what I've actually learned.
shared_buffers: Your First Cache
shared_buffers is PostgreSQL's private cache — data that PostgreSQL has read from disk and keeps in memory for fast reuse. More shared_buffers means fewer disk reads.
The classic advice: set it to 25% of RAM. This is reasonable, but:
- On systems with lots of RAM (256GB+), 25% might be 64GB of shared_buffers. That's often too much — the OS page cache also caches disk pages, and you're double-caching.
- On small systems (4GB RAM), 25% = 1GB, which might not be enough for your working set.
My starting points:
- Less than 8GB RAM: start at 25%, maybe go up to 40%
- 8-64GB RAM: 25% is about right
- 64GB+ RAM: 25% but cap around 16-32GB unless you've specifically measured that more helps
# postgresql.conf
shared_buffers = 8GB # on a 32GB RAM system
effective_cache_size = 24GB # ~75% of RAM: estimate of total usable cache (shared_buffers + OS page cache)
effective_cache_size doesn't actually allocate memory — it's a hint to the query planner about how much cache is available across both shared_buffers and the OS page cache. Set it to about 75% of RAM. Setting it too low makes the planner unduly pessimistic about index scans, because it assumes more reads will hit disk.
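The rules of thumb above can be sketched as a small calculation. This is a sketch of the article's own heuristics, not an official formula; the tier boundaries, the 25%/75% ratios, and the 32GB cap are the numbers quoted above, and `suggest_memory_settings` is a hypothetical helper name.

```python
# Sketch: starting-point values for shared_buffers and effective_cache_size
# from total RAM, following the article's rules of thumb.

def suggest_memory_settings(ram_gb: int) -> dict:
    if ram_gb < 8:
        shared = ram_gb * 0.25           # 25%, with room to grow toward 40%
    elif ram_gb <= 64:
        shared = ram_gb * 0.25           # 25% is about right
    else:
        shared = min(ram_gb * 0.25, 32)  # 25%, capped at 32GB unless measured
    return {
        "shared_buffers_gb": round(shared),
        "effective_cache_size_gb": round(ram_gb * 0.75),  # planner hint, not an allocation
    }

print(suggest_memory_settings(32))   # 32GB box: 8GB shared_buffers, 24GB effective_cache_size
print(suggest_memory_settings(256))  # large box: shared_buffers capped at 32GB
```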
work_mem: The One That Bites Everyone
This is the most commonly misunderstood PostgreSQL memory parameter, and getting it wrong causes either OOM kills or horribly slow sorts.
The critical thing: work_mem is not per connection. It's a per-operation limit: every sort, hash join, and hash aggregate node in a plan can use up to work_mem. A complex query can have dozens of such nodes, and with 100 connections each running queries with 20 of them, the worst case is 100 × 20 × work_mem in use at once.
Default is 4MB. That's conservative and safe.
# The naive approach (dangerous)
work_mem = 256MB # DON'T do this if you have many connections
# Better: set it low globally, override per-session for heavy queries
work_mem = 16MB # global default
# In your application or reporting session:
SET work_mem = '256MB'; -- just for this session
For OLTP workloads (lots of small queries, many connections): keep work_mem low, 8-32MB.
For analytics/reporting workloads (few connections, complex queries): you can set it higher, 64-512MB.
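One common back-of-envelope heuristic (an assumption here, not from the PostgreSQL docs) is to divide the RAM left after shared_buffers by connections times an assumed number of concurrent sort/hash nodes, so the worst case can't exhaust memory. `work_mem_mb` and the default of 4 nodes per query are hypothetical:

```python
# Sketch of a back-of-envelope work_mem bound: RAM remaining after
# shared_buffers, split across connections × assumed concurrent nodes.

def work_mem_mb(ram_gb: int, shared_buffers_gb: int,
                max_connections: int, nodes_per_query: int = 4) -> int:
    available_mb = (ram_gb - shared_buffers_gb) * 1024
    return max(4, available_mb // (max_connections * nodes_per_query))

print(work_mem_mb(32, 8, 100))  # OLTP: many connections, small work_mem
print(work_mem_mb(32, 8, 8))    # analytics: few connections, larger work_mem
```

Treat the result as a ceiling, not a target; the OLTP and analytics ranges above are better starting points.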
The sign that work_mem is too low: queries spilling to disk. Look for this in EXPLAIN output:
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;
-- Look for:
-- Sort Method: external merge Disk: 45678kB ← spilling to disk, increase work_mem
-- Batches: 8 ← hash join used multiple batches, work_mem too small
maintenance_work_mem: For the Big Operations
maintenance_work_mem is used by VACUUM, CREATE INDEX, ALTER TABLE, and similar maintenance operations. This can and should be set much higher than work_mem.
maintenance_work_mem = 1GB # safe for most systems with >8GB RAM
A common mistake: not increasing this before running a CREATE INDEX on a large table. More maintenance_work_mem = faster index builds. I've seen index builds go from 2 hours to 20 minutes just by bumping this.
For an autovacuum session, this is controlled separately:
autovacuum_work_mem = 256MB # separate from maintenance_work_mem
# -1 means use maintenance_work_mem
wal_buffers: Smaller Than You Think
WAL (Write-Ahead Log) records are flushed to disk at commit, before the modified data pages are written. wal_buffers controls how much WAL is buffered in shared memory before being flushed.
Default in recent PostgreSQL versions is -1 (auto-tune, usually ends up around 1/32 of shared_buffers, max 16MB). This is almost always fine.
wal_buffers = 16MB # explicit setting, fine for most workloads
Only increase this if you have very write-heavy workloads and see WAL-related wait events.
Huge Pages on Linux
On Linux, huge pages (2MB instead of 4KB default) significantly reduce TLB pressure for large shared_buffers values. This can improve performance 5-20% on systems with large memory.
# Check current huge page settings
grep Huge /proc/meminfo
# Calculate needed huge pages: shared_buffers / 2MB (rounded up)
# Example: 8GB shared_buffers = 4096 huge pages
echo "vm.nr_hugepages = 4200" >> /etc/sysctl.conf
sysctl -p
# postgresql.conf
huge_pages = try # options: off, on, try
If PostgreSQL can't get huge pages, it falls back to regular pages with huge_pages = try. Start with try to be safe.
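The page count in the sysctl example above works out like this (the ~100 pages of headroom for other shared memory is an assumption; check HugePages_Free in /proc/meminfo after startup):

```python
# Sketch: 2MB huge pages needed for a given shared_buffers, plus headroom
# for PostgreSQL's other shared memory segments.

HUGE_PAGE_MB = 2

def nr_hugepages(shared_buffers_gb: int, headroom_pages: int = 104) -> int:
    pages = shared_buffers_gb * 1024 // HUGE_PAGE_MB  # 8GB -> 4096 pages
    return pages + headroom_pages

print(nr_hugepages(8))  # matches the vm.nr_hugepages = 4200 example
```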
Checking Your Current Memory Usage
-- How much shared_buffers is actually being used?
SELECT
count(*) AS buffer_count,
pg_size_pretty(count(*) * 8192) AS total_size,
round(100.0 * count(*) / (SELECT setting::int FROM pg_settings WHERE name = 'shared_buffers'), 1) AS used_pct
FROM pg_buffercache
WHERE relfilenode IS NOT NULL;
This requires the pg_buffercache extension:
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
-- Per-table cache usage
SELECT
c.relname,
pg_size_pretty(count(*) * 8192) AS buffered,
round(100.0 * count(*) / (SELECT setting::int FROM pg_settings WHERE name = 'shared_buffers'), 1) AS pct_of_buffers
FROM pg_buffercache bc
JOIN pg_class c ON bc.relfilenode = pg_relation_filenode(c.oid)
WHERE bc.relfilenode IS NOT NULL
GROUP BY c.relname
ORDER BY count(*) DESC
LIMIT 20;
This shows you which tables are consuming your shared_buffers cache. If your most-queried table doesn't appear near the top, it might not fit in cache — or it's being evicted by other tables.
A Realistic Starting Config
For a dedicated PostgreSQL server with 32GB RAM, mixed OLTP/analytics:
# Memory
shared_buffers = 8GB
effective_cache_size = 24GB
work_mem = 32MB # global; set higher per-session for analytics
maintenance_work_mem = 1GB
wal_buffers = 16MB
huge_pages = try
# Checkpoint tuning (affects memory indirectly)
checkpoint_completion_target = 0.9
wal_compression = on # saves I/O at cost of CPU
max_wal_size = 4GB
There's no config that's right for all workloads. But this is a reasonable starting point that won't cause obvious problems — and you tune from there based on what pg_stat_bgwriter, pg_buffercache, and MonPG's monitoring actually show you.
The biggest mistake in PostgreSQL memory tuning is set-and-forget. Memory needs, query patterns, and concurrent connection counts change over time. Review these settings quarterly.