Here's something that surprises a lot of developers: PostgreSQL spawns a new OS process for every single connection. Not a thread — a process. Each process has its own memory space, its own stack, its own everything. On a modern system, that's roughly 5-10MB of RAM per connection just for PostgreSQL's overhead, before your queries even run.
This is fine when you have 20 connections. At 500 connections, you're burning 2.5-5GB of RAM just on process overhead. At 2,000 connections (which I've seen in the wild), the server spends a meaningful share of its time context-switching between backends and contending on shared data structures instead of actually running queries.
The answer is a connection pooler. And the right answer for most PostgreSQL deployments is PgBouncer.
What PgBouncer Actually Does
PgBouncer sits between your application and PostgreSQL. Your application opens connections to PgBouncer. PgBouncer maintains a pool of actual PostgreSQL connections and routes application requests through them.
The key insight: your application might think it has 1,000 connections. PgBouncer might only be maintaining 100 actual PostgreSQL connections. When an app connection needs to do something, PgBouncer grabs an available PostgreSQL connection from the pool, runs the query, and releases it back.
This works because most connections in a web application aren't doing anything at any given moment. They're idle, waiting for the next request. With pooling, you only need as many real PostgreSQL connections as you have concurrent active queries — not concurrent connections.
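You can watch this happen from the database side. Here's a quick sketch, assuming a hypothetical local setup with PgBouncer on port 6432 in transaction mode and PostgreSQL on 5432 (hosts, ports, and credentials are placeholders, using the psycopg 3 driver): it opens 50 client connections through PgBouncer, then asks PostgreSQL how many backends actually exist.
import psycopg  # psycopg 3; hosts, ports, and credentials are placeholders
# Open 50 idle client connections through PgBouncer.
clients = [
    psycopg.connect("host=127.0.0.1 port=6432 dbname=mydb user=myuser password=s3cret")
    for _ in range(50)
]
# Ask PostgreSQL directly how many backends exist for this database.
with psycopg.connect("host=127.0.0.1 port=5432 dbname=mydb user=myuser password=s3cret") as pg:
    backends = pg.execute(
        "SELECT count(*) FROM pg_stat_activity WHERE datname = 'mydb'"
    ).fetchone()[0]
# In transaction mode, idle clients hold no server connection at all.
print(f"client connections: {len(clients)}, server backends: {backends}")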
Installing and Basic Configuration
# Ubuntu/Debian
apt install pgbouncer
# Or with Docker
docker run -d --name pgbouncer \
  -e POSTGRESQL_HOST=postgres \
  -e POSTGRESQL_USERNAME=user \
  -e POSTGRESQL_PASSWORD=pass \
  -e POSTGRESQL_DATABASE=mydb \
  -p 6432:6432 \
  bitnami/pgbouncer:latest
The main config file is /etc/pgbouncer/pgbouncer.ini:
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
admin_users = myuser
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
reserve_pool_size = 5
reserve_pool_timeout = 3
server_idle_timeout = 600
client_idle_timeout = 0
And userlist.txt:
"myuser" "md5passwordhash"
To get the md5 hash, take the MD5 of the password concatenated with the username and prefix it with the literal string "md5": echo -n "passwordmyuser" | md5sum, then prepend "md5" to the output.
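If you'd rather script it, here's a small Python sketch (username and password are placeholders) that produces a ready-to-paste userlist.txt line:
import hashlib

def userlist_entry(username: str, password: str) -> str:
    # PostgreSQL's md5 scheme: md5 of password + username, prefixed with "md5"
    digest = hashlib.md5((password + username).encode()).hexdigest()
    return f'"{username}" "md5{digest}"'

print(userlist_entry("myuser", "s3cret"))  # -> "myuser" "md5<32 hex chars>"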
Session vs Transaction vs Statement Mode
This is the most important decision you'll make. Get it wrong and you'll have weird bugs that are very hard to diagnose.
Session mode: One PostgreSQL connection is assigned to a client connection for its entire lifetime. Functionally identical to direct PostgreSQL connections. Safe for all features — prepared statements, advisory locks, SET commands, everything. But you don't get much multiplexing — you need almost as many PostgreSQL connections as app connections.
Transaction mode: A PostgreSQL connection is assigned for the duration of a single transaction. This is where the real efficiency gains are. A client can open 100 connections, but if only 10 are in active transactions at once, you only need 10 PostgreSQL connections. This is what I run in production. The trade-off: session state doesn't carry across transactions (see the sketch after this list).
Statement mode: A PostgreSQL connection is assigned per statement. Breaks transactions. Don't use this for anything real.
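To make that trade-off concrete, here's a minimal sketch of the session-state gotcha, assuming PgBouncer on port 6432 with pool_mode = transaction (DSN values are placeholders, using psycopg 3):
import psycopg  # psycopg 3

# Placeholder DSN: PgBouncer listening on 6432 in front of mydb.
conn = psycopg.connect(
    "host=127.0.0.1 port=6432 dbname=mydb user=myuser password=s3cret",
    autocommit=True,         # each execute() becomes its own transaction
    prepare_threshold=None,  # avoid server-side prepared statements (see below)
)
conn.execute("SET statement_timeout = '5s'")             # runs on server connection A
row = conn.execute("SHOW statement_timeout").fetchone()  # may run on connection B
print(row[0])  # not guaranteed to be '5s' under transaction pooling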
Honestly, if you're still using session mode with PgBouncer in 2026, you're leaving performance on the table. Transaction mode is where the wins are. But it has a gotcha.
The Prepared Statement Trap
Transaction mode breaks server-side prepared statements. This trips up every team the first time.
Prepared statements in PostgreSQL are connection-scoped. When your app does:
PREPARE my_query (text) AS SELECT * FROM users WHERE email = $1;
That prepared statement exists only on that specific PostgreSQL connection. In transaction mode, your next transaction might be routed to a different PostgreSQL connection — where the prepared statement doesn't exist. You'll get errors like "prepared statement does not exist."
Solutions:
1. Use protocol-level prepared statements only (not SQL-level PREPARE) and run PgBouncer 1.21 or newer, which can track protocol-level prepared statements across server connections when you set max_prepared_statements in pgbouncer.ini. Most ORMs and drivers use the protocol-level kind.
2. Set ignore_startup_parameters = extra_float_digits in your pgbouncer.ini (some clients send this parameter on startup, and PgBouncer rejects connections carrying parameters it doesn't recognize).
3. Configure your app to not use server-side prepared statements. psycopg2 never creates them, so it's safe as-is; in psycopg 3, connect with prepare_threshold=None. In JDBC, add ?prepareThreshold=0 to your connection URL.
4. Use PgBouncer's server_reset_query to reset connection state between uses (in transaction mode it only runs if you also set server_reset_query_always = 1).
For most applications, option 3, disabling server-side prepared statements, is the easiest. You'll lose a tiny bit of query parsing overhead, but you'll gain the full efficiency of transaction mode.
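As a concrete version of option 3, here's a minimal psycopg 3 sketch (DSN values are placeholders, assuming PgBouncer listening on port 6432):
import psycopg  # psycopg 3

# prepare_threshold=None tells psycopg to never create server-side
# prepared statements, so any transaction can safely land on any
# pooled server connection.
conn = psycopg.connect(
    "host=127.0.0.1 port=6432 dbname=mydb user=myuser password=s3cret",
    prepare_threshold=None,
)
with conn.cursor() as cur:
    cur.execute("SELECT * FROM users WHERE email = %s", ("a@example.com",))
    print(cur.fetchone())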
Sizing Your Pool
The formula I use:
default_pool_size = (CPU cores on PostgreSQL server × 2) + number of active disks
For a 4-core server with SSDs: about 10 connections per database. For a 16-core server: about 35 connections.
This isn't a magic formula — it's a starting point. The real constraint is: more connections than PostgreSQL can actually run in parallel doesn't help. CPU cores are the limiting factor for CPU-bound queries. For I/O-bound queries, more connections can help, but there's a ceiling.
I've seen teams set default_pool_size = 200 and then wonder why their 4-core PostgreSQL server is thrashing. You're not getting 200× the performance. You're getting context-switching overhead.
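The arithmetic is trivial, but having it as code makes it easy to plug in your own numbers (a starting point, as noted above, not a target):
def default_pool_size(cpu_cores: int, active_disks: int) -> int:
    # (cores x 2) + effective number of active disks
    return cpu_cores * 2 + active_disks

print(default_pool_size(4, 2))   # 4-core server with a couple of SSDs -> 10
print(default_pool_size(16, 3))  # 16-core server -> 35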
# For a typical production setup
default_pool_size = 20 # adjust based on your PostgreSQL server
max_client_conn = 500 # max application connections PgBouncer accepts
reserve_pool_size = 5 # extra connections for urgent situations
reserve_pool_timeout = 3 # seconds to wait before using reserve
min_pool_size = 5 # keep at least this many connections alive
Monitoring PgBouncer
PgBouncer has a virtual admin console you can connect to:
psql -h localhost -p 6432 -U myuser pgbouncer
-- Current pool stats
SHOW POOLS;
-- Active client connections
SHOW CLIENTS;
-- Active server connections
SHOW SERVERS;
-- Statistics
SHOW STATS;
The numbers to watch in SHOW POOLS:
- cl_active: client connections currently executing queries
- cl_waiting: clients waiting for a server connection (this should be near zero)
- sv_active: server connections in use
- sv_idle: server connections sitting idle
If cl_waiting is consistently above zero, your pool is too small. Increase default_pool_size or reduce your application's concurrency.
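A minimal watchdog along those lines, using psycopg2 since the admin console historically speaks only the simple query protocol (the DSN, the 30-second interval, and the print-based alerting are all assumptions; the connecting user must be listed in admin_users or stats_users):
import time
import psycopg2
import psycopg2.extras

# Placeholder DSN; point it at PgBouncer's virtual "pgbouncer" database.
ADMIN_DSN = "host=127.0.0.1 port=6432 dbname=pgbouncer user=myuser password=s3cret"

while True:
    conn = psycopg2.connect(ADMIN_DSN)
    conn.autocommit = True  # the admin console rejects explicit transactions
    with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
        cur.execute("SHOW POOLS")
        for pool in cur.fetchall():
            if pool["cl_waiting"] > 0:
                print(f"{pool['database']}/{pool['user']}: "
                      f"{pool['cl_waiting']} clients waiting")
    conn.close()
    time.sleep(30)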
MonPG can track PgBouncer metrics alongside PostgreSQL metrics — so you see the full picture of where time is being spent.
Common Mistakes
Disabling server_idle_timeout: Setting it to 0 makes PgBouncer keep server connections open indefinitely. If you have 25 connections in the pool but your traffic drops to near zero overnight, those 25 PostgreSQL processes are still running, burning RAM. Leave it at the default of 600 seconds (or set it explicitly, as above) so idle connections get cleaned up.
Setting max_client_conn too low: If this is lower than your application's connection pool size, you'll get connection refused errors under load. Set it significantly higher than your app's pool.
Not configuring health checks: If a PostgreSQL server dies and comes back, PgBouncer might not detect the stale connections. Set server_check_query = SELECT 1 and server_check_delay = 30.
Running without authentication: PgBouncer supports HBA-style authentication. Use it.
The bottom line: PgBouncer in transaction mode with properly sized pools has single-handedly saved dozens of PostgreSQL databases from connection-induced slowdowns. It's not optional for any serious deployment.