Most developers I have worked with treat transaction isolation the way they treat NOT NULL: it sounds important, they leave it at the default, and they hope nothing weird happens. Then something weird happens, and the bug report says "sometimes the inventory count is wrong."
This post is the conversation I have when that bug shows up: the three isolation levels Postgres actually implements, what each one prevents, and when it is worth changing from the default. (Postgres accepts the SQL standard's fourth level, Read Uncommitted, but it behaves identically to Read Committed.)
The default is Read Committed
Under Read Committed, a query sees a snapshot of the database at the moment the query begins. Two queries within the same transaction can see different data. Other transactions' writes become visible immediately after they commit.
This is what people generally mean when they say "normal transaction behavior." It catches the most obvious bugs (you do not read uncommitted data from other transactions) but lets some surprising things through.
The surprise that catches people:
BEGIN;
SELECT balance FROM accounts WHERE id = 1; -- returns 100
-- (another transaction commits a withdrawal of 50)
SELECT balance FROM accounts WHERE id = 1; -- now returns 50
COMMIT;
Within one transaction, the same query returned different values. Read Committed allows this, and most application code is OK with it because applications usually do not need a stable view across multiple statements within a transaction.
When Read Committed bites you
The classic bug is the "check-then-act" race:
BEGIN;
SELECT balance FROM accounts WHERE id = 1; -- 100
-- application checks: balance >= 50, OK
-- (another transaction concurrently does the same check, also sees 100)
UPDATE accounts SET balance = balance - 50 WHERE id = 1;
-- (the other transaction also runs UPDATE balance = balance - 50)
COMMIT;
-- net result: balance is 0, but only one withdrawal should have succeeded
Under Read Committed, both transactions read 100, both decide they can withdraw, both commit. The application logic is wrong without an explicit lock.
Fix options:
- `SELECT ... FOR UPDATE` — locks the row so the other transaction waits.
- Optimistic concurrency with a version column — `UPDATE ... WHERE id = 1 AND version = 7`. If `affected_rows = 0`, retry.
- Move to a higher isolation level.
Most applications use SELECT FOR UPDATE here. It is the cheapest fix for a single-row check-then-act.
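Assuming the same accounts table as above, the locked version of the withdrawal looks like this (a sketch, without the application-side error handling):

```sql
BEGIN;
-- FOR UPDATE locks the row; a concurrent withdrawal blocks on this
-- SELECT until we commit, then reads the committed balance.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
-- application checks the locked value: balance >= 50, OK
UPDATE accounts SET balance = balance - 50 WHERE id = 1;
COMMIT;
```

The second transaction's check now runs against the post-withdrawal balance, so it correctly refuses instead of double-spending.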
Repeatable Read
Repeatable Read gives you a snapshot at the start of the transaction. Every query in the transaction sees the same data, even if other transactions commit in the meantime.
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1; -- 100
-- (another transaction commits a withdrawal)
SELECT balance FROM accounts WHERE id = 1; -- still 100
COMMIT;
The trade: when you try to write a row that another transaction has modified and committed since your snapshot was taken, your transaction fails with `ERROR: could not serialize access due to concurrent update`. The application has to retry.
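Played out with the accounts example (session labels are comments; the error text is Postgres's actual message):

```sql
-- Session A
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;  -- 100

-- Session B, meanwhile:
--   BEGIN; UPDATE accounts SET balance = 50 WHERE id = 1; COMMIT;

-- Back in session A: this row changed after our snapshot was taken
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
-- ERROR:  could not serialize access due to concurrent update
ROLLBACK;  -- retry the whole transaction from the top
```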
Useful for: long-running reports, analytical transactions, anything where reading consistent data across many tables is important.
Not useful for: most short-lived OLTP transactions, where Read Committed plus row-level locking is cheaper.
Serializable
Serializable adds detection for serialization anomalies that Repeatable Read does not catch. The classic example:
Two doctors are on call. Each can take time off only if at least one other doctor is still on call. Two transactions try to mark themselves off-duty simultaneously:
- T1: `SELECT count(*) FROM doctors WHERE on_call` → 2. `UPDATE doctors SET on_call = false` for doctor 1.
- T2: runs the same `SELECT`, also sees 2, and marks doctor 2 off-call.
- Both commit. Now there are zero doctors on call.
Under Serializable, the database detects that the two transactions could not have happened in any sequential order without one of them seeing the other's effect. One of them aborts with a serialization error. The application retries.
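A sketch of the on-call scenario in SQL, assuming a hypothetical doctors(id, on_call) table; both sessions run the same shape of transaction:

```sql
-- Session A (session B runs the same statements for id = 2)
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM doctors WHERE on_call;       -- both sessions see 2
UPDATE doctors SET on_call = false WHERE id = 1;  -- B updates id = 2
COMMIT;
-- One of the two commits aborts with:
-- ERROR:  could not serialize access due to read/write dependencies
--         among transactions
```

Note that neither transaction touched a row the other wrote, which is why row locks and Repeatable Read both miss this; the conflict is between one transaction's read set and the other's write set.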
Serializable is expensive — Postgres has to track read-write dependencies between transactions — and produces serialization errors that the application has to handle. But for workloads where correctness is critical and the developers are not going to remember to add FOR UPDATE everywhere, it is the safest option.
A practical rule
For most applications:
- Default to Read Committed.
- Use `SELECT ... FOR UPDATE` for single-row check-then-act.
- Use Serializable for transactions that touch multiple rows in a way where concurrent commits could violate a business invariant, and you cannot easily express that with row locks.
- Use Repeatable Read for analytical transactions that need a stable snapshot.
Do not toggle between levels in the same connection casually. The Postgres docs are clear about which level applies when, but operational confusion creates real bugs.
How to tell what isolation level you are running
SHOW transaction_isolation;
SHOW default_transaction_isolation;
The first shows the isolation level of the current transaction; the second is the session default. The session default is set per connection, which makes it the most common cause of "I thought we were running Serializable."
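To change the default, set it for the session (or server-wide in postgresql.conf), or override it for a single transaction:

```sql
-- Session default for every subsequent transaction on this connection:
SET default_transaction_isolation = 'repeatable read';

-- Per transaction, overriding the default just once:
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- ... statements ...
COMMIT;
```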
What I check in a code review
- Is there a `SELECT` followed by a conditional `UPDATE` on the same rows? Likely a check-then-act race. Suggest `FOR UPDATE` or move to Serializable.
- Is there a long-running transaction doing analytics? Suggest Repeatable Read.
- Is the application catching `serialization_failure` (SQLSTATE 40001) and retrying? If using Serializable or Repeatable Read, this is mandatory. If not, the application will surface those errors to users.
Isolation is one of the few places where the Postgres defaults favor performance over strictness rather than the other way around. The right level is whichever one prevents your specific bug class without paying for prevention you do not need.