Advisory locks are Postgres's mechanism for application-defined locking — locks on arbitrary integers that have no meaning to the database. The application decides what each lock means; Postgres just enforces "only one holder at a time."
They solve a class of problems that row locks cannot. Knowing they exist saves you from the elaborate workarounds people invent in their absence.
What they are
-- Acquire a transaction-scoped lock on integer 12345
SELECT pg_advisory_xact_lock(12345);
-- Or a session-scoped lock
SELECT pg_advisory_lock(12345);
The number is just an integer. The application chooses what it means: a tenant ID, a job ID, a hash of a string. Two processes asking for the same lock cannot both hold it.
pg_advisory_xact_lock is released automatically when the transaction commits or rolls back. pg_advisory_lock (session-scoped) must be released explicitly with pg_advisory_unlock or by ending the session.
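Because a forgotten session lock blocks everyone until the session ends, it helps to pair acquire and release mechanically. A minimal sketch, assuming an `execute` callable that runs SQL on one fixed connection (session locks belong to the connection that took them; the helper name is illustrative):

```python
from contextlib import contextmanager

@contextmanager
def session_advisory_lock(execute, lock_id):
    # `execute` must run every statement on the same connection;
    # session locks belong to the connection that acquired them.
    execute("SELECT pg_advisory_lock($1)", lock_id)
    try:
        yield
    finally:
        # Always released, even if the protected block raises.
        execute("SELECT pg_advisory_unlock($1)", lock_id)
```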
When advisory locks are the right tool
1. Cross-process serialization. "Only one instance of this cron job should run at a time." The standard alternative is a "currently running" flag in a table; advisory locks are simpler and self-cleaning.
# Cron job entry point
def run_cron():
    with db.transaction():
        if not db.execute("SELECT pg_try_advisory_xact_lock($1)", JOB_LOCK_ID).first()[0]:
            print('Another instance is running, exiting')
            return
        do_the_work()
pg_try_advisory_xact_lock returns true if the lock was acquired, false otherwise. Nonblocking. Perfect for the "if someone else has it, skip" pattern.
2. Idempotency at the request level. "Don't process this webhook twice if it arrives twice." Hash the webhook ID into a lock; only one handler can run at a time.
lock_id = hash_to_int(webhook_id)
with db.transaction():
    db.execute("SELECT pg_advisory_xact_lock($1)", lock_id)
    # Now we hold the lock for this webhook ID
    # Process it
# Lock released at COMMIT
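The hash_to_int helper above is hypothetical; one way to sketch it, mapping any string onto the signed 64-bit range the single-argument lock functions accept:

```python
import hashlib

def hash_to_int(s: str) -> int:
    """Map an arbitrary string (e.g. a webhook ID) to a signed int64,
    the parameter type of pg_advisory_xact_lock. Illustrative helper,
    not a library function."""
    digest = hashlib.sha256(s.encode("utf-8")).digest()
    # First 8 bytes, interpreted as a signed 64-bit integer.
    return int.from_bytes(digest[:8], "big", signed=True)
```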
3. Distributed leader election. "Among all instances of this service, exactly one is the leader." Use the session-scoped pg_advisory_lock on a dedicated, long-lived connection: the lock holder is the leader, and if it dies its connection drops, the lock releases, and another instance can acquire it. (A transaction-scoped lock would release at the next commit, ending the leadership.)
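A sketch of the polling side, with the database call abstracted away so the control flow stands on its own (class and method names are illustrative; try_acquire would run SELECT pg_try_advisory_lock($1) on a dedicated connection):

```python
class LeaderElector:
    """Sketch of advisory-lock leader election. `try_acquire` is any
    callable that attempts the session-scoped lock on a dedicated,
    long-lived connection and returns whether we now hold it."""

    def __init__(self, try_acquire):
        self.try_acquire = try_acquire
        self.is_leader = False

    def poll(self):
        # Followers retry on every poll; the leader just keeps its
        # connection (and therefore the session lock) open.
        if not self.is_leader:
            self.is_leader = self.try_acquire()
        return self.is_leader
```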
Two-argument advisory locks
For namespace separation, advisory locks have a two-argument variant:
SELECT pg_advisory_xact_lock(class_id, object_id);
Useful for organizing locks: class_id = 1 is the "tenant" namespace, class_id = 2 is the "job" namespace, etc. Locks in different namespaces do not collide. Note that the two-argument form takes two 32-bit integers, while the single-argument form takes one 64-bit integer, and the two key spaces are distinct: pg_advisory_lock(42) and pg_advisory_lock(0, 42) are different locks.
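One way to keep class_id values from colliding is a single registry module that every subsystem imports. A sketch (names and values are illustrative):

```python
# Central registry of advisory-lock namespaces. Values are arbitrary
# but must be unique per namespace, so they all live in one place.
TENANT_NS = 1
JOB_NS = 2

def advisory_key(class_id: int, object_id: int) -> tuple[int, int]:
    """Arguments for the two-argument lock functions, which take two
    signed 32-bit integers. Raises if a value does not fit in int4."""
    for v in (class_id, object_id):
        if not -2**31 <= v < 2**31:
            raise ValueError(f"{v} does not fit in a signed 32-bit integer")
    return (class_id, object_id)
```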
What advisory locks do NOT do
They do not lock rows or tables. pg_advisory_lock(42) does not prevent anyone from updating any row anywhere. The lock is purely advisory, which is where the name comes from.
They are not enforced across the application. If your code forgets to acquire the lock for some operation, that operation runs unprotected. Discipline is on the application.
They do not prevent deadlocks across mixed lock types. A transaction that holds a row lock AND tries to acquire an advisory lock can deadlock with another transaction doing the opposite. Postgres detects the deadlock and aborts one, but the application has to handle it.
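Handling the aborted victim usually means rerunning the whole transaction from the top, since Postgres rolls it back entirely. A sketch with the driver's deadlock error abstracted into a placeholder exception (real drivers raise their own class for SQLSTATE 40P01):

```python
class DeadlockDetected(Exception):
    """Stand-in for the driver's deadlock error (SQLSTATE 40P01)."""

def run_with_deadlock_retry(txn, attempts=3):
    # `txn` is a callable that runs one complete transaction.
    # When Postgres picks it as the deadlock victim, rerun it.
    last = None
    for _ in range(attempts):
        try:
            return txn()
        except DeadlockDetected as exc:
            last = exc
    raise last
```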
Common pitfalls
Forgetting session-scoped locks are explicit-release. Using pg_advisory_lock (session-scoped) and forgetting to call pg_advisory_unlock leaves the lock held until the session disconnects. Behind a connection pool, sessions rarely disconnect, so the lock can be held for hours, blocking everyone else.
Default: use pg_advisory_xact_lock (transaction-scoped). It releases automatically.
Lock IDs colliding with other parts of the application. Two unrelated subsystems both pick lock ID 1, accidentally serializing each other.
Fix: namespace via the two-argument form, or hash a string to get a less-collision-prone ID:
SELECT pg_advisory_xact_lock(hashtext('cron-job-' || job_name));
hashtext is a Postgres function that hashes a string to a signed 32-bit integer (implicitly cast to the bigint parameter of the single-argument lock functions; hashtextextended produces a full 64-bit result). Hash collisions are possible but rare for typical IDs.
Confusing wait-acquire vs try-acquire. pg_advisory_xact_lock blocks until it acquires. pg_try_advisory_xact_lock returns immediately. Pick the right one for the use case.
Holding a lock across a long operation. An advisory lock held for 30 minutes during a batch job blocks everyone who would have acquired the same lock. The same advice as for any long-held lock applies — break the work into smaller chunks.
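Chunking looks like this in outline: acquire per chunk, release between chunks, so waiters get a turn. The with_lock factory below is a stand-in for whatever transaction wrapper acquires the advisory lock (names are illustrative):

```python
def process_in_chunks(chunks, with_lock):
    """Sketch: take and release the lock per chunk instead of holding
    it across the whole batch. `with_lock` is any context-manager
    factory that acquires the advisory lock on enter and releases it
    on exit, e.g. a transaction running pg_advisory_xact_lock."""
    done = 0
    for chunk in chunks:
        with with_lock():
            # Other waiters get a chance to run between chunks.
            chunk()
            done += 1
    return done
```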
A worked example: cron job exclusion
import hashlib

def get_lock_id(name):
    h = hashlib.md5(name.encode()).digest()
    # Take the first 8 bytes as a signed int64
    return int.from_bytes(h[:8], 'big', signed=True)
def run_with_advisory_lock(name, work):
    lock_id = get_lock_id(name)
    with db.connection() as conn:
        with conn.transaction():
            acquired = conn.execute(
                "SELECT pg_try_advisory_xact_lock($1)",
                lock_id
            ).first()[0]
            if not acquired:
                print(f'{name}: another instance running, skipping')
                return
            work()
            # Lock released at transaction commit
Used as:
run_with_advisory_lock('refresh-mv-customers', refresh_mv_customers)
Multiple cron schedulers (multiple servers, redundant cron jobs) cannot run the same task concurrently. Self-cleaning, no extra tables, no flags to leak.
When NOT to use advisory locks
Advisory locks are not free. Each held lock occupies an entry in the server's shared-memory lock table (sized by max_locks_per_transaction), so very high-frequency use (thousands of locks per second) or many locks held at once adds up.
For purely process-level coordination (single application instance, multiple threads), in-process locks are simpler. Advisory locks shine when coordination spans multiple processes or instances.
What I keep around
A helper function in most production codebases:
from contextlib import contextmanager
import hashlib

@contextmanager
def advisory_lock(name, blocking=True):
    lock_id = int.from_bytes(hashlib.md5(name.encode()).digest()[:8], 'big', signed=True)
    with db.transaction() as conn:
        if blocking:
            conn.execute("SELECT pg_advisory_xact_lock($1)", lock_id)
            yield True
        else:
            acquired = conn.execute("SELECT pg_try_advisory_xact_lock($1)", lock_id).first()[0]
            yield acquired
Used everywhere there is cross-process coordination. The pattern becomes a one-liner; the database does the work.