PostgreSQL 18 was released on September 25, 2025. If your team still talks about it as a future release, the first correction is simple: it is already here. The current question is not "when does PostgreSQL 18 ship?" The useful question is "which part of PostgreSQL 18 solves a problem we actually have?"
That distinction matters. Major releases always arrive with a long feature list. Production teams need a shorter list: the changes that affect query latency, upgrade risk, observability, authentication, data modeling, and operational cleanup.
Here is how I would evaluate PostgreSQL 18 before putting it on a production upgrade plan.
The framework: feature, pain point, proof
I do not upgrade because a feature sounds good. I map each feature to a pain point and then decide how to prove the benefit:
- Performance: does it improve a measured slow path?
- Operability: does it make failures easier to see before customers do?
- Modeling: does it let the schema express a rule currently hidden in application code?
- Security: does it remove an authentication or compliance weak spot?
- Upgrade risk: does it change behavior we rely on today?
That lens makes evaluating PostgreSQL 18 far more useful than skimming a generic "what's new" summary.
1. Asynchronous I/O is the headline, but benchmark your workload
The biggest architectural change in PostgreSQL 18 is the new asynchronous I/O subsystem. It allows backends to queue multiple read requests, which can help operations like sequential scans, bitmap heap scans, and vacuum-heavy work.
This is not a promise that every query gets faster. It is most interesting for workloads that spend real time waiting on reads: analytical scans, large bitmap heap scans, maintenance tasks, and databases where I/O latency is a visible part of p95 or p99. To confirm the version and the relevant I/O settings on an 18 cluster:
SHOW server_version;
SHOW io_method;
SHOW effective_io_concurrency;
SHOW maintenance_io_concurrency;
The proof should be boring: capture representative queries on PostgreSQL 17, run them on PostgreSQL 18 with the same data and hardware class, and compare wall time, buffers, and I/O wait. If your bottleneck is lock contention, bad query shape, or CPU-heavy expression work, async I/O will not save you.
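Concretely, the before/after comparison can be a single statement run on both versions against the same data. This is a sketch against a hypothetical `events` table; on 18, EXPLAIN ANALYZE reports buffer usage by default, but requesting BUFFERS explicitly keeps the command portable to 17:

```sql
-- Run the identical scan-heavy statement on 17 and 18, then compare
-- execution time, shared buffer reads, and I/O timing in the output.
-- The events table and 90-day window are illustrative.
EXPLAIN (ANALYZE, BUFFERS)
SELECT date_trunc('day', created_at) AS day, count(*)
FROM events
WHERE created_at >= now() - interval '90 days'
GROUP BY 1;
```

Save both plans and diff them; a real async I/O win shows up as lower wall time for the same buffer counts, not as a different plan shape.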
2. Skip scan makes some multicolumn indexes more useful
PostgreSQL 18 adds B-tree skip scan support. In plain terms, the planner can now use a multicolumn index even when the query does not constrain the leading column with equality, by internally stepping through that column's distinct values.
This matters because production indexes are rarely perfect. A team may have an index like this:
CREATE INDEX orders_tenant_status_created_idx
ON orders (tenant_id, status, created_at DESC);
Some queries include tenant_id. Some operational reports only filter by status and created_at. Before PostgreSQL 18, the planner had fewer useful choices for those later-column restrictions. Skip scan can make this kind of index useful in more query shapes.
The caution: skip scan is not permission to design sloppy indexes. It is a planner improvement, not a replacement for workload-specific indexing. Test with EXPLAIN (ANALYZE, BUFFERS) and verify that the new plan actually reduces work.
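A concrete verification sketch for the index above: run the status-and-date report that omits tenant_id and check whether the plan uses the existing index. The benefit typically depends on the leading column having relatively few distinct values, so measure rather than assume:

```sql
-- Does this query, which skips the leading tenant_id column, now use
-- orders_tenant_status_created_idx instead of a seq scan or a wider scan?
-- Compare total buffers touched against the PostgreSQL 17 plan.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM orders
WHERE status = 'pending'
  AND created_at >= now() - interval '7 days';
```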
3. pg_upgrade retaining optimizer statistics reduces post-upgrade turbulence
One underrated PostgreSQL 18 improvement is that pg_upgrade can retain optimizer statistics. Anyone who has done major upgrades on busy systems knows the old pain: the cluster comes up, but plans can be unstable until analyze catches up.
Keeping statistics does not remove the need for a careful upgrade runbook. It does reduce one common source of post-upgrade plan volatility. Even so, verify analyze coverage after the upgrade:
SELECT
schemaname,
relname,
last_analyze,
last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_analyze NULLS FIRST
LIMIT 20;
After upgrade, I still watch slow query fingerprints, plan changes for high-traffic statements, and tables with stale or missing statistics. The feature helps, but it does not replace observation.
4. Virtual generated columns move some derived data back to the schema
PostgreSQL 18 introduces virtual generated columns and makes them the default generated-column behavior. Stored generated columns are still available with STORED.
The production question is simple: is this derived value a data rule or a display convenience?
CREATE TABLE users (
id bigserial PRIMARY KEY,
email text NOT NULL,
email_key text GENERATED ALWAYS AS (lower(email))
);
A virtual generated column computes on read. A stored generated column computes on write and occupies storage. The right choice depends on read frequency, write volume, index needs, and whether the derived value should behave like persisted serving data.
For migration planning, this default matters. If your team previously assumed generated columns were stored, be explicit when creating new schema on PostgreSQL 18.
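A way to make that intent survive the default change is to spell out both keywords side by side. The invoice columns here are illustrative; the point is the explicit VIRTUAL and STORED markers:

```sql
-- Being explicit so readers of the DDL do not have to know which
-- PostgreSQL version's default applies. Schema is hypothetical.
CREATE TABLE invoices (
    id            bigserial PRIMARY KEY,
    net_amount    numeric NOT NULL,
    tax_rate      numeric NOT NULL,
    -- computed on read; takes no storage, cannot be directly indexed
    gross_virtual numeric GENERATED ALWAYS AS (net_amount * (1 + tax_rate)) VIRTUAL,
    -- computed on write; occupies storage and can back an index
    gross_stored  numeric GENERATED ALWAYS AS (net_amount * (1 + tax_rate)) STORED
);
```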
5. UUIDv7 is a practical default for many new tables
PostgreSQL 18 adds uuidv7(), which generates timestamp-ordered UUIDs. This is useful for teams that want globally unique IDs without the index locality pain of fully random UUIDv4 values.
CREATE TABLE events (
id uuid PRIMARY KEY DEFAULT uuidv7(),
created_at timestamptz NOT NULL DEFAULT now(),
payload jsonb NOT NULL
);
UUIDv7 is not automatically better than bigserial. For internal OLTP tables with one writer and simple joins, integer keys are still excellent. UUIDv7 becomes attractive when IDs cross service boundaries, get generated outside the database, or need better insertion locality than random UUIDs.
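One practical consequence of the timestamp-ordered layout: the creation time is embedded in the key itself. Assuming the `events` table above, something like this should work on 18, where uuid_extract_timestamp() understands version 7 values (verify on your minor release):

```sql
-- Recent rows sort by id because uuidv7() is timestamp-ordered, and the
-- embedded timestamp can be read back without a separate column lookup.
SELECT id, uuid_extract_timestamp(id) AS created_from_id
FROM events
ORDER BY id DESC
LIMIT 10;
```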
6. OLD and NEW in RETURNING makes data-change APIs cleaner
PostgreSQL 18 lets INSERT, UPDATE, DELETE, and MERGE return old and new values explicitly in RETURNING. That is a small feature with real application impact.
UPDATE subscriptions
SET plan = 'pro',
updated_at = now()
WHERE id = $1
RETURNING
old.plan AS previous_plan,
new.plan AS current_plan,
new.updated_at;
This can remove extra reads in audit logging, billing changes, admin panels, and event publication. It also makes the SQL statement own the before-and-after semantics instead of reconstructing them in application code.
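The same mechanism composes with upserts. In the insert path, old values are NULL, which gives a one-statement way to report whether a row was created or updated. The `user_settings` schema here is hypothetical:

```sql
-- One round trip that says whether the upsert inserted or updated,
-- plus the before/after values for auditing.
INSERT INTO user_settings (user_id, theme)
VALUES ($1, 'dark')
ON CONFLICT (user_id)
DO UPDATE SET theme = EXCLUDED.theme
RETURNING
    (old.user_id IS NULL) AS was_inserted,
    old.theme AS previous_theme,
    new.theme AS current_theme;
```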
7. Temporal constraints are a quiet modeling upgrade
PostgreSQL 18 adds temporal constraints: WITHOUT OVERLAPS for primary and unique constraints over ranges, and PERIOD support for foreign keys.
This matters for systems that model validity windows: pricing periods, room bookings, contract versions, entitlement windows, schedule assignments, and slowly changing dimensions.
CREATE TABLE room_bookings (
room_id bigint NOT NULL,
booking_period tstzrange NOT NULL,
guest_id bigint NOT NULL,
PRIMARY KEY (room_id, booking_period WITHOUT OVERLAPS)
);
Before this, teams often enforced non-overlap rules with exclusion constraints, triggers, or application checks. Those can still be valid, but native temporal constraints make an important class of product rules easier to express in the schema.
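The foreign-key half of the feature can be sketched against the booking table above. The `rooms` table is illustrative, and note that temporal keys are backed by GiST indexes, which typically means the btree_gist extension must be installed for the scalar columns:

```sql
-- Each booking's period must fall within a validity window of the
-- referenced room row. Parent table and column names are hypothetical.
CREATE TABLE rooms (
    room_id      bigint NOT NULL,
    valid_period tstzrange NOT NULL,
    PRIMARY KEY (room_id, valid_period WITHOUT OVERLAPS)
);

ALTER TABLE room_bookings
    ADD FOREIGN KEY (room_id, PERIOD booking_period)
    REFERENCES rooms (room_id, PERIOD valid_period);
```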
8. Monitoring gets more useful around I/O, WAL, vacuum, and analyze
PostgreSQL 18 improves several observability surfaces. The release notes call out per-backend I/O statistics, byte-level pg_stat_io activity, WAL I/O activity rows, per-backend WAL statistics, and more detailed vacuum/analyze timing. A first look can be as simple as:
SELECT *
FROM pg_stat_io
LIMIT 20;
This is the part of PostgreSQL 18 I like for on-call teams. Better stats reduce the number of incidents where the team argues from vibes: "maybe storage is slow," "maybe WAL is stuck," "maybe vacuum is hurting us."
The feature only helps if you wire it into dashboards and alerts. PostgreSQL exposing a metric is not the same as your team noticing it during an incident.
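A dashboard-shaped query is more useful than a raw SELECT *. This sketch aggregates read and write volume per backend type, assuming the byte-level columns 18 adds to pg_stat_io (check the view's columns on your minor release):

```sql
-- Which backend types move the most bytes, and against which objects?
-- Useful as a panel query rather than an ad hoc incident check.
SELECT backend_type,
       object,
       sum(reads)  AS reads,
       sum(writes) AS writes,
       pg_size_pretty(sum(read_bytes))  AS read_volume,
       pg_size_pretty(sum(write_bytes)) AS write_volume
FROM pg_stat_io
GROUP BY backend_type, object
ORDER BY sum(read_bytes) DESC NULLS LAST;
```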
9. OAuth arrives, and MD5 password auth starts its exit
PostgreSQL 18 adds OAuth authentication support and deprecates MD5 password authentication. MD5 is not removed in 18, but the direction is clear: teams should move password-based authentication toward SCRAM and review where OAuth fits their environment. To find login roles still carrying MD5 hashes:
SELECT rolname, rolpassword LIKE 'md5%' AS uses_md5_hash
FROM pg_authid
WHERE rolcanlogin;
Run that only as a superuser and treat the result as sensitive. The operational takeaway is not "turn on OAuth tomorrow." It is "inventory authentication before the next major version forces the issue."
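For orientation, the target state in pg_hba.conf looks roughly like this. The host ranges, role names, and identity provider URL are all placeholders, and the oauth method additionally requires a validator library configured on the server (see oauth_validator_libraries in the 18 docs):

```
# pg_hba.conf sketch: SCRAM for password auth, OAuth where an IdP fits.
# Everything here is illustrative; follow the PostgreSQL 18 docs for
# the exact oauth options your setup needs.
hostssl all app_users 10.0.0.0/8 scram-sha-256
hostssl all sso_users 10.0.0.0/8 oauth issuer="https://idp.example.com" scope="openid"
```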
10. Logical replication and generated columns get better together
PostgreSQL 18 allows generated column values to be logically replicated. This matters for mixed systems, downstream consumers, and migrations where the subscriber should receive the generated value instead of recomputing it.
If you use logical replication for data movement, this is worth testing directly. Generated columns often encode business rules, and replication behavior around those rules needs to be explicit.
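The behavior is controlled at the publication level. A sketch, using a hypothetical publication name; 18 accepts values such as none and stored for this parameter, so confirm the accepted settings against the docs for your release:

```sql
-- Ship stored generated column values to subscribers instead of
-- expecting them to recompute the expression locally.
CREATE PUBLICATION orders_pub
    FOR TABLE orders
    WITH (publish_generated_columns = stored);
```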
The upgrade risks I would review first
PostgreSQL 18 is a major version, so upgrading from 17 or older requires pg_upgrade, dump/restore, or logical replication. I would review these compatibility points early:
- Data checksums: new clusters enable checksums by default via initdb; pg_upgrade requires matching checksum settings.
- MD5 authentication: deprecated, so plan SCRAM migration even if nothing breaks today.
- Generated columns: virtual is now the default; specify STORED when that is what you mean.
- Full text search and pg_trgm: collation behavior changes may require reindexing after upgrade in some clusters.
- Older clients: review psql and \copy behavior around CSV end markers.
- 18.0-18.2: if you adopted early, read the 18.3 release notes for post-update steps around json_strip_nulls() volatility.
The migration plan I trust
- Upgrade a production-sized copy first, not a toy dataset.
- Replay representative read and write traffic.
- Compare top query plans, p95, p99, WAL rate, vacuum behavior, and replica lag.
- Check extension compatibility.
- Validate authentication and client versions.
- Rehearse rollback before scheduling the real cutover.
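The rehearsal itself starts with pg_upgrade's dry-run mode, which validates compatibility (including the checksum settings noted above) without touching data. Paths and versions below are illustrative:

```shell
# Compatibility dry run against the production-sized copy.
# --check reports problems and makes no changes.
pg_upgrade \
  --old-datadir /var/lib/postgresql/17/data \
  --new-datadir /var/lib/postgresql/18/data \
  --old-bindir  /usr/lib/postgresql/17/bin \
  --new-bindir  /usr/lib/postgresql/18/bin \
  --check
```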
PostgreSQL 18 has enough useful work in it to justify serious testing. But the best upgrade story is not "we moved because it was new." It is "we moved because these features solved these measured problems, and the migration had a rehearsed failure path."
My default recommendation
Test PostgreSQL 18 now. Use the current minor release, not an early 18.0 build. Prioritize it if you have scan-heavy workloads, expensive vacuum behavior, multicolumn index pain, UUID locality problems, generated-column modeling needs, temporal validity rules, or observability gaps around I/O and WAL.
Do not rush it for every database. PostgreSQL 17 remains a strong production release. PostgreSQL 18 becomes compelling when one of its features maps directly to a production pain point you can prove.