Multi-Tenant Postgres: Pick the Blast Radius You Can Afford to Recover From

The multi-tenant schema choice is rarely about elegance. It is about which tenant problem you want to be solvable at 3 AM. Here is how I think about the three options.

Multi-tenant design is one of those calls you make once when the team is small and then live with for years. I have inherited all three common shapes — shared schema, schema-per-tenant, and database-per-tenant — and the failure modes are reliably different. None of them are wrong. They optimize for different operational pains.

The question I ask now is not which is "best." It is: when one tenant has a problem, what should the radius of that problem be?

Option 1: Shared schema (one tenant_id column everywhere)

Every row carries a tenant_id. Every query filters on it. Every index includes it as the first column when it makes sense. This is the cheapest to operate, the easiest to scale early, and the most painful when a tenant misbehaves.

CREATE TABLE orders (
  id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  tenant_id BIGINT NOT NULL,
  customer_id BIGINT NOT NULL,
  total_cents INTEGER NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE INDEX orders_tenant_created_idx
  ON orders (tenant_id, created_at DESC);

The upside: one Postgres instance, simple deploys, easy cross-tenant analytics, predictable resource sharing. Adding a new tenant is a row in a tenants table and zero schema changes.

The downside: any forgotten WHERE tenant_id = ? filter is a data leak that gets you on the front page. Backups are all-or-nothing — you cannot easily restore one tenant. A single tenant with a runaway query can degrade everyone. "Drop tenant 47's data within 30 days for GDPR" becomes one enormous DELETE whose long-running transaction, WAL volume, and table bloat hit the entire workload.
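The GDPR-style purge can at least be made survivable by deleting in small chunks, so no single transaction runs long. A sketch — the tenant id and chunk size are illustrative, and COMMIT inside a DO block requires Postgres 11+ run outside an explicit transaction:

```sql
-- Hypothetical batched purge for "delete tenant 47" in a shared schema.
-- Small chunks keep each transaction short, so row locks, WAL bursts,
-- and replication lag stay bounded. Chunk size is a tuning knob.
DO $$
DECLARE
  deleted BIGINT;
BEGIN
  LOOP
    DELETE FROM orders
    WHERE id IN (
      SELECT id FROM orders
      WHERE tenant_id = 47          -- illustrative tenant
      ORDER BY created_at           -- walks the (tenant_id, created_at) index
      LIMIT 5000
    );
    GET DIAGNOSTICS deleted = ROW_COUNT;
    EXIT WHEN deleted = 0;
    COMMIT;  -- release locks and let autovacuum keep up between chunks
  END LOOP;
END $$;
```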

Where I pick this: B2B SaaS where tenants are small, the team is small, and the tenant boundaries do not need legal or compliance separation. Most early-stage products belong here.

Option 2: Schema-per-tenant

Each tenant gets its own Postgres schema. Tables are duplicated in each schema. The application sets search_path per request.

CREATE SCHEMA tenant_acme;
CREATE SCHEMA tenant_globex;

CREATE TABLE tenant_acme.orders (...);
CREATE TABLE tenant_globex.orders (...);

-- In the application:
SET search_path = tenant_acme, public;

The upside: tenant isolation is enforced by Postgres, not by application discipline. Per-tenant backups are a pg_dump --schema=tenant_acme. Per-tenant restores are a pg_restore. "Delete tenant 47" is DROP SCHEMA tenant_47 CASCADE. Cross-tenant analytics is harder but still possible with UNION ALL views generated from the catalog.

The downside: the catalog grows linearly with tenants. With 5,000 tenants and 30 tables each, you have 150,000 entries in pg_class — and a multiple of that in pg_attribute. Catalog lookups slow down and per-connection relation caches balloon. Schema migrations have to run per tenant — for thousands of tenants, that is a non-trivial operation that needs its own tooling.
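A minimal version of that tooling can live in the database itself. This sketch applies one DDL change to every tenant schema; the tenant_ naming prefix and the example column are assumptions:

```sql
-- Hypothetical per-tenant migration: run one DDL statement in every
-- tenant schema, assuming schemas share a 'tenant_' prefix.
DO $$
DECLARE
  s TEXT;
BEGIN
  FOR s IN
    SELECT nspname FROM pg_namespace WHERE nspname LIKE 'tenant\_%'
  LOOP
    EXECUTE format(
      'ALTER TABLE %I.orders ADD COLUMN IF NOT EXISTS notes TEXT', s);
  END LOOP;
END $$;
```

At thousands of tenants you would want this batched, with a per-schema version table so a failed run can resume where it stopped rather than start over.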

Where I pick this: B2B with hundreds of tenants of moderate size, where customer pressure or compliance pressure makes per-tenant operations (export, restore, delete) routinely necessary, and where you have the engineering capacity to build tenant-aware migration tooling.

Option 3: Database-per-tenant

Every tenant gets its own database (or its own Postgres cluster, on the larger end of this spectrum).

The upside: hardest possible isolation. "Customer X is on EU infrastructure with EU-only credentials" is provable from the architecture. Per-tenant resource limits are real. Backups, restores, and deletes are tenant-scoped at the cluster level.

The downside: cost scales linearly with tenants, even idle ones. Connection pooling has to be tenant-aware. Cross-tenant queries (admin dashboards, billing rollups) become a federation problem. Schema migrations are essentially N independent deploys with N independent failure modes.
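One way to keep admin rollups queryable is postgres_fdw; the server names, hosts, and credentials below are illustrative:

```sql
-- Hypothetical billing rollup across tenant databases via postgres_fdw.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER acme_db FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'acme.db.internal', dbname 'acme');

CREATE USER MAPPING FOR CURRENT_USER SERVER acme_db
  OPTIONS (user 'billing_reader', password 'redacted');

CREATE FOREIGN TABLE acme_orders (
  id BIGINT,
  total_cents INTEGER,
  created_at TIMESTAMPTZ
) SERVER acme_db OPTIONS (schema_name 'public', table_name 'orders');

-- With one foreign table per tenant, the rollup is a UNION ALL:
SELECT 'acme' AS tenant, sum(total_cents) AS revenue_cents FROM acme_orders
UNION ALL
SELECT 'globex', sum(total_cents) FROM globex_orders;  -- mapped the same way
```

This works for dashboards and rollups; it does not make ad-hoc cross-tenant joins cheap, and every new tenant means new server, mapping, and foreign-table objects to manage.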

Where I pick this: enterprise-only customers with low count and high revenue, where one tenant's data isolation is a contract requirement, or where regulated industries make data colocation impossible.

How the choice plays out in practice

The interesting failure modes:

  • Shared schema: the day a noisy tenant breaks p99 for everyone. The fix is per-tenant query attribution (pg_stat_statements normalizes literals, so tags typically have to surface through logs) plus per-tenant limits such as statement_timeout. Painful but possible.
  • Schema-per-tenant: the day you have 12,000 tenants and a CREATE TABLE migration takes four hours. The fix is async migrations and a tooling investment that pays back over years.
  • Database-per-tenant: the day your CI bill triples because every change runs N integration tests. The fix is sampling, but every sampling decision is a bet.

None of these are dealbreakers. They are just the operational realities that need to fit in your team's capacity.

A pragmatic shortcut

If you do not know which to pick, start with shared schema, build careful boundaries (row-level security, per-tenant connection contexts, tagged metrics), and pay close attention to the first three customers who push the model. They will tell you whether you need to evolve.
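Those boundaries can be sketched with row-level security driven by a per-connection setting; the app.tenant_id setting name is an assumption:

```sql
-- Hypothetical tenant seam: Postgres enforces the filter even when a
-- query forgets its WHERE clause. The app must connect as a role other
-- than the table owner, or FORCE is needed for the policy to apply.
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
ALTER TABLE orders FORCE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON orders
  USING (tenant_id = current_setting('app.tenant_id', true)::BIGINT);
  -- second arg 'true': unset setting yields NULL, so no rows leak

-- Per request, scoped to one transaction:
BEGIN;
SET LOCAL app.tenant_id = '47';
SELECT count(*) FROM orders;  -- only tenant 47's rows are visible
COMMIT;
```

The SET LOCAL means the tenant context dies with the transaction, which is what makes the pattern safe behind a connection pooler.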

The migration from shared schema to schema-per-tenant is real work, but it is bounded work. The migration from "we have no boundaries" to "we have any boundaries at all" is unbounded and unpleasant. Build the seams early even if you stay on a single schema.