Postgres connection strings have an inconvenient property: they are routinely committed to git, pasted in Slack, exported as environment variables, and left in shell history. None of this is intentional. It is just easier than the alternative, until the alternative becomes mandatory.
Here is the framework that actually works, in increasing order of strictness.
Tier 0: Environment variables in plaintext
The baseline most teams start with:
DATABASE_URL=postgres://user:pass@host:5432/dbname
What is wrong:
- It ends up in `.env` files that get committed to git by accident.
- It is visible in `ps`, `/proc/<pid>/environ`, and process exports.
- It is in shell history (`history` shows the connection if you've ever pasted it).
- Cloud providers' UIs sometimes log environment variables.
Is it acceptable? For local development, yes. For production, no.
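For local development, this tier usually means parsing `DATABASE_URL` at startup and failing fast if it is missing, rather than connecting later with a blank password. A minimal sketch using only the standard library (the variable name is the one from the example above):

```python
import os
from urllib.parse import urlsplit

def load_db_config():
    """Read DATABASE_URL from the environment and split it into parts.

    Fails fast with a clear error instead of failing later with a
    confusing connection error.
    """
    url = os.environ.get("DATABASE_URL")
    if not url:
        raise RuntimeError("DATABASE_URL is not set")
    parts = urlsplit(url)
    return {
        "user": parts.username,
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port or 5432,
        "dbname": parts.path.lstrip("/"),
    }

# Local-dev demonstration only -- never hard-code credentials like this:
os.environ["DATABASE_URL"] = "postgres://user:pass@localhost:5432/dbname"
print(load_db_config()["dbname"])  # -> dbname
```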
Tier 1: Secrets managed by the platform
Docker secrets, K8s secrets, AWS Secrets Manager, Azure Key Vault, GCP Secret Manager, HashiCorp Vault — all variations of "a service that stores secrets and gives them to authorized callers."
The application reads the secret at startup or on demand, not from an environment variable that a sibling process can see.
Setup, in pseudocode:
import boto3
import json

client = boto3.client('secretsmanager')
response = client.get_secret_value(SecretId='prod/postgres/password')
password = json.loads(response['SecretString'])['password']
# Construct the connection string at runtime; never persist it to disk
What this gives you:
- Secrets do not exist in container images, environment variables, or config files.
- Access is auditable — the secret manager logs every read.
- Rotation can be automated.
- Fine-grained access — only the application's role can read the secret.
This is the right tier for most production workloads.
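One detail worth getting right when the connection string is assembled at runtime: a secret-manager-generated password can contain characters like `@`, `/`, or `:` that silently corrupt a URL unless percent-encoded. A small sketch, standard library only:

```python
from urllib.parse import quote

def build_dsn(user, password, host, port, dbname):
    """Build a Postgres URL, percent-encoding the credentials so that
    URL metacharacters in a generated password cannot corrupt it."""
    return (
        f"postgres://{quote(user, safe='')}:{quote(password, safe='')}"
        f"@{host}:{port}/{dbname}"
    )

# Randomly generated passwords routinely contain URL metacharacters:
print(build_dsn("app_user", "p@ss/w:rd", "db.internal", 5432, "appdb"))
# -> postgres://app_user:p%40ss%2Fw%3Ard@db.internal:5432/appdb
```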
Tier 2: Short-lived, dynamically issued credentials
For stricter security, the password itself is short-lived. The application does not get a static password — it gets a token that is valid for an hour.
AWS RDS IAM authentication: configure RDS to accept IAM tokens. The application generates a token using its IAM role:
import boto3

rds_client = boto3.client('rds')
token = rds_client.generate_db_auth_token(
    DBHostname='your-db.region.rds.amazonaws.com',
    Port=5432,
    DBUsername='app_user',
    Region='us-east-1',
)
# Connect with this token as the password — valid for 15 minutes
No password is stored anywhere. The IAM role is the credential. The token is generated fresh, used, discarded.
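Because the token expires, the application must refresh it rather than cache it forever — in particular, every reconnect after a network blip needs a live token. A minimal refresh wrapper, independent of any provider (the `fetch` callable is a stand-in for `generate_db_auth_token` or a Vault read; names are hypothetical):

```python
import time

class ExpiringCredential:
    """Cache a short-lived credential and re-fetch it shortly before expiry.

    `fetch` is any zero-argument callable returning a fresh token;
    `ttl` is the token lifetime in seconds; `margin` is how early to renew.
    """
    def __init__(self, fetch, ttl=900, margin=60):
        self._fetch = fetch
        self._ttl = ttl
        self._margin = margin
        self._token = None
        self._expires_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._token is None or now >= self._expires_at - self._margin:
            self._token = self._fetch()
            self._expires_at = now + self._ttl
        return self._token

# Usage: pass cred.get() as the password on every (re)connect.
calls = []
cred = ExpiringCredential(lambda: calls.append(1) or f"token-{len(calls)}")
print(cred.get(), cred.get())  # -> token-1 token-1 (cached within the TTL)
```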
HashiCorp Vault dynamic credentials: Vault generates a Postgres user with a TTL of 1 hour. After expiry, Vault revokes it.
This is overkill for most workloads but appropriate for high-security environments. The operational cost is real (more moving parts, more failure modes).
Tier 3: Mutual TLS authentication
No passwords at all. The client presents a TLS certificate; Postgres validates it; the certificate identifies the user.
# pg_hba.conf — the cert method implies clientcert=verify-full
hostssl all app_user 10.0.0.0/8 cert
The client connects with a certificate signed by your CA. Postgres extracts the CN from the cert, looks up the corresponding role.
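On the client side, the certificate and key are passed as standard libpq connection parameters; no password appears anywhere. A sketch that builds the conninfo string (the file paths are hypothetical):

```python
def build_cert_conninfo(host, dbname, user, certdir):
    """Build a libpq conninfo string for certificate authentication.

    sslmode=verify-full also makes the client verify the server's
    certificate, so both sides are authenticated.
    """
    return " ".join([
        f"host={host}",
        f"dbname={dbname}",
        f"user={user}",
        "sslmode=verify-full",
        f"sslcert={certdir}/client.crt",   # cert whose CN matches the role
        f"sslkey={certdir}/client.key",    # must be mode 0600 or libpq refuses it
        f"sslrootcert={certdir}/ca.crt",   # your internal CA
    ])

print(build_cert_conninfo("db.internal", "appdb", "app_user", "/etc/app/tls"))
```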
This eliminates the password problem entirely. The cost: certificate management. Each client needs a cert; certs need rotation; revocation needs a CRL or OCSP.
For environments with existing PKI (most enterprise IT), this is reasonable. Without one, it is a multi-quarter setup.
What I see go wrong
The failure modes I encounter:
Password committed to git. Hard to undo. Even after a history rewrite, the password lives on in clones, in CI logs, in old PR descriptions. Treat it as compromised — rotate immediately.
Password in a CI environment variable, leaked through CI logs. At some point a job prints the environment — a debug step, a crashing tool — and the password lands in the log. Logs are searchable; logs are retained; the password is exposed.
Password in command-line history. A connection string passed on the command line (`psql "postgres://user:pass@host/db"`) lands in the shell's history file, and ~/.psql_history keeps any in-session commands that include credentials. Mode of failure: the laptop is shared or compromised, and the file gets exfiltrated.
Password reused across environments. Production password is also dev password. The dev environment has weaker security; a dev compromise becomes a prod compromise.
Password handed off in Slack. "Here is the prod password — paste it into your config." Slack archives forever. Anyone with retroactive access reads it.
Each of these is preventable but only with the discipline to use the right tier.
Pragmatic recommendations
For most teams:
- Local dev: passwords in `.env`, with `.env` in `.gitignore` (and never committed by accident).
- Dev/staging: secrets in the platform's secret manager. Rotated quarterly.
- Production: secrets in the platform's secret manager. Rotated quarterly or after any team change.
- High-security production: IAM-based auth (RDS IAM, similar managed equivalents) or Vault dynamic creds.
The stepwise upgrade is more important than picking the strictest tier upfront. A team that uses Tier 1 well is in better shape than a team that tries Tier 3 and fails operationally.
The rotation question
Rotation is what most teams skip. The argument is always: "we will rotate when we change anything." Then nothing changes, and the password is the same one for years.
The right cadence: every 90 days for production, immediately after any team member departure, immediately after any suspected compromise. Automate the 90-day rotation if at all possible — manual rotation is the kind of work that gets postponed.
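Even automating just the check helps: a scheduled job that flags stale credentials is trivial to write and removes the "we'll get to it" failure mode. A sketch using the 90-day threshold above (where the rotation timestamp is stored — secret-manager metadata, a table, a tag — is up to you):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)

def rotation_overdue(last_rotated, now=None):
    """True if the credential is past the 90-day rotation window."""
    now = now or datetime.now(timezone.utc)
    return now - last_rotated > MAX_AGE

# Example: a password last rotated 120 days ago is flagged.
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(rotation_overdue(datetime(2024, 2, 2, tzinfo=timezone.utc), now))  # -> True
```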
Detection: the canary user
A technique I have seen work well: create a Postgres user that no application uses. Set up a query that fires an alert if anyone ever uses it. If that user shows up in pg_stat_activity, someone is using a credential that should not exist.
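The check itself is a single query against pg_stat_activity, and the alert logic is testable in isolation. A sketch (the canary user name and alert format are hypothetical; any Postgres driver can run the query):

```python
# Run this on a schedule; any row at all is an incident.
CANARY_QUERY = """
    SELECT usename, client_addr, backend_start
    FROM pg_stat_activity
    WHERE usename = 'canary_user'
"""

def canary_alerts(rows):
    """Turn pg_stat_activity rows for the canary user into alert messages.

    Any row means someone is using a credential no application should
    hold -- treat it as an active compromise, not a false positive.
    """
    return [
        f"ALERT: canary login from {addr} (since {started})"
        for _user, addr, started in rows
    ]

# Example with one suspicious connection row:
print(canary_alerts([("canary_user", "203.0.113.7", "2024-06-01 03:12:00")]))
```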
It is a small effort that catches real attacks where someone has stolen a password and is reconnecting from somewhere unexpected.
Connection string security is one of the few areas where the right answer is "add operational complexity for security." The complexity is real, but the cost of a leak is much higher than the cost of doing it right.