The team I helped recover from a financial reconciliation incident had stored prices as DOUBLE PRECISION. The accounting reports were off by amounts that looked like rounding errors. They added up over millions of transactions. The bug had been there for three years.
The rule, simply stated: floats are for measurements, not for money. Knowing why takes thirty seconds; not knowing it costs you everything.
What DOUBLE PRECISION actually stores
DOUBLE PRECISION is IEEE 754 64-bit floating point. It can represent some numbers exactly and many numbers approximately. Specifically, it can represent any integer up to 2^53 exactly. It cannot represent most decimal fractions exactly.
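Both claims are easy to verify outside the database. Python floats are the same IEEE 754 doubles Postgres uses for DOUBLE PRECISION, so a quick sketch:

```python
# Python floats are IEEE 754 64-bit doubles, the same representation
# Postgres uses for DOUBLE PRECISION.

# Every integer up to 2^53 is representable exactly...
assert float(2**53) == 2**53

# ...but 2^53 + 1 is not: it collapses to the nearest representable
# value, which is 2^53 itself.
assert float(2**53 + 1) == float(2**53)

# Most decimal fractions are only approximations:
print(f"{0.1:.20f}")  # 0.10000000000000000555
```

The last line shows what 0.1 actually stores: the nearest binary fraction, not 0.1.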
The canonical example:
SELECT 0.1 + 0.2;
?column?
----------
0.3 -- unadorned decimal literals default to NUMERIC in Postgres, so this is exact
SELECT 0.1::double precision + 0.2::double precision;
?column?
--------------------
0.30000000000000004 -- IEEE 754 cannot represent 0.1 or 0.2 exactly
The difference is small. The difference accumulated over millions of operations is large.
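The accumulation effect is easy to demonstrate with the same IEEE 754 doubles in Python:

```python
# Add 0.1 (which a double cannot represent exactly) one million times.
total = sum(0.1 for _ in range(1_000_000))

print(total)               # slightly above 100000.0
print(total == 100000.0)   # False

# The per-operation error is ~5.5e-18; a million additions later
# the drift is visible at the sixth decimal place.
```

The same drift happens inside SUM() over a DOUBLE PRECISION column; it just happens where you cannot see it.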
What NUMERIC (or DECIMAL) stores
NUMERIC(precision, scale) stores arbitrary-precision decimal numbers. Postgres handles them as variable-size structures with exact decimal arithmetic: addition, subtraction, and multiplication introduce no rounding error, and division rounds only at a well-defined scale.
SELECT 0.1::numeric + 0.2::numeric;
?column?
----------
0.3 -- exact
The cost is performance. NUMERIC arithmetic is much slower than DOUBLE PRECISION — roughly 10-100x for raw arithmetic operations. For a workload running SUM(amount) over millions of rows per day, this matters.
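The same trade exists application-side. Python's decimal module behaves like NUMERIC — exact decimal arithmetic, slower than hardware floats — so the exactness half of the trade can be sketched there (this is an analogy, not a Postgres feature):

```python
from decimal import Decimal

# Exact decimal arithmetic, as NUMERIC does it:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")

# The hardware-float version of the same sum is off:
assert 0.1 + 0.2 != 0.3
```

Note the string constructor: `Decimal(0.1)` would inherit the binary approximation; `Decimal("0.1")` is exact.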
When to use which
Use NUMERIC for:
- Money. Always. NUMERIC(12, 2) for typical amounts in cents-precision currencies, NUMERIC(20, 8) for crypto precision.
- Anything where exact decimal arithmetic matters: tax rates, accounting calculations, anything that produces user-facing numbers.
- Identifiers that look like numbers but should not lose precision (some external IDs, but usually those are TEXT).
Use DOUBLE PRECISION for:
- Scientific measurements: temperature, distance, signal strength.
- Statistics derived from floating-point sources: averages, ratios, percentages computed from approximate inputs.
- Performance-sensitive calculations where the imprecision is acceptable: ML feature values, geo coordinates (sometimes).
The rule is simpler than it looks: if the number's exact value matters legally or financially, NUMERIC. Otherwise, DOUBLE PRECISION is fine.
The BIGINT alternative for money
A pattern that combines exactness with performance: store amounts as integer cents (or smallest currency unit).
CREATE TABLE orders (
total_cents BIGINT NOT NULL
);
A BIGINT can represent every cent value up to about ±9.2 quintillion (2^63 - 1) — far more than any real ledger needs. Arithmetic is fast (plain CPU integer operations). Conversion for display is total_cents / 100.0.
The tradeoff:
- The application has to remember to handle the cents-to-dollars conversion.
- Multi-currency support is a separate problem (different currencies have different smallest units; JPY has no subunit at all, and some currencies use three decimal places).
- Calculations involving fractions (5% tax) need careful integer math to avoid rounding bugs.
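The fraction problem from the last bullet is tractable with explicit integer rounding. A minimal application-side sketch — the function name and the basis-points convention are mine, not from any library:

```python
def add_tax_cents(amount_cents: int, tax_rate_bps: int) -> int:
    """Apply a tax rate given in basis points (5% = 500 bps),
    rounding half up, using only integer arithmetic."""
    # (amount * rate) / 10000, with +5000 before the floor division
    # to implement round-half-up without touching floats.
    tax = (amount_cents * tax_rate_bps + 5_000) // 10_000
    return amount_cents + tax

# 5% tax on $19.99 (1999 cents): exact tax is 99.95 cents,
# which rounds up to 100 cents.
print(add_tax_cents(1999, 500))  # 2099
```

The point is that the rounding rule is a visible, auditable line of code rather than an accident of binary representation.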
For most applications dealing with a single currency at typical precision, BIGINT-cents is the most performant correct option. For complex financial logic, NUMERIC is more flexible.
Performance comparison
In my benchmarks on RDS:
-- Aggregate over 10 million rows
SELECT sum(amount) FROM transactions;
-- DOUBLE PRECISION: ~0.5 seconds
-- NUMERIC: ~3-4 seconds
-- BIGINT (cents): ~0.4 seconds
For analytical queries on financial data, this matters. For OLTP queries that touch a few rows, the difference is invisible.
Fixing existing schemas
Migrating a money column from DOUBLE PRECISION to NUMERIC or BIGINT is the kind of project that takes weeks because of the data audit involved.
The steps:
- Add a new column with the correct type.
- Make the application write both columns, so new rows are correct from the start.
- Backfill: convert existing values, accepting that some carry accumulated error.
- Reconcile against the source of truth (the accounting system) as a sanity check.
- Verify the two columns stay consistent.
- Cut over.
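The backfill step has a trap of its own: converting a float dollar amount to integer cents needs rounding, not truncation, because the stored double often sits just below the intended value. A sketch of the pitfall (Python floats behave identically to DOUBLE PRECISION here):

```python
# 19.99 is stored as a double slightly below 19.99,
# so multiplying by 100 lands just under 1999:
stored = 19.99
print(stored * 100)        # 1998.9999999999998

print(int(stored * 100))   # 1998 -- truncation silently loses a cent
print(round(stored * 100)) # 1999 -- rounding recovers the intended value
```

The SQL equivalent is using round() rather than a bare cast or trunc() in the backfill UPDATE.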
The reconciliation step is the hard part. Three years of accumulated rounding errors can produce diffs in the hundreds of dollars across millions of records. Fixing each one requires either business sign-off ("we accept these as the corrected values") or a more elaborate rollback.
Better to pick the right type the first time.
What I check on every new schema
When reviewing a migration that adds a column for money or anything finance-adjacent:
- Is the type DOUBLE PRECISION? If yes, suggest NUMERIC or BIGINT.
- Is the precision specified for NUMERIC? Unconstrained NUMERIC accepts any scale, so inconsistent values can creep in; NUMERIC(12, 2) is well-defined.
- Are there CHECK constraints for sane ranges? CHECK (amount_cents >= 0) catches obvious bugs.
This is a 30-second check that prevents a class of bugs that are hard to remediate after the fact.