From: Marcelo Fernandes <marfernandes@gmail.com>
Date: Monday, November 3, 2025 at 5:24 AM
To: pgsql-general@postgresql.org <pgsql-general@postgresql.org>
Subject: Increasing a NUMERIC column precision doesn't cause a table rewrite. Why?
Hi everyone,

A numeric field is defined as NUMERIC(precision, scale) [0]. After playing around with different operations against a numeric column, namely increasing/decreasing the precision and increasing/decreasing the scale, I noticed that the table is rewritten in all cases except when the precision is increased. Take the following snippet as an example (tested on Postgres 15):

```sql
CREATE DATABASE test_precision_field;
-- \c test_precision_field
SET client_min_messages = debug1;

DROP TABLE IF EXISTS foo;
CREATE TABLE foo (id SERIAL PRIMARY KEY, bar NUMERIC(10, 2));

-- Only increase the precision but keep the scale:
-- This does not rewrite the table.
ALTER TABLE foo ALTER COLUMN bar TYPE numeric(20, 2);

-- Only decrease the precision but keep the scale:
-- This rewrites the table.
ALTER TABLE foo ALTER COLUMN bar TYPE numeric(10, 2);
-- DEBUG: rewriting table "foo"
-- DEBUG: building index "foo_pkey" on table "foo" serially
-- DEBUG: index "foo_pkey" can safely use deduplication

-- Only increase the scale, but keep the precision:
-- This rewrites the table.
ALTER TABLE foo ALTER COLUMN bar TYPE numeric(10, 4);
-- DEBUG: rewriting table "foo"
-- DEBUG: building index "foo_pkey" on table "foo" serially
-- DEBUG: index "foo_pkey" can safely use deduplication

-- Only decrease the scale, but keep the precision:
-- This rewrites the table.
ALTER TABLE foo ALTER COLUMN bar TYPE numeric(10, 2);
-- DEBUG: rewriting table "foo"
-- DEBUG: building index "foo_pkey" on table "foo" serially
-- DEBUG: index "foo_pkey" can safely use deduplication
```

My assumption is that once the precision is increased, Postgres can still interpret the old values that were written under the lower precision, whereas that is not possible for any of the other changes to the numeric column. Is that assumption correct? How can I verify it?
For example, is there a catalog table I can look at to see how these values are defined, along with some documentation on how Postgres interprets them?

Thank you,
Marcelo.

[0] https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-NUMERIC-DECIMAL

---

My understanding is that Postgres can guarantee that all the existing values in the table will fit the new precision without having to check them. If you change the scale, it might be the case that some values won't fit anymore and Postgres must return an error. NUMERIC is by itself a variable-length type, similar to TEXT (in a sense). That's why its declared size can be increased without requiring a table rewrite at all!
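One way to poke at this from the catalogs, as a sketch: the declared precision and scale live in pg_attribute.atttypmod (for NUMERIC the typmod packs both numbers, an implementation detail of the type), and a table rewrite assigns the relation a new on-disk file, visible as a changed pg_class.relfilenode. Assuming the foo table from the snippet above:

```sql
-- For NUMERIC, atttypmod = ((precision << 16) | scale) + 4 (the varlena
-- header size); format_type() renders it back as numeric(p, s).
SELECT a.attname,
       a.atttypmod,
       ((a.atttypmod - 4) >> 16) & 65535 AS precision,
       (a.atttypmod - 4) & 65535         AS scale,
       format_type(a.atttypid, a.atttypmod) AS declared_type
FROM pg_attribute a
WHERE a.attrelid = 'foo'::regclass
  AND a.attname = 'bar';

-- A rewrite swaps the table's storage file, so comparing relfilenode
-- before and after the ALTER shows whether one happened.
SELECT relfilenode FROM pg_class WHERE relname = 'foo';
ALTER TABLE foo ALTER COLUMN bar TYPE numeric(30, 2);  -- precision-only increase
SELECT relfilenode FROM pg_class WHERE relname = 'foo'; -- unchanged if no rewrite
```

If the second relfilenode query returns the same value, the ALTER was a catalog-only change; a scale change should return a new value.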
