On 8 November 2018 at 10:02, Robert Haas <robertmh...@gmail.com> wrote:
> IMHO, documenting that you can get up to 1600 integer columns but only
> 1002 bigint columns doesn't really help anybody, because nobody has a
> table with only one type of column, and people usually want to have
> some latitude to run ALTER TABLE commands later.
>
> It might be useful for some users to explain that certain things will
> should work for values < X, may work for values between X and Y, and
> will definitely not work above Y. Or maybe we can provide a narrative
> explanation rather than just a table of numbers. Or both. But I
> think trying to provide a table of exact cutoffs is sort of like
> tilting at windmills.
I added something along those lines in a note below the table. Likely
there are better ways to format all this, but I'm trying to detail out
what the content should be first. Hopefully I've addressed the other
things mentioned too.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
Appendix B. Database Limitations
The following table describes various limits of PostgreSQL.
Table B.1. PostgreSQL limitations
Item | Limit | Comment |
---|---|---|
Maximum Database Size | Unlimited | |
Maximum Number of Databases | Unlimited | |
Maximum Relation Size | 32 TB | Limited to 2^32 - 1 pages per relation. |
Maximum Columns per Table | 1600 | Further limited by the tuple having to fit on a single heap page; see the note below |
Maximum Field Size | 1 GB | |
Maximum Identifier Length | 63 bytes | Can be increased by recompiling PostgreSQL |
Maximum Rows per Table | Unlimited | |
Maximum Indexes per Table | Unlimited | |
Maximum Indexed Columns | 32 | Can be increased by recompiling PostgreSQL. The limit includes any INCLUDE columns |
Maximum Partition Keys | 32 | Can be increased by recompiling PostgreSQL |
Maximum Relations per Database | Unlimited | |
Maximum Partitions per Partitioned Relation | 268,435,456 | May be increased by using sub-partitioning |
Note
The maximum number of columns for a table is further reduced as the tuple
being stored must fit on a single 8192-byte heap page. Variable-length
fields such as TEXT, VARCHAR and CHAR can have their values stored out of
line in the table's TOAST table when the values are large enough to
require it; in that case only an 18-byte pointer remains inside the tuple
in the table's heap. For shorter variable-length fields, either a 4-byte
or 1-byte field header is used and the value is stored inside the heap
tuple. This often means that the actual maximum number of columns that
can be stored in a table is lower than 1600, as the tuple would otherwise
become too large to fit on a single heap page. For example, excluding the
tuple header, a tuple made up of 1600 INT columns would consume 6400 bytes
and could be stored on a heap page, whereas a tuple of 1600 BIGINT columns
would consume 12800 bytes and therefore would not fit.
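As a quick illustration of the above, here is a sketch in psql syntax
(the wide_bigint table name is made up). Creating a 1600-column BIGINT
table succeeds, since the tuple size is only checked once a row is
formed, but storing a fully non-NULL row fails with a "row is too big"
error; the exact sizes reported depend on the page size:

    -- Generate and run: CREATE TABLE wide_bigint (c1 bigint, ..., c1600 bigint)
    SELECT 'CREATE TABLE wide_bigint ('
           || string_agg('c' || i || ' bigint', ', ' ORDER BY i) || ')'
    FROM generate_series(1, 1600) AS i
    \gexec

    -- 1600 non-NULL bigints need ~12800 bytes of data, so this fails with
    -- something like: ERROR:  row is too big: size ..., maximum size 8160
    SELECT 'INSERT INTO wide_bigint VALUES ('
           || string_agg('1', ', ') || ')'
    FROM generate_series(1, 1600)
    \gexec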
Columns which have been dropped from the table also count towards the
maximum column limit. Although the values of dropped columns are
internally marked as NULL in the null bitmap of newly created tuples, the
null bitmap itself still occupies space.
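A sketch of the dropped-column case (again with a made-up table name): the
dropped column still uses up an attribute slot, so the table cannot grow
back to 1600 visible columns:

    -- Generate and run: CREATE TABLE wide_int (c1 int, ..., c1600 int)
    SELECT 'CREATE TABLE wide_int ('
           || string_agg('c' || i || ' int', ', ' ORDER BY i) || ')'
    FROM generate_series(1, 1600) AS i
    \gexec

    ALTER TABLE wide_int DROP COLUMN c1600;

    -- Only 1599 visible columns remain, but the dropped column still counts:
    -- ERROR:  tables can have at most 1600 columns
    ALTER TABLE wide_int ADD COLUMN c1601 int;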
v3-0001-Add-documentation-section-appendix-detailing-some.patch
Description: Binary data