On 4/24/15 7:11 PM, Álvaro Hernández Tortosa wrote:
On 24/04/15 05:24, Tom Lane wrote:
...
TBH, I've got very little enthusiasm for fixing this given the number
of reports of trouble from the field, which so far as I recall is zero.
Álvaro's case came up through intentionally trying to create an
unreasonable number of tables, not from real usage.  This thread likewise
appears to contain lots of speculation and no reports of anyone hitting
a problem in practice.

     It is certainly true that this was a very synthetic case. I
envision, however, certain use cases where we may hit a very large
number of tables:

The original case has NOTHING to do with the number of tables and everything to do with the number of toasted values a single table can have. If you have to toast 4B attribute values in a single relation, it will fail. In reality, if you get anywhere close to that, things will fall apart due to OID conflicts.
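
If anyone wants to see how close an existing table is to that limit, something along these lines should give a rough count (untested sketch; 'mytable' and the pg_toast_16384 name are placeholders). Each toasted value gets its own chunk_id OID in the table's TOAST relation, and that chunk_id has to be unique within that relation, which is where the 4B cap comes from:

  -- Find the TOAST relation backing the table:
  SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'mytable';

  -- Then count the distinct toasted values in it (can be slow on a big table):
  SELECT count(DISTINCT chunk_id) FROM pg_toast.pg_toast_16384;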

This case isn't nearly as insane as 4B tables. A table storing 10 text fields, each of which is 2K, would hit this limit with only 400M rows. If my math is right, that's only 8TB; certainly not anything insane space-wise or row-count-wise.
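
FWIW, here's the back-of-the-envelope math as a quick sanity check (the 10 columns x 2KB figures are just the hypothetical scenario above, not anything measured):

  SELECT (2^32)::bigint / 10 AS rows_to_hit_limit,               -- ~430M rows at 10 toasted values each
         pg_size_pretty((2^32)::bigint * 2048) AS toasted_bytes; -- ~8TB of 2KB values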

Perhaps it's still not worth fixing, but I think it's definitely worth documenting.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

