Tom Lane wrote:
> David Wall <[EMAIL PROTECTED]> writes:
>> Since large objects use OIDs, does PG 8.3 have a limit of 4 billion large objects across all of my various tables

> Yup, and in practice you'd better have a lot less than that or assigning
> a new OID might take a long time.

What's a rough estimate of "a lot less"? Are we talking 2 billion, 3 billion, 1 billion?


>> (actually, I presume OIDs are used elsewhere besides just large objects)?

> They are, but this isn't relevant to large objects.  The uniqueness
> requirement is only per-catalog.

Isn't there just one catalog per postmaster instance (pg_catalog)? The issue for us is that one postmaster runs a large number of databases (say 100, for easy math), so even with the maximum of 4 billion potential OIDs, that would leave each DB with only about 40 million.

Part of this is just architectural for us. We do heavy encryption/compression of data (in particular, digitally signed XML text) and use large objects to store it, but we may need to switch to bytea, since that wouldn't use up OIDs and the actual data tends not to be very large (perhaps 10KB of compressed, encrypted binary data), so it can be written in a single block. All that character escaping of binary data does make the JDBC-to-postmaster interface a tad ugly, though.
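
For what it's worth, when we prototype the bytea route, a parameterized statement lets the driver handle the binary encoding, so none of that escaping shows up in our code. A minimal sketch of what I mean, assuming a hypothetical table signed_docs(id bigint, payload bytea):

// Minimal sketch, assuming a hypothetical table:
//   CREATE TABLE signed_docs (id bigint PRIMARY KEY, payload bytea);
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ByteaSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
        try {
            byte[] encrypted = new byte[10 * 1024];  // stand-in for ~10KB of compressed, encrypted XML

            // Write: the bytes go over as a bind parameter, so the driver
            // does the encoding and we never build an escaped literal.
            PreparedStatement ins = conn.prepareStatement(
                    "INSERT INTO signed_docs (id, payload) VALUES (?, ?)");
            ins.setLong(1, 42L);
            ins.setBytes(2, encrypted);
            ins.executeUpdate();
            ins.close();

            // Read it back the same way.
            PreparedStatement sel = conn.prepareStatement(
                    "SELECT payload FROM signed_docs WHERE id = ?");
            sel.setLong(1, 42L);
            ResultSet rs = sel.executeQuery();
            if (rs.next()) {
                byte[] payload = rs.getBytes(1);  // decrypt/decompress from here
            }
            rs.close();
            sel.close();
        } finally {
            conn.close();
        }
    }
}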


>> Is there any plan on allowing large objects to support more than 2GB?

> No, it's not on the radar screen really.

Too bad, but again, we can always work around it, even if it means a layer that bundles large objects together, much as large objects themselves are assembled from bytea chunks. We prefer not to store the data outside the database, since external files can get out of sync with the database (losing the ACID properties) and, of course, have to be backed up separately from the database backups and the WAL copying we use for replication.
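
If we do build that wrapper, it shouldn't be much code. A rough sketch of the bundling idea, written here against a hypothetical doc_chunks table of bytea rows just to show the pattern (the same loop applies if each chunk were written to its own large object instead):

// Rough sketch: split one logical blob across many rows so that no
// single stored value has to hold the whole thing.
// Assumes a hypothetical table:
//   CREATE TABLE doc_chunks (doc_id bigint, chunk_no int, data bytea,
//                            PRIMARY KEY (doc_id, chunk_no));
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Arrays;

public class ChunkedBlobWriter {
    private static final int CHUNK_SIZE = 64 * 1024 * 1024;  // e.g. 64MB per row

    public static void write(Connection conn, long docId, InputStream in) throws Exception {
        conn.setAutoCommit(false);  // the whole blob commits or rolls back as one unit
        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO doc_chunks (doc_id, chunk_no, data) VALUES (?, ?, ?)");
        try {
            byte[] buf = new byte[CHUNK_SIZE];
            int chunkNo = 0;
            int n;
            while ((n = readFully(in, buf)) > 0) {
                ps.setLong(1, docId);
                ps.setInt(2, chunkNo++);
                ps.setBytes(3, n == buf.length ? buf : Arrays.copyOf(buf, n));
                ps.executeUpdate();
            }
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        } finally {
            ps.close();
        }
    }

    // Fill buf as far as possible; returns bytes read, 0 at end of stream.
    private static int readFully(InputStream in, byte[] buf) throws Exception {
        int off = 0;
        while (off < buf.length) {
            int read = in.read(buf, off, buf.length - off);
            if (read < 0) break;
            off += read;
        }
        return off;
    }
}

Reading it back is just the symmetric SELECT ... ORDER BY chunk_no, concatenating (or streaming) the chunks in order.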

David
