That's assuming that toasting is evenly spread between tables. In my
experience, that's not a great bet...
Time to create a test:

SELECT chunk_id::bigint/10 AS id_range, count(*),
       count(*)/(10::float) AS density
FROM (SELECT chunk_id FROM pg_toast.pg_toast_39000165 WHERE chunk_id
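The query is cut off above, but its core idea survives: bucket chunk_ids into fixed-width ranges and divide each bucket's count by the bucket width to get a density. A minimal Python sketch of that computation, on made-up chunk_id data (the bucket width of 10 mirrors the `/10` divisor visible in the query; the real query may well use a larger one):

```python
# Bucket chunk_ids and compute per-bucket density, as the SQL above does.
# chunk_ids and the bucket width here are hypothetical illustration values.
from collections import Counter

chunk_ids = [100, 101, 103, 110, 111, 250]  # hypothetical toast chunk_ids
BUCKET = 10

counts = Counter(cid // BUCKET for cid in chunk_ids)
density = {id_range: n / BUCKET for id_range, n in counts.items()}

# Density well below 1.0 in most buckets means the chunk_id space is
# sparsely used, i.e. toasting is not evenly spread.
print(density)  # → {10: 0.3, 11: 0.2, 25: 0.1}
```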
Matthew Kelly <mke...@tripadvisor.com> writes:
However, I do have active databases where the current oid is between 1
billion and 2 billion. They were last dump-restored for a hardware upgrade a
couple years ago and were a bit more than half the size. I therefore can
imagine that I have
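For context on why an OID counter "between 1 billion and 2 billion" is worth watching: OIDs (and therefore toast chunk_ids) are unsigned 32-bit values that wrap at 2^32, so the remaining headroom is easy to estimate. A quick sketch with a hypothetical counter value:

```python
# OIDs are unsigned 32-bit, so the counter wraps at 2^32.
OID_WRAP = 2**32

current_oid = 1_500_000_000  # hypothetical: "between 1 and 2 billion"
headroom = OID_WRAP - current_oid

print(headroom)  # → 2794967296, i.e. ~2.8 billion OIDs before wraparound
```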
Hmm, 2^32 times approx. 2kB (as per the usual heuristic, ~4 rows per heap
page) is 8796093022208 (~8.8e12) bytes
... which results in 8192 1GB segments :O
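The arithmetic checks out; a quick sanity check of both numbers:

```python
# 2^32 possible chunk_ids at ~2 kB per toasted datum
# (the ~4-rows-per-8kB-heap-page heuristic).
total_bytes = 2**32 * 2048
print(total_bytes)  # → 8796093022208, i.e. ~8.8e12 bytes

# Divide by 1 GB to get the number of 1GB table segments.
segments_1gb = total_bytes // (1024**3)
print(segments_1gb)  # → 8192 segments, i.e. 8 TB
```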
8192 1GB segments is just 8TB; it's not _that_ large. At TripAdvisor we've been
using a NoSQL solution to do session storage. We are
On 2/3/15 9:01 AM, Tom Lane wrote: