Michael Goldner <[EMAIL PROTECTED]> writes:
> On 11/5/07 12:19 AM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>> It might be interesting to look at stats such as
>> select sum(length(data)) from pg_largeobject;
>> to confirm that your 100GB estimate for the data payload is accurate.

> That select returns the following:

> image=# select sum(length(data)) from pg_largeobject;
>      sum      
> --------------
>  215040008847
> (1 row)

Hmm, so given that you had 34803136 pages in pg_largeobject, that works
out to just about a 75% fill factor.  That is to say, you're only fitting
three 2K rows per page and not four.  If the rows were full-size then four
would obviously not fit (there is some overhead...), but the normal
expectation in pg_largeobject is that tuple compression will shave enough
space to make up for the overhead and let you get 4 rows per page.  Are
your large objects mostly pre-compressed data?
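[The arithmetic above can be double-checked from the two figures quoted in
this thread.  A minimal sketch in Python, assuming the stock 8K block size
(BLCKSZ) and the 2K large-object chunk size (LOBLKSIZE = BLCKSZ/4):

```python
# Figures quoted earlier in this thread
payload_bytes = 215_040_008_847   # sum(length(data)) from pg_largeobject
relpages = 34_803_136             # pages in pg_largeobject

# Assumed build defaults: 8 KB blocks, 2 KB large-object chunks
BLCKSZ = 8192
LOBLKSIZE = 2048

# Fraction of total page space occupied by actual payload bytes
fill_factor = payload_bytes / (relpages * BLCKSZ)

# Average number of 2K chunks stored per page
rows_per_page = payload_bytes / LOBLKSIZE / relpages

print(f"fill factor:   {fill_factor:.1%}")     # roughly 75%
print(f"rows per page: {rows_per_page:.2f}")   # about 3, not 4
```

which confirms the ~75% figure: about three uncompressed 2K chunks per
8K page, with the fourth quarter of each page lost to tuple and page
overhead.]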

                        regards, tom lane
