[Synopsis: VACUUM FULL ANALYZE goes out of memory on a very large pg_catalog.pg_largeobject table.]

Simon Riggs wrote:
> Can you run ANALYZE and then VACUUM VERBOSE, both on just
> pg_largeobject, please? It will be useful to know whether they succeed.

ANALYZE:

INFO:  analyzing "pg_catalog.pg_largeobject"
INFO:  "pg_largeobject": scanned 3000 of 116049431 pages, containing 18883 live rows and 409 dead rows; 3000 rows in sample, 730453802 estimated total rows
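As a sanity check on that estimate: ANALYZE's row count here is consistent with a simple linear extrapolation of the live rows seen in the 3000-page sample across all 116049431 pages (the real estimator is more involved, but on a uniformly filled table it reduces to roughly this):

```python
# Rough reproduction of the ANALYZE row estimate above.
# Assumption: live rows per sampled page scale linearly to the whole table;
# this is a simplification of PostgreSQL's actual estimator.
sampled_pages = 3000
total_pages = 116049431
live_rows_in_sample = 18883

estimate = live_rows_in_sample / sampled_pages * total_pages
print(round(estimate))  # 730453802, matching the figure ANALYZE reported
```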

VACUUM VERBOSE:

INFO:  vacuuming "pg_catalog.pg_largeobject"
INFO:  scanned index "pg_largeobject_loid_pn_index" to remove 106756133 row versions
DETAIL:  CPU 38.88s/303.43u sec elapsed 2574.24 sec.
INFO:  "pg_largeobject": removed 106756133 row versions in 13190323 pages
DETAIL:  CPU 259.42s/113.20u sec elapsed 14017.17 sec.
INFO:  index "pg_largeobject_loid_pn_index" now contains 706303560 row versions in 2674471 pages
DETAIL:  103960219 index row versions were removed.
356977 index pages have been deleted, 77870 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.02 sec.
INFO:  "pg_largeobject": found 17489832 removable, 706303560 nonremovable row versions in 116049431 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 36000670 unused item pointers.
64493445 pages contain useful free space.
0 pages are entirely empty.
CPU 1605.42s/1107.48u sec elapsed 133032.02 sec.
WARNING:  relation "pg_catalog.pg_largeobject" contains more than "max_fsm_pages" pages with useful free space
HINT:  Consider using VACUUM FULL on this relation or increasing the configuration parameter "max_fsm_pages".
VACUUM

(This took some 36+ hours. It will be interesting to see what happens when we add another 20 years' worth of data to the 13 years already in the DB.)
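For reference, the warning above is about free-space-map capacity: this VACUUM found 64493445 pages with useful free space, so the FSM would need at least that many page slots before the warning goes away. A sketch of what that might look like in postgresql.conf (illustrative values, not a recommendation from this thread):

```
# Sketch only: 8.x-era FSM settings, sized from the VACUUM VERBOSE output above.
# 64,493,445 pages had useful free space, so max_fsm_pages must be at least that.
max_fsm_pages = 70000000
```

Note that FSM slots live in shared memory (roughly six bytes each per the documentation), so a setting this large costs on the order of 400 MB, which may itself be uncomfortable on a 32-bit build.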

ANALYZE:

INFO:  analyzing "pg_catalog.pg_largeobject"
INFO:  "pg_largeobject": scanned 3000 of 116049431 pages, containing 17830 live rows and 0 dead rows; 3000 rows in sample, 689720452 estimated total rows

I will lower the shared_buffers and maintenance_work_mem settings as suggested in earlier posts before leaving for home this evening, and then let it run a VACUUM FULL ANALYZE. I remain dubious, though: as mentioned, the first test I did used quite low settings for these, and we still had the memory crash. No reason not to try it, though.
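For anyone reproducing this, maintenance_work_mem can also be lowered for just the one session rather than in postgresql.conf, which avoids a restart. A sketch (the 256MB value is an example, not a figure from this thread):

```sql
-- Per-session override for the manual run; value is illustrative only.
SET maintenance_work_mem = '256MB';
VACUUM FULL ANALYZE pg_catalog.pg_largeobject;
```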

Over Christmas, we will be moving this over to a 64-bit kernel with 16 GB of RAM, so after that we'll be able to test the database with > 1 GB of maintenance memory as well.

Regards,

Michael A.
Database Architect, met.no
