Michael Akinde wrote:
Hi,
I am encountering problems when trying to run VACUUM FULL ANALYZE on a
particular table in my database; the process aborts with the following
error:
INFO: vacuuming "pg_catalog.pg_largeobject"
ERROR: out of memory
DETAIL: Failed on request of size 536870912.
INFO: vacuuming "pg_catalog.pg_largeobject"
ERROR: out of memory
DETAIL: Failed on request of size 32.
Granted, our largeobject table is a bit large:
INFO: analyzing "pg_catalog.pg_largeobject"
INFO: "pg_largeobject": scanned 3000 of 116049431 pages, containing
18883 live rows and 409 dead rows; 3000 rows in sample, 730453802
estimated total rows
...but I trust that VACUUM ANALYZE doesn't try to read the entire table
into memory at once. :-) The machine was set up with 1.2 GB shared
memory and 1 GB maintenance memory, so I would have expected this to be
sufficient for the task (we will eventually set this up on a 64-bit
machine with 16 GB of memory, but at the moment we are restricted to 32-bit).
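In postgresql.conf terms that is roughly the following (assuming those
figures refer to shared_buffers and maintenance_work_mem; the exact
values are an approximation of the numbers above):

    shared_buffers = 1200MB
    maintenance_work_mem = 1GB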
This is currently running on PostgreSQL 8.3beta2, but since I haven't
seen this problem reported before, I assume it would also be a problem
in earlier versions. Have we run into a bug/limitation of Postgres
VACUUM, or is this something we might be able to solve by reconfiguring
the server/database or by downgrading the DBMS version?
This seems to be simply a problem of setting maintenance_work_mem too high
(i.e. higher than what your OS can support - maybe an ulimit/process limit
is in effect?). Try reducing maintenance_work_mem to, say, 128MB and retry.
If you promise PostgreSQL that it can get 1 GB, it will happily try to use
it ...
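For example, something along these lines should do it (a sketch only;
128MB is just a starting point, adjust to your system):

    SET maintenance_work_mem = '128MB';  -- lower the per-operation memory limit for this session only
    VACUUM FULL ANALYZE pg_catalog.pg_largeobject;
    RESET maintenance_work_mem;          -- return to the server default afterwards

Setting it per session avoids touching postgresql.conf and leaves other
maintenance operations unaffected.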
Stefan