Thanks for the rapid responses.
Stefan Kaltenbrunner wrote:
> this seems to be simply a problem of setting maintenance_work_mem too
> high (i.e., higher than what your OS can support - maybe an
> ulimit/process limit is in effect?). Try reducing maintenance_work_mem
> to, say, 128MB and retry.
> If you promise PostgreSQL that it can get 1GB, it will happily try to
> use it ...
I set up the system together with one of our Linux sysops, so I think
the settings should be OK. kernel.shmmax is set to 1.2 GB, but I'll get
him to recheck whether there are any other limits he has forgotten to
increase.
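For the record, these are roughly the checks I'd expect him to run on the database host (a Linux-specific sketch; reading /proc is equivalent to `sysctl kernel.shmmax`):

```shell
# Kernel shared-memory ceiling in bytes; should comfortably exceed
# PostgreSQL's shared_buffers allocation
cat /proc/sys/kernel/shmmax

# Per-process limits for the user that runs postgres; look at
# "max memory size" and "virtual memory", either of which can make
# a large maintenance_work_mem allocation fail with out-of-memory
ulimit -a
```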
From the way the process was running, it seems to have continually
allocated memory until (presumably) it broke through the slightly less
than 1.2 GB of shared memory we had provided for PostgreSQL (at least
the postgres process was still running by the time its resident size had
reached 1.1 GB).
Incidentally, in the first error of the two I posted, the shared memory
setting was significantly lower (24 MB, I believe). I'll try with 128 MB
before I leave in the evening, though (assuming the other tests I'm
running complete by then).
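For the retry, something like the following should keep the allocation bounded without touching postgresql.conf, since SET only affects the current session (the database name here is a placeholder):

```shell
# Lower the per-operation memory budget for this session only, then
# retry the vacuum; each statement runs in its own transaction, which
# VACUUM requires, while the SET persists for the session
psql -d mydb <<'SQL'
SET maintenance_work_mem = '128MB';
VACUUM FULL ANALYZE pg_largeobject;
SQL
```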
Simon Riggs wrote:
> On Tue, 2007-12-11 at 10:59 +0100, Michael Akinde wrote:
>> I am encountering problems when trying to run VACUUM FULL ANALYZE on a
>> particular table in my database; namely that the process crashes out
>> with the following problem:
> Probably just as well, since a VACUUM FULL on an 800GB table is going to
> take a rather long time, so you are saved from discovering just how
> excessively long it will run for. But it seems like a bug. This happens
> consistently, I take it?
I suspect so, though it has only happened a couple of times so far (it
does take a while before it hits that 1.1 GB ceiling). But part of the
reason for running the VACUUM FULL was, of course, to find out how long
it would take. Reliability is always a priority for us, so I like to
know what (useful) tools we have available and to stress the system as
much as possible... :-)
> Can you run ANALYZE and then VACUUM VERBOSE, both on just
> pg_largeobject, please? It will be useful to know whether they succeed.
I ran just ANALYZE on the entire database yesterday, and that worked
without any problems.
I am currently running a VACUUM VERBOSE on the database. It isn't done
yet, but it is running with a steady (low) resource usage.
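Once it finishes, I'll run the pair of commands on just pg_largeobject as requested, along these lines (database name is a placeholder):

```shell
# ANALYZE first, then VACUUM VERBOSE, restricted to pg_largeobject;
# the VERBOSE output should show how far the vacuum gets
psql -d mydb <<'SQL'
ANALYZE pg_largeobject;
VACUUM VERBOSE pg_largeobject;
SQL
```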
Regards,
Michael A.