be that I would not have to wait for
pg_dump (it takes quite a long time on our 60G+ database) and would just be able to
configure the backup agent to monitor the data directory and take differential
backups of the files there every hour or so.
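(For context: a bare filesystem copy of a running data directory is generally not restorable on its own, because the files are changing underneath the copy. The supported way to do file-level backups of a live cluster is continuous WAL archiving. A minimal sketch of the relevant postgresql.conf settings, assuming a hypothetical archive directory /mnt/backup/wal and a reasonably recent server version:)

```
# postgresql.conf -- continuous archiving sketch; paths are hypothetical
archive_mode = on                              # needed on 8.3 and later
archive_command = 'cp "%p" /mnt/backup/wal/%f' # copy each completed WAL segment
```

With WAL segments archived, a base backup of the data directory taken between pg_start_backup() and pg_stop_backup() can be restored to any later point in time.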
Your suggestions are much appreciated!
-Gabor
Does DROP TABLE free up disk space immediately?
Thanks,
-Gabor
manually if the resulting document contains the
correct data.
gabor
I'm trying to update a table with 300 million records
in it inside a transaction, and I'm getting an out-of-memory
error:
ERROR: out of memory
DETAIL: Failed on request of size 32.
Inside the transaction, I first delete all the records in
the table and then try to use COPY to populate it
again with t
where a plain VACUUM is unable to reclaim some
space, but a VACUUM FULL is?
or any other ideas?
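(For what it's worth, a common workaround for the delete-then-reload pattern is to TRUNCATE instead of DELETE: TRUNCATE reclaims the table's disk space immediately and creates no dead row versions for VACUUM to clean up, and on reasonably recent PostgreSQL versions it is transaction-safe, so it rolls back if the COPY fails. A sketch, with a hypothetical table name and file path:)

```sql
BEGIN;
-- TRUNCATE is transactional in PostgreSQL: if the COPY below fails and
-- the transaction rolls back, the old contents reappear intact.
TRUNCATE TABLE big_table;                  -- hypothetical table name
COPY big_table FROM '/tmp/big_table.dat';  -- hypothetical data file
COMMIT;
```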
thanks,
gabor
--
That's life for you, said McDunn. Someone always waiting for someone
who never comes home. Always someone loving something more than that
thing loves them. And after awhil
hi,
i'd like to delete the postgresql log file
(resides in /var/log/pgsql/postgres),
because it has become too big.
can i simply delete the file while postgresql is running?
or do i have to stop postgresql first, and only delete the logfile after
that?
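(A note on the mechanics here: if you simply rm a log file that the running server holds open, the disk space is not freed until the server closes the file descriptor, i.e. usually not until a restart. Truncating the file in place avoids that, because the inode and the open descriptor stay valid; if the server opened the log in append mode, logging just continues from offset 0. A sketch on a scratch file -- substitute the real path, /var/log/pgsql/postgres, in production:)

```shell
# Demo on a scratch file standing in for the growing log.
LOGFILE=$(mktemp)
printf 'old log contents\n' >> "$LOGFILE"

# Truncate in place rather than deleting: the file keeps its inode,
# so any process holding it open keeps a valid descriptor.
: > "$LOGFILE"
```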
thanks,
gabor