Hi,
I tried various ways to back up that db.
If I use a separate COPY table TO 'file' WITH BINARY I can export the
problematic table and restore it without problems. The resulting output file
is much smaller than the default output and the runtime is much shorter.
Is there any way to tell pg_dump to use a binary COPY?
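For reference, the workaround looks roughly like this (database, table and
file names are placeholders, and COPY to/from a server-side file needs
superuser rights):

psql -d mydb -c "COPY problem_table TO '/tmp/problem_table.bin' WITH BINARY"
# ... later, restore into an empty table of the same structure:
psql -d mydb -c "COPY problem_table FROM '/tmp/problem_table.bin' WITH BINARY"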
Thomas Markus <[EMAIL PROTECTED]> writes:
> logfile content see http://www.rafb.net/paste/results/cvD7uk33.html
It looks to me like you must have individual rows whose COPY
representation requires more than half a gigabyte (maybe much more,
but at least that) and the system cannot allocate enough memory for that.
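A quick way to confirm that is to look at the widest values directly, e.g.
(assuming the big values live in a bytea column named data with a primary
key id; adjust the names to the real schema):

psql -d mydb -c "SELECT id, octet_length(data) FROM problem_table ORDER BY octet_length(data) DESC LIMIT 5"

Keep in mind that the escaped text form COPY uses by default can be several
times larger than the raw octet_length.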
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5             132G   99G   34G  75% /
tmpfs                 4.0G     0  4.0G   0% /dev/shm
/dev/sda1              74M   16M   54M  23% /boot
Is there another dump tool that dumps blobs (or everything) as binary content
(not as insert statements)?
To decrease shared_buffers you need to restart your pgsql.
Please run df -h and send me the result.
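For example (package names and paths depend on your install, so treat them
as placeholders):

# lower shared_buffers in postgresql.conf, then restart, e.g. on Debian:
/etc/init.d/postgresql-8.1 restart
# or directly with pg_ctl, pointing -D at your data directory:
pg_ctl restart -D /var/lib/postgresql/8.1/main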
2006/12/15, Thomas Markus <[EMAIL PROTECTED]>:
Hi,
Free disk space is 34 GB (underlying xfs); a complete db dump is 9 GB.
free -tm says 6 GB of free RAM and 6 GB of unused swap space.
Can I decrease shared_buffers without a pg restart?
thx
Thomas
Shoaib Mir wrote:
Looks like with 1.8 GB already in use there is not much left for the dump to
get the required chunk of memory. Not sure if it will help, but try
increasing the swap space...
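Something along these lines adds a temporary 4 GB swap file (size and path
are only placeholders):

dd if=/dev/zero of=/swapfile bs=1M count=4096
mkswap /swapfile
swapon /swapfile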
-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
Take a look at the /tmp directory on your server; it may be using up the
space left on your system disk.
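For example:

du -sh /tmp
df -h /tmp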
Regards,
Marcelo.
2006/12/15, Thomas Markus <[EMAIL PROTECTED]>:
Hi,
logfile content: see http://www.rafb.net/paste/results/cvD7uk33.html
- cat /proc/sys/kernel/shmmax is 2013265920
- ulimit is unlimited
The kernel is 2.6.15-1-em64t-p4-smp, the pg version is 8.1.0 (32-bit).
The postmaster process is using 1.8 GB of RAM at the moment.
thx
Thomas
Shoaib Mir wrote:
Can you please show the db server logs and the syslog from the time when it
goes out of memory?
Also, how much RAM do you have available, and what is SHMMAX set to?
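For example (the log file location depends on your setup, so treat that path
as a placeholder):

tail -n 100 /var/log/postgresql/postgresql-8.1-main.log
tail -n 100 /var/log/syslog
free -m
cat /proc/sys/kernel/shmmax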
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 12/15/06, Thomas Markus <[EMAIL PROTECTED]> wrote:
Hi,
I'm running pg 8.1.0 on a Debian Linux (64-bit) box (dual Xeon, 8 GB RAM).
pg_dump produces an error when exporting a large table with blobs
(the largest blob is 180 MB).
The error is:
pg_dump: ERROR: out of memory
DETAIL: Failed on request of size 1073741823.
pg_dump: SQL command to dump the contents