Daniel said:

> It's plausible that, with only 4GB of RAM, the table that fails
> to dump has some very large rows that can't be allocated, especially
> since both the backend and pg_dump need to have it simultaneously
> in memory.
>
>> pg_dump: The command was: COPY public.docfile (docfile_pkey,
>> docfileoriginalname, ordernumber, versionnum, docfilecontents,
>> docfilepath, docfileextension, enddatetime, endby, editnum, insby,
>> insdatetime, modby, moddatetime, active, doc_fkey) TO stdout;
> The "docfilecontents" column suggests that it might contain
> large contents. If its type is bytea, it's going to be expanded
> to twice its size to build the hex representation.
> You may get a sense of how big the biggest row is, expressed as
> text, with this query:
>   SELECT max(length(docfile.*::text)) FROM public.docfile;
> If it's big enough that it might cause the OOM issue, try running
> pg_dump remotely through an SSH tunnel [1]. Since you already log
> in with SSH, the network permissions are in place, and pg_dump
> itself then uses no memory on the server.
> Also, if the machine doesn't have swap space, just adding a few GB
> of swap might make the operation succeed.
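
For reference, running the dump over an SSH tunnel as suggested might look
roughly like this (the user, host, and database names here are placeholders):

  # on a workstation: forward local port 5433 to the server's PostgreSQL port
  ssh -fN -L 5433:localhost:5432 user@dbserver
  # dump through the tunnel; pg_dump's own memory is used on the workstation
  pg_dump -h localhost -p 5433 -U postgres -Fc -f docfile.dump mydb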

The large-row explanation appears to be the case. I ran:

  SELECT max(length(docfilecontents::text)) FROM docfile;

and after a very long time got:

  ERROR: invalid memory alloc request size 1636085512
  SQL state: XX000
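
That allocation (about 1.6GB) is over PostgreSQL's 1GB limit for a single
allocation, which is why the measuring query itself fails. A cheaper way to
gauge the largest value, without building the text/hex representation at all,
might be:

  SELECT max(octet_length(docfilecontents)) FROM docfile;

octet_length() on a bytea column returns the raw byte count, so it avoids the
doubling caused by the hex expansion.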

Adrian said:
>> Ok, thanks for explaining this. Here is the current value:
>> "shared_buffers" "131072" "8kB"
> It should be a single value something like this for the default:
> shared_buffers = 128MB

The results I pasted were from:

SELECT * FROM pg_settings

Maybe I didn't get it the right way.
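
(The three values are the name, setting, and unit columns of pg_settings, and
131072 × 8kB works out to the 1GB Tom mentions below.) A more targeted way to
read it would be something like:

  SHOW shared_buffers;
  -- or, with the unit shown explicitly:
  SELECT name, setting, unit FROM pg_settings WHERE name = 'shared_buffers';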

The system has only 4GB of RAM. I read that a reasonable swap size is 1/4
of RAM, so I've created a swap file of 1GB.
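
On a typical Linux VM, creating and enabling such a swap file looks roughly
like this (assuming root access; /swapfile is just a conventional path):

  fallocate -l 1G /swapfile    # reserve 1GB for the swap file
  chmod 600 /swapfile          # swap files must not be world-readable
  mkswap /swapfile             # set up the swap area
  swapon /swapfile             # enable it for the running system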

Tom said:
>> Ok, thanks for explaining this. Here is the current value:
>> "shared_buffers" "131072" "8kB"
> Well, that's 1GB, which might be ambitious inside a VM with a hard
> restriction to 4GB total RAM.  Postgres can get by with a *lot* less.
> Try knocking it down to a tenth of that and see if it makes a difference.

I think I also based this on the rule of thumb that shared_buffers should be
no more than 25% of RAM. Should I test pg_dump with the added swap space
before reducing shared_buffers?
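
If it does come to lowering it, that would be a one-line change in
postgresql.conf followed by a server restart (shared_buffers cannot be changed
with a reload); 128MB is the default Adrian mentioned, roughly a tenth of the
current 1GB:

  # postgresql.conf -- requires a full restart to take effect
  shared_buffers = 128MB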

On Mon, Nov 5, 2018 at 10:13 AM Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 11/5/18 7:04 AM, Charles Martin wrote:
> > Adrian said:
> >
> >  >Tom was referring to this from your previous post:
> >  >(postmaster) total-vm:3068900kB,
> >  >where vm(VM) is Virtual Memory:
> >  >
> https://stackoverflow.com/questions/18845857/what-does-anon-rss-and-total-vm-mean#22326766
> >  >So what is your shared_buffers:
> >  >https://www.postgresql.org/docs/10/static/runtime-config-resource.html
> >  >set to?
> >
> > Ok, thanks for explaining this. Here is the current value:
> > "shared_buffers""131072""8kB"
> >
>
> It should be a single value something like this for the default:
>
> shared_buffers = 128MB
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
