"Nik" <[EMAIL PROTECTED]> writes:
> pg_restore: ERROR:  out of memory
> DETAIL:  Failed on request of size 32.
> CONTEXT:  COPY lane_data, line 17345022: "<line of data goes here>"

A COPY command by itself shouldn't eat memory.  I'm wondering if the
table being copied into has any AFTER triggers on it (e.g. for foreign
key checks), as each pending trigger event uses memory, so copying a
large number of rows can run the backend out of memory.
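
To check, something along these lines lists the triggers on the table
(a minimal sketch; the table name is taken from your error context, and
foreign-key checks show up as RI_ConstraintTrigger_* entries):

-- list all triggers, including FK constraint triggers, on lane_data
SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'lane_data'::regclass;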

pg_dump scripts ordinarily load data before creating triggers or foreign
keys in order to avoid this problem.  Perhaps you were trying a
data-only restore?  If so, the best answer is "don't do that".  A plain
combined schema+data dump should work.
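
For illustration, this is the sort of ordering a combined dump replays;
the lanes table, column names, and file path here are assumptions, since
the actual schema isn't shown in your report:

-- create the table bare, with no FK constraint yet
CREATE TABLE lane_data (
    id      integer PRIMARY KEY,
    lane_id integer,
    payload text
);

-- bulk-load the data; no AFTER trigger events are queued per row
COPY lane_data (id, lane_id, payload) FROM '/path/to/lane_data.copy';

-- add the foreign key last: it validates with one scan of the table
-- instead of one pending trigger event per inserted row
ALTER TABLE lane_data
    ADD CONSTRAINT lane_data_lane_id_fkey
    FOREIGN KEY (lane_id) REFERENCES lanes (id);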

                        regards, tom lane
