On Thu, Dec 4, 2008 at 7:38 PM, Franck Routier <franck.rout...@axege.com> wrote:

> Hi,
>
> I am trying to restore a table out of a dump, and I get an 'out of
> memory' error.
>
> The table I want to restore is 5GB big.
>
> Here is the exact message :
>
> adm...@goules:/home/backup-sas$ pg_restore -F c -a -d axabas -t cabmnt
> axabas.dmp
> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 5492; 0 43701 TABLE
> DATA cabmnt axabas
> pg_restore: [archiver (db)] COPY failed: ERROR:  out of memory
> DETAIL:  Failed on request of size 40.
> CONTEXT:  COPY cabmnt, line 9038995: "FHSJ    CPTGEN    RE
> 200806_004    6.842725E7    6.842725E7    \N    7321100    1101
> \N
> 00016    \N    \N    \N    \N    \N    \N    -1278.620..."
> WARNING: errors ignored on restore: 1
>
> Looking at the OS level, the process is indeed eating all memory
> (incl. swap), which is around 24 GB...
>
How are you determining that it eats up all memory?

Could you post those outputs?
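For instance, something along these lines, captured while the restore is running (a minimal sketch assuming a Linux box; adjust the process names if yours differ):

    # overall memory and swap picture
    free -m

    # top memory consumers, sorted by resident set size
    ps aux --sort=-rss | head -n 10

    # the server backend doing the COPY, and the pg_restore client itself
    ps -C postgres -o pid,rss,vsz,cmd
    ps -C pg_restore -o pid,rss,vsz,cmd

That would also show whether it is the pg_restore client or the server-side COPY backend that is actually growing.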

>
> So, here is my question: is pg_restore supposed to eat all memory, and
> is there something I can do to prevent that?
>
> Thanks,
>
> Franck
>
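One thing worth checking while you gather those numbers: if cabmnt has foreign key constraints, a data-only COPY queues a deferred AFTER-trigger event per loaded row for the constraint checks, and that queue is kept in backend memory until commit; on a 5 GB table that alone can exhaust RAM. If that turns out to be the cause, restoring with triggers disabled (requires superuser) should avoid it, e.g.:

    pg_restore -F c -a --disable-triggers -d axabas -t cabmnt axabas.dmp

Note that you would then be responsible for the data actually satisfying the constraints, since the checks are skipped.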
