Hi all,

I have to restore a database whose custom-format dump (-Fc) is about 2.3GB. To speed up the restore, I first restored everything except the contents of a few tables (by editing the list from pg_restore -l); that's where most of the data is stored.

I think you've outsmarted yourself by creating indexes and foreign keys
before loading the data.  That's *not* the way to make it faster.

I misspoke when I said I wanted to speed up the restore. What I really meant is this: I have to migrate that DB from one server to another, which means stopping my production environment. Those big tables aren't really needed in production right away, as they only hold statistical data. So what I want to do is restore the important tables first and restore the statistics at the end.

  So what's the way to do this?
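One way to do the two-phase restore described above is to dump the archive's table of contents with pg_restore -l, split it into two list files, and feed each back with -L. A minimal sketch (the dump name mydb.dump and the table name statistics_operators are taken from this thread; the TOC lines below are simulated stand-ins for what pg_restore -l actually prints):

```shell
# In real use, the TOC would come from the archive itself:
#   pg_restore -l mydb.dump > full.list
# Here we fake a tiny TOC with representative entries:
cat > full.list <<'EOF'
123; 1259 16384 TABLE public important_table arnau
124; 1259 16385 TABLE public statistics_operators arnau
2345; 0 16384 TABLE DATA public important_table arnau
2346; 0 16385 TABLE DATA public statistics_operators arnau
EOF

# Phase 1: everything except the statistics table's data.
grep -v 'TABLE DATA public statistics_operators' full.list > phase1.list
# Phase 2: only the statistics table's data, loaded later.
grep 'TABLE DATA public statistics_operators' full.list > phase2.list

# Then, against the real dump (not runnable here without a server):
#   pg_restore -L phase1.list -d mydb mydb.dump   # schema + important data
#   pg_restore -L phase2.list -d mydb mydb.dump   # statistics, after cutover
cat phase1.list
```

Note that pg_restore restores entries in the order they appear in the -L file, so the schema and important-table data land before anything else.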


pg_restore: ERROR:  out of memory
DETAIL:  Failed on request of size 32.
CONTEXT: COPY statistics_operators, line 25663678: "137320348 58618027

I'm betting you ran out of memory for deferred-trigger event records.
It's best to load the data and then establish foreign keys ... indexes
too.  See
http://www.postgresql.org/docs/8.2/static/populate.html
for some of the underlying theory.  (Note that pg_dump/pg_restore
gets most of this stuff right already; it's unlikely that you will
improve matters by manually fiddling with the load order.  Instead,
think about increasing maintenance_work_mem and checkpoint_segments,
which pg_restore doesn't risk fooling with.)
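For reference, the tuning Tom suggests amounts to something like the following in postgresql.conf before running the restore (the values are only illustrative, not recommendations for any particular machine; checkpoint_segments existed in the 8.2-era server discussed here):

```
maintenance_work_mem = 512MB   # more memory for index builds and FK validation
checkpoint_segments = 32       # fewer forced checkpoints during the bulk load
```

Both can be reverted to their normal values once the restore finishes.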

Thank you very much
--
Arnau

