On Mon, Oct 23, 2006 at 05:51:40PM -0400, Steve wrote:
> Hello there;
> 
> I've got an application that has to copy an existing database to a new 
> database on the same machine.
> 
> I used to do this with a pg_dump command piped to psql to perform the 
> copy; however the database is 18 gigs large on disk and this takes a LONG 
> time to do.
> 
> So I read up, found some things in this list's archives, and learned that 
> I can use createdb --template=old_database_name to do the copy in a much 
> faster way since people are not accessing the database while this copy 
> happens.
> 
> 
> The problem is, it's still too slow.  My question is, is there any way I 
> can use 'cp' or something similar to copy the data, and THEN after that's 
> done modify the database system files/system tables to recognize the 
> copied database?
 
AFAIK, that's what createdb --template (CREATE DATABASE ... TEMPLATE) already
does... it copies the database's files at the filesystem level, essentially
doing what cp does.
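
For reference, the template-based copy looks like this (a sketch; the
database names are placeholders, and CREATE DATABASE requires that nobody
else is connected to the template database while it runs):

```shell
# File-level clone of an existing database; fails if anyone is
# connected to old_database_name.
createdb --template=old_database_name new_database_name

# Equivalent SQL form:
psql -c "CREATE DATABASE new_database_name TEMPLATE old_database_name;"
```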

> For what it's worth, I've got fsync turned off, and I've read every tuning 
> thing out there and my settings there are probably pretty good.  It's a 
> Solaris 10 machine (V440, 2 processor, 4 Ultra320 drives, 8 gig ram) and 
> here's some stats:

I don't think any of the postgresql.conf settings will really come into
play when you're doing this... the template copy is straight file I/O, so
it's going to be bound by your disks, not by the server's tuning.
-- 
Jim Nasby                                            [EMAIL PROTECTED]
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
