Sebastien- I have a similar nightly process to keep our development system synced with production. I just do a complete pg_dump of production, drop and recreate the development database with dropdb & createdb, and then restore the whole db from the pg_dump file. Our database is currently about 12 GB, and it takes less than an hour to dump & restore into dev if I go through a file. I can go even faster by piping the data, which eliminates one set of reads & writes to disk:
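For a cron-driven nightly run, the same steps can be wrapped in a small script. This is only a sketch -- the database names "prod" and "dev" are placeholders, and you'd want to adapt the error handling to your setup:

```shell
#!/bin/sh
# Nightly refresh of dev from prod (sketch; "prod" and "dev" are
# placeholder database names).
refresh_dev() {
    set -e                 # stop at the first failure, so a bad
                           # dump doesn't leave dev half-restored
    dropdb dev
    createdb dev
    # Pipe the dump straight into psql to skip the intermediate
    # file (one fewer set of disk reads & writes):
    pg_dump prod | psql dev
}

# e.g. call refresh_dev from a cron job
```

Going through a file instead (pg_dump prod > prod.dump, then psql dev < prod.dump) costs the extra disk I/O but leaves you a snapshot you can restore again later.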
dropdb dev
createdb dev
pg_dump prod | psql dev

This of course only works if you haven't changed your data structure in the development area, but it is very simple and reasonably quick. In situations where the data structure has changed, I run a more complex system that deletes data rather than dropping the whole db, but I always drop the indexes in development before restoring data and recreate them afterwards.

-Nick

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Sebastien Lemieux
> Sent: Wednesday, August 06, 2003 1:56 PM
> To: Postgresql-performance
> Subject: [PERFORM] How to efficiently duplicate a whole schema?
>
> Hi,
>
> I'm running on Redhat 7.2 with postgresql 7.3.2 and I have two schemas
> in the same database: 'db' and 'db_dev'. Both contain a set of >20
> tables for a total of less than 50 MB of data each (on the order of 50k
> rows in total). Once in a while (often these days!), I need to
> synchronize the dev version from the production 'db'. Currently, I do
> this by setting constraints to deferred, deleting everything in db_dev,
> then issuing a series of insert ... select ... statements to copy data
> from each table in db to the equivalent table in db_dev.
>
> This approach used to run in less than 30 seconds in MySQL, but in
> PostgreSQL it currently takes around 30 minutes. The postmaster process
> is running at 100% cpu the whole time. I enclosed all the delete
> statements in one transaction and all the insert statements in a second
> transaction. All the time is taken at the commit of both transactions.
>
> Is there a more straightforward way to synchronize a development
> database with a production one? Is there any way to increase the
> performance of this delete/insert combination? I've got indexes and
> constraints on most tables; could that be the problem? At some point in
> the future, I will also need to make a copy of a whole schema ('db'
> into 'db_backup'); what would be an efficient way to do that?
>
> These are the parameters I've adjusted in postgresql.conf:
>
> max_connections = 16
> shared_buffers = 3000
> max_fsm_relations = 2000
> max_fsm_pages = 20000
> sort_mem = 20000
> vacuum_mem = 20000
> effective_cache_size = 15000
>
> And this is the memory state of the machine:
>
> [EMAIL PROTECTED]> free
>              total       used       free     shared    buffers     cached
> Mem:       2059472    2042224      17248      24768     115712    1286572
> -/+ buffers/cache:     639940    1419532
> Swap:      2096440     490968    1605472
>
> thanks,
>
> --
> Sebastien Lemieux
> Bioinformatics, post-doc
> Elitra-canada
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: Have you searched our list archives?
>
> http://archives.postgresql.org

---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings
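P.S. For the in-place variant I mentioned above (deleting data rather than dropping the whole db), the per-table cycle is roughly: drop the dev-side indexes, reload the data with constraints deferred, then recreate the indexes so the bulk load isn't slowed by incremental index maintenance. A hedged sketch for one table -- the database name 'mydb', the schemas 'db'/'db_dev', and the table/index names are all made up for illustration:

```shell
# Sync one table from schema "db" into schema "db_dev" in the same
# database.  All object names below are placeholders.
sync_items() {
    psql mydb <<'SQL'
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
DROP INDEX db_dev.items_name_idx;                    -- drop before the bulk load
DELETE FROM db_dev.items;
INSERT INTO db_dev.items SELECT * FROM db.items;
CREATE INDEX items_name_idx ON db_dev.items (name);  -- recreate afterwards
COMMIT;
SQL
}
```

Doing the whole thing in one transaction means dev is never visible in a half-loaded state.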