Hi!
I am using PostgreSQL with a proprietary ERP application in Brazil. The database has around 1,600 tables (each with roughly 50 columns).
My problem now is the time it takes to restore a dump. My customer's database is around 500 MB on disk (not the dump file), and I am making the dump with pg_dump -Fc; the dumped file is 30 MB. Making the dump takes about 1.5 hours, BUT restoring it with pg_restore takes 4-5 hours!
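For reference, the commands I am running look roughly like this (the database and file names below are just placeholders; only the -Fc format is exactly what I described above):

    # dump the customer database in compressed custom format
    pg_dump -Fc -f customer.dump customer_db

    # restore the dump into a freshly created, empty database
    createdb customer_db_restored
    pg_restore -d customer_db_restored customer.dump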
Our machine is a Dell PowerEdge 1600SC server (Xeon 2.4 GHz, 1 GB of memory, 7200 RPM disk). I don't think the machine is the problem, because it is dedicated to the database and CPU utilization during the restore stays around 30%.
Searching the list archives I found some messages about this, where Tom Lane said that when there are a lot of conversions the dump can take much longer. About 90% of the columns in my database are char columns, and I have no large objects in the database. Is the restore taking so long because of the conversion of the char columns? How can I get better performance on this restore?
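In case it helps, a query like this shows the column type distribution I mentioned (the database name 'erp' is just a placeholder; information_schema is available as of 7.4):

    # count columns per data type to confirm that char dominates
    psql -d erp -c "SELECT data_type, count(*) FROM information_schema.columns
                    WHERE table_schema = 'public'
                    GROUP BY data_type ORDER BY count(*) DESC;"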
I need to find a solution for this because I am trying to convince customers who use SQL Server, DB2 and Oracle to switch to PostgreSQL, but these customers have databases of 5 GB! I am afraid that even with a better server the restore would take two days!
My setup: Conectiva Linux 10, kernel 2.6.8, PostgreSQL 7.4.6.
Modified postgresql.conf parameters (all other parameters are at their defaults):
tcpip_socket = true
max_connections = 30
shared_buffers = 30000
sort_mem = 4096
vacuum_mem = 8192
max_fsm_pages = 20000
max_fsm_relations = 1000
Regards,
Rodrigo Carvalhaes