Hi, I have a performance problem with a script that does massive bulk inserts into 6 tables. When the script starts, performance is really good, but it degrades minute after minute and the run ends up taking almost a day to finish!
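To give an idea of the workload, the script boils down to something like this (heavily simplified sketch; the table and column names here are placeholders, not the real schema):

  BEGIN;
  -- one "parent" row, then a few dependent rows, repeated millions of times
  INSERT INTO orders      (id, customer_id, created_at) VALUES (1, 42, now());
  INSERT INTO order_items (order_id, product_id, qty)   VALUES (1, 7, 3);
  -- ... similar plain INSERTs into the four other tables ...
  COMMIT;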
I have tried almost everything suggested on this list: changed our external RAID array from RAID 5 to RAID 10, tweaked postgresql.conf to the best of my knowledge (see the P.S. at the end of this mail), moved pg_xlog to a different array, and dropped the tables before running the script. Even after all these changes the performance gain was negligible...

IMHO the hardware we use should be up to the task: Dell PowerEdge 6850, 4 x 3.0 GHz dual-core Xeon, 8 GB RAM, 3 x 300 GB SAS 10K in RAID 5 for / and 6 x 300 GB SAS 10K in RAID 10 (MD1000) for the PG data. The data filesystem is ext3 mounted with noatime and data=writeback. We are running openSUSE 10.3 with PostgreSQL 8.2.7, and the server is dedicated to PostgreSQL.

We tested the same script and schema with Oracle 10g on the same machine and it took only 2.5 hours to complete! What I don't understand is that with Oracle the performance seems to stay consistent, while with PG it deteriorates over time...

Any ideas? Are there any other improvements I could make?

Thanks,
Christian
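P.S. For reference, the postgresql.conf settings I have been playing with are the usual bulk-load suspects. The values below are illustrative rather than the exact ones from our server; everything not listed is at the 8.2 defaults:

  shared_buffers = 2GB            # dedicated box with 8 GB RAM; value approximate
  work_mem = 32MB
  maintenance_work_mem = 512MB
  checkpoint_segments = 64        # fewer, larger checkpoints during the load
  wal_buffers = 1MB
  effective_cache_size = 6GB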