I have a test program that opens 30 concurrent connections (max_connections = 32 in my postgresql.conf), each of which does 100 insertions into a simple table with an index.
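For reference, the concurrent case is roughly equivalent to the sketch below, written directly against libpq with pthreads rather than with our scripting engine; the connection string and the table's column are placeholders, not our actual setup.

/* Rough sketch of the test: N_WORKERS threads, each with its own
 * PGconn, each inserting N_ROWS rows into an indexed table.
 * The conninfo string and the "val" column are assumptions. */
#include <libpq-fe.h>
#include <pthread.h>
#include <stdio.h>

#define N_WORKERS 30
#define N_ROWS    100

static const char *conninfo = "dbname=test";   /* placeholder */

static void *worker(void *arg)
{
    (void)arg;
    PGconn *conn = PQconnectdb(conninfo);
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return NULL;
    }
    for (int i = 0; i < N_ROWS; i++) {
        /* one INSERT per PQexec call; with no explicit BEGIN/COMMIT,
         * each INSERT runs as its own transaction */
        PGresult *res = PQexec(conn,
            "INSERT INTO test_table (val) VALUES (42)");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }
    PQfinish(conn);
    return NULL;
}

int main(void)
{
    pthread_t threads[N_WORKERS];

    /* concurrent case: start all 30 workers at once, then wait */
    for (int i = 0; i < N_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < N_WORKERS; i++)
        pthread_join(threads[i], NULL);

    /* the serialized case would instead call worker(NULL) 30 times
     * in a plain loop */
    return 0;
}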
Running all of them concurrently took approximately 2 minutes. But in the same environment (after a "DELETE FROM test_table" and a "VACUUM ANALYZE"), when I queued the same 30 connections one after another (serialized), it took only 30 seconds.

Why is the performance of concurrent connections worse than serializing them into one?

I was testing with our own (proprietary) scripting engine. Its extension library for PostgreSQL serializes the queries by simply locking when a query manipulates a PGconn object and unlocking when it is done. (Similarly, it creates a PGconn object on the stack for each concurrent query.)

Thanks

--
Wei Weng
Network Software Engineer
KenCast Inc.