On Tuesday 13 Aug 2002 9:39 pm, Wei Weng wrote:
> I have a testing program that uses 30 concurrent connections
> (max_connections = 32 in my postgresql.conf), and each connection
> performs 100 inserts into a simple table with an index.
>
> It took me approximately 2 minutes to finish all of them.
>
> But in the same environment (after "DELETE FROM test_table" and
> "VACUUM ANALYZE"), I queued up those same 30 connections one after
> another (serialized), and it took only 30 seconds to finish.
>
> Why is the performance of the concurrent connections so much worse
> than running them serialized?

What was the limiting factor during the test? Was the CPU maxed out, or
was it memory or disk I/O? Watching top or vmstat while the test runs
should tell you.

I take it the insert really *is* simple - no dependencies etc.
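
If it helps, this is the sort of standalone comparison I'd run - a
minimal sketch using libpq and pthreads directly, one connection per
thread and no shared lock. The conninfo string, table name and column
are guesses (I'm assuming a test_table with an integer column val
already exists), so substitute your own:

#include <stdio.h>
#include <pthread.h>
#include <libpq-fe.h>

#define NUM_THREADS        30
#define INSERTS_PER_THREAD 100

static void *worker(void *arg)
{
    int i;
    /* each thread opens its own connection, so the inserts really
       do run concurrently on the server side */
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return NULL;
    }
    for (i = 0; i < INSERTS_PER_THREAD; i++) {
        PGresult *res = PQexec(conn,
            "INSERT INTO test_table (val) VALUES (42)");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }
    PQfinish(conn);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    int i;

    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}

Build with something like "gcc test.c -lpq -lpthread" and time it
against your serialized run. If this shows the same sort of slowdown,
the problem is on the server side rather than in your scripting engine.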

> I was testing them using our own (proprietary) scripting engine, and
> the extension library that supports postgresql serializes the queries
> by simply locking when a query manipulates a PGconn object and
> unlocking when it is done. (And similarly, it creates a PGconn object
> on the stack for each concurrent query.)

I assume you've ruled the application end of things out.
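
If I've read that description right, the library's path boils down to
something like the worker below - same hypothetical table as in the
sketch above, but every query funnels through one global mutex, so the
server only ever sees one statement in flight (adjust if I've misread
where the lock sits):

/* drop-in replacement for worker() in the sketch above: one
   connection per script, but a single global lock around every
   query, which is what serializes everything */
static pthread_mutex_t query_lock = PTHREAD_MUTEX_INITIALIZER;

static void *locked_worker(void *arg)
{
    int i;
    PGresult *res;
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK) {
        PQfinish(conn);
        return NULL;
    }
    for (i = 0; i < INSERTS_PER_THREAD; i++) {
        pthread_mutex_lock(&query_lock);
        res = PQexec(conn,
            "INSERT INTO test_table (val) VALUES (42)");
        PQclear(res);
        pthread_mutex_unlock(&query_lock);
    }
    PQfinish(conn);
    return NULL;
}

Timing the two workers against each other should tell you how much of
the difference comes from the lock (and any per-query connection setup)
rather than from PostgreSQL itself.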

- Richard Huxton
