On Tue, 2003-08-12 at 15:36, Tom Lane wrote:
> Gavin Sherry <[EMAIL PROTECTED]> writes:
> > I wasn't interested in measuring the performance of yacc -- since I know
> > it is bad. It was a basic test which wasn't even meant to be real
> > world. It just seemed interesting that the numbers were three times
> > slower than on the other databases I ran it on. Here is the script
> > which generates the SQL:
> > SQL:
> 
> > echo "create table abc(t text);"
> > echo "begin;"
> > c=0
> > while [ $c -lt 100000 ]
> > do
> >         echo "insert into abc values('thread1');";
> >         c=$[$c+1]
> > done
> > echo "commit;"
> 
> Of course the obvious way of getting rid of the parser overhead is not
> to parse every time --- viz, to use prepared statements.
> 
> I have just finished running some experiments that compared a series of
> INSERTs issued via PQexec() versus preparing an INSERT command and then
> issuing new-FE-protocol Bind and Execute commands against the prepared
> statement.  With a test case like the above (one target column and a
> prepared statement like "insert into abc values($1)"), I saw about a 30%
> speedup.  (Or at least I did after fixing a couple of bottlenecks in the
> backend's per-client-message loop.)
> 
> Of course, the amount of work needed to parse this INSERT command is
> pretty trivial.  With just a slightly more complex test case:
>       create table abc (f1 text, f2 int, f3 float8);
> and a prepared statement like
>       PREPARE mystmt(text,int,float8) AS insert into abc values($1,$2,$3)
> there was a factor of two difference in the speed.

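For anyone wanting to reproduce this without writing a libpq client, a sketch of the same generator script using SQL-level PREPARE/EXECUTE follows. Note this is my adaptation, not the exact test Tom ran: the SQL EXECUTE statement itself still passes through the parser each time, so the full speedup he measured comes from the protocol-level Bind/Execute messages, which plain psql input can't exercise.

```shell
# Prepared-statement variant of the quoted benchmark generator.
# PREPARE parses/plans the INSERT once; each EXECUTE reuses the plan.
echo "create table abc(t text);"
echo "prepare mystmt(text) as insert into abc values(\$1);"
echo "begin;"
c=0
while [ $c -lt 100000 ]
do
        echo "execute mystmt('thread1');"
        c=$[$c+1]
done
echo "commit;"
```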
Do you happen to have any numbers comparing prepared inserts in a single
transaction against copy?
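For comparison, the COPY form of the same load would be generated along these lines (again a sketch in the style of the script above, not a measured result): COPY ships all rows in one command, so the per-row parse/plan/execute overhead disappears entirely.

```shell
# COPY variant: one command, raw rows on stdin, terminated by \.
echo "create table abc(t text);"
echo "copy abc from stdin;"
c=0
while [ $c -lt 100000 ]
do
        echo "thread1"
        c=$[$c+1]
done
echo '\.'
```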
