The table is MyISAM, so anything having to do with transactions won't have any effect.
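For a transactional engine like InnoDB, though, the SET AUTOCOMMIT=0 / START TRANSACTION / COMMIT suggestion below amounts to paying for one commit across the whole batch instead of one per UPDATE. A rough sketch of that pattern in Python, using the stdlib sqlite3 module as a stand-in transactional engine (the table layout and row values here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions ourselves, like SET AUTOCOMMIT=0
conn.execute("CREATE TABLE company (id INTEGER PRIMARY KEY, data_source_id INTEGER)")
conn.executemany("INSERT INTO company VALUES (?, ?)",
                 [(i, 100) for i in range(1000)])

# Equivalent of START TRANSACTION; ... many updates ...; COMMIT:
# a single commit for the whole batch rather than one per statement.
conn.execute("BEGIN")
for company_id in range(1000):
    conn.execute(
        "UPDATE company SET data_source_id = data_source_id - 1 WHERE id = ?",
        (company_id,))
conn.execute("COMMIT")
```

With per-statement autocommit, each UPDATE is its own durable transaction; batching them defers that cost to a single COMMIT, which is where the console's default could be losing cycles on a transactional table.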
On Tue, Sep 2, 2008 at 4:54 PM, Michael Dykman <[EMAIL PROTECTED]> wrote:
> To hazard a guess here: the mysql console typically starts up in
> autocommit mode. The necessity of committing after each
> statement/execution could be costing you some cycles.
>
> What is the table type of your test data? I would not expect
> transactions in MyISAM to give you any benefit, but under any of the
> ACID-compliant types like InnoDB, it could be a factor. I'm guessing
> that you are using a transactional table type and perl-dbd is smart
> enough to automatically wrap you in a transaction anyhow.
>
> A suggestion:
>
> at the start of both scripts add
> SET AUTOCOMMIT=0;
> START TRANSACTION;
>
> and then, at the end insert
> COMMIT;
>
> See if that improves the numbers on the prepared statement.
>
>  - michael dykman
>
> On Tue, Sep 2, 2008 at 5:41 PM, Jim Lyons <[EMAIL PROTECTED]> wrote:
> > I have a question about the prepare/execute commands provided by mysql.
> >
> > I devised a benchmark comparing updating a table a statement at a
> > time versus using a prepare/execute pair. The table had a million
> > rows.
> >
> > The statement-at-a-time file looked like:
> >
> > reset query cache;
> > update company set data_source_id = data_source_id - 1 where id =
> > 12256006995591;
> > update company set data_source_id = data_source_id - 1 where id =
> > 120749009859430;
> > .
> > . a million rows of updates
> > .
> >
> > The prepare/execute file looked like:
> >
> > reset query cache;
> > prepare upd from
> > "update company set data_source_id = data_source_id + 1 where id = ?";
> > set @id = 12256006995591;
> > execute upd using @id;
> > set @id = 120749009859430;
> > execute upd using @id;
> > .
> > . a million rows of the same
> > .
> >
> > I had thought the prepare/execute file would run faster, since
> > supposedly mysql had already done a hard parse on the command. But it
> > ran significantly slower (about 25%) than the statement-at-a-time
> > version.
> > This was consistent over several tries. Also, I tried one version
> > using Perl DBI and its prepare and execute commands, and it did the
> > same data in half the time of the statement-at-a-time version.
> >
> > There's not any likelihood of extraneous events screwing up the
> > results. I ran the commands one right after the other, several times,
> > in different orders. Also, they were run on the same machine, my own
> > private server, so there was no contention from other processes to
> > account for the difference. There was no query cache defined, and I
> > used different operators for the two statements so no command was the
> > same as any other. I even reset the query cache even though it wasn't
> > being used. So, I'm convinced the results are real.
> >
> > Does anyone know the reason for the significantly better performance
> > of statement-at-a-time versus prepare/execute? And why
> > prepare/execute in the mysql commandline program is so much slower
> > than prepare/execute in Perl?
> >
> > Thanks.
> >
> > Jim
> >
> > --
> > Jim Lyons
> > Web developer / Database administrator
> > http://www.weblyons.com
>
> --
>  - michael dykman
>  - [EMAIL PROTECTED]
>
>  - All models are wrong. Some models are useful.

--
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com
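One difference worth noting between the two prepare/execute tests in the thread: the mysql-console script pays for two statements per row (the `set @id = ...` plus the `execute upd using @id`), while a DBI-style client prepares the statement once and binds the value directly in each execute call. A minimal sketch of that prepare-once, bind-per-execute pattern in Python, with the stdlib sqlite3 module standing in for DBD::mysql (the table layout and ids are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE company (id INTEGER PRIMARY KEY, data_source_id INTEGER)")
conn.executemany("INSERT INTO company VALUES (?, ?)",
                 [(i, 10) for i in range(100)])

# Prepare once, execute many times with a bound parameter -- the
# DBI-style  $sth = $dbh->prepare(...); $sth->execute($id)  pattern.
# The value is bound in the same call, with no per-row SET statement.
cur = conn.cursor()
sql = "UPDATE company SET data_source_id = data_source_id + 1 WHERE id = ?"
for company_id in (3, 17, 42):
    cur.execute(sql, (company_id,))
conn.commit()
```

Because the SQL text is identical on every call, sqlite3 reuses the compiled statement; DBI similarly keeps the statement handle around across executes, which fits the observed result that the per-row round trips, not the parse, dominate the cost.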