[EMAIL PROTECTED] wrote on 11.10.2006 16:54:
> Do a simple test to see my point:
>
> 1. create table test (id int4, aaa int4, primary key (id));
> 2. insert into test values (0,1);
> 3. Execute "update test set aaa=1 where id=0;" in an endless loop

As others have pointed out, committing the data is a vital step when testing the performance of a relational/transactional database.

What's the point of updating the same record over and over and never committing the changes? Or were you running in autocommit mode? Of course MySQL will be faster if you don't use transactions. Just as a plain text file will be faster than MySQL.
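To make that concrete: under autocommit every statement implicitly pays the full commit cost, while an endless uncommitted loop never pays it at all. A rough sketch of the two cases (only the UPDATE itself is taken from the test above):

    -- Case 1: autocommit. Each statement is implicitly run as its own
    -- transaction, so every iteration pays for a commit (typically a
    -- WAL flush to disk):
    BEGIN;
    UPDATE test SET aaa = 1 WHERE id = 0;
    COMMIT;

    -- Case 2: one transaction that is never committed. The updates are
    -- never made durable, and no other session ever sees them:
    BEGIN;
    UPDATE test SET aaa = 1 WHERE id = 0;
    -- ... loop forever, no COMMIT ...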

You are claiming that this test simulates the load that your application puts on the database server. Does this mean that you never commit data when running on MySQL?

This test also proves (in my opinion) that a multi-db application written to the lowest common denominator simply won't perform equally well on all platforms. I'm pretty sure the same test would show very bad performance on an Oracle server as well. It simply ignores the basic optimizations one should apply in a transactional system, like batching updates and committing transactions (see the sketch below).
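For illustration, here is one way to batch on the PostgreSQL side. This is just a sketch: the function name update_batch and the batch size of 1000 are my own choices, not anything from the original test.

    -- Fold many updates into a single transaction: called under
    -- autocommit, all n updates share one COMMIT.
    CREATE OR REPLACE FUNCTION update_batch(n int) RETURNS void AS $$
    BEGIN
        FOR i IN 1..n LOOP
            UPDATE test SET aaa = 1 WHERE id = 0;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    SELECT update_batch(1000);  -- 1000 updates, one commit

Run in a loop, this measures how the engine handles committed work rather than how quickly it can defer a commit.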

Just my 0.02€
Thomas

