Hello Michael,

Saturday, April 07, 2001, 3:14:42 AM, you wrote:


MW> Hi!

>>>>>> "Peter" == Peter Zaitsev <[EMAIL PROTECTED]> writes:

Peter>> Hello mysql,
Peter>>   I made one more test of mysql performance.
  
Peter>>   PIII-700/768Mb/Linux 2.4.2 glibc 2.1.3
  
Peter>>   I had the following table with 12,000,000 rows (well, I checked
Peter>>   this with 10,000 rows as well and the speed does not differ much)
Peter>>   and I checked how fast I can select data by hash key - "select *
Peter>>   from g00pages where hash=1" - the query was constant and only one
Peter>>   row matched.  The speed was about 1800 queries/sec.

MW> Is hash a unique key?

No, it's not, but it has just a couple of collisions per 100,000
records.

Peter>>   So I decided to check what happens when selecting many pages at a
Peter>>   time, like "select * from g00pages where hash in (1,2,3...)" - the
Peter>>   best result I got was when selecting about 100 pages/query; this
Peter>>   gave me about 2 times more performance than selecting pages one
Peter>>   by one.  This looked even more upsetting.

MW> Why is this upsetting? Because you only got 3600 pages per second or
MW> because it was 2 times faster than single queries?
Both :)  I really expected about 10,000 selects per second one by
one, and much better scaling, like I got with the heap table, where I
got up to 70,000 pages per second.
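The batching effect discussed here can be sketched with SQLite in Python (SQLite stands in for MySQL, and the table/index names mirror the g00pages example; the data is made up for illustration). One query per key pays the per-query overhead every time, while one IN-list query pays it once:

```python
import sqlite3

# In-memory stand-in for the g00pages table from the test.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE g00pages (hash INTEGER, body TEXT)")
conn.execute("CREATE INDEX idx_hash ON g00pages (hash)")
conn.executemany("INSERT INTO g00pages VALUES (?, ?)",
                 [(i, "page %d" % i) for i in range(1000)])

keys = list(range(100))

# One query per key: parse/plan/dispatch overhead is paid 100 times.
one_by_one = [conn.execute("SELECT body FROM g00pages WHERE hash = ?",
                           (k,)).fetchone()[0] for k in keys]

# One IN-list query for all 100 keys: the overhead is paid once.
placeholders = ",".join("?" * len(keys))
batched = [row[0] for row in conn.execute(
    "SELECT body FROM g00pages WHERE hash IN (%s) ORDER BY hash"
    % placeholders, keys)]

assert one_by_one == batched
```

Both paths return the same rows; only the number of round trips through the query machinery differs, which is where the ~2x gap comes from.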



MW> The reason single queries are slower is of course that the
MW> initialization phase (reading the data from a socket, parsing,
MW> locking, checking which keys to use) is about half of the query time.

Yes, the problem is I thought this takes much more than 50% on such a
very simple query, when all the data is in memory.  The only thing that
really needs to call the OS is the data itself, as MyISAM does not
cache data; therefore I tested my ReiserFS and got 25,000 file
open+read+close operations per second on a directory with 1,500,000
files of 10 bytes each :)
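The open+read+close micro-benchmark described above can be sketched like this (at a toy scale: a hundred 10-byte files in a temporary directory rather than 1,500,000; file names are made up):

```python
import os
import tempfile

# Create a directory of tiny files, 10 bytes each, as in the test.
d = tempfile.mkdtemp()
for i in range(100):
    with open(os.path.join(d, "f%05d" % i), "wb") as f:
        f.write(b"0123456789")

# The measured loop: open + read + close for every file.
data = []
for i in range(100):
    with open(os.path.join(d, "f%05d" % i), "rb") as f:
        data.append(f.read())

assert all(chunk == b"0123456789" for chunk in data)
```

Wrapping the second loop in a timer and dividing by the file count gives the per-operation cost being compared against MySQL's per-query overhead.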

Peter>>   I've tried the same thing with a heap table - the results are
Peter>>   strange. The performance starts from about 3600 pages/sec; as I
Peter>>   increased the number of pages/query I got up to 70,000
Peter>>   pages/sec, which is quite good :)

Peter>>   The strange thing is why the results differ so much when ALL the
Peter>>   data fits in memory without any problem.

Peter>>   Well, of course I'll soon try glibc 2.2.x with your patches, but I
Peter>>   don't see how it can improve things when I have only one thread
Peter>>   running, one CPU, and the query is relatively big.

Peter>>   Heikki made tests which also show some strange things - for
Peter>>   example, why INSERT is FASTER than SELECT.

MW> I haven't seen the test but I can imagine this is true in some
MW> context.  The reason for this is that a SELECT has to go through many
MW> optimization stages to find out what indexes to use and what queries
MW> to do.  This is one basic fault with SQL; the optimizer has to do a
MW> lot of work...

Well, but when I select multiple pages the optimizer does its work
only once, and this shows clearly with heap tables; with MyISAM,
though, this somehow is not true.

Another interesting thing: if I tell MySQL explicitly to use the
index, will it reduce the optimization cost?
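For illustration, an index hint pins the plan so the optimizer skips index selection. MySQL spells this "USE INDEX (idx_hash)" after the table name; SQLite's analogous INDEXED BY clause can be tried directly in Python (the table and index names here are hypothetical, mirroring the g00pages example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE g00pages (hash INTEGER, body TEXT)")
conn.execute("CREATE INDEX idx_hash ON g00pages (hash)")
conn.execute("INSERT INTO g00pages VALUES (1, 'page 1')")

# INDEXED BY forces the query to use idx_hash, bypassing index selection
# (the MySQL equivalent is: SELECT ... FROM g00pages USE INDEX (idx_hash) ...).
row = conn.execute(
    "SELECT body FROM g00pages INDEXED BY idx_hash WHERE hash = ?",
    (1,)).fetchone()
assert row == ("page 1",)
```

Whether the hint saves measurable time depends on how much of the optimizer still runs after index choice is fixed, which is exactly the open question in the thread.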

It's a pity that MySQL lacks something like a PROFILE call for a
query, so I could check how much time is spent in the different phases
of query execution.

MW> Sergei is actually working on something interesting for people that
MW> need all the speed they can get:  We are going to provide an interface
MW> directly to the storage handler, without any optimizations.
MW> This will of course not be SQL, but we expect this to be VERY fast as
MW> long as you are searching on a specific key...

Yes, that's nice.  The other possible speedup is prepared statements,
or an execution plan cache (classical solutions) :)
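The prepared-statement idea can be sketched in Python: sqlite3 keeps an internal cache of compiled statements, so reusing one parameterized SQL string amortizes parsing and planning across calls - the same principle as server-side prepared statements or a plan cache (table name again mirrors the g00pages example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE g00pages (hash INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO g00pages VALUES (?, ?)",
                 [(i, "page %d" % i) for i in range(1000)])

# Reusing the exact same SQL string lets sqlite3's statement cache skip
# re-parsing and re-planning on every call; only execution is repeated.
sql = "SELECT body FROM g00pages WHERE hash = ?"
rows = [conn.execute(sql, (k,)).fetchone()[0] for k in range(100)]
assert rows[0] == "page 0"
```

This attacks the same initialization overhead (parsing, plan building) that the thread identifies as roughly half the single-query cost.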

Peter>>   Even looking at the context switches does not explain the thing
Peter>>   much - my results show that this system can do about 300,000
Peter>>   context switches per second, so 5,000-10,000 context switches
Peter>>   per second does not eat much CPU.
  
MW> To be able to comment, I would need to make a gprof of this.
MW> (This has to wait at least until the end of next week...)

OK. This is not really a pain - just comments :)
My current pains are problems with repairing tables and... the keycache :)


-- 
Best regards,
 Peter                            mailto:[EMAIL PROTECTED]



---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
