I have been benchmarking an in-house higher-level library for accessing a 
MySQL database; it uses the mysql_store_result and mysql_data_seek 
functions to provide random access to the records of a selected result set.

This has brought to light that for 'large' result sets (over 10,000 records), 
mysql_data_seek starts to consume an enormous portion of the total time, even 
when stepping through all the records sequentially just once. Is this due to 
a limitation in the mysql_data_seek function? In one instance the comparison was:

mysql_use_result + mysql_fetch_row (through total result set) = 3 seconds

mysql_store_result + mysql_data_seek (through total result set) = 169 seconds

This is without even taking into account the mysql_fetch_row call required in 
the second case to actually access the data.
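One guess at the cause (an assumption on my part, not confirmed from the client 
library source) is that mysql_data_seek repositions by walking the stored row 
list from the head, so each seek costs O(offset) and a full sequential pass via 
seeking costs O(n^2) link traversals, versus O(n) for plain fetching. A minimal 
Python model of the two access patterns (all names here are illustrative, not 
the MySQL API):

```python
class Row:
    """One row in a singly linked list, as a client library might
    hold a store_result set in memory."""
    def __init__(self, data, next_row=None):
        self.data = data
        self.next = next_row

def build_rows(n):
    """Build a linked list of n rows and return the head."""
    head = None
    for i in range(n - 1, -1, -1):
        head = Row(i, head)
    return head

def pass_with_seek(head, n):
    """Visit every row via a head-relative seek(i), the suspected
    data_seek behaviour: n seeks averaging n/2 hops each -> O(n^2).
    Returns the total number of link traversals performed."""
    hops = 0
    for i in range(n):
        row = head
        for _ in range(i):       # walk i links from the head
            row = row.next
            hops += 1
    return hops

def pass_sequential(head):
    """Visit every row by following next pointers once -> O(n).
    Returns the total number of link traversals performed."""
    hops = 0
    row = head
    while row is not None:
        row = row.next
        hops += 1
    return hops
```

For n = 10,000 the seeking pass performs n(n-1)/2, i.e. roughly 50 million, 
link traversals against 10,000 for the sequential pass, which would be 
consistent with a CPU-bound slowdown of the kind measured above.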

Is this a known 'problem' with the mysql_data_seek function? ...

Inspection of system statistics showed that physical memory was not yet full, 
but the processor was running at 100%.

Any enlightenment would be appreciated,



   Jerry van Leeuwen
   Business Analyst - Trans Data
   Tel: 02 - 9630 3533
   Mobile: 0407 - 480 811


---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
