At 09:43 30.07.02 -0500, you wrote:
>At 16:27 +0200 7/30/02, Daniel Brockhaus wrote:
>[...]
>>
>>Whoa. Each record has been split into 1000 (one thousand!) blocks. 
>>Reading one of these records would require 1000 reads from your hard disk. 
>>That's about 14 seconds to read a record of 16K length! (You might get 
>>lucky and get better values from the OS's read-ahead logic and disk cache.)
>>
>>Now sit back and marvel at the efficiency of mysql's dynamic record handling.
>></sarcasm>
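
The 14-second figure above follows from simple arithmetic -- a quick sketch, assuming (as the quoted numbers imply) roughly 14 ms of seek plus rotational latency per fragment read:

```python
# Back-of-envelope cost of reading one fragmented record.
# The 1000-fragment count comes from the thread above; the 14 ms
# per-read latency is an assumed figure for a disk of that era.
FRAGMENTS = 1000
LATENCY_MS = 14  # assumed average seek + rotational delay per read

total_seconds = FRAGMENTS * LATENCY_MS / 1000
print(f"~{total_seconds:.0f} seconds to read one 16K record")
```

A contiguous 16K record, by contrast, would cost one seek plus a single sequential read -- well under 20 ms.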
>
>Use OPTIMIZE TABLE periodically to defragment your table.
>
>http://www.mysql.com/doc/D/y/Dynamic_format.html
>http://www.mysql.com/doc/O/P/OPTIMIZE_TABLE.html
>http://www.mysql.com/doc/O/p/Optimisation.html

Yes, of course. But that's just another way of working around the problem, 
you know? I mean, who wants to take a database down for an hour at least 
once a week?
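
(For what it's worth, the usual compromise is to script the OPTIMIZE TABLE run off-peak rather than doing it by hand. A minimal cron sketch -- the database name, table name, and schedule are placeholders, not anything from this thread, and OPTIMIZE TABLE still locks the table while it runs:)

```shell
# Hypothetical crontab entry: defragment 'mytable' in 'mydb'
# every Sunday at 04:30, when traffic is assumed to be lowest.
# 30 4 * * 0  mysql mydb -e "OPTIMIZE TABLE mytable"

# Equivalent one-off invocation from a shell:
mysql mydb -e "OPTIMIZE TABLE mytable"
```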


