Ok, sorry, I focused too much on the optimization of the table. This is
an isolated problem, and most of what is written on that page does not
apply to this issue. It is like saying "When the sun shines, there is
always a shadow", which is generally true, but not on June 21 at 12:00pm
on the equator. Anyway...

Why are fixed-length rows faster than dynamic-length rows? Because the DB
can calculate the record length once for all records in the table and
then use fseek() to jump directly to the beginning of any record. With
dynamic structures this is not possible, because every record differs.
Every field that can be dynamic (VARCHAR) has to be checked for its
length (the first byte(s) in the data file) before the REAL length is
known, so no global use of fseek() is possible.
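
To make the difference concrete, here is a minimal C sketch of the two
access patterns. The layout (260-byte fixed records, a 2-byte length
prefix in front of each dynamic record) and the function names are my
own assumptions for illustration, NOT MySQL's actual on-disk format:

#include <stdio.h>

#define RECORD_LEN 260  /* assumed fixed record size */

/* Fixed length: record n starts at n * RECORD_LEN, so one fseek()
 * and one read are enough. */
static int read_fixed_record(FILE *f, long n, unsigned char *buf)
{
    if (fseek(f, n * (long)RECORD_LEN, SEEK_SET) != 0)
        return -1;
    return fread(buf, 1, RECORD_LEN, f) == RECORD_LEN ? 0 : -1;
}

/* Dynamic length: to reach record n, every length prefix before it has
 * to be read, because the offsets cannot be calculated up front. */
static int read_dynamic_record(FILE *f, long n, unsigned char *buf,
                               size_t bufsz)
{
    unsigned char pre[2];
    size_t len = 0;

    if (fseek(f, 0L, SEEK_SET) != 0)
        return -1;
    for (long i = 0; i <= n; i++) {
        if (fread(pre, 1, 2, f) != 2)
            return -1;
        len = (size_t)pre[0] | ((size_t)pre[1] << 8);
        if (i < n && fseek(f, (long)len, SEEK_CUR) != 0)
            return -1;                    /* skip the records before n */
    }
    if (len > bufsz)
        return -1;
    return fread(buf, 1, len, f) == len ? 0 : -1; /* only the REAL bytes */
}

int main(int argc, char **argv)
{
    unsigned char buf[65536];
    FILE *f = argc > 1 ? fopen(argv[1], "rb") : NULL;
    if (!f)
        return 1;
    /* Record 1000: one seek for fixed, a walk over 1000 prefixes for
     * dynamic. */
    printf("fixed:   %d\n", read_fixed_record(f, 1000, buf));
    printf("dynamic: %d\n", read_dynamic_record(f, 1000, buf, sizeof buf));
    fclose(f);
    return 0;
}

(In reality the index spares you that walk, see below, but the point is
that only fixed-length rows let you compute a record's position
directly.)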

However, if you have good indexes this does not turn out to be that much
of a problem, because the index does the "fseek()" for you (it gives you
the position of the record). Then you have to find the beginning of the
field within the record, which is slower on dynamic rows (but not by
much if you do not have too many columns). The difference, again, is
that with dynamic length only the REAL number of bytes is fetched into
memory (the real length is stored in the first bytes of the field),
whereas with fixed length the WHOLE field is fetched into memory. And
this can be the bottleneck and the reason why Alex's queries now take
longer. An extreme case would be a field defined as VARCHAR(255) that
always contains only 1 character. Then you have a disk read of the first
byte (the real length) and then of the content (again 1 byte), so you
read 2 bytes from disk. If on the other hand you convert this to
CHAR(255), you ALWAYS have to read 255 bytes into memory. Given that the
result set is huge, exactly these disk reads limit your performance,
because you read 254 bytes for nothing (which is 253 bytes more than
with the dynamic structure). Calculating and finding the real length, on
the other hand, takes practically no time at all (given that the read
head of the hard disk does not move between the two reads, which it
normally should not).
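
Purely as a back-of-the-envelope illustration of that extreme case (the
1-byte length prefix and the 90 million rows are assumptions, just to
show the order of magnitude):

#include <stdio.h>

int main(void)
{
    const long long rows       = 90000000LL; /* assumed row count        */
    const long long actual_len = 1;          /* VARCHAR(255) holding 1   */
    const long long fixed_len  = 255;        /* CHAR(255), always full   */

    /* dynamic: 1 assumed length byte + the real data, per row */
    long long varchar_bytes = rows * (1 + actual_len);
    /* fixed: the whole field, every row */
    long long char_bytes    = rows * fixed_len;

    printf("VARCHAR(255): %lld bytes read\n", varchar_bytes);
    printf("CHAR(255):    %lld bytes read\n", char_bytes);
    printf("wasted:       %lld bytes\n", char_bytes - varchar_bytes);
    return 0;
}

That is roughly 180 MB of useful reads against well over 20 GB of reads
for the same data, which is exactly the kind of difference that shows up
in a huge result set.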

Hope this is clearer now. If I sounded offensive in my first mail, that
was NOT the intent; with my limited English I just type what comes to my
mind :( blame it on my Austrian heritage :)

Cheers
/rudy

ps: as to converting dynamic length to fixed length, PROCEDURE ANALYSE()
could be handy.


-----Original Message-----
From: Veysel Harun Sahin [mailto:[EMAIL PROTECTED] 
Sent: dinsdag 15 juli 2003 16:24
To: Rudy Metzger
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Managing big tables

Sorry rudy, but I can not understand what you try to say!

I can only say that if you follow the link below, read the explanations
on that page and also follow the related links, you can find answers to
Alex's problems.

[EMAIL PROTECTED] wrote:

>Always take care what you want to achieve! And consider the
>circumstances.
>
>Yes, adding a lot of indexes makes queries faster. But makes
>inserts/deletes/updates slower.
>
>Alex's problem is NOT that his/her queries take too long, the problem
>is that optimize takes too long. Which is something completely
>different.
>
>Cheers
>/rudy
>
>-----Original Message-----
>From: Veysel Harun Sahin [mailto:[EMAIL PROTECTED] 
>Sent: dinsdag 15 juli 2003 15:22
>To: [EMAIL PROTECTED]
>Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
>Subject: Re: Managing big tables
>
>http://www.mysql.com/doc/en/Data_size.html
>
>[EMAIL PROTECTED] wrote:
>
>  
>
>>Hello,
>>i've got a little problem,
>>we're using mysql with two big tables (one has 90 Mio. Rows (60 Gb on 
>>HD), the other contains nearly 200.000.000 (130 Gb on HD).
>>Now we want to delete some rows from these tables to free diskspace. 
>>It seems that MySQL frees the harddisk-space which
>>was used by these rows only after optimization, which lasts very long 
>>on these tables. Both tables are "dynamic" in terms of row-format
>>what seems to extend the time needed for optimization. I tried to 
>>convert the smaller one to "fixed"-row-format, which increased the
>>disk-space of its data-file from 30 Gb to 60 Gb. This would not be the
>>problem, but some SQLs which are run daily
>>on this table now run 4 times slower than with dynamic structure.
>>
>>So, my questions are:
>>1) Did I do something wrong while converting to fixed row-format? (I
>>found no indication)
>>2) Is the fixed structure really faster on optimization ?
>>3) Can anybody confirm the slow-down on big tables when converted from
>>dynamic to fixed ?
>>   (on small tables fixed were faster)
>>
>>I'm running SuSE Linux, Kernel 2.4.20, the above behaviour could be 
>>reproduced with our productive server (MySQL 3.23) and
>>with a test server (MySQL 4.0.12).
>>
>>thanks in advance,
>>alex
>>

-- 

Veysel Harun Sahin
[EMAIL PROTECTED]



