> Here is the background:  Anyone that is running a huge system like MARC
> that has millions of uncompressed blob records in huge tables, needs to be
> able to migrate, in real-time and without down-time, to compressed blobs.
> Therefore, we need a way to know if a given field is compressed or not.

I hear you on that! We did the compression on the application end. When we
started compressing, all of the blobs in the table were uncompressed except
the newly added ones. We took advantage of the fact that zlib fails cleanly
when asked to decompress data that isn't compressed. So we wrote a function
my_decompress() that takes the blob, tries to decompress it, and if that
fails just returns the original (assumed to be already uncompressed). Works
great, and the decompression work gets divided among the webservers, which
scales better than having MySQL do it.
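A minimal sketch of that try-then-fall-back idea in Python (the name my_decompress() is from the post; the exact implementation here is an assumption, not the original code):

```python
import zlib

def my_decompress(blob: bytes) -> bytes:
    """Return the decompressed blob, or the blob unchanged if it is
    not zlib-compressed.

    Relies on zlib raising an error for data that lacks a valid
    zlib header/stream, so legacy uncompressed rows pass through.
    """
    try:
        return zlib.decompress(blob)
    except zlib.error:
        # Decompression failed: assume this row predates compression
        # and was stored as plain bytes.
        return blob
```

One caveat worth noting: this assumes no legacy blob happens to start with a valid zlib header, which is very unlikely but not impossible for arbitrary binary data.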

However, you should develop a way to take tables offline. A lack of proper
table maintenance can slow things down by a factor of 10 or more (and it is
one of the reasons we cannot use InnoDB).

-steve--



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
