On 5 Jun 2002, at 7:50, Jared Richardson wrote:

> This table is part of a product that contains publicly available (and
> always expanding) biological data in addition to large companies'
> internal data.  A one-terabyte cap very well could come back to haunt
> us one day! (sadly enough!)

I fear that you'll run into problems with more than just MySQL if you 
have files that large, but I don't have any direct experience with 
them.  Others on the list may have more experience with really huge 
files.

Is there any reasonable way of breaking up your data into sets (by 
date, by company, by species?) and having many separate tables that 
you can access through a merge table when necessary?  That could be 
especially useful if older data gets archived, if some searches don't 
need all the data, or if some of the separate tables won't ever 
change and can thus be compressed.
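As a sketch of that approach (the table and column names here are 
hypothetical, and the syntax assumes MyISAM tables under MySQL 
3.23/4.0, where MERGE tables were introduced):

```sql
-- Hypothetical split of the data by year into separate MyISAM tables
CREATE TABLE seq_2001 (
    id       BIGINT NOT NULL,
    species  VARCHAR(64),
    sequence TEXT,
    KEY (id)
) TYPE=MyISAM;

-- seq_2002 would be created with the identical structure

-- A MERGE table presents the yearly tables as one logical table;
-- INSERT_METHOD=LAST sends new rows to the last table in the UNION
CREATE TABLE seq_all (
    id       BIGINT NOT NULL,
    species  VARCHAR(64),
    sequence TEXT,
    KEY (id)
) TYPE=MERGE UNION=(seq_2001, seq_2002) INSERT_METHOD=LAST;
```

Tables for older years that never change could then be compressed 
with myisampack and still be read through the MERGE table.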

Another way of making your indexes smaller would be to use 
PACK_KEYS=1 when creating your table, since your BIGINT IDs will 
probably compress well.
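For example (the table name and columns are made up; PACK_KEYS is a 
MyISAM table option, and the trade-off is that packed keys make 
inserts and updates somewhat slower):

```sql
-- Hypothetical table: PACK_KEYS=1 packs the index keys, which helps
-- when adjacent BIGINT key values share long common prefixes
CREATE TABLE sequences (
    id   BIGINT NOT NULL,
    data TEXT,
    PRIMARY KEY (id)
) TYPE=MyISAM PACK_KEYS=1;
```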

-- 
Keith C. Ivey <[EMAIL PROTECTED]>
Tobacco Documents Online
http://tobaccodocuments.org

---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php