Do you compress the data?

I use Perl and Compress::Zlib::memGzip to keep the size down.
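
A rough sketch of how that can look with DBI (the "docs" table, its columns
and the connection details are just placeholders for whatever you actually
use):

  use DBI;
  use Compress::Zlib qw(memGzip memGunzip);

  # assumed table: docs (doc_id BIGINT, body MEDIUMBLOB)
  my $dbh = DBI->connect('DBI:mysql:newsfeed', 'user', 'pass',
                         { RaiseError => 1 });

  sub store_doc {
      my ($id, $text) = @_;
      $dbh->do('INSERT INTO docs (doc_id, body) VALUES (?, ?)',
               undef, $id, memGzip($text));    # compress before storing
  }

  sub fetch_doc {
      my ($id) = @_;
      my ($gz) = $dbh->selectrow_array(
          'SELECT body FROM docs WHERE doc_id = ?', undef, $id);
      return memGunzip($gz);                   # decompress on the way out
  }

News text usually gzips down to a fraction of its original size, so this
alone buys a fair amount of headroom.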

Also, you could split the data into monthly tables with something like
this (a sketch of the cron job follows the list):
o    a MESSAGE_DATE column set on insertion
o    a cron script run once a month to:
     o  create another table for the month to be archived
     o  INSERT INTO ... SELECT the rows for the dates to be archived
     o  DELETE ... WHERE to remove them from the main table
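
A sketch of that cron job in Perl/DBI (the messages table, its columns and
the connection details are assumptions, not anything you already have):

  use DBI;
  use POSIX qw(strftime);

  # assumed main table: messages (message_id BIGINT, message_date DATE, body ...)
  my $dbh = DBI->connect('DBI:mysql:newsfeed', 'user', 'pass',
                         { RaiseError => 1 });

  # run from cron early in the new month, e.g. "0 4 1 * *"
  my @last_month = localtime(time - 60 * 60 * 24 * 3);  # a few days back = last month
  my $archive    = 'messages_' . strftime('%Y%m', @last_month);  # e.g. messages_200111
  my $from       = strftime('%Y-%m-01', @last_month);
  my $to         = strftime('%Y-%m-01', localtime);

  # CREATE TABLE ... SELECT works on 3.23; indexes have to be added afterwards
  $dbh->do("CREATE TABLE $archive SELECT * FROM messages
            WHERE message_date >= ? AND message_date < ?",
           undef, $from, $to);
  $dbh->do('DELETE FROM messages WHERE message_date >= ? AND message_date < ?',
           undef, $from, $to);

The cautious version checks that the new table's row count matches what the
SELECT should have copied before running the DELETE.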

If you want to be able to search the archive, you will also need
word-index tables.  A simple version would use:


Word_Hash      INTEGER,
Message_Id     BIGINT,
Occurrences    INTEGER,
Message_Date   DATE

Use some nice CRC-ish algorithm to turn each word into an INTEGER.  Then

SELECT Message_Id, Message_Date FROM <index table>
WHERE Word_Hash=? ORDER BY Occurrences DESC

will give the messages with the most occurrences of the word at the top of
the results.

If you do not need to archive, or if you use another table to store each
message's location, such as:

Message_Id     BIGINT,
Data_Table     VARCHAR(255) if you name the tables, BIGINT if you number them

then you do not need to store Message_Date in the index tables.
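
For the lookup-table route, that could be created as something like (the
names are again just placeholders):

  use DBI;

  my $dbh = DBI->connect('DBI:mysql:newsfeed', 'user', 'pass',
                         { RaiseError => 1 });

  # maps each message to whichever monthly table actually holds its body
  $dbh->do(q{
      CREATE TABLE message_location (
          message_id  BIGINT       NOT NULL PRIMARY KEY,
          data_table  VARCHAR(255) NOT NULL
      )
  });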

M


-----Original Message-----
From: Tim Wood [mailto:[EMAIL PROTECTED]]
Sent: 12 December 2001 23:16
To: [EMAIL PROTECTED]
Subject: 4 G table limit?


Hi

3 months ago I started using mysql to archive a live news feed that our
company is paying for.  Being a lazy unit, I am putting all the document
bodies into one table.  After 3 months, that table has grown to 2G - so
it's natural enough to assume that in another 3, it will grow to 4G or
so - around the current mysql table size limit (mysql v. 3.23.36, kernel
2.4.2-2).

As mentioned above, I'm a lazy unit, so if possible I would like to avoid
restructuring the DB to spread the document body table across >1 table
(thus enabling >4G of doc bodies), and am instead looking for some easier
way to do it (eg upgrading mysql, or using a different fs type, etc).

Does anyone out there know of any
- future plans by the mysql development crew to increase table size
limits by eg using their own custom filesystem type?
- any kernel fs patches I might apply to get around the 4G limit imposed
by the linux kernel
- any alternative filesystems (JFS?) that might permit greater table
sizes under mysql
- any other tested and functioning workarounds to this issue?

Any suggestions will be appreciated
Cheers
Tim
--
Tim Wood
Predictive Technologies
ph +61 3 8344 0395 (BH) +61 413 845 317
----
This is clearly another case of too many mad scientists,
                and not enough hunchbacks.



---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail
<[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php

