Hello Jeremy,

> > I have a curious issue here, maybe someone can help.
> >
> > I have a single process that inserts data into a table that contains
> > purely logging information.  This table is then searched by our Care
> > department to troubleshoot issues.  I am looking for the best way to
> > store this data, and the structure on the backend.
> >
> > There are 50 million inserts into table LOG a day.  The primary index
> > on the table is seconds since 1970.  I only need to keep 60 days' worth
> > of data, and the table is only used for read purposes.  These are my
> > design criteria, but my problem is how to delete old data without
> > crashing the log writer that is attached to the table.
>
> OK, how about this:
>
> Use MyISAM and MERGE tables.  Keep one table per day.  E.g.:
>
> log_2005_01_15
> log_2005_01_16
> log_2005_01_17
> log_2005_01_18
>
> etc.
>
> Use MERGE tables to give you the 60 day (and perhaps also 30 day, 14
> day, 7 day) read views that you need, like so:
>
> CREATE TABLE log_view_7day (
>    ...
> ) TYPE=MERGE UNION=(
>    log_2005_01_12, log_2005_01_13, log_2005_01_14, log_2005_01_15,
>    log_2005_01_16, log_2005_01_17, log_2005_01_18
> );
>
> You can then do all of your inserts from the log writer into the "today"
> table, and do your reads against the various MERGEs.
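>
> For instance, a minimal sketch of the two sides (the column names
> log_time and message are made up for illustration):
>
> -- the log writer appends only to today's base table
> INSERT INTO log_2005_01_18 (log_time, message)
> VALUES (UNIX_TIMESTAMP(), 'something happened');
>
> -- Care reads through a MERGE view, which spans all of its days
> SELECT * FROM log_view_7day
> WHERE log_time >= UNIX_TIMESTAMP() - 3600;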
>
> Every day at exactly midnight, you would use ALTER TABLE (which is
> atomic) to redefine the UNION list of each of the MERGE
> tables.
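>
> A rough sketch of that rotation, continuing the table names above
> (the new day's table and the dropped date are illustrative):
>
> -- create the next day's empty table with the same layout
> CREATE TABLE log_2005_01_19 LIKE log_2005_01_18;
>
> -- atomically slide the 7-day window forward
> ALTER TABLE log_view_7day UNION=(
>    log_2005_01_13, log_2005_01_14, log_2005_01_15, log_2005_01_16,
>    log_2005_01_17, log_2005_01_18, log_2005_01_19
> );
>
> -- drop the base table that has aged out of the 60-day window
> DROP TABLE log_2004_11_20;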

Modifying metadata because you need a different view of your
data.

Am I the only one to whom this sounds ugly?

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com

