I have a curious issue here; maybe someone can help.

I have a single process that inserts data into a table that contains purely 
logging information.  This table is then searched by our Care department to 
troubleshoot issues.  I am looking for the best way to store this data, and 
for the right structure on the backend.

There are 50 million inserts into table LOG a day.  The primary index on the 
table is seconds since the Unix epoch.  I only need to keep 60 days' worth of 
data, and apart from the log writer the table is only read from.  Those are my 
design criteria; the problem is how to delete old data without crashing the 
log writer that is attached to the table.
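
For concreteness, the table is roughly like this (column names simplified for 
this mail; the engine choice is the open question):

    CREATE TABLE LOG (
        log_time INT UNSIGNED NOT NULL,   -- seconds since the epoch
        message  VARCHAR(255) NOT NULL,   -- stand-in for the real log columns
        KEY (log_time)                    -- non-unique: many rows share a second at this volume
    );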

I would prefer to use MyISAM, since the data is simple, and for this kind of 
workload it is much faster than an untuned InnoDB table.  But what would I do 
when it is time to delete data?  The delete would lock the table and hence 
freeze the application, and I cannot have that.
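
The purge I need would be something like this, and under MyISAM a DELETE holds 
a full table lock for its entire run, which is what would freeze the writer:

    -- drop everything older than 60 days; under MyISAM this
    -- blocks all inserts until it finishes
    DELETE FROM LOG
     WHERE log_time < UNIX_TIMESTAMP() - 60 * 86400;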

I thought of using a combination of InnoDB and MERGE tables, where the live 
LOG table is InnoDB and the LOG_ARCH data sits in MyISAM tables behind a MERGE 
table.  The application would know to read from both, and I could migrate data 
from the InnoDB table into the archive tables and then delete it from the LOG 
table without affecting the app, since the delete runs against an InnoDB table 
and takes only row-level locks.
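
Roughly what I have in mind; names are illustrative, and the MERGE pieces have 
to be identical MyISAM tables:

    -- live table: InnoDB, so the purge delete takes row locks only
    CREATE TABLE LOG (
        log_time INT UNSIGNED NOT NULL,
        message  VARCHAR(255) NOT NULL,
        KEY (log_time)
    ) ENGINE=InnoDB;

    -- one MyISAM piece per archive period, all with the same structure
    CREATE TABLE LOG_ARCH_1 (
        log_time INT UNSIGNED NOT NULL,
        message  VARCHAR(255) NOT NULL,
        KEY (log_time)
    ) ENGINE=MyISAM;
    -- ... LOG_ARCH_2, LOG_ARCH_3, and so on

    -- the MERGE table Care reads, unioning the pieces
    CREATE TABLE LOG_ARCH (
        log_time INT UNSIGNED NOT NULL,
        message  VARCHAR(255) NOT NULL,
        KEY (log_time)
    ) ENGINE=MERGE UNION=(LOG_ARCH_1, LOG_ARCH_2);

Expiring a 60-day-old piece is then cheap: ALTER TABLE LOG_ARCH 
UNION=(LOG_ARCH_2), then DROP TABLE LOG_ARCH_1, neither of which touches the 
live LOG table.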

I would use TRUNCATE TABLE, but some amount of time will always elapse between 
copying data from the live table to the archive, and rows written during that 
window would be lost; hence the need for a selective delete on the original.
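
So the migration step would pin a cutoff first, making the copy and the delete 
cover exactly the same rows even while inserts keep arriving (same illustrative 
names as above):

    -- pin the cutoff once so copy and delete agree despite ongoing inserts
    SET @cutoff = UNIX_TIMESTAMP(CURDATE());

    -- copy the slice being archived into the current archive piece
    INSERT INTO LOG_ARCH_2
        SELECT * FROM LOG WHERE log_time < @cutoff;

    -- selective delete of the same slice; row locks only under InnoDB
    DELETE FROM LOG WHERE log_time < @cutoff;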

If anyone with experience running large logging apps can chime in here, I 
would be most appreciative.

Regards,

Thomas.
