The delete time will definitely depend on the size of the record set being
deleted. Assuming I understood your situation correctly, I would suggest
using the MERGE storage engine for your needs: keep each day of data in a
separate MyISAM table and combine all of those tables into a single MERGE
table. Then, when you want to do any maintenance on data older than x days,
just alter the MERGE table to take that <mailxx> table out of the union;
after that you can delete it, compress it, or do anything else with it, and
then add a new one in its place (you could easily run a nightly cron job
for this).
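
Something like this, for instance (a sketch only -- the table names are
made up, and I've cut the window down to two days to keep it short):

-- One MyISAM table per day, cloned from the existing definition:
CREATE TABLE mail_20070201 LIKE mail00;
CREATE TABLE mail_20070202 LIKE mail00;

-- A MERGE table over the days you want queryable. Note that the MERGE
-- table declares seq as a plain KEY rather than a PRIMARY KEY, since
-- MERGE cannot enforce uniqueness across the underlying tables:
CREATE TABLE mail_all (
   host varchar(32) default NULL,
   fromMTA varchar(44) default NULL,
   nextMTA varchar(44) default NULL,
   messageID varchar(44) default NULL,
   messageID2 varchar(44) default NULL,
   sender varchar(80) default NULL,
   recipient varchar(120) default NULL,
   recipient2 varchar(120) default NULL,
   date date default NULL,
   time time default NULL,
   program varchar(44) default NULL,
   ACTION varchar(44) default NULL,
   detail varchar(120) default NULL,
   msg text,
   seq int(10) unsigned NOT NULL auto_increment,
   KEY host (host),
   KEY sender (sender),
   KEY recipient (recipient),
   KEY MessageID (messageID),
   KEY seq (seq),
   KEY time (time),
   KEY date (date),
   KEY ACTION (ACTION),
   KEY messageID2 (messageID2),
   KEY fromMTA (fromMTA)
) ENGINE=MERGE UNION=(mail_20070201, mail_20070202) INSERT_METHOD=LAST;

-- Nightly maintenance: slide the window forward, then archive or drop
-- the table that fell out of it (compress it with myisampack first if
-- you want to keep it around):
CREATE TABLE mail_20070203 LIKE mail00;
ALTER TABLE mail_all UNION=(mail_20070202, mail_20070203);
DROP TABLE mail_20070201;

Changing the UNION list is a metadata operation, so the "purge" becomes
effectively instant no matter how big the day's table is.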

Please look at
http://dev.mysql.com/doc/refman/5.0/en/merge-storage-engine.html

Kishore Jalleda

On 1/31/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]>
wrote:

I'm trying to wrap my head around a performance problem our institution is
having on our MySQL server, which hosts all of our logs from around campus.

Specifically, our MailLogs tables.

What I'm having a hard time with is that we have metadata such as:

CREATE TABLE mail00 (
   host varchar(32) default NULL,
   fromMTA varchar(44) default NULL,
   nextMTA varchar(44) default NULL,
   messageID varchar(44) default NULL,
   messageID2 varchar(44) default NULL,
   sender varchar(80) default NULL,
   recipient varchar(120) default NULL,
   recipient2 varchar(120) default NULL,
   date date default NULL,
   time time default NULL,
   program varchar(44) default NULL,
   ACTION varchar(44) default NULL,
   detail varchar(120) default NULL,
   msg text,
   seq int(10) unsigned NOT NULL auto_increment,
   PRIMARY KEY  (seq),
   KEY host (host),
   KEY sender (sender),
   KEY recipient (recipient),
   KEY MessageID (messageID),
   KEY seq (seq),
   KEY time (time),
   KEY date (date),
   KEY ACTION (ACTION),
   KEY messageID2 (messageID2),
   KEY fromMTA (fromMTA)
) TYPE=MyISAM MAX_ROWS=99000000;

We might end up storing two to three gigs of logging data per day from our
mail servers.

When we had a process set up to purge data out of this table that was
older than 14 days, it would lock the table and take a great deal of time
to process the request.
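
Simplified, the purge was essentially a date-bounded DELETE, something
like:

DELETE FROM mail00
 WHERE date < DATE_SUB(CURDATE(), INTERVAL 14 DAY);
-- sketch only; the actual cutoff logic may have differed

On a MyISAM table a DELETE like that holds a write lock on the whole table
for its duration, and every one of the secondary indexes has to be updated
for each deleted row, so everything else blocks until it finishes.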

My question is: is the structure of my table the problem, or is the sheer
size of the data always going to take that long due to disk I/O?

This isn't a table structure that I came up with; I'm just looking to
optimize the performance of the server.

Currently, around midnight, the sysadmin drops table 13, then renames
12 -> 13, 11 -> 12, etc., and creates a new 00.
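
In other words, the nightly shuffle is roughly this (my guess at the exact
statements, assuming the tables are named mail00 through mail13):

-- drop the oldest day:
DROP TABLE mail13;
-- RENAME TABLE is atomic and processes the pairs left to right:
RENAME TABLE mail12 TO mail13, mail11 TO mail12, mail10 TO mail11,
             mail09 TO mail10, mail08 TO mail09, mail07 TO mail08,
             mail06 TO mail07, mail05 TO mail06, mail04 TO mail05,
             mail03 TO mail04, mail02 TO mail03, mail01 TO mail02,
             mail00 TO mail01;
-- start a fresh, empty table for today:
CREATE TABLE mail00 LIKE mail01;

Since DROP and RENAME are cheap metadata operations, this finishes quickly
where the big DELETE did not.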

All of this is because it takes too long to purge out one day's worth of
data. This will eventually cause a problem when we try to develop a
program to scan the logs: it will need to scan through all 14 tables
instead of just one.
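
That is, any search tool would have to repeat its query across every
table, something like this (hypothetical lookup, shown for two of the 14
tables):

SELECT host, date, time, sender, recipient, ACTION
  FROM mail00
 WHERE messageID = 'some-message-id'    -- hypothetical value
UNION ALL
SELECT host, date, time, sender, recipient, ACTION
  FROM mail01
 WHERE messageID = 'some-message-id';
-- ...and so on through mail13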

Is there a better way that balances performance and flexibility?

Or just a better way in general?

Thanks,

-Tyler

