Martijn Tonies wrote:

Hello Jeremy,



I have a curious issue here, maybe someone can help.

I have a single process that inserts data into tables that contain
purely logging information.  This table is then searched by our Care
department to troubleshoot issues.  I am looking for the best way to
store this data, and the structure on the backend.

There are 50 million inserts into table LOG a day. The primary index
on the table is seconds from 1971.


Sorry, but as I understand it, there are not 50,000,000 seconds in a day (only 86,400), so a seconds value cannot serve as a unique primary key.


OK, how about this:

Use MyISAM and MERGE tables.  Keep one table per day.  E.g.:

log_2005_01_15
log_2005_01_16
log_2005_01_17
log_2005_01_18

etc.
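Each daily table would share an identical definition. A minimal sketch of what one might look like (the column names here are illustrative assumptions, not from the original post):

```sql
-- Hypothetical daily log table; real columns depend on the application.
CREATE TABLE log_2005_01_18 (
  log_time INT UNSIGNED NOT NULL,   -- seconds since the epoch
  message  VARCHAR(255) NOT NULL,
  KEY (log_time)                    -- non-unique: many rows per second
) TYPE=MyISAM;
```

The index on log_time is deliberately non-unique, since 50 million rows a day means hundreds of rows per second.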

Use MERGE tables to give you the 60 day (and perhaps e.g. 14 day, 30
day, 7 day, etc.) read views that you need, like so:

CREATE TABLE log_view_7day (
  ...
) TYPE=MERGE UNION=(
  log_2005_01_12, log_2005_01_13, log_2005_01_14, log_2005_01_15,
  log_2005_01_16, log_2005_01_17, log_2005_01_18
);

You can then do all of your inserts from the log writer into the "today"
table, and do your reads against the various MERGEs.

Every day at exactly midnight, you would use ALTER TABLE (which is
atomic) to redefine the UNION of the MERGE definition of the various
tables.
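The midnight rotation could look like this (a sketch following the example dates above, not taken from the original post):

```sql
-- Create the new day's table with the same definition as the others,
-- then atomically slide the 7-day window forward by one day.
CREATE TABLE log_2005_01_19 (
  ...
) TYPE=MyISAM;

ALTER TABLE log_view_7day UNION=(
  log_2005_01_13, log_2005_01_14, log_2005_01_15, log_2005_01_16,
  log_2005_01_17, log_2005_01_18, log_2005_01_19
);
```

Old tables that have aged out of the widest (60-day) window can then be dropped independently, without touching the live insert path.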

Modifying metadata just because you need a different view of your
data.

Am I the only one to whom this sounds ugly?

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com






--
====================================
When everything is under control,
it means we are not going fast enough
====================================
Javier Armendáriz
[EMAIL PROTECTED]




