On Mon, Mar 04, 2002 at 02:06:00PM -0700, Lopez David E-r9374c wrote:
> Using MySQL 3.23.40
> 
> App is logging db with 130 clients to a central MySQL db with 2
> million events per day. Aged events of over 7 days are no longer
> required. The central table is myisam type with fixed size
> (datetime, enum and foreign keys only).  Selects are done
> infrequently while inserts are done throughout the day. What is the
> fastest delete possible and still keep table optimized?

I recently wrote the following bit for a chapter in my book where I
discuss MySQL's table types and offer suggestions about when to use
which ones.  Does it seem to help your situation?  (Granted, it's a
bit out of context and is still rather rough...)

---snip---

  Logging

  Suppose you want to use MySQL to log a record of every telephone
  call from a central telephone switch in real-time.  Or maybe you've
  installed mod_log_mysql for Apache so that you can log all visits to
  your web site directly in a table.  In such an application, speed is
  probably the most important goal--you don't want the database to be
  the bottleneck.  Using MyISAM tables will work very well because
  they have very low overhead and can handle inserting thousands of
  records per second.
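
  For instance, a bare-bones MyISAM log table might look something like
  the following (the column names and enum values are just illustrative,
  not taken from any particular application):

      CREATE TABLE phone_calls (
          call_time DATETIME                    NOT NULL,
          switch_id SMALLINT UNSIGNED           NOT NULL,
          status    ENUM('completed','dropped') NOT NULL,
          INDEX (call_time)
      ) TYPE=MyISAM;

  Because every column is fixed-length, the rows have a fixed size,
  which keeps inserts cheap and the table easy to repair.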

  Things will get interesting if you decide it's time to start running
  reports to summarize the data you've logged.  Depending on the
  queries you use, there's a good chance that you'll significantly
  slow the process of inserting records while gathering data for the
  report.  What can you do?

  You could use MySQL's built-in replication (chapter 8) to clone the
  data onto a second (slave) server.  Then you can run your time- and
  CPU-intensive queries against the data on the slave.  This will
  keep the master free to insert records as fast as it possibly can
  while also giving you the freedom to run any query you want without
  worrying about how it could affect the real-time logging.
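
  Getting basic replication going is mostly a matter of my.cnf settings
  on each server; something roughly like this (the host name, account,
  and password are placeholders):

      # master's my.cnf
      [mysqld]
      log-bin
      server-id = 1

      # slave's my.cnf
      [mysqld]
      server-id       = 2
      master-host     = master.example.com
      master-user     = repl
      master-password = secret

  The slave applies the master's binary log on its own schedule, so
  heavy SELECTs on the slave don't interfere with inserts on the master.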

  Another option is to use a MyISAM Merge table.  Rather than always
  logging to the same table, adjust the application to log to a table
  that contains the name or number of the month in its name, such as
  web_logs_2002_01 or web_logs_2002_jan.  Then define a Merge table
  that "contains" the data you'd like to summarize and use it in your
  queries.  If you need to summarize data daily or weekly, the same
  strategy works; you'd just need to create tables with more specific
  names, such as web_logs_2002_01_01.  While you are busy running
  queries against tables that are no longer being written to, your
  application can log records to its current table uninterrupted.
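
  A Merge table for the monthly example might be defined like this (the
  columns are illustrative; the monthly tables and the Merge table must
  all share the same structure):

      CREATE TABLE web_logs_2002_01 (
          logged_at DATETIME     NOT NULL,
          url       VARCHAR(255) NOT NULL,
          INDEX (logged_at)
      ) TYPE=MyISAM;

      -- web_logs_2002_02, web_logs_2002_03, ... defined identically

      CREATE TABLE web_logs_all (
          logged_at DATETIME     NOT NULL,
          url       VARCHAR(255) NOT NULL,
          INDEX (logged_at)
      ) TYPE=MERGE UNION=(web_logs_2002_01, web_logs_2002_02);

  Summary queries go against web_logs_all while the application keeps
  inserting into the current month's table.  As a bonus, expiring old
  data becomes a cheap DROP TABLE (plus redefining the Merge table)
  instead of a huge DELETE.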

  A final possibility is to simply switch to a table type that has more
  granular locking than MyISAM does.  Either BDB or InnoDB would
  work well in this case.  Non-MyISAM tables will generally use more
  CPU and disk space, but that may be a reasonable tradeoff in this
  case.  The same reasoning applies to nearly any real-time monitoring
  or data collection system.  Whether you're monitoring the oil flow
  at a pumping station in Alaska or tracking the x-ray emissions from a
  distant galaxy, you don't want the database slowing down the
  process.
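
  Assuming the server has InnoDB compiled in (the -max binaries do), the
  switch can be as simple as running an ALTER on the existing table; the
  table name here just echoes the earlier sketch:

      ALTER TABLE web_logs TYPE=InnoDB;

  Or specify TYPE=InnoDB (or TYPE=BDB) when the table is first created.
  Row-level locking in InnoDB and page-level locking in BDB make readers
  and writers far less likely to block one another.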

---snip---

Jeremy
-- 
Jeremy D. Zawodny, <[EMAIL PROTECTED]>
Technical Yahoo - Yahoo Finance
Desk: (408) 349-7878   Fax: (408) 349-5454   Cell: (408) 685-5936

MySQL 3.23.47-max: up 25 days, processed 863,708,202 queries (389/sec. avg)
