Have you thought about using Merge tables?

If you have a sliding 5-minute monitoring window that you need to query
frequently, you could create a smaller MERGE table to hold 6 minutes'
worth of data, composed of six tables of one minute's data each.

At the end of each minute, you create a new table, change the MERGE
definition, then archive the "old" minute (the one that just left the
monitoring window) into a larger static table. Your indexes will be small
(only one minute's worth of data), the tables you need to query are smaller
(just 6 minutes' worth), and you still keep all of your historical data. You
could even hold off archiving the "old" tables until you have some free
time if you needed to.
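A rough sketch of that rotation, with hypothetical table and column names
(your actual schema will differ; note also that ALTER TABLE ... UNION may
not be available on 3.23, in which case you would drop and re-create the
MERGE table instead):

```sql
-- One-minute tables, all with identical structure (hypothetical schema)
CREATE TABLE minute_00 (
  ts DATETIME NOT NULL, device INT, element INT, value INT,
  INDEX (ts)
) TYPE=MyISAM;
-- ... minute_01 through minute_05 created the same way ...

-- MERGE table over the six most recent one-minute tables
CREATE TABLE recent (
  ts DATETIME NOT NULL, device INT, element INT, value INT,
  INDEX (ts)
) TYPE=MERGE UNION=(minute_00, minute_01, minute_02,
                    minute_03, minute_04, minute_05);

-- Each minute: slide the window forward to include the newest table
ALTER TABLE recent UNION=(minute_01, minute_02, minute_03,
                          minute_04, minute_05, minute_06);

-- Then archive the minute that left the window, at your leisure
INSERT INTO archive SELECT * FROM minute_00;
DROP TABLE minute_00;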

You could also run "tiers" of tables: one "weekly" MERGE table containing 7
"daily" tables. The most recent "daily" table could be a MERGE table of up
to 24 "hourly" tables, and the most recent "hourly" table could hold the
results of archiving off your old minutes for the current hour. I didn't
see anything in the manual about NOT "nesting" MERGE tables, but I would
assume it could cause some serious headaches if you went overboard doing it.
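Rather than actually nesting MERGE tables (which the manual may not
support cleanly), a safer variant of the tiering idea is to fold finished
tiers down into plain MyISAM tables. A sketch, again with hypothetical
names:

```sql
-- Weekly view over seven plain daily tables (hypothetical names)
CREATE TABLE week (
  ts DATETIME NOT NULL, device INT, element INT, value INT,
  INDEX (ts)
) TYPE=MERGE UNION=(day_mon, day_tue, day_wed, day_thu,
                    day_fri, day_sat, day_sun);

-- Each night: collapse the finished day's hourly tables into one
-- plain daily table, so only the current day needs fine-grained tables
INSERT INTO day_mon SELECT * FROM hour_00;
INSERT INTO day_mon SELECT * FROM hour_01;
-- ... hour_02 through hour_23, then drop or reuse the hourly tables
```

This keeps every MERGE table's UNION pointing only at plain MyISAM
tables, which sidesteps the nesting question entirely.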

You probably want to review: http://dev.mysql.com/doc/mysql/en/MERGE.html
and: http://dev.mysql.com/doc/mysql/en/MERGE_table_problems.html
for details (especially the part about MERGE tables using lots of file
handles!).

I am sure if you thought about it you could break down your storage into
something more manageable than I described.

Best Wishes,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



                                                                                       
                                
Jeremy Zawodny <[EMAIL PROTECTED]> wrote on 06/28/2004 02:24 PM:
To: Aram Mirzadeh <[EMAIL PROTECTED]>
cc: [EMAIL PROTECTED]
Subject: Re: Performance issues
Please respond to: mysql
                                




On Tue, Jun 22, 2004 at 01:34:39PM -0400, Aram Mirzadeh wrote:
>
> We have an internal SNMP monitoring system that is monitoring about
> 10,000 devices.  Each device is pinged then pulled for about an
> average of 25-30 elements.  Each of the ping results and elements
> are then stored in text file, then another system picks them up
> (NFS) and inserts them into a MyISAM (3.23.54) database.  The data
> is kept for 13 weeks.
>
> The database system is a Xeon 4 way, 12GB of ram with a striped raid
> array dedicated to the database files and its indexes and such.
>
> Every 5 minutes another process goes through the last set of inserts
> and compares them for any threshold breaches, so the entire last set
> of data is looked at.
>
> We're falling behind on the inserts because the system can't seem to
> handle the amount of inserts, the front end that generates the web
> pages based on the previous records is dogging down.
>
> I have read the regular optimizations papers and have done as much
> as I felt safe, are there any huge database optimization papers?
> Anything I should be looking at?

I'd consider bulking up the INSERTs, performing multi-row INSERTs
rather than doing them one by one.  That can speed things up quite a
bit in my experience.
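For illustration, the multi-row INSERT form bundles many rows into one
statement, paying the parse/network/index overhead once (hypothetical
table and column names):

```sql
-- Instead of 30 single-row INSERTs per device poll, send one statement:
INSERT INTO results (device, element, value, ts) VALUES
  (1001, 5, 42, '2004-06-28 14:20:00'),
  (1001, 6, 17, '2004-06-28 14:20:00'),
  (1002, 5, 88, '2004-06-28 14:20:00');
```

Batching a few hundred rows per statement is usually a good starting
point; very large statements can run into max_allowed_packet limits.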

Jeremy
--
Jeremy D. Zawodny     |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]







