Hi Matt,

If you are going to do this, would you be able to make it optional?
The reason I ask is that we currently have a single gmetad server
monitoring several clusters totaling over 1100 nodes, which comes to
only about 370MB of space for all of the rrds.  Because of the amount
of IO needed to update them, the rrds are stored entirely in RAM
(tmpfs), which is only possible because of their small size.  The
change you are suggesting would increase this by almost 6 times, which
would require over 2GB of RAM just for the rrds.
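
(For what it's worth, here is the rough arithmetic behind those
numbers, assuming roughly 30 metrics per node as in your example below
and ignoring the per-cluster summary rrds, which are a small fraction
of the total:

  11948 bytes/metric * 30 metrics * 1100 nodes ~= 376 MB  (about what we see today)
  70620 bytes/metric * 30 metrics * 1100 nodes ~= 2.2 GB  (all of it in tmpfs)

so the factor-of-6 increase pushes us well past what we can keep in RAM.)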

Maybe you could even make how the rrds are stored customizable, with
gmetad and the webfrontend reading some kind of config file that gives
the user complete control over the rrd format.  If that is too
difficult, then something like a config option to choose between
full-sized and mini databases would be enough.
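
To make the idea concrete, I am imagining something roughly like the
following in gmetad.conf (the directive name and syntax are purely made
up to illustrate the idea, though the archive specs themselves are
standard rrdtool RRA syntax, assuming the current 15-second step):

  # hypothetical: let the admin define the round-robin archives directly
  RRAs "RRA:AVERAGE:0.5:1:240" "RRA:AVERAGE:0.5:240:8760"

That example would keep one hour of raw 15-second samples plus a year
of hourly averages, and a site like ours that cares more about RAM
than history could simply list smaller archives instead.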

~Jason


On Tue, 2004-03-16 at 15:28, Matt Massie wrote:
> what is the maximum size of an rrd that you would tolerate?  what is a
> reasonable size?  it is currently 11948 bytes per metric and double that
> for summary metrics.
> 
> that means that a 128 node cluster monitoring 30 metrics each would take
> about 11948*128*30 + 11948*2*30 = 44.5 MBs.  tiny.
> 
> i'd like to expand the size of the round-robin databases to around 
> 70620 bytes per metric.  that means that a 128 node cluster monitoring
> 30 metrics each would take around
> 70620 * 128 * 30 + 70620*2*30 = 262 MBs.  small.
> 
> it would allow hourly averages for a year.  it would give you the power
> to ask what was going on last week with more fine-grained accuracy.
> 
> keep in mind that the disk io is not going to go up.. it will drop
> significantly given the new design.
> 
> -matt
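
(For reference: an rrd file is mostly 8-byte double-precision values,
one per row per data source, plus a relatively small fixed header, so
the proposed 70620 bytes per metric works out to somewhere around 8700
stored values, which lines up with the "hourly averages for a year"
described above.)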
-- 
/------------------------------------------------------------------\
|  Jason A. Smith                          Email:  [EMAIL PROTECTED] |
|  Atlas Computing Facility, Bldg. 510M    Phone:  (631)344-4226   |
|  Brookhaven National Lab, P.O. Box 5000  Fax:    (631)344-7616   |
|  Upton, NY 11973-5000                                            |
\------------------------------------------------------------------/

