To generalize this, has anyone looked at how the system requirements
(disk usage, memory, I/O rate, network usage, etc.) of machines running
gmetad scale as the number of nodes increases? More specifically, has
anyone come up with a (rough) algorithm for estimating the system
requirements of a top-level gmetad monitoring a hierarchical grid (many
clusters ultimately reporting to one collection point), taking into
account, for example, the scalability flags?
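As a starting point for the disk piece, here is a back-of-envelope sketch
based on Brooks's ~12 KB per-metric per-node observation below. This is my
own guess at a formula, not anything measured: the per-cluster summary cost
and the grid-level summary term are assumptions, and the real number will
depend on RRA configuration.

```python
def estimate_rrd_disk_kb(nodes, metrics_per_node, clusters=1,
                         kb_per_metric=12):
    """Rough estimate of gmetad RRD disk usage in kilobytes.

    Assumes ~12 KB of RRD data per metric per node (Brooks's figure),
    plus one summary RRD set per cluster and one for the grid as a
    whole, each costed at the same per-metric rate (an assumption).
    """
    per_node_total = nodes * metrics_per_node * kb_per_metric
    summary_total = (clusters + 1) * metrics_per_node * kb_per_metric
    return per_node_total + summary_total

# Example: 256 nodes spread across 4 clusters, 30 metrics per node.
print(estimate_rrd_disk_kb(256, 30, clusters=4))
```

For a grid that size the node RRDs dominate; the summary term only matters
when the cluster count approaches the node count.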
-john

On Mon, 9 Feb 2004, Federico Sacerdoti wrote:

> Yes, this is the case. The problem is you don't know how many nodes you 
> will have. What if a large number are added? Everyone's RRD disk 
> requirements will go up.
> 
> For Rocks clusters, we allocate a Linux tmpfs with a maximum size of 
> 25% available memory. Only the "frontend" node (not a compute node) 
> keeps the RRD databases.
> 
> -Federico
> 
> On Feb 5, 2004, at 6:30 PM, Brooks Davis wrote:
> 
> > Does anyone know how to go about estimating ganglia disk requirements
> > assuming one knows the number of nodes and the number of metrics per
> > node?  It looks like we use about 12k per-metric per-node plus some
> > summary info.  Is this always the case?
> >
> > I'm thinking about this because I'm looking at adding support to the
> > FreeBSD startup script for more or less automatic handling of memory
> > file system backed storage of data, since the disk I/O load of writing
> > directly to disk cripples the performance of even machines with
> > hardware raid.  To do that, I'll need a good way to estimate disk
> > usage based on some parameters the users can figure out easily.
> >
> > -- Brooks
> >
> > -- 
> > Any statement of the form "X is the one, true Y" is FALSE.
> > PGP fingerprint 655D 519C 26A7 82E7 2529  9BF0 5D8E 8BE9 F238 1AD4
> >
> Federico
> 
> Rocks Cluster Group, San Diego Supercomputing Center, CA
> 
> 
> 
> _______________________________________________
> Ganglia-developers mailing list
> Ganglia-developers@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/ganglia-developers

---------------------------------
 John Hicks - HPCC Engineer
 TransPAC, Indiana University
 [EMAIL PROTECTED], 317-278-1083
"Let the wild rumpus start!" - MS 


