Yes, that is the case. The problem is that you don't know in advance how many nodes you will have. If a large number of nodes are added, everyone's RRD disk requirements go up accordingly.

For Rocks clusters, we allocate a Linux tmpfs with a maximum size of 25% of available memory. Only the "frontend" node (not the compute nodes) keeps the RRD databases.
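To make that concrete, here is a minimal sketch (Python, purely illustrative names) of the sizing check this implies: cap the tmpfs at 25% of the frontend's memory and verify that the expected RRD footprint, estimated along the lines Brooks describes below, fits under that cap.

def tmpfs_cap_bytes(frontend_ram_bytes, fraction=0.25):
    """Maximum tmpfs size: 25% of the frontend's memory by default."""
    return int(frontend_ram_bytes * fraction)

def fits_in_tmpfs(estimated_rrd_bytes, frontend_ram_bytes):
    """True if the estimated RRD footprint fits under the tmpfs cap."""
    return estimated_rrd_bytes <= tmpfs_cap_bytes(frontend_ram_bytes)

# Example: a 2 GB frontend gives a 512 MB tmpfs ceiling, so an
# estimated 50 MB of RRDs fits comfortably.
print(fits_in_tmpfs(50 * 2**20, 2 * 2**30))   # -> True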

-Federico

On Feb 5, 2004, at 6:30 PM, Brooks Davis wrote:

Does anyone know how to go about estimating ganglia disk requirements
assuming one knows the number of nodes and the number of metrics per
node?  It looks like we use about 12k per-metric per-node plus some
summary info.  Is this always the case?

I'm thinking about this because I'm looking at adding support to the
FreeBSD startup script for more or less automatic handling of memory
file system backed storage of the data, since the disk I/O load of writing
directly to disk cripples performance even on machines with hardware
RAID.  To do that, I'll need a good way to estimate disk usage based on
parameters that users can figure out easily.
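One way to do that, as a minimal sketch in Python (the ~12 KB per-metric, per-node figure is the one mentioned above; the 10% summary allowance and the 2x headroom multiplier are assumptions, not measured values, and all names are illustrative):

import math

# Observed RRD size per metric per node (~12 KB, from the thread above).
RRD_BYTES_PER_METRIC = 12 * 1024

def estimate_rrd_bytes(nodes, metrics_per_node, summary_factor=0.10):
    """Total RRD footprint: per-node metrics plus a summary allowance."""
    per_node = metrics_per_node * RRD_BYTES_PER_METRIC
    return int(nodes * per_node * (1.0 + summary_factor))

def memory_fs_size_mb(nodes, metrics_per_node, headroom=2.0):
    """Suggested memory file system size in MB, rounded up, with
    headroom left for new nodes and metrics."""
    needed = estimate_rrd_bytes(nodes, metrics_per_node) * headroom
    return math.ceil(needed / 2**20)

# Example: 128 nodes x 30 metrics works out to roughly 50 MB of RRDs,
# so suggest a memory file system of about 99 MB.
print(memory_fs_size_mb(128, 30))   # -> 99

The startup script could then hand the resulting size to whatever creates the memory-backed file system before gmetad starts writing RRDs.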

-- Brooks

--
Any statement of the form "X is the one, true Y" is FALSE.
PGP fingerprint 655D 519C 26A7 82E7 2529  9BF0 5D8E 8BE9 F238 1AD4

Federico

Rocks Cluster Group, San Diego Supercomputer Center, CA
