Does anyone know how to estimate ganglia disk requirements,
assuming one knows the number of nodes and the number of metrics per
node?  It looks like we use about 12 KB per metric, per node, plus some
summary info.  Is this always the case?

I'm thinking about this because I'm looking at adding support to the
FreeBSD startup script for more or less automatic handling of
memory-filesystem-backed storage of the data, since the disk I/O load
of writing directly to disk cripples the performance of even machines
with hardware RAID.  To do that, I'll need a good way to estimate disk
usage based on parameters users can figure out easily.
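In case it helps the discussion, a minimal sketch of what I mean by
"automatic handling", using mdmfs(8) to mount a swap-backed memory
filesystem over the RRD directory at startup.  RRDDIR, the sizing
knobs, and the archive path are placeholders I made up, not existing
gmond/gmetad rc.conf options:

    #!/bin/sh
    # Hypothetical rc-script fragment (all names are placeholders).
    RRDDIR=/var/db/ganglia/rrds
    NODES=128; METRICS=30; PER_RRD_KB=12
    # Size the memory filesystem from the estimate above, +16 MB headroom.
    SIZE_MB=$(( (NODES + 1) * METRICS * PER_RRD_KB / 1024 + 16 ))

    # Mount a swap-backed md(4) filesystem over the RRD directory.
    mdmfs -s ${SIZE_MB}m md ${RRDDIR}
    # Restore any previously saved RRDs, then periodically (and at
    # shutdown) archive them back to stable storage, e.g.:
    #   tar -C ${RRDDIR} -xf /var/db/ganglia/rrds.tar    # on startup
    #   tar -C ${RRDDIR} -cf /var/db/ganglia/rrds.tar .  # via cron/shutdown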

-- Brooks

-- 
Any statement of the form "X is the one, true Y" is FALSE.
PGP fingerprint 655D 519C 26A7 82E7 2529  9BF0 5D8E 8BE9 F238 1AD4
