To generalize this, has anyone looked at how the system requirements
(disk usage, memory, I/O rate, network usage, etc.) of machines running
gmetad need to scale as the number of nodes increases? More specifically, has
anyone come up with a (rough) algorithm to estimate system requirements?
We have found that a dual-P3 server running gmetad can handle roughly
1000 hosts before it starts to fail. 1000 hosts here means pure-gmond
host data, of course, not picking up summaries of other grids.
In fact, we just hit the case where adding an additional (128-node)
monitored cluster made
Yes, this is the case. The problem is you don't know how many nodes you
will have. What if a large number are added? Everyone's RRD disk
requirements will go up.
For Rocks clusters, we allocate a Linux tmpfs with a maximum size of
25% of available memory. Only the "frontend" node (not a compute node)
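
For reference, sizing and mounting such a tmpfs is straightforward. Here is a
minimal sketch in Python, assuming the RRDs live under gmetad's default
/var/lib/ganglia/rrds and that 25% of MemTotal from /proc/meminfo is the
ceiling -- both are assumptions on my part, not Rocks' actual implementation:

import subprocess

def mem_total_kb():
    """Total physical memory in kB, parsed from /proc/meminfo."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])
    raise RuntimeError("MemTotal not found in /proc/meminfo")

def mount_rrd_tmpfs(mountpoint="/var/lib/ganglia/rrds"):
    """Mount a tmpfs capped at 25% of physical memory (needs root)."""
    size_kb = mem_total_kb() // 4
    subprocess.check_call(["mount", "-t", "tmpfs",
                           "-o", "size=%dk" % size_kb,
                           "tmpfs", mountpoint])

if __name__ == "__main__":
    mount_rrd_tmpfs()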
On Mon, Feb 09, 2004 at 11:13:39AM -0800, Federico Sacerdoti wrote:
> Yes, this is the case. The problem is you don't know how many nodes you
> will have. What if a large number are added? Everyone's RRD disk
> requirements will go up.
I'll make that a system config parameter with a reasonable default.
Does anyone know how to go about estimating ganglia disk requirements
assuming one knows the number of nodes and the number of metrics per
node? It looks like we use about 12k per-metric per-node plus some
summary info. Is this always the case?
I'm thinking about this because I'm looking at addi
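
For what it's worth, here is the kind of back-of-the-envelope calculation one
could script, using the ~12k per-metric per-node figure mentioned above. The
default metric count and the per-cluster summary overhead are assumptions on
my part, so adjust them to match your gmond configuration:

KB = 1024

def estimate_rrd_bytes(nodes, metrics_per_node=30, per_metric_kb=12,
                       summary_metrics=30, clusters=1):
    """Rough estimate of gmetad's RRD disk footprint in bytes."""
    per_host = metrics_per_node * per_metric_kb * KB
    # One extra set of summary RRDs per cluster (assumed).
    summaries = clusters * summary_metrics * per_metric_kb * KB
    return nodes * per_host + summaries

if __name__ == "__main__":
    # e.g. a single 1000-host cluster with a default-ish gmond metric set
    est = estimate_rrd_bytes(nodes=1000)
    print("estimated RRD disk usage: %.1f MB" % (est / (1024.0 * 1024.0)))

With those defaults a 1000-host cluster works out to roughly 350 MB of RRDs,
plus whatever summary data other grids contribute.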