Re: [Ganglia-developers] estimating disk usage

2004-02-15 Thread John M Hicks
To generalize this: has anyone looked at how the system requirements (disk usage, memory, I/O rate, network usage, etc.) of machines running gmetad need to scale as the number of nodes increases? More specifically, has anyone come up with a (rough) algorithm to estimate system requirements
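
A minimal sketch of such an estimate, not from the thread itself: it combines the ~12 KB-per-metric-per-node RRD figure and the roughly-1000-host single-gmetad ceiling quoted in the messages below, both of which should be treated as rough assumptions rather than measured limits for your hardware.

    # Rough gmetad sizing sketch (assumptions: ~12 KB of RRD data per metric
    # per node, and a single-gmetad ceiling of about 1000 hosts, both taken
    # from figures quoted elsewhere in this thread).
    RRD_BYTES_PER_METRIC = 12 * 1024
    GMETAD_HOST_CEILING = 1000

    def estimate_gmetad(nodes, metrics_per_node):
        """Return (rrd_disk_bytes, needs_another_gmetad) for a deployment."""
        disk = nodes * metrics_per_node * RRD_BYTES_PER_METRIC
        return disk, nodes > GMETAD_HOST_CEILING

    disk, split = estimate_gmetad(nodes=1000, metrics_per_node=30)
    print("~%.0f MB of RRDs; consider a second gmetad: %s" % (disk / 2.0**20, split))

Memory and I/O rate are harder to pin down from the thread; the tmpfs approach described below trades disk I/O for RAM.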

Re: [Ganglia-developers] estimating disk usage

2004-02-13 Thread Federico Sacerdoti
We have found that a dual-P3 server running gmetad can handle roughly 1000 hosts before it starts to fail. Here, 1000 hosts means pure gmond host data, of course, not summaries picked up from other grids. In fact, we just hit the case where adding an additional (128-node) monitored cluster made

Re: [Ganglia-developers] estimating disk usage

2004-02-09 Thread Federico Sacerdoti
Yes, this is the case. The problem is you don't know how many nodes you will have. What if a large number are added? Everyone's RRD disk requirements will go up. For Rocks clusters, we allocate a Linux tmpfs with a maximum size of 25% of available memory. Only the "frontend" node (not a compute node)
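
As an illustration only (not Rocks' actual implementation): the 25%-of-memory cap can be computed from MemTotal in /proc/meminfo. The mount point in the comment is hypothetical.

    # Compute a tmpfs size capped at 25% of physical memory, as described
    # above. Reads MemTotal (reported in kB) from /proc/meminfo on Linux.
    def tmpfs_limit_bytes(fraction=0.25):
        for line in open("/proc/meminfo"):
            if line.startswith("MemTotal:"):
                total_kb = int(line.split()[1])
                return int(total_kb * 1024 * fraction)
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    limit_mb = tmpfs_limit_bytes() / 2**20
    # e.g. mount -t tmpfs -o size=%dm tmpfs /var/lib/ganglia/rrds  (path is a guess)
    print("tmpfs cap: ~%d MB" % limit_mb)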

Re: [Ganglia-developers] estimating disk usage

2004-02-09 Thread Brooks Davis
On Mon, Feb 09, 2004 at 11:13:39AM -0800, Federico Sacerdoti wrote:
> Yes, this is the case. The problem is you don't know how many nodes you
> will have. What if a large number are added? Everyone's RRD disk
> requirements will go up.
I'll make that a system config parameter with a reasonable default

[Ganglia-developers] estimating disk usage

2004-02-05 Thread Brooks Davis
Does anyone know how to go about estimating Ganglia disk requirements, assuming one knows the number of nodes and the number of metrics per node? It looks like we use about 12 KB per metric per node plus some summary info. Is this always the case? I'm thinking about this because I'm looking at addi
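
For a concrete back-of-the-envelope number (the 128-node cluster size comes from elsewhere in this thread; ~30 metrics per node is an assumption):

    # ~12 KB per metric per node, applied to a hypothetical 128-node cluster
    nodes, metrics_per_node, kb_per_rrd = 128, 30, 12
    print("~%d MB of RRDs" % (nodes * metrics_per_node * kb_per_rrd / 1024))  # ~45 MB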