Hi Jason,
If you are familiar with ROCKS clusters, they also do this, so if you're
using Linux, you can just grab the appropriate RPM.
I pulled the parts I need from the ganglia-hpc-4.3-2 RPM that comes with the
ROCKS 4.3 distro and installed it on a vanilla RHEL4 system and it worked
great.
Martin Hicks wrote:
I think I want to run gmetad on each head node, and to use that RRD data
without regenerating it on the admin node. Is that possible?

On Thu, Mar 13, 2008 at 12:19:55AM -0700, eliott wrote:
First you might try running gmetad on the head nodes, and gmetad on
the admin node, and having the admin gmetad pull from the head nodes.
(Send to the ganglia mailing list also.)
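A minimal sketch of that two-level setup in gmetad.conf; the cluster and host names here are placeholders, 8649 is gmond's default port and 8651 is gmetad's default xml_port:

# gmetad.conf on each head node -- poll the local gmonds directly
data_source "cluster-a" 15 localhost:8649

# gmetad.conf on the admin node -- poll the head-node gmetads instead
data_source "cluster-a" 60 head-a:8651
data_source "cluster-b" 60 head-b:8651

Each data_source line names one cluster; listing several host:port entries on the same line gives gmetad fallback sources for that cluster.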
To: Martin Hicks
Cc: Ganglia-general@lists.sourceforge.net
Subject: Re: [Ganglia-general] Setup large clusters
On Wed, Mar 12, 2008 at 11:35 AM, Martin Hicks [EMAIL PROTECTED] wrote:
Hi,
I'm wondering what the suggested setup is for a large Grid. I'm having
trouble with scaling ganglia to work on large clusters.
Sent: Thursday, March 13, 2008 3:53 PM
To: Witham, Timothy D
Cc: Jesse Becker; Martin Hicks; Ganglia-general@lists.sourceforge.net
Subject: Re: [Ganglia-general] Setup large clusters
On Thu, Mar 13, 2008 at 01:20:31PM -0700, Witham, Timothy D wrote:
Now that is way interesting. Maybe if you do
On 3/13/08, Martin Hicks [EMAIL PROTECTED] wrote:
The information seems to indicate that you don't need the web server on
the green cluster, but do you?
I tried to set this up, although I had some difficulties.
I have the admin node pulling from the head nodes on 8651 (gmetad), but I
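One way to check what the admin node is actually receiving on 8651 is to parse the XML dump that gmetad serves on that port. The sketch below is a rough illustration: the sample document is made-up data in the shape of a ganglia dump (CLUSTER/HOST/METRIC elements with NAME/VAL attributes) rather than real output, so element names and attributes are assumptions to verify against your own dump.

```python
# Hedged sketch: parse a gmetad-style XML dump into nested dicts.
# SAMPLE is invented data mimicking the dump's shape, not real output.
import xml.etree.ElementTree as ET

SAMPLE = """<GANGLIA_XML VERSION="3.0.7" SOURCE="gmetad">
 <GRID NAME="Grid">
  <CLUSTER NAME="head-a" LOCALTIME="1205400000">
   <HOST NAME="node001" REPORTED="1205400000">
    <METRIC NAME="load_one" VAL="0.12" TYPE="float" UNITS=""/>
    <METRIC NAME="mem_free" VAL="1048576" TYPE="uint32" UNITS="KB"/>
   </HOST>
  </CLUSTER>
 </GRID>
</GANGLIA_XML>"""

def metrics_by_host(xml_text):
    """Return {cluster: {host: {metric: value}}} from a gmetad XML dump."""
    root = ET.fromstring(xml_text)
    out = {}
    for cluster in root.iter("CLUSTER"):
        hosts = out.setdefault(cluster.get("NAME"), {})
        for host in cluster.iter("HOST"):
            hosts[host.get("NAME")] = {
                m.get("NAME"): m.get("VAL") for m in host.iter("METRIC")
            }
    return out

if __name__ == "__main__":
    print(metrics_by_host(SAMPLE))
```

In practice you would feed this the bytes read from a socket connected to head-node:8651 instead of SAMPLE.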
Martin Hicks wrote:
The configuration of gmetad has been modified to store the rrds in
/dev/shm, but this directory gets very large so I'd like to move away
from that.

On Wed, Mar 12, 2008 at 10:52:03AM -0500, Seth Graham wrote:
Using tmpfs is pretty much your only option. As you discovered, the disk
I/O will bring most machines to their knees.
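If the concern with /dev/shm is unbounded growth, one option is a dedicated tmpfs mount with an explicit size cap; the mount point and the 2g figure below are placeholders to tune, not recommendations:

# /etc/fstab -- a capped tmpfs for the RRDs (a tmpfs without size=
# defaults to half of RAM)
tmpfs  /var/lib/ganglia/rrds  tmpfs  size=2g,mode=0755  0 0

Then point gmetad at it with rrd_rootdir "/var/lib/ganglia/rrds" in gmetad.conf. Note that tmpfs contents vanish on reboot, so the RRD history is lost unless you copy it out periodically.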
Have you considered turning off disk read-ahead on the partition that
includes the
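For reference, read-ahead can be inspected and disabled per block device with blockdev(8); /dev/sda below is a placeholder for whatever device holds that partition:

# read-ahead is reported in 512-byte sectors
blockdev --getra /dev/sda
blockdev --setra 0 /dev/sda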