Hi Steve:

Just curious - when you added the new data_source, what steps did you take to restart the daemons?

Usually I have to stop all the gmond daemons, stop gmetad, then start gmond and gmetad again. Ganglia seems quite sensitive to the order in which the daemons are restarted.
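For the record, my restart sequence looks roughly like this (the init script paths are just how my boxes happen to be set up; yours may differ):

    # on every node in the cluster
    /etc/init.d/gmond stop
    # on the collector host
    /etc/init.d/gmetad stop
    # then bring everything back up, gmond first
    /etc/init.d/gmond start
    /etc/init.d/gmetad start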

Also, when you add the new data_source, does it show up as a separate data_source on the main page? Is it listening on a separate port?
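For comparison, in my gmetad.conf each data_source polls a gmond on its own port, along these lines (the hosts and ports here are placeholders, not my real config):

    data_source "Cluster A" 10.0.0.10:8649
    data_source "Cluster B" 10.0.0.20:8650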

Cheers,

Bernard

Steve Gilbert wrote:

My problems keep getting worse.  Now when I try to add a new data_source, my
overview memory graph suddenly drops from 4TB total memory to 0.  Taking the
new data_source out fixes it, but it looks like I can't add anything new.
Anyone seen this before?  All the machines in question here are ia32 Linux
boxes, so I don't think it's the conflicting architecture bug or anything
like that.

Steve Gilbert
Unix Systems Administrator
[EMAIL PROTECTED]


-----Original Message-----
From: Steve Gilbert Sent: Thursday, October 16, 2003 9:59 AM
To: ganglia-general@lists.sourceforge.net
Subject: [Ganglia-general] questions...gmetad failover, metric
definitions, grid separation, and alternatives to multicast


Howdy folks,

I have a few Ganglia questions I hope someone can answer.

1. Has anyone had a problem with gmetad not failing over to an alternate
data source?  I.e., if I have a line like this in gmetad.conf:

data_source "Linux Regress cluster"    172.16.208.247 172.16.208.250

...if 172.16.208.247 crashes, all my graphs for this data source just seem
to stop...gmetad never fails over to the second host listed, as it should.
Has anyone else run into this?
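As far as I can tell, you can also list the hosts with explicit ports
(8649 being the default gmond port), e.g.:

    data_source "Linux Regress cluster"    172.16.208.247:8649 172.16.208.250:8649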

2. I'm not clear as to what the following metrics mean:

CPU_AIDLE
PROC_RUN     (total running processes?)
PROC_TOTAL   (total processes regardless of their state?)

...better yet, is there somewhere I can look these definitions up myself?
There's nothing in the documentation, and gmond/metric.h and
gmond/machines/linux.c don't mean much to me.  Maybe the next release could
include some explanatory comments in these files?
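For what it's worth, my current guess is that PROC_RUN and PROC_TOTAL come
from the fourth field of /proc/loadavg (running/total kernel scheduling
entities), e.g.:

    $ cat /proc/loadavg
    0.31 0.28 0.24 1/211 12345
    $ awk '{print $4}' /proc/loadavg
    1/211

...but that's a guess from poking around /proc, not from reading the source.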

3. Is there a way to group Ganglia into separate grids with a single
gmetad?  I guess one way is to simply set up another machine running a
second gmetad and keep two completely separate Ganglia environments, but I
was wondering whether it can be done with just one gmetad.
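Just to be clear about what I have now: a single gmetad with multiple
data_source lines shows each one as a separate cluster, but they all land
under the same grid, e.g. (the second line is a made-up example, not my
real config):

    data_source "Linux Regress cluster"    172.16.208.247 172.16.208.250
    data_source "Build cluster"            172.16.209.15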

4. Is it possible to configure Ganglia not to use multicast?  Let me be
more specific...if I have a subnet of 200 machines, I don't really need
each machine to know about the state of the entire subnet.  What I would
rather have is for all the machines to talk directly to a couple of
machines running gmond or gmetad.  In my case (with everything on the same
subnet), it just seems like a lot of network traffic for no reason.  I can
see how the multicast model can be extremely useful for many people, but I
don't think it's needed in my environment.  Can this be done by tweaking
the "mute" and "deaf" settings in gmond.conf?

...I know this is a lot, but I hope somebody can take the time to help.
Thanks!!

Steve Gilbert
Unix Systems Administrator
[EMAIL PROTECTED]


