Greetings,
I think you need the override_hostname statement in your gmond
configuration. It is available starting with Ganglia 3.2.0 and is
described here -
http://sourceforge.net/apps/trac/ganglia/wiki/override_hostname
Of course, you will need to restart the gmond daemon after changing
the configuration.
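If I remember the syntax correctly, it goes in the globals section of
gmond.conf, something like this (the hostname value here is made up):

globals {
  override_hostname = "web01.example.com"  # name to report instead of the system hostname
}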
Yes, you are right, there is a little dance among host names that disappear
from the UI, and many times the master node in unicast mode takes time to
fetch all the required data (cpu_num, etc.).
By the way, the thing I was missing to use together with override_hostname is
override_ip. Without this, gmond
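For reference, a sketch of how the two settings might sit together in
gmond.conf, assuming a gmond version that supports both (both values are
made up):

globals {
  override_hostname = "web01.example.com"  # name shown in the web UI
  override_ip = "192.0.2.10"               # IP reported along with the metrics
}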
I would appreciate it if you could share the hardware specifications needed
for Ganglia:
1. Servers running gmond (around 50).
2. Since I am using unicast mode, what should be the hardware configuration
for the master server (chosen from one of the 50 gmond instances) which
gathers the data?
Hi
Is there an easy way of getting Ganglia to monitor memory use per process,
for example to get the 10 most memory-hungry processes?
Regards, Peter Ellevseth
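One common route is a gmond Python metric module. Below is a minimal sketch
(module and metric names are made up) that reports the RSS of the single most
memory-hungry process by scanning /proc; since each gmond metric is one
value, a top-10 list would need ten such metrics or a gmetric script instead:

# top_proc_mem.py - illustrative sketch, not a shipped Ganglia module
import os

def _top_rss_kb():
    """Largest VmRSS in kB among all processes listed in /proc."""
    top = 0
    for pid in os.listdir('/proc'):
        if not pid.isdigit():
            continue
        try:
            with open('/proc/%s/status' % pid) as f:
                for line in f:
                    if line.startswith('VmRSS:'):
                        top = max(top, int(line.split()[1]))
                        break
        except IOError:  # process exited while we were reading it
            continue
    return top

def metric_handler(name):
    return _top_rss_kb()

def metric_init(params):
    # Standard gmond Python module entry point: return metric descriptors.
    return [{'name': 'top_proc_rss',
             'call_back': metric_handler,
             'time_max': 90,
             'value_type': 'uint',
             'units': 'kB',
             'slope': 'both',
             'format': '%u',
             'description': 'RSS of the most memory-hungry process',
             'groups': 'process'}]

def metric_cleanup():
    pass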
I've also observed this and have been unable to find a solution. In my
case at least there was no obvious correlation with the number of
metrics or whether the gmond was an aggregator or not (so several
orders of magnitude in the number of metrics did not matter); it might
happen on 2 out of 80 hosts.
Currently tcpconn.py uses netstat to get its socket stats. This gives
lots of detail but is far too slow for much production use (running
netstat can take many minutes). /proc/net/sockstat gives less
information but has no performance problems. There was a suggestion
previously to use the ss utility instead.
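For what it's worth, pulling the TCP counters out of /proc/net/sockstat is a
very cheap parse; a rough sketch (not the actual tcpconn.py code):

# Illustrative sketch: read TCP socket counters from /proc/net/sockstat.
def tcp_sockstat():
    """Return e.g. {'inuse': 11, 'orphan': 0, 'tw': 0, 'alloc': 14, 'mem': 2}."""
    with open('/proc/net/sockstat') as f:
        for line in f:
            if line.startswith('TCP:'):
                fields = line.split()[1:]  # alternating name/value pairs
                return dict(zip(fields[0::2], map(int, fields[1::2])))
    return {}

if __name__ == '__main__':
    print(tcp_sockstat())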
Hi Aidan,
for what it is worth, I cannot reproduce the growing memory consumption on a
small 3.2.0 grid using only standard metrics in unicast mode. Running now for a
few hours. Will check again tomorrow.
Cheers
Martin
--
Martin Knoblauch
Hi,
I'd like to know what's the best topology for a unicast deployment. I
want to monitor several clusters (memcache / redis / mysql / etc.).
---
One solution is to configure each node in a given cluster to talk to a
given head gmond (or two for HA):
M1, M2, M3 - send to MHEAD:MPORT
R1, R2, R3 - send to RHEAD:RPORT
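In gmond.conf terms that looks roughly like the following (hostnames and port
are placeholders; the head node additionally listens for the cluster's UDP
traffic and for gmetad's TCP polls):

# On M1, M2, M3 (and similarly on R1, R2, R3 toward their own head):
cluster {
  name = "memcache"
}
udp_send_channel {
  host = mhead.example.com   # placeholder for MHEAD
  port = 8649
}

# On MHEAD: receive the cluster's UDP traffic and answer gmetad polls.
udp_recv_channel {
  port = 8649
}
tcp_accept_channel {
  port = 8649
}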