I see. So I need a separate IP AND A SEPARATE PORT. Got it.

Also, I use a single gmond in each cluster to aggregate that cluster, and
I configure gmetad to talk to only that one gmond from each cluster.
Is that wrong?

-RC

Ron Cavallo 
Sr. Director, Infrastructure
Saks Fifth Avenue / Saks Direct
12 East 49th Street
New York, NY 10017
212-451-3807 (O)
212-940-5079 (fax) 
646-315-0119(C) 
www.saks.com
 

-----Original Message-----
From: Seth Graham [mailto:set...@fnal.gov] 
Sent: Wednesday, March 23, 2011 11:06 AM
To: Ron Cavallo
Cc: Bernard Li; ganglia-general@lists.sourceforge.net
Subject: Re: [Ganglia-general] Need help configuring clusters to use
separate multicast IP


That might work, but I don't think anyone sets up their Ganglia so that
a single gmond tries to aggregate all clusters. That's what the gmetad
daemon is for.

Also note that even though you have a separate multicast address for
each cluster, the port still has to be unique. The port is what gmetad
and the web frontend use to distinguish between clusters. You get really
weird results if multiple data_source lines use the same port.


An ideal configuration might be:

Each of the 5 clusters has a unique gmond.conf, with its own multicast
address and port number.
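
For example, sticking with your existing 239.2.11.7x addresses (the port
numbers here are just illustrative, adjust them to your network), cluster
1's gmond.conf might look like:

udp_send_channel {
  mcast_join = 239.2.11.71
  port = 8649
  ttl = 1
}

udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8649
  bind = 239.2.11.71
}

Cluster 2 would then use 239.2.11.72 with port 8650, cluster 3 would use
239.2.11.73 with port 8651, and so on, so that no two clusters share the
same address/port pair.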

The gmetad host has 5 data_source lines to query one host from each of
the 5 clusters.
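
That would be something like this in gmetad.conf (the hostnames are
placeholders; use one real node from each cluster, and match each port to
the port in that cluster's gmond.conf):

data_source "cluster1" 45 node-in-cluster1:8649
data_source "cluster2" 45 node-in-cluster2:8650
data_source "cluster3" 45 node-in-cluster3:8651
data_source "cluster4" 45 node-in-cluster4:8652
data_source "cluster5" 45 node-in-cluster5:8653

The important part is that every data_source line ends in a different
port, and that the port matches what the gmonds in that cluster are
listening on.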




On Mar 23, 2011, at 9:52 AM, Ron Cavallo wrote:

> 
> I need some help. I am trying to configure my gmetad to collect from
> different clusters on different IPs. I have 5 clusters. This is my
> gmetad collection server's local gmond.conf configuration:
> 
> 
> /* Feel free to specify as many udp_send_channels as you like.  Gmond
>   used to only support having a single channel */
> udp_send_channel {
>  mcast_join = 239.2.11.72
>  port = 8649
>  ttl = 1
> }
> 
> /* You can specify as many udp_recv_channels as you like as well. */
> udp_recv_channel {
>  mcast_join = 239.2.11.71
>  port = 8649
>  bind = 239.2.11.71
> }
> 
> udp_recv_channel {
>  mcast_join = 239.2.11.72
>  port = 8649
>  bind = 239.2.11.72
> }
> 
> udp_recv_channel {
>  mcast_join = 239.2.11.73
>  port = 8649
>  bind = 239.2.11.73
> }
> 
> udp_recv_channel {
>  mcast_join = 239.2.11.74
>  port = 8649
>  bind = 239.2.11.74
> }
> 
> udp_recv_channel {
>  mcast_join = 239.2.11.75
>  port = 8649
>  bind = 239.2.11.75
> }
> 
> udp_recv_channel {
>  port = 8649
> }
> 
> This is an excerpt from ONE OF THE CLUSTERS ABOVE (the .74 cluster)
> 
> /* Feel free to specify as many udp_send_channels as you like.  Gmond
>   used to only support having a single channel */
> udp_send_channel {
>  mcast_join = 239.2.11.74
>  port = 8649
>  ttl = 1
> }
> 
> /* You can specify as many udp_recv_channels as you like as well. */
> udp_recv_channel {
>  mcast_join = 239.2.11.74
>  port = 8649
>  bind = 239.2.11.74
> }
> 
> I configure only one server in each cluster to be polled by the gmetad,
> since that server has all of the cluster members' information in it
> anyway. Here is how I have it configured to talk to the one gmond shown
> directly above:
> 
> data_source "SaksGoldApps" 45 sd1mzp01lx.saksdirect.com:8649
> 

