All,

Today I spammed the bejesus out of the mailing list, so I thought I
would give something back. I was able to configure a single front
end to monitor 5 clusters, each using a different multicast address and
port. I needed to move to different IPs/ports because without that, the
frontend was confusing which hosts belonged to which cluster.

HERE IS A SAMPLE GMOND.CONF FROM ONE CLUSTER:

/* Feel free to specify as many udp_send_channels as you like.  Gmond
   used to only support having a single channel */
udp_send_channel {
  mcast_join = 239.2.11.75
  port = 8653
  ttl = 1
}

/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
  mcast_join = 239.2.11.75
  port = 8653
  bind = 239.2.11.75
}

/* You can specify as many tcp_accept_channels as you like to share
   an xml description of the state of the cluster */
tcp_accept_channel {
  port = 8653
}

HERE IS A SAMPLE GMOND.CONF IN THE SECOND CLUSTER:

/* Feel free to specify as many udp_send_channels as you like.  Gmond
   used to only support having a single channel */
udp_send_channel {
  mcast_join = 239.2.11.73
  port = 8651
  ttl = 1
}

/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
  mcast_join = 239.2.11.73
  port = 8651
  bind = 239.2.11.73
}

/* You can specify as many tcp_accept_channels as you like to share
   an xml description of the state of the cluster */
tcp_accept_channel {
  port = 8651
}
HERE IS THE GMETAD.CONF IN THE WEB FRONTEND:

data_source "SaksProdApps" 45 server1.saksdirect.com:8651
data_source "SaksProdDB" 45 server2.saksdirect.com:8653
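As the thread below points out, gmetad distinguishes clusters by port, so each data_source line needs its own. Here is a quick sketch that checks a gmetad.conf for that mistake; the function name and parsing are mine, not part of Ganglia, and the host:port values just mirror the examples above:

```python
import re

def duplicate_ports(lines):
    """Return the set of ports claimed by more than one data_source name."""
    seen, dupes = {}, set()
    for line in lines:
        # data_source "Name" [poll_interval] host[:port] [host[:port] ...]
        m = re.match(r'data_source\s+"([^"]+)"\s+(?:\d+\s+)?(.+)', line)
        if not m:
            continue
        name, hosts = m.groups()
        for host in hosts.split():
            # gmond's default tcp_accept_channel port is 8649
            port = int(host.rsplit(":", 1)[1]) if ":" in host else 8649
            if port in seen and seen[port] != name:
                dupes.add(port)
            seen[port] = name
    return dupes

conf = [
    'data_source "SaksProdApps" 45 server1.saksdirect.com:8651',
    'data_source "SaksProdDB" 45 server2.saksdirect.com:8653',
]
print(duplicate_ports(conf))  # empty set: each cluster has its own port
```

If two lines share a port, the set comes back non-empty and you get the "really weird results" described further down the thread.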

Hope that helps someone, and thank you folks for all the help today!

-RC




Ron Cavallo 
Sr. Director, Infrastructure
Saks Fifth Avenue / Saks Direct
12 East 49th Street
New York, NY 10017
212-451-3807 (O)
212-940-5079 (fax) 
646-315-0119(C) 
www.saks.com
 

-----Original Message-----
From: Xavier Stevens [mailto:xstev...@mozilla.com] 
Sent: Wednesday, March 23, 2011 12:27 PM
To: Ron Cavallo
Cc: ganglia-general@lists.sourceforge.net
Subject: Re: [Ganglia-general] Need help configuring clusters to use
separate multicast IP

Hey Ron,

Your gmond on the gmetad server shouldn't have anything special in it;
it should be just like any other gmond. Our gmetad runs on one of the
"Application Servers", so the gmond.conf on that machine looks the
same as the one below. If your gmetad server is on its own, you could
set it up with a basic non-multicast gmond setup.

So here's the relevant sections from gmond.conf:

cluster {
  name = "Application Servers"
  owner = "Mozilla Metrics"
}

/* Feel free to specify as many udp_send_channels as you like.  Gmond
   used to only support having a single channel */
udp_send_channel {
  bind_hostname = yes # Highly recommended, soon to be default.
                      # This option tells gmond to use a source address
                      # that resolves to the machine's hostname. Without
                      # this, the metrics may appear to come from any
                      # interface and the DNS names associated with
                      # those IPs will be used to create the RRDs.
  mcast_join = 239.2.11.76
  mcast_if = eth0
  port = 8649
  ttl = 1
}

/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
  mcast_join = 239.2.11.76
  port = 8649
  bind = 239.2.11.76
}

/* You can specify as many tcp_accept_channels as you like to share
   an xml description of the state of the cluster */
tcp_accept_channel {
  port = 8649
}
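A quick way to sanity-check any of these tcp_accept_channels is to read the XML dump gmond serves on that port and list the hosts it reports. This is a sketch of my own, not a Ganglia tool; the host/port in `fetch_xml` are assumptions to adjust for your setup:

```python
import socket
import xml.etree.ElementTree as ET

def cluster_hosts(xml_text):
    """Return {cluster_name: [host names]} from a gmond XML dump."""
    root = ET.fromstring(xml_text)
    return {c.get("NAME"): [h.get("NAME") for h in c.findall("HOST")]
            for c in root.findall("CLUSTER")}

def fetch_xml(host="localhost", port=8649):
    # gmond writes its full XML state and then closes the connection.
    with socket.create_connection((host, port), timeout=5) as s:
        chunks = []
        while data := s.recv(4096):
            chunks.append(data)
    return b"".join(chunks).decode()

# print(cluster_hosts(fetch_xml()))  # uncomment on a host running gmond
```

If hosts from two clusters show up under one CLUSTER element, the multicast address/port separation described in this thread isn't in place yet.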


On 3/23/11 8:51 AM, Ron Cavallo wrote:
> Thank you Xavier. 
>
> Can you give me an example of the gmond.conf that is located on the
> cluster "Application Servers"? 
>
> Are there any changes needed in the gmond.conf on the gmetad server to
> allow "Application Servers" to be collected?
>
> I think we are getting closer to my problem... thanks all for the
> help.
>
>
> -----Original Message-----
> From: Xavier Stevens [mailto:xstev...@mozilla.com] 
> Sent: Wednesday, March 23, 2011 11:25 AM
> To: ganglia-general@lists.sourceforge.net
> Subject: Re: [Ganglia-general] Need help configuring clusters to use
> separate multicast IP
>
> Ron,
>
> You will probably want to configure it to check more than 1 machine
> per cluster. That way if that machine is the one that goes down you
> don't lose visibility into the whole cluster.
>
> So here's an example I pulled from our gmetad server (changed the
> hostnames of course):
>
> data_source "Application Servers" app1
> data_source "Databases" db1
> data_source "ETL" etl1
> data_source "Elastic Search Cluster" elasticsearch1 elasticsearch2
> data_source "Research Cluster" admin1 admin2
> gridname "Mozilla Metrics"
>
> I should note that each data source is on a different multicast
> channel, but we always use the default port (8649) for gmond.
>
> Hopefully this helps!
>
> Cheers,
>
>
> -Xavier
>
>
> On 3/23/11 8:12 AM, Ron Cavallo wrote:
>> I see. So I need a separate IP AND A SEPARATE PORT. Got it.
>>
>> Also, I use a single gmond in each cluster to aggregate the single
>> cluster. I configure the gmetad to talk to only one gmond from each
>> cluster. Is that wrong?
>>
>> -RC
>>
>>
>> -----Original Message-----
>> From: Seth Graham [mailto:set...@fnal.gov] 
>> Sent: Wednesday, March 23, 2011 11:06 AM
>> To: Ron Cavallo
>> Cc: Bernard Li; ganglia-general@lists.sourceforge.net
>> Subject: Re: [Ganglia-general] Need help configuring clusters to use
>> separate multicast IP
>>
>>
>> That might work, but I don't think anyone sets up their ganglia so
>> that a single gmond is trying to aggregate all clusters. That's what
>> the gmetad daemon is for.
>>
>> Also note that even though you have a separate multicast address for
>> each cluster, the port still has to be unique. The port is what
>> gmetad and the web frontend use to distinguish between clusters. You
>> get really weird results if multiple data_source lines use the same
>> port.
>>
>>
>> An ideal configuration might be:
>>
>> Each of the 5 clusters has a unique gmond.conf, with its own
>> multicast address and port number.
>>
>> The gmetad host has 5 data_source lines to query one host from each
>> of the 5 clusters.
>>
>>
>>
>>
>> On Mar 23, 2011, at 9:52 AM, Ron Cavallo wrote:
>>
>>> I need some help. I am trying to configure my gmetad to collect from
>>> different clusters on different IPs. I have 5 clusters. This is my
>>> gmetad collection server's local gmond.conf configuration:
>>>
>>>
>>> /* Feel free to specify as many udp_send_channels as you like. Gmond
>>>    used to only support having a single channel */
>>> udp_send_channel {
>>>  mcast_join = 239.2.11.72
>>>  port = 8649
>>>  ttl = 1
>>> }
>>>
>>> /* You can specify as many udp_recv_channels as you like as well. */
>>> udp_recv_channel {
>>>  mcast_join = 239.2.11.71
>>>  port = 8649
>>>  bind = 239.2.11.71
>>> }
>>>
>>> udp_recv_channel {
>>>  mcast_join = 239.2.11.72
>>>  port = 8649
>>>  bind = 239.2.11.72
>>> }
>>>
>>> udp_recv_channel {
>>>  mcast_join = 239.2.11.73
>>>  port = 8649
>>>  bind = 239.2.11.73
>>> }
>>>
>>> udp_recv_channel {
>>>  mcast_join = 239.2.11.74
>>>  port = 8649
>>>  bind = 239.2.11.74
>>> }
>>>
>>> udp_recv_channel {
>>>  mcast_join = 239.2.11.75
>>>  port = 8649
>>>  bind = 239.2.11.75
>>> }
>>>
>>> udp_recv_channel {
>>>  port = 8649
>>> }
>>>
>>> This is an excerpt from ONE OF THE CLUSTERS ABOVE (the .74 cluster)
>>>
>>> /* Feel free to specify as many udp_send_channels as you like. Gmond
>>>    used to only support having a single channel */
>>> udp_send_channel {
>>>  mcast_join = 239.2.11.74
>>>  port = 8649
>>>  ttl = 1
>>> }
>>>
>>> /* You can specify as many udp_recv_channels as you like as well. */
>>> udp_recv_channel {
>>>  mcast_join = 239.2.11.74
>>>  port = 8649
>>>  bind = 239.2.11.74
>>> }
>>>
>>> I configure only one server in a cluster to be polled from the gmetad
>>> since that server has all of the cluster members' information in it
>>> anyway. Here is how I have it configured to talk to the one gmond
>>> shown directly above:
>>>
>>> data_source "SaksGoldApps" 45 sd1mzp01lx.saksdirect.com:8649
>>>
>>>
>>>
>>> ------------------------------------------------------------------------------
>>> Enable your software for Intel(R) Active Management Technology to meet the
>>> growing manageability and security demands of your customers. Businesses
>>> are taking advantage of Intel(R) vPro (TM) technology - will your software
>>> be a part of the solution? Download the Intel(R) Manageability Checker
>>> today! http://p.sf.net/sfu/intel-dev2devmar
>>> _______________________________________________
>>> Ganglia-general mailing list
>>> Ganglia-general@lists.sourceforge.net
>>> https://lists.sourceforge.net/lists/listinfo/ganglia-general
>>

