Thank you, Xavier. Can you give me an example of the gmond.conf used on the cluster "Application Servers"?
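For context while the thread unfolds: a per-cluster gmond.conf of the kind being discussed needs matching send/receive channels on the cluster's multicast group, plus a TCP accept channel that gmetad can poll. A minimal sketch (the multicast address and port are placeholders for illustration, not Ron's or Xavier's actual values):

```
/* gmond.conf sketch for one cluster -- every node in the
   cluster uses the same multicast group and port */
cluster {
  name = "Application Servers"
}
udp_send_channel {
  mcast_join = 239.2.11.71
  port = 8650
  ttl = 1
}
udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8650
  bind = 239.2.11.71
}
/* gmetad connects to this TCP port to pull the cluster's XML */
tcp_accept_channel {
  port = 8650
}
```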
Are there any changes needed in the gmond.conf on the gmetad server to allow "Application Servers" to be collected? I think we are getting closer to my problem... thanks all for the help.

Ron Cavallo
Sr. Director, Infrastructure
Saks Fifth Avenue / Saks Direct
12 East 49th Street
New York, NY 10017
212-451-3807 (O)
212-940-5079 (fax)
646-315-0119 (C)
www.saks.com

-----Original Message-----
From: Xavier Stevens [mailto:xstev...@mozilla.com]
Sent: Wednesday, March 23, 2011 11:25 AM
To: ganglia-general@lists.sourceforge.net
Subject: Re: [Ganglia-general] Need help configuring clusters to use separate multicast IP

Ron,

You will probably want to configure it to check more than one machine per cluster. That way, if that machine is the one that goes down, you don't lose visibility into the whole cluster. Here's an example I pulled from our gmetad server (hostnames changed, of course):

data_source "Application Servers" app1
data_source "Databases" db1
data_source "ETL" etl1
data_source "Elastic Search Cluster" elasticsearch1 elasticsearch2
data_source "Research Cluster" admin1 admin2
gridname "Mozilla Metrics"

I should note that each data source is on a different multicast channel, but we always use the default port (8649) for gmond. Hopefully this helps!

Cheers,

-Xavier

On 3/23/11 8:12 AM, Ron Cavallo wrote:
> I see. So I need a separate IP AND A SEPARATE PORT. Got it.
>
> Also, I use a single gmond in each cluster to aggregate the single
> cluster. I configure the gmetad to talk to only that gmond from each
> cluster. Is that wrong?
>
> -RC
>
> Ron Cavallo
> Sr.
> Director, Infrastructure
> Saks Fifth Avenue / Saks Direct
> 12 East 49th Street
> New York, NY 10017
> 212-451-3807 (O)
> 212-940-5079 (fax)
> 646-315-0119 (C)
> www.saks.com
>
> -----Original Message-----
> From: Seth Graham [mailto:set...@fnal.gov]
> Sent: Wednesday, March 23, 2011 11:06 AM
> To: Ron Cavallo
> Cc: Bernard Li; ganglia-general@lists.sourceforge.net
> Subject: Re: [Ganglia-general] Need help configuring clusters to use
> separate multicast IP
>
> That might work, but I don't think anyone sets up their Ganglia so that
> a single gmond is trying to aggregate all clusters. That's what the
> gmetad daemon is for.
>
> Also note that even though you have a separate multicast address for
> each cluster, the port still has to be unique. The port is what gmetad
> and the web frontend use to distinguish between clusters. You get really
> weird results if multiple data_source lines use the same port.
>
> An ideal configuration might be:
>
> Each of the 5 clusters has a unique gmond.conf, with its own multicast
> address and port number.
>
> The gmetad host has 5 data_source lines to query one host from each of
> the 5 clusters.
>
> On Mar 23, 2011, at 9:52 AM, Ron Cavallo wrote:
>
>> I need some help. I am trying to configure my gmetad to collect from
>> different clusters on different IPs. I have 5 clusters. This is my
>> gmetad collection server's local gmond.conf configuration:
>>
>> /* Feel free to specify as many udp_send_channels as you like. Gmond
>> used to only support having a single channel. */
>> udp_send_channel {
>>   mcast_join = 239.2.11.72
>>   port = 8649
>>   ttl = 1
>> }
>>
>> /* You can specify as many udp_recv_channels as you like as well.
>> */
>> udp_recv_channel {
>>   mcast_join = 239.2.11.71
>>   port = 8649
>>   bind = 239.2.11.71
>> }
>>
>> udp_recv_channel {
>>   mcast_join = 239.2.11.72
>>   port = 8649
>>   bind = 239.2.11.72
>> }
>>
>> udp_recv_channel {
>>   mcast_join = 239.2.11.73
>>   port = 8649
>>   bind = 239.2.11.73
>> }
>>
>> udp_recv_channel {
>>   mcast_join = 239.2.11.74
>>   port = 8649
>>   bind = 239.2.11.74
>> }
>>
>> udp_recv_channel {
>>   mcast_join = 239.2.11.75
>>   port = 8649
>>   bind = 239.2.11.75
>> }
>>
>> udp_recv_channel {
>>   port = 8649
>> }
>>
>> This is an excerpt from ONE OF THE CLUSTERS ABOVE (the .74 cluster):
>>
>> /* Feel free to specify as many udp_send_channels as you like. Gmond
>> used to only support having a single channel. */
>> udp_send_channel {
>>   mcast_join = 239.2.11.74
>>   port = 8649
>>   ttl = 1
>> }
>>
>> /* You can specify as many udp_recv_channels as you like as well. */
>> udp_recv_channel {
>>   mcast_join = 239.2.11.74
>>   port = 8649
>>   bind = 239.2.11.74
>> }
>>
>> I configure only one server in a cluster to be polled from the gmetad
>> since that server has all of the cluster members' information in it
>> anyway. Here is how I have it configured to talk to the one gmond
>> shown directly above:
>>
>> data_source "SaksGoldApps" 45 sd1mzp01lx.saksdirect.com:8649
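Reading Seth's advice against the configs above: the gmetad host's five udp_recv_channel blocks all share port 8649, which is exactly the many-clusters-on-one-port situation he warns about. A sketch of what gmetad.conf data_source lines look like with a unique port per cluster and a failover host per line, as Xavier suggests (the second hostnames and the ports here are invented for illustration; sd1mzp01lx and the 45-second interval come from Ron's config):

```
# gmetad.conf sketch -- one unique port per cluster,
# two hosts per cluster for failover
data_source "SaksGoldApps" 45 sd1mzp01lx.saksdirect.com:8651 sd1mzp02lx.saksdirect.com:8651
data_source "SecondCluster" 45 host3:8652 host4:8652
```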
_______________________________________________
Ganglia-general mailing list
Ganglia-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ganglia-general
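As a debugging aid for threads like this one: gmond dumps its cluster state as XML to any TCP client that connects to its accept port (8649 by default), which is how gmetad polls it. The sketch below fetches and summarizes that dump; the helper functions and hostnames are illustrative, not part of Ganglia, and the parsing is shown against a canned sample of the XML shape gmond emits.

```python
import socket
import xml.etree.ElementTree as ET

def fetch_gmond_xml(host, port=8649, timeout=5):
    """Read the full XML dump gmond writes to any TCP connection."""
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("iso-8859-1")

def clusters_and_hosts(xml_text):
    """Map each CLUSTER name to the list of HOST names it reports."""
    root = ET.fromstring(xml_text)
    return {
        cluster.get("NAME"): [h.get("NAME") for h in cluster.findall("HOST")]
        for cluster in root.iter("CLUSTER")
    }

# Canned sample in the shape gmond emits (attributes trimmed):
sample = """<GANGLIA_XML VERSION="3.1.7" SOURCE="gmond">
  <CLUSTER NAME="SaksGoldApps" LOCALTIME="0" OWNER="" LATLONG="" URL="">
    <HOST NAME="sd1mzp01lx.saksdirect.com" IP="10.0.0.1"/>
  </CLUSTER>
</GANGLIA_XML>"""
print(clusters_and_hosts(sample))
# -> {'SaksGoldApps': ['sd1mzp01lx.saksdirect.com']}
```

Running `clusters_and_hosts(fetch_gmond_xml("some-gmond-host"))` against each host named in a data_source line is a quick way to confirm which cluster (and which members) gmetad will actually see from that host.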