You only need one notifier, usually in the CSG. There is no need for a Proxy anywhere else.
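
For reference, a minimal main.cf sketch of that arrangement (the node names, NIC device, and SNMP/SMTP targets below are placeholders, not taken from this thread): the NotifierMngr simply depends on a NIC resource in the CSG, and no Proxy is involved.

    group ClusterService (
        SystemList = { node1 = 0, node2 = 1 }
        AutoStartList = { node1, node2 }
        )

        NIC csgnic (
            Device = eth0
            )

        NotifierMngr ntfr (
            SnmpConsoles = { "snmpserver" = SevereError }
            SmtpServer = "mailhost"
            SmtpRecipients = { "root@localhost" = Warning }
            )

        ntfr requires csgnic

A Proxy only comes into play when some other service group wants to mirror the state of that NIC resource without monitoring it a second time.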
 

________________________________

From: i man [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 03, 2008 12:00 PM
To: Gene Henriksen
Cc: John Cronin; Jim Senicka; veritas-ha@mailman.eng.auburn.edu
Subject: Re: [Veritas-ha] .stale file


Gene, John, Jim,

That's excellent, and many thanks again for the new ideas. There is one
last query regarding the whole activity.

This is regarding the use of a Proxy resource for the notifier. Nobody has been
able to tell me definitively whether it is required for the notifier. If I
create my notifier in the ClusterService group, or in any other service group,
does it require a Proxy to send alerts? And if it does, and I create the
notifier in a separate service group, is it fine to create the Proxy in the
ClusterService group?

Having gone through the Bundled Agents Reference Guide (BARG), there are sample
configurations which show the notifier depending on a Proxy, but even without
the Proxy things seem to be working fine for me on a test system. Also, when
installing through the GUI it does ask for some NIC card information, a step
which I always skipped; I don't know how relevant this is to the creation and
working of the notifier.

Ciao



On Tue, Jun 3, 2008 at 4:22 PM, Gene Henriksen
<[EMAIL PROTECTED]> wrote:


        Putting the Notifier in the ClusterService group also has an
advantage: the CSG is the first service group up and the hardest to kill, so
in times of widespread problems you are more likely to still get notifications.
If the service group you arbitrarily chose instead is faulted on all systems
in the cluster, notification goes down with it.

         

        You could create the CSG on one system, save the configuration,
run "hacf -cftocmd ." in the /etc/VRTSvcs/conf/config directory, then
edit the resulting main.cmd (look toward the bottom) to find the commands that
create the CSG and Notifier, turn them into a script, and modify it to run on
the other clusters.
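
        For illustration, the extracted portion of main.cmd looks roughly like
the following (group, resource, and system names are placeholders, and the
exact lines will differ from cluster to cluster); wrap it in haconf -makerw /
haconf -dump -makero if you replay it against a running cluster:

            haconf -makerw
            hagrp -add ClusterService
            hagrp -modify ClusterService SystemList node1 0 node2 1
            hagrp -modify ClusterService AutoStartList node1 node2
            hares -add csgnic NIC ClusterService
            hares -modify csgnic Device eth0
            hares -add ntfr NotifierMngr ClusterService
            hares -modify ntfr SnmpConsoles snmpserver SevereError
            hares -modify ntfr Enabled 1
            hares -link ntfr csgnic
            haconf -dump -makero

        Dropped into a script and pushed out with whatever distribution
mechanism you already use, that covers a large number of clusters without
hand-editing main.cf.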

         

        
________________________________


        From: John Cronin [mailto:[EMAIL PROTECTED] 
        Sent: Tuesday, June 03, 2008 10:45 AM
        To: i man
        Cc: Jim Senicka; Gene Henriksen;
veritas-ha@mailman.eng.auburn.edu 

        Subject: Re: [Veritas-ha] .stale file

        

         

        It would be no problem to create a Notifier resource in any
arbitrary service group with the CLI.  If I understand this correctly,
what you are doing is shutting down VCS, and then editing main.cf to
change the config?  If this were for one or two clusters, it might be an
OK way to do it, but if this is for hundreds of systems, it would be
better to learn how to use the CLI and then script the changes.
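
        For example, something along these lines would create a notifier in a
hypothetical service group named notify_sg with the cluster running (all names
here are made up for illustration):

            haconf -makerw                           # open the configuration
            hagrp -add notify_sg
            hagrp -modify notify_sg SystemList node1 0 node2 1
            hagrp -modify notify_sg AutoStartList node1 node2
            hares -add ntfr NotifierMngr notify_sg
            hares -modify ntfr SmtpServer mailhost
            hares -modify ntfr SmtpRecipients root@localhost Warning
            hares -modify ntfr Enabled 1
            haconf -dump -makero                     # save and close the configuration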

         

        Also, what is the problem with putting the notifier in the
ClusterService group?  I can't see how putting it in another service
group would provide you any particular benefit - the Notifier is going
to do the same things no matter which service group it is in.  Since it
is a cluster wide service, it makes sense that it should be in the
ClusterService group.

         

        As for using "hastop -all -force", I tend to use it frequently
on production systems when I am doing something that requires stopping
the cluster, but does not require stopping the systems or the services
running on those systems (e.g. patching or upgrading VCS, or
reconfiguring GAB or LLT).  However, I would not do this to accomplish
something that can be done with CLI commands.
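
        In rough terms the sequence is (a sketch, assuming the applications
should stay up while VCS itself is stopped):

            haconf -dump -makero    # make sure the config is saved and closed first (no .stale left behind)
            hastop -all -force      # stop HAD on every node, leave the applications running
            # ... patch or upgrade VCS, or adjust /etc/llttab, /etc/gabtab, etc. ...
            hastart                 # run on each node; HAD restarts and probes the still-running resources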

         

        -- 

        John Cronin
         

        On 6/3/08, i man <[EMAIL PROTECTED]> wrote: 

        Correct, Jim. If this were the normal ClusterService group I would
have loved to do that. What I'm trying to achieve is creating the SNMP
notifier in a separate service group. Through the GUI you cannot create it in
a service group of your own; you can only create it as part of the
ClusterService group. I'm not sure whether this is achievable through the CLI.
        
        Any suggestions?

         

        On Tue, Jun 3, 2008 at 2:52 PM, Jim Senicka
<[EMAIL PROTECTED]> wrote:

        Right.

        But that can also be done via CLI or GUI with the cluster
running.

         

         

         

        
________________________________


        From: i man [mailto:[EMAIL PROTECTED] 
        Sent: Tuesday, June 03, 2008 9:48 AM
        To: Jim Senicka
        Cc: Gene Henriksen; veritas-ha@mailman.eng.auburn.edu 

        
        Subject: Re: [Veritas-ha] .stale file
         

        
         

        Jim,
        
        This is to update systems with some new service groups. This is
not on a single system but rather a large number of systems (100+).
        
        Also so many thanks to Gene and John for resolving my doubts.
        
        Ciao,

        On Tue, Jun 3, 2008 at 2:30 PM, Jim Senicka
<[EMAIL PROTECTED]> wrote:

        The bigger question is: what are you routinely using stop -force to
accomplish?

         

         

        
________________________________


        From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gene
Henriksen
        Sent: Tuesday, June 03, 2008 8:17 AM
        To: i man; veritas-ha@mailman.eng.auburn.edu
        Subject: Re: [Veritas-ha] .stale file
        
         

        It indicates you did not close and save the cluster
configuration after making modifications. It is a warning. If you close
and save the config, it goes away.
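
        In command terms (a minimal sketch): .stale appears when the
configuration is opened read-write and goes away when it is dumped and closed:

            haconf -makerw          # open the config for writing; VCS creates .stale
            # ... make changes with hagrp / hares ...
            haconf -dump -makero    # write main.cf and close the config; .stale is removed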

         

        
________________________________


        From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of i man
        Sent: Tuesday, June 03, 2008 7:28 AM
        To: veritas-ha@mailman.eng.auburn.edu
        Subject: [Veritas-ha] .stale file

         

        All,
        
        I had some queries regarding the .stale file present in the
/etc/VRTSvcs/conf/config directory. I know that if the HA agents are
restarted with hastop -all -force and this file is present, the cluster
members can come up in the STALE_ADMIN_WAIT state. I have been deleting this
file, then running hastop -all -force and then hastart on the nodes. I do not
want the service groups to go offline, which is why I use -force.
        
        My query is: what is the use of .stale?
        Would hastart -force help to get the nodes back if this file is
present?
        Is deleting the file the only method to get the nodes back?
        
        I noticed recently that when getting the cluster back this way,
my clusters lose the information about the admin password. I think I'm doing
something wrong... any help?
        
        Ciao.

        
         

        
        
        

        

         


_______________________________________________
Veritas-ha maillist  -  Veritas-ha@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-ha
