On Thu, Mar 19, 2009 at 07:23, Takenaka Kazuhiro
<takenaka.kazuh...@oss.ntt.co.jp> wrote:
> Hello
>
>> It is completely dependent on your stonith architecture.
>> Some devices support a list of hosts, which means you only need one
>> stonith resource.
>
> Even if there is such a device, the Pacemaker stonith mechanism
> provides insufficient support for it. Suppose the plugin instance
> loses control of one of its target nodes: what should it do? If a
> monitor for the instance returns an error code, Pacemaker loses
> control, through that instance, of all of its target nodes at once.
on-fail=ignore ?  (rough sketch at the end of this message)

> On the other hand, if a monitor for the instance returns a success
> code, Pacemaker cannot be aware of the trouble. In the latter case,
> when the node that the instance cannot control must be shot,
> Pacemaker will ask that now-useless instance to shoot it.
>
> Andrew Beekhof wrote:
>>
>> 2009/3/16 Romi Verma <romi3rd...@gmail.com>:
>>>>
>>>> I guess if the DC fails, then the CRM will elect another node for
>>>> the DC role, and this new DC will try to fence the errant node; if
>>>> it fails to fence, it will choose another node for the fencing task.
>>>
>>> Andrew, could you please confirm the above.
>>
>> [insert same rant as Lars - this is getting _really_ annoying]
>>
>>> One more thing related to stonith design: suppose we have a 4-node
>>> cluster. Do we need 3 stoniths on each node (to make a node eligible
>>> to kill any of the other 3 nodes), which leads to 3x4 stoniths, or
>>> do we need only one stonith on each node, i.e. 4 in total?
>>
>> It is completely dependent on your stonith architecture.
>> Some devices support a list of hosts which means you only need one
>> stonith resource.
>>
>>> I experimented with this and found that 3 stoniths on each node are
>>> required, but I guess that should not be the case: the CIB has the
>>> stonith info, so even if stonith is not running on a node, it should
>>> be able to read the stonith-related info from the CIB.
>>>
>>> Thanks,
>
> --
> Takenaka Kazuhiro <takenaka.kazuh...@oss.ntt.co.jp>
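
For illustration, a minimal sketch of the one-resource-plus-on-fail=ignore
idea in crm shell syntax. Everything concrete in it is an assumption, not
something from this thread: external/ssh appears only because it is a stock
plugin that takes a hostlist parameter, and the node names and interval are
made up.

  # A single stonith resource that claims all four nodes. The
  # external/ssh plugin is for testing only; it stands in here for any
  # device whose plugin accepts a list of target hosts.
  primitive st-all stonith:external/ssh \
      params hostlist="node1 node2 node3 node4" \
      op monitor interval="60s" on-fail="ignore"

With on-fail="ignore", a failed monitor no longer takes the whole device
(and with it all four targets) out of service. The flip side is exactly
Takenaka's second point: Pacemaker then has no way to notice that the
device has lost control of one of its targets.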
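
On the 3x4 question quoted above, one common layout (again only a sketch;
the plugin choice, address, and credentials are placeholders) is one
fencing resource per victim, kept off the node it is meant to shoot. That
is four resources in a four-node cluster rather than twelve, because any
surviving node may run the resource that fences a given victim:

  # Fencing resource for node1; repeat for node2..node4.
  primitive st-node1 stonith:external/ipmi \
      params hostname="node1" ipaddr="192.168.1.101" \
          userid="admin" passwd="secret"
  # Never run node1's fencing device on node1 itself.
  location st-node1-placement st-node1 -inf: node1

A side effect worth noting: with one resource per victim, a failed monitor
costs you control of only that one target, which sidesteps the
all-or-nothing problem described above.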