You forgot to turn on the monitor operation for ping (the actual job).
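
For example, a minimal sketch (the monitor interval/timeout values here are illustrative, not mandatory):

primitive pinggw ocf:pacemaker:ping \
      params host_list="192.168.101.1" multiplier="100" \
      op monitor interval="10" timeout="60" \
      op start interval="0" timeout="90" \
      op stop interval="0" timeout="100"

Without a recurring monitor op, the agent only pings during start, so the pingd attribute is set once and never updated afterwards.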

On May 11, 2010, at 5:15 AM, Gianluca Cecchi wrote:

> On Mon, May 10, 2010 at 4:39 PM, Vadym Chepkov <vchep...@gmail.com> wrote:
> # crm ra meta ping
> 
> name (string, [undef]): Attribute name
>     The name of the attributes to set.  This is the name to be used in the 
> constraints.
> 
> By default it is "pingd", but you are checking against pinggw.
> 
> I suggest you do not change the name, though, but adjust your location 
> constraint to use pingd instead.
> crm_mon only notices "pingd" at the moment, and only when you pass the -f 
> argument: it's hardcoded.
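> E.g., for a one-shot view that includes the per-node pingd values (this is 
> where the Migration summary in your output below comes from):
> 
> crm_mon -f -1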
> 
> 
> On Mon, May 10, 2010 at 9:34 AM, Gianluca Cecchi <gianluca.cec...@gmail.com> 
> wrote:
> Hello,
> using pacemaker 1.0.8 on RHEL 5, I have some problems understanding how the 
> ping clone works to set up monitoring of the gateway... even after reading the docs...
> 
> As soon as I run:
> crm configure location nfs-group-with-pinggw nfs-group \
>   rule -inf: not_defined pinggw or pinggw lte 0
> 
> the resources get stopped and don't restart...
> 
> [snip]
> 
> Ahem...
> I changed the location line, so that now I have:
> primitive pinggw ocf:pacemaker:ping \
>       params host_list="192.168.101.1" multiplier="100" \
>       op start interval="0" timeout="90" \
>       op stop interval="0" timeout="100"
> 
> clone cl-pinggw pinggw \
>       meta globally-unique="false"
> 
> location nfs-group-with-pinggw nfs-group \
>       rule $id="nfs-group-with-pinggw-rule" -inf: not_defined pingd or pingd lte 0
> 
> But now nothing happens if I run, for example,
>  iptables -A OUTPUT -p icmp -d 192.168.101.1 -j REJECT (or DROP)
> on the node where nfs-group is running...
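> (To double-check that the rule really blocks the path, one could run something 
> like "ping -c 1 -W 1 192.168.101.1" on that node; yet crm_mon -f still shows 
> pingd=100 on both nodes, see the output below.)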
> 
> Do I have to name the primitive itself pingd?
> It seems the /bin/ping binary is not accessed at all (checked with ls -lu ...).
> 
> Or do I have to change the general property I previously defined to avoid 
> failback:
> rsc_defaults $id="rsc-options" \
>       resource-stickiness="100"
> 
> crm_mon -f -r gives:
> Online: [ ha1 ha2 ]
> 
> Full list of resources:
> 
> SitoWeb (ocf::heartbeat:apache):        Started ha1
>  Master/Slave Set: NfsData
>      Masters: [ ha1 ]
>      Slaves: [ ha2 ]
>  Resource Group: nfs-group
>      ClusterIP  (ocf::heartbeat:IPaddr2):     Started ha1
>      lv_drbd0   (ocf::heartbeat:LVM):   Started ha1
>      NfsFS    (ocf::heartbeat:Filesystem):    Started ha1
>      nfssrv     (ocf::heartbeat:nfsserver):     Started ha1
> nfsclient     (ocf::heartbeat:Filesystem):    Started ha2
>  Clone Set: cl-pinggw
>      Started: [ ha2 ha1 ]
> 
> Migration summary:
> * Node ha1:  pingd=100
> * Node ha2:  pingd=100
> 
> Probably I didn't correctly understand what is described at this link:
> http://www.clusterlabs.org/wiki/Pingd_with_resources_on_different_networks
> or it is outdated now... and instead of defining two clones it is better (i.e., 
> it actually works) to populate the host_list parameter, as described at the 
> link below (see the sketch after it), when more than one network is connected:
> 
> http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch09s03s03.html
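> i.e. something like this (a sketch; the second address is only a placeholder 
> for a gateway on the other network):
> 
> primitive pinggw ocf:pacemaker:ping \
>       params host_list="192.168.101.1 192.168.102.1" multiplier="100" \
>       op monitor interval="10" timeout="60"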
> 
> Probably I'm missing something very simple, but I can't figure out what...
> Gianluca

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
