On Fri, Aug 19, 2011 at 7:31 AM, Dimitri Maziuk <[email protected]> wrote:
>
> WTH does this mean (from node2):
>
> pengine: [16069]: notice: clone_print:  Master/Slave Set: master_drbd
> pengine: [16069]: notice: short_print:      Masters: [ node1 ]
> pengine: [16069]: notice: short_print:      Slaves: [ node2 ]
> pengine: [16069]: notice: native_print:
> filesystem_drbd (ocf::heartbeat:Filesystem): Started node1
> pengine: [16069]: notice: native_print:
> primitive_nfslock (lsb:nfslock): Started node2
> pengine: [16069]: info: get_failcount: filesystem_drbd has failed
> INFINITY times on node2
> pengine: [16069]: WARN: common_apply_stickiness: Forcing filesystem_drbd
> away from node2 after 1000000 failures (max=1000000)
> pengine: [16069]: info: get_failcount: primitive_nfslock has failed
> INFINITY times on node1
> pengine: [16069]: WARN: common_apply_stickiness: Forcing
> primitive_nfslock away from node1 after 1000000 failures (max=1000000)
>
> Does this mean the nfs filesystem is started on node1 while the statd &
> lockd for it are started on node2? Despite inf: colocation constraint?

No, it means one or more filesystem_drbd and primitive_nfslock
operations failed really badly (filesystem_drbd on node2 and
primitive_nfslock on node1, per the messages above), which is why
their fail counts are at INFINITY and the policy engine is forcing
them away from those nodes.
Possibly it was the initial health check (the probe that verifies the
resource wasn't already running before the cluster started it)
followed by a failed "stop".
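
If you want to confirm, crm_mon shows the fail counts and the failed
actions (including the operation and its exit code), and you can
clear the fail count once the underlying problem is fixed. Roughly
(a sketch, assuming the crm shell is available; adjust resource and
node names to match your setup):

  # show fail counts and the failed actions
  crm_mon -1 -f

  # after fixing the real problem, reset the fail count so the
  # resource is allowed back on node2
  crm resource cleanup filesystem_drbd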

>
> (SL6 w/ stock rpms plus drbd from atrpms)
>
> Dima
> --
> Dimitri Maziuk
> Programmer/sysadmin
> BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
>
>
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
