Hello Junko,
thanks for the answer. I gave Pacemaker a try, and I think I'll stick with it.
Unfortunately it didn't solve my problem; some resources are still being
restarted for no apparent reason. For example, if I put one node (vbox4) into
standby mode, clvmd gets restarted on the other node.
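
For reference, I'm toggling standby roughly like this (I'm not sure off the
top of my head whether crm_standby wants on/off or true/false as the value,
so treat the exact invocation as approximate):

crm_standby -U vbox4 -v on    # put vbox4 into standby (or "-v true"?)
crm_standby -U vbox4 -v off   # bring it back online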

log output:

May  9 12:36:59 vbox3 pengine: [10199]: info: unpack_nodes: Node vbox4 is in 
standby-mode
May  9 12:36:59 vbox3 pengine: [10199]: info: determine_online_status: Node 
vbox3 is online
May  9 12:36:59 vbox3 pengine: [10199]: info: determine_online_status: Node 
vbox4 is standby
May  9 12:36:59 vbox3 pengine: [10199]: info: unpack_find_resource: Internally 
renamed drbd0:0 on vbox4 to drbd0:1
May  9 12:36:59 vbox3 pengine: [10199]: info: unpack_find_resource: Internally 
renamed cman:0 on vbox4 to cman:1
May  9 12:36:59 vbox3 pengine: [10199]: info: unpack_find_resource: Internally 
renamed clvmd:0 on vbox4 to clvmd:1
May  9 12:36:59 vbox3 pengine: [10199]: notice: clone_print: Master/Slave Set: 
drbd0
May  9 12:36:59 vbox3 pengine: [10199]: notice: native_print:     drbd0:0       
(ocf::heartbeat:drbd):  Master vbox3
May  9 12:36:59 vbox3 pengine: [10199]: notice: native_print:     drbd0:1       
(ocf::heartbeat:drbd):  Master vbox4
May  9 12:36:59 vbox3 pengine: [10199]: notice: clone_print: Clone Set: 
cman_clone
May  9 12:36:59 vbox3 pengine: [10199]: notice: native_print:     cman:0        
(lsb:cman):     Started vbox3
May  9 12:36:59 vbox3 pengine: [10199]: notice: native_print:     cman:1        
(lsb:cman):     Started vbox4
May  9 12:36:59 vbox3 pengine: [10199]: notice: clone_print: Clone Set: 
clvmd_clone
May  9 12:36:59 vbox3 pengine: [10199]: notice: native_print:     clvmd:0       
(lsb:lxclvmd):  Started vbox3
May  9 12:36:59 vbox3 pengine: [10199]: notice: native_print:     clvmd:1       
(lsb:lxclvmd):  Started vbox4
May  9 12:36:59 vbox3 pengine: [10199]: WARN: native_color: Resource drbd0:1 
cannot run anywhere
May  9 12:36:59 vbox3 pengine: [10199]: info: master_color: Promoting drbd0:0
May  9 12:36:59 vbox3 pengine: [10199]: info: master_color: drbd0: Promoted 1 
instances of a possible 2 to master
May  9 12:36:59 vbox3 pengine: [10199]: WARN: native_color: Resource cman:1 
cannot run anywhere
May  9 12:36:59 vbox3 pengine: [10199]: info: master_color: Promoting drbd0:0
May  9 12:36:59 vbox3 pengine: [10199]: info: master_color: drbd0: Promoted 2 
instances of a possible 2 to master
May  9 12:36:59 vbox3 pengine: [10199]: info: master_color: Promoting drbd0:0
May  9 12:36:59 vbox3 pengine: [10199]: info: master_color: drbd0: Promoted 2 
instances of a possible 2 to master
May  9 12:36:59 vbox3 pengine: [10199]: WARN: native_color: Resource clvmd:1 
cannot run anywhere
May  9 12:36:59 vbox3 pengine: [10199]: notice: DemoteRsc: vbox3        Demote 
drbd0:0
May  9 12:36:59 vbox3 pengine: [10199]: notice: NoRoleChange: Leave resource 
drbd0:0    (vbox3)
May  9 12:36:59 vbox3 pengine: [10199]: notice: DemoteRsc: vbox4        Demote 
drbd0:1
May  9 12:36:59 vbox3 pengine: [10199]: notice: StopRsc:   vbox4        Stop 
drbd0:1
May  9 12:36:59 vbox3 pengine: [10199]: notice: DemoteRsc: vbox3        Demote 
drbd0:0
May  9 12:36:59 vbox3 pengine: [10199]: notice: NoRoleChange: Leave resource 
drbd0:0    (vbox3)
May  9 12:36:59 vbox3 pengine: [10199]: notice: DemoteRsc: vbox4        Demote 
drbd0:1
May  9 12:36:59 vbox3 pengine: [10199]: notice: StopRsc:   vbox4        Stop 
drbd0:1
May  9 12:36:59 vbox3 pengine: [10199]: notice: NoRoleChange: Leave resource 
cman:0     (vbox3)
May  9 12:36:59 vbox3 pengine: [10199]: notice: StopRsc:   vbox4        Stop 
cman:1
May  9 12:36:59 vbox3 pengine: [10199]: notice: NoRoleChange: Leave resource 
clvmd:0    (vbox3)
May  9 12:36:59 vbox3 pengine: [10199]: notice: StopRsc:   vbox4        Stop 
clvmd:1
May  9 12:36:59 vbox3 pengine: [10199]: notice: update_action: Processing 
action drbd0_promote_0: optional runnable pseudo
May  9 12:36:59 vbox3 pengine: [10199]: notice: update_action:    Checking 
action drbd0_stopped_0: required runnable pseudo (flags=0x100100)
May  9 12:36:59 vbox3 crmd: [10193]: info: do_state_transition: State 
transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_I

May  9 12:36:59 vbox3 pengine: [10199]: WARN: process_pe_message: Transition 6: 
WARNINGs found during PE processing. PEngine Input stored in: 
/var/lib/heartbeat/pengine/pe-warn-241.bz2
May  9 12:36:59 vbox3 pengine: [10199]: info: process_pe_message: Configuration 
WARNINGs found during PE processing.  Please run "crm_verify -L" to identify 
issues.
May  9 12:36:59 vbox3 crmd: [10193]: info: do_lrm_rsc_op: Performing 
op=drbd0:0_notify_0 key=62:6:e3934610-eeb6-417a-80ba-9f35b435b6bb)
May  9 12:36:59 vbox3 tengine: [10198]: info: unpack_graph: Unpacked transition 
6: 32 actions in 32 synapses
May  9 12:36:59 vbox3 lrmd: [10190]: info: rsc:drbd0:0: notify
May  9 12:36:59 vbox3 tengine: [10198]: info: te_pseudo_action: Pseudo action 
33 fired and confirmed
May  9 12:36:59 vbox3 tengine: [10198]: info: te_pseudo_action: Pseudo action 
49 fired and confirmed
May  9 12:36:59 vbox3 tengine: [10198]: info: send_rsc_command: Initiating 
action 62: drbd0:0_pre_notify_demote_0 on vbox3
May  9 12:36:59 vbox3 tengine: [10198]: info: send_rsc_command: Initiating 
action 65: drbd0:1_pre_notify_demote_0 on vbox4
May  9 12:36:59 vbox3 tengine: [10198]: info: send_rsc_command: Initiating 
action 44: clvmd:0_stop_0 on vbox3
May  9 12:36:59 vbox3 tengine: [10198]: info: send_rsc_command: Initiating 
action 46: clvmd:1_stop_0 on vbox4
May  9 12:36:59 vbox3 crmd: [10193]: info: do_lrm_rsc_op: Performing 
op=clvmd:0_stop_0 key=44:6:e3934610-eeb6-417a-80ba-9f35b435b6bb)
May  9 12:36:59 vbox3 lrmd: [10190]: info: rsc:clvmd:0: stop
May  9 12:36:59 vbox3 lrmd: [13247]: WARN: For LSB init script, no additional 
parameters are needed.
May  9 12:36:59 vbox3 crmd: [10193]: info: process_lrm_event: LRM operation 
drbd0:0_notify_0 (call=17, rc=0) complete
May  9 12:36:59 vbox3 tengine: [10198]: info: match_graph_event: Action 
drbd0:0_pre_notify_demote_0 (62) confirmed on vbox3 (rc=0)
May  9 12:36:59 vbox3 cib: [13243]: info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
May  9 12:36:59 vbox3 cib: [13243]: info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: 
/var/lib/heartbeat/crm/cib.xml.sig.last)
May  9 12:36:59 vbox3 lrmd: [10190]: info: RA output: (clvmd:0:stop:stdout) 
Deactivating VG vgshared:
May  9 12:36:59 vbox3 cib: [13243]: info: write_cib_contents: Wrote version 
0.26.1 of the CIB to disk (digest: acb6f794779913382b7b62a2a2c84177)
May  9 12:36:59 vbox3 cib: [13243]: info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml (digest: 
/var/lib/heartbeat/crm/cib.xml.sig)
May  9 12:36:59 vbox3 cib: [13243]: info: retrieveCib: Reading cluster 
configuration from: /var/lib/heartbeat/crm/cib.xml.last (digest: 
/var/lib/heartbeat/crm/cib.xml.sig.last)
May  9 12:36:59 vbox3 lrmd: [10190]: info: RA output: (clvmd:0:stop:stderr) 
File descriptor 3 left open File descriptor 4 left open File descriptor 5 left

etc.

What I don't really understand is the part about the renaming:
May  9 12:36:59 vbox3 pengine: [10199]: info: unpack_find_resource: Internally 
renamed drbd0:0 on vbox4 to drbd0:1
May  9 12:36:59 vbox3 pengine: [10199]: info: unpack_find_resource: Internally 
renamed cman:0 on vbox4 to cman:1
May  9 12:36:59 vbox3 pengine: [10199]: info: unpack_find_resource: Internally 
renamed clvmd:0 on vbox4 to clvmd:1
Why does that happen? Could it be related to the restarts?
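
Regarding your globally_unique="false" suggestion: I suppose the clone would
have to be changed to something like the following? This is only my guess at
the 0.6 CIB syntax; the meta_attributes/nvpair IDs are made up for the
example, so please correct me if I got the placement wrong:

cibadmin -o resources -M -X '<clone id="clvmd_clone">
  <meta_attributes id="clvmd_clone_meta"> <!-- IDs here are just my guess -->
    <attributes>
      <nvpair id="clvmd_clone_gu" name="globally_unique" value="false"/>
    </attributes>
  </meta_attributes>
</clone>'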

Here is the showscores output, in case it helps:
clvmd:0             INFINITY  vbox3           0
clvmd:0             INFINITY  vbox4           0
clvmd:1             -INFINITY vbox3           0
clvmd:1             INFINITY  vbox4           0
cman:0              0         vbox4           0
cman:0              1         vbox3           0
cman:1              1         vbox4           0
cman:1              -INFINITY vbox3           0
drbd0:0             0         vbox4           0
drbd0:0             77        vbox3           0
drbd0:0_(master)    76        vbox3           0
drbd0:0_(master)    INFINITY  vbox3           0
drbd0:1             76        vbox4           0
drbd0:1             -INFINITY vbox3           0
drbd0:1_(master)    75        vbox4           0
drbd0:1_(master)    INFINITY  vbox4           0

Sorry for such a long post; I hope that's not a problem.

Best regards!
Thanks,
nik


> Is this the same problem as this?
> Clone instance might be shuffled unexpectedly.
> http://bugs.clusterlabs.org/cgi-bin/bugzilla/show_bug.cgi?id=1
> 
> It was fixed in Pacemaker 0.6.3.
> http://hg.clusterlabs.org/pacemaker/stable-0.6/rev/60bc7b0a6ad4
> 
> You can set globally_unique="false" to clone.
> 
> Thanks,
> Junko
> 

-- 
-------------------------------------
Nikola CIPRICH
LinuxBox.cz, s.r.o.
28. rijna 168, 709 01 Ostrava

tel.:   +420 596 603 142
fax:    +420 596 621 273
mobil:  +420 777 093 799
www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: [EMAIL PROTECTED]
-------------------------------------