On 12/16/2009 at 01:41 AM, Поляченко Владимир Владимирович <strafer.ad...@gmail.com> wrote:

> Hi all (sorry for my English; I can read and understand it, but not write it well).
>
> I am configuring a cluster on Fedora 12 (following the "Cluster from Scratch - Apache in Fedora 11" guide).
>
> Packages from the Fedora repo:
>
> [r...@server1 /]# rpm -q pacemaker ocfs2-tools ocfs2-tools-pcmk dlm-pcmk heartbeat corosync resource-agents drbd
>
> pacemaker-1.0.5-4.fc12.i686
> ocfs2-tools-1.4.3-3.fc12.i686
> ocfs2-tools-pcmk-1.4.3-3.fc12.i686
> dlm-pcmk-3.0.6-1.fc12.i686
> heartbeat-3.0.0-0.5.0daab7da36a8.hg.fc12.i686
> corosync-1.2.0-1.fc12.i686
> resource-agents-3.0.6-1.fc12.i686
> drbd-8.3.6-2.fc12.i686
>
> The configuration is Active/Active. The problem is the following (/var/log/messages):
>
> Dec 15 16:07:21 server1 crmd: [1189]: info: te_rsc_command: Initiating action 4: monitor o2cb:0_monitor_0 on server1 (local)
> Dec 15 16:07:21 server1 crmd: [1189]: info: do_lrm_rsc_op: Performing key=4:91:7:78a6a7b0-ef15-434f-8aaf-e00cd0f9d6ef op=o2cb:0_monitor_0 )
> Dec 15 16:07:21 server1 lrmd: [1186]: info: rsc:o2cb:0:101: monitor
> Dec 15 16:07:21 server1 o2cb[20999]: ERROR: Wrong stack o2cb
> Dec 15 16:07:21 server1 lrmd: [1186]: info: RA output: (o2cb:0:monitor:stderr) 2009/12/15_16:07:21 ERROR: Wrong stack o2cb
> Dec 15 16:07:21 server1 crmd: [1189]: info: process_lrm_event: LRM operation o2cb:0_monitor_0 (call=101, rc=5, cib-update=430, confirmed=true) not installed
> Dec 15 16:07:21 server1 crmd: [1189]: WARN: status_from_rc: Action 4 (o2cb:0_monitor_0) on server1 failed (target: 7 vs. rc: 5): Error
> Dec 15 16:07:21 server1 crmd: [1189]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=o2cb:0_monitor_0, magic=0:5;4:91:7:78a6a7b0-ef15-434f-8aaf-e00cd0f9d6ef, cib=0.329.2) : Event failed
> Dec 15 16:07:21 server1 crmd: [1189]: info: update_abort_priority: Abort priority upgraded from 0 to 1
> Dec 15 16:07:21 server1 crmd: [1189]: info: update_abort_priority: Abort action done superceeded by restart
> Dec 15 16:07:21 server1 crmd: [1189]: info: match_graph_event: Action o2cb:0_monitor_0 (4) confirmed on server1 (rc=4)
> Dec 15 16:07:21 server1 crmd: [1189]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on server1 (local) - no waiting
>
> But the resource /dev/drbd1 mounts without problems (the nodes are online; the mount does not start automatically, so I mount it manually).
You don't want to be mounting it manually; the cluster needs to do it for you.

> crm config (only the relevant rows):
> ---------------------------------
> primitive DataFS ocf:heartbeat:Filesystem \
>     params device="/dev/drbd/by-res/data" directory="/opt" fstype="ocfs2" \
>     meta target-role="Started"
> primitive ServerData ocf:linbit:drbd \
>     params drbd_resource="data"
> primitive dlm ocf:pacemaker:controld \
>     op monitor interval="120s"
> primitive o2cb ocf:ocfs2:o2cb \
>     op monitor interval="120s"
> ms ServerDataClone ServerData \
>     meta master-max="2" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> clone dlm-clone dlm \
>     meta interleave="true"
> clone o2cb-clone o2cb \
>     meta interleave="true"
> colocation o2cb-with-dlm inf: o2cb-clone dlm-clone
> order start-o2cb-after-dlm inf: dlm-clone o2cb-clone
> -------------------------
> I created /etc/ocfs2/cluster.conf:
> -------------------------
> node:
>     name = server1
>     cluster = ocfs2
>     number = 0
>     ip_address = 10.10.10.1
>     ip_port = 7777
>
> node:
>     name = server2
>     cluster = ocfs2
>     number = 1
>     ip_address = 10.10.10.2
>     ip_port = 7777
>
> cluster:
>     name = ocfs2
>     node_count = 2
> -----------------------------
> How do I resolve this problem?

You shouldn't need /etc/ocfs2/cluster.conf. AFAIK this is only used in non-Pacemaker environments, when o2cb is managing the cluster.

Did you create your filesystem with o2cb running, or with the Pacemaker cluster? If the former, I'd suggest:

- Make sure o2cb is chkconfig'd off.
- Make sure your Pacemaker cluster is running, and that dlm and ocfs2 are up.
- Run tunefs.ocfs2 --update-cluster-stack (or use mkfs to recreate your clustered filesystem).

One cluster stack can't mount a filesystem created with a different cluster stack.

HTH,
Tim

-- 
Tim Serong <tser...@novell.com>
Senior Clustering Engineer, Novell Inc.
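P.S. The steps above could be sketched as a shell session. This is only a rough sketch, not a tested recipe: it assumes the filesystem lives on /dev/drbd1 (as in the original post), that it is unmounted on all nodes before conversion, and that the Pacemaker cluster with its dlm/o2cb clones is already running:

```shell
# Disable the legacy o2cb init script so the classic o2cb stack
# never claims the cluster at boot, and stop it if it is running.
chkconfig o2cb off
service o2cb stop

# Check that the Pacemaker-managed pieces are up; dlm-clone and
# o2cb-clone should show as Started on both nodes.
crm_mon -1

# Convert the filesystem's on-disk cluster stack from the classic
# o2cb stack to the Pacemaker one. The filesystem must be unmounted
# on every node when this runs.
tunefs.ocfs2 --update-cluster-stack /dev/drbd1

# Alternatively, if the data is disposable, recreate the filesystem
# while the Pacemaker stack is active:
#   mkfs.ocfs2 /dev/drbd1
```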
_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker