Re: [Pacemaker] node can't join cluster after reboot
Hi Vlad,

No, I haven't. 2.34 wasn't available when I had the problem. FYI, I'm still using glib-2.30.3 and I'll probably stay there until 2.34.0 is stable or 2.32.4-r2 comes out :)

Pat.

On 05/11/2012 7:57 AM, Vladimir Elisseev wrote:
Patrick, Gentoo devs suggested trying >=dev-libs/glib-2.34. Have you tried this already? Regards, Vlad.

On Sun, 2012-11-04 at 08:45 -0800, Patrick Irvine wrote:
Hey guys, just for the record: I just noticed this thread and it sounded familiar. I checked my Gentoo systems and found I had to mask glib-2.32.4-r1 and use glib-2.30.3 in order to get corosync/pacemaker to work. I had the same problem: nodes couldn't talk to each other. Sorry I didn't notice this thread earlier, as I might have been able to help. Pat.

On 04/11/2012 3:15 AM, Vladimir Elisseev wrote:
Thanks for the explanation. I had already seen coredumps in the directories you mentioned. The "suspicious -r1" includes two patches on top of the vanilla glib release:
https://bugzilla.gnome.org/show_bug.cgi?id=679306
http://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/dev-libs/glib/files/glib-2.32.4-CVE-2012-3524.patch?view=markup
For the moment I have simply masked this particular glib version. Hopefully I'll find time to do a complete debug as you described. Regards, Vlad.

On Sun, 2012-11-04 at 13:54 +0300, Vladislav Bogdanov wrote:
03.11.2012 18:22, Vladimir Elisseev wrote:
Vladislav, thanks for the hint! Upgrading glib from 2.30.3 to 2.32.4 triggers this behavior of corosync. Do you know where I can find more info regarding this problem?
That is not corosync but pacemaker, which heavily uses glib internally, and glib is the only package in your list which could affect pacemaker. I would say that is a regression in that specific glib version or build: library behavior changed without bumping the major so-number. You'd better talk to your distribution maintainers. Also, the -r1 in the glib version you installed looks suspicious. Do you know what it means?
One more note: cib exits with signal 6 (SIGABRT), which usually means you hit an assert in the code. That usually produces a memory dump. Look in /var/lib/heartbeat/cores or /var/lib/pacemaker/cores for relevant core files; if there are none, you need to enable coredumps first. Then install the debuginfo packages for pacemaker and glib (that is very distribution-specific, so I cannot help with that). After that you can analyze the relevant core files with 'gdb '. Just run 'bt full', and that should be enough to find exactly which code path caused the SIGABRT.
Vladislav
Vlad.

On Sat, 2012-11-03 at 16:22 +0300, Vladislav Bogdanov wrote:
03.11.2012 15:26, Vladimir Elisseev wrote:
I've been able to reproduce the problem. Herewith I've attached crm_report tarballs from both nodes. I don't know which particular package triggered the problem, but below is the list of what was updated. Hopefully this helps.
I bet it is glib. Vladislav
Regards, Vlad.
Sat Nov 3 12:15:40 2012 <<< sys-apps/busybox-1.20.2
Sat Nov 3 12:15:42 2012 >>> sys-apps/busybox-1.20.2
Sat Nov 3 12:15:50 2012 <<< sys-fs/dosfstools-3.0.9
Sat Nov 3 12:15:52 2012 >>> sys-fs/dosfstools-3.0.12
Sat Nov 3 12:16:00 2012 <<< dev-lang/nasm-2.10.01
Sat Nov 3 12:16:02 2012 >>> dev-lang/nasm-2.10.05
Sat Nov 3 12:16:11 2012 <<< dev-libs/libgamin-0.1.10-r2
Sat Nov 3 12:16:13 2012 >>> dev-libs/libgamin-0.1.10-r3
Sat Nov 3 12:16:40 2012 <<< media-fonts/droid-113-r1
Sat Nov 3 12:16:46 2012 >>> media-fonts/droid-113-r2
Sat Nov 3 12:16:54 2012 <<< media-libs/libpng-1.5.10
Sat Nov 3 12:16:56 2012 >>> media-libs/libpng-1.5.13-r1
Sat Nov 3 12:17:04 2012 <<< app-arch/unzip-6.0-r1
Sat Nov 3 12:17:05 2012 >>> app-arch/unzip-6.0-r3
Sat Nov 3 12:17:12 2012 <<< app-arch/rpm2targz-9.0.0.4g
Sat Nov 3 12:17:14 2012 >>> app-arch/rpm2targz-9.0.0.5g
Sat Nov 3 12:17:22 2012 <<< app-arch/pbzip2-1.1.5
Sat Nov 3 12:17:24 2012 >>> app-arch/pbzip2-1.1.8
Sat Nov 3 12:17:34 2012 <<< app-arch/zip-3.0
Sat Nov 3 12:17:35 2012 >>> app-arch/zip-3.0-r1
Sat Nov 3 12:17:43 2012 <<< sys-process/htop-1.0.1
Sat Nov 3 12:17:45 2012 >>> sys-process/htop-1.0.1-r1
Sat Nov 3 12:17:55 2012 <<< media-libs/tiff-4.0.2
Sat Nov 3 12:17:57 2012 >>> media-libs/tiff-4.0.2-r1
Sat Nov 3 12:18:04 2012 <<< net-ftp/tftp-hpa-5.1
Sat Nov 3 12:18:06 2012 >>> net-ftp/tftp-hpa-5.2
Sat Nov 3 12:18:18 2012 <<< media-video/ffmpeg-0.10.3
Sat Nov 3 12:18:20 2012 >>> media-video/ffmpeg-0.10.3
Sat Nov 3 12:18:35 2012 <<< sys-devel/gettext-0.18.1.1-r1
Sat Nov 3 12:18:37 2012 >>> sys-devel/gettext-0.18.1.1-r3
Sat Nov 3 12:18:44 2012 <<< app-admin/logrotate-3.8.1 Sat
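Vladislav's coredump recipe above (enable cores, find the dump directory, install debuginfo, then 'bt full' in gdb) can be sketched as shell commands. The binary and core-file paths in the comments are assumptions for illustration only; the real locations depend on the distribution and Pacemaker version:

```shell
# Raise the core-size limit so a crashing cib process can actually write a core file.
ulimit -c unlimited
ulimit -c   # should now report the raised limit

# Pacemaker's usual dump directories (check both; the layout varies by version):
ls -l /var/lib/heartbeat/cores /var/lib/pacemaker/cores 2>/dev/null || true

# With debuginfo for pacemaker and glib installed, a full backtrace of the
# aborting cib would look something like this (paths are illustrative):
#   gdb /usr/lib/pacemaker/cib /var/lib/pacemaker/cores/core.<pid>
#   (gdb) bt full
```

Note that the ulimit change only affects the current shell and its children; making it stick for a daemon started at boot is distribution-specific (init script or limits configuration).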
Re: [Pacemaker] node can't join cluster after reboot
Hey guys,

Just for the record: I just noticed this thread and it sounded familiar. I checked my Gentoo systems and found I had to mask glib-2.32.4-r1 and use glib-2.30.3 in order to get corosync/pacemaker to work. I had the same problem: nodes couldn't talk to each other. Sorry I didn't notice this thread earlier, as I might have been able to help.

Pat.

On 04/11/2012 3:15 AM, Vladimir Elisseev wrote:
Thanks for the explanation. I had already seen coredumps in the directories you mentioned. The "suspicious -r1" includes two patches on top of the vanilla glib release:
https://bugzilla.gnome.org/show_bug.cgi?id=679306
http://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/dev-libs/glib/files/glib-2.32.4-CVE-2012-3524.patch?view=markup
For the moment I have simply masked this particular glib version. Hopefully I'll find time to do a complete debug as you described. Regards, Vlad.

On Sun, 2012-11-04 at 13:54 +0300, Vladislav Bogdanov wrote:
03.11.2012 18:22, Vladimir Elisseev wrote:
Vladislav, thanks for the hint! Upgrading glib from 2.30.3 to 2.32.4 triggers this behavior of corosync. Do you know where I can find more info regarding this problem?
That is not corosync but pacemaker, which heavily uses glib internally, and glib is the only package in your list which could affect pacemaker. I would say that is a regression in that specific glib version or build: library behavior changed without bumping the major so-number. You'd better talk to your distribution maintainers. Also, the -r1 in the glib version you installed looks suspicious. Do you know what it means?
One more note: cib exits with signal 6 (SIGABRT), which usually means you hit an assert in the code. That usually produces a memory dump. Look in /var/lib/heartbeat/cores or /var/lib/pacemaker/cores for relevant core files; if there are none, you need to enable coredumps first. Then install the debuginfo packages for pacemaker and glib (that is very distribution-specific, so I cannot help with that).
After that you can analyze the relevant core files with 'gdb '. Just run 'bt full', and that should be enough to find exactly which code path caused the SIGABRT.
Vladislav
Vlad.

On Sat, 2012-11-03 at 16:22 +0300, Vladislav Bogdanov wrote:
03.11.2012 15:26, Vladimir Elisseev wrote:
I've been able to reproduce the problem. Herewith I've attached crm_report tarballs from both nodes. I don't know which particular package triggered the problem, but below is the list of what was updated. Hopefully this helps.
I bet it is glib. Vladislav
Regards, Vlad.
Sat Nov 3 12:15:40 2012 <<< sys-apps/busybox-1.20.2
Sat Nov 3 12:15:42 2012 >>> sys-apps/busybox-1.20.2
Sat Nov 3 12:15:50 2012 <<< sys-fs/dosfstools-3.0.9
Sat Nov 3 12:15:52 2012 >>> sys-fs/dosfstools-3.0.12
Sat Nov 3 12:16:00 2012 <<< dev-lang/nasm-2.10.01
Sat Nov 3 12:16:02 2012 >>> dev-lang/nasm-2.10.05
Sat Nov 3 12:16:11 2012 <<< dev-libs/libgamin-0.1.10-r2
Sat Nov 3 12:16:13 2012 >>> dev-libs/libgamin-0.1.10-r3
Sat Nov 3 12:16:40 2012 <<< media-fonts/droid-113-r1
Sat Nov 3 12:16:46 2012 >>> media-fonts/droid-113-r2
Sat Nov 3 12:16:54 2012 <<< media-libs/libpng-1.5.10
Sat Nov 3 12:16:56 2012 >>> media-libs/libpng-1.5.13-r1
Sat Nov 3 12:17:04 2012 <<< app-arch/unzip-6.0-r1
Sat Nov 3 12:17:05 2012 >>> app-arch/unzip-6.0-r3
Sat Nov 3 12:17:12 2012 <<< app-arch/rpm2targz-9.0.0.4g
Sat Nov 3 12:17:14 2012 >>> app-arch/rpm2targz-9.0.0.5g
Sat Nov 3 12:17:22 2012 <<< app-arch/pbzip2-1.1.5
Sat Nov 3 12:17:24 2012 >>> app-arch/pbzip2-1.1.8
Sat Nov 3 12:17:34 2012 <<< app-arch/zip-3.0
Sat Nov 3 12:17:35 2012 >>> app-arch/zip-3.0-r1
Sat Nov 3 12:17:43 2012 <<< sys-process/htop-1.0.1
Sat Nov 3 12:17:45 2012 >>> sys-process/htop-1.0.1-r1
Sat Nov 3 12:17:55 2012 <<< media-libs/tiff-4.0.2
Sat Nov 3 12:17:57 2012 >>> media-libs/tiff-4.0.2-r1
Sat Nov 3 12:18:04 2012 <<< net-ftp/tftp-hpa-5.1
Sat Nov 3 12:18:06 2012 >>> net-ftp/tftp-hpa-5.2
Sat Nov 3 12:18:18 2012 <<< media-video/ffmpeg-0.10.3
Sat Nov 3 12:18:20 2012 >>> media-video/ffmpeg-0.10.3
Sat Nov 3 12:18:35 2012 <<< sys-devel/gettext-0.18.1.1-r1
Sat Nov 3 12:18:37 2012 >>> sys-devel/gettext-0.18.1.1-r3
Sat Nov 3 12:18:44 2012 <<< app-admin/logrotate-3.8.1
Sat Nov 3 12:18:46 2012 >>> app-admin/logrotate-3.8.2
Sat Nov 3 12:18:54 2012 <<< media-libs/libwebp-0.1.3
Sat Nov 3 12:18:55 2012 >>> media-libs/libwebp-0.2.0
Sat Nov 3 12:19:03 2012 <<< dev-perl/Convert-ASN1-0.220.0
Sat Nov 3 12:19:05 2012 >>> dev-perl/Convert-ASN1-0.260.0
Sat Nov 3 12:19:13 2012 <<< dev-perl/net-server-0.97
Sat Nov 3 12:19:15 2012 >>> dev-perl/net-server-2.6.0
Sat Nov 3 12:19:24 2012 <<< dev-perl/Config-IniFiles-2.710.0
Sat Nov 3 12:19:26 2012 >>> dev-perl/Config-IniFiles-2.760.0
Sat Nov 3 12:19:33 2012 <<< dev-perl/HTTP-Date-6.0.0
Sat Nov 3 12:19:35 2012 >>> dev-perl/HTTP-Date-6.20.0
Sat Nov 3 12:19:44 2012 <<< sys-boot/syslinux-4.06_pre11
Sat Nov 3 12:19:46 2012 >>> sys-boot/syslinux-4.06
Sat Nov 3 12:20:05 2012 <<< dev-libs/glib-2.30.3 S
Re: [Pacemaker] order constraint based on any one of many
-----Original Message-----
From: Andrew Beekhof [mailto:and...@beekhof.net]
Sent: Thursday, September 02, 2010 12:03 AM
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] order constraint based on any one of many

On Fri, Aug 27, 2010 at 6:06 PM, Patrick Irvine wrote:
>
> Hi, and thanks for the response.
>
> The issue is this: glfs must be restricted from starting until at least one
> of the glfsd clones has started. The glfsd clones provide the back-end
> storage for the glfs mounts, so glfsd doesn't really need to be a M/S.
> This is just a way I came up with to make glfs wait for any single glfsd,
> as opposed to waiting for all glfsd clones.
>
> What I really need is some order constraint (or some other mechanism)
> with logic like the following (to use some C syntax):
>
> if ( glfsd-1 || glfsd-2 || glfsd-3 || glfsd-4 ) then start glfs-clones
>

We want to support this, but the feature isn't fully baked yet. For now, what you've done is the only way.

Thanks Andrew, I'll go ahead with the m/s plan. Btw, thanks for all your hard work!

Pat.

___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
Re: [Pacemaker] order constraint based on any one of many
-----Original Message-----
From: Andrew Beekhof [mailto:and...@beekhof.net]
Sent: Friday, August 27, 2010 7:24 AM
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] order constraint based on any one of many

On Tue, Aug 24, 2010 at 4:03 AM, Patrick Irvine wrote:
> Hi Vishal & list,
>
> Thanks for the info. Unfortunately that won't do, since this clone (glfs)
> is the actual mounting of the users' home directories and needs to be
> mounted whether the local glfsd (server) is running or not. I do think I
> have a solution; it's somewhat of a hack.
>
> If I turn my glfsd-x (servers) into a single master with multiple slaves
> (cloned resources), then I could order the glfs (client) clone after the
> master starts, i.e.:
>
> order glfs-after-glfsd-ORDER inf: clone-glfsd:master clone-glfs
>
> This would achieve what I want, I think. I would of course have to ensure
> that even if only one of the glfsd-x servers is running, it would be master.
>
> Any comments?

That should work, though I don't really understand why glfsd can't just be a regular clone. Why would it need to be a master/slave?

Also a possibility is to clone the glfsd group (i.e. glfsd-1-GROUP) - the trick is making the IPaddr agent factor in the clone number when deciding which IP to start. There should be something like that in IPaddr or IPaddr2, but it may not be fully baked.

> (and was I understandable?)

Yes :-)

Hi, and thanks for the response. The issue is this: glfs must be restricted from starting until at least one of the glfsd clones has started. The glfsd clones provide the back-end storage for the glfs mounts, so glfsd doesn't really need to be a M/S. This is just a way I came up with to make glfs wait for any single glfsd, as opposed to waiting for all glfsd clones.

What I really need is some order constraint (or some other mechanism) with logic like the following (to use some C syntax):

if ( glfsd-1 || glfsd-2 || glfsd-3 || glfsd-4 ) then start glfs-clones

Thanks again for the reply.

Pat
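The master/slave workaround Pat settles on can be sketched in crm shell syntax. This is a hypothetical reconstruction from his one-line description, not configuration from the thread: the ms resource name and meta attributes are assumptions.

```
# Hypothetical sketch: wrap glfsd in a master/slave set so that
# "any one glfsd is up" maps to "the master instance is promoted",
# then order the client clone after the promotion.
ms ms-glfsd glfsd \
    meta clone-max="4" master-max="1" target-role="Started"
order glfs-after-glfsd inf: ms-glfsd:promote clone-glfs:start
```

As Andrew notes above, this is a workaround: the master role carries no functional meaning here, it only gives the ordering constraint a single resource to wait on.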
Re: [Pacemaker] order constraint based on any one of many
Hi Vishal & list,

Thanks for the info. Unfortunately that won't do, since this clone (glfs) is the actual mounting of the users' home directories and needs to be mounted whether the local glfsd (server) is running or not. I do think I have a solution; it's somewhat of a hack.

If I turn my glfsd-x (servers) into a single master with multiple slaves (cloned resources), then I could order the glfs (client) clone after the master starts, i.e.:

order glfs-after-glfsd-ORDER inf: clone-glfsd:master clone-glfs

This would achieve what I want, I think. I would of course have to ensure that even if only one of the glfsd-x servers is running, it would be master.

Any comments? (And was I understandable?)

Pat.

On 23/08/2010 12:49 AM, Vishal wrote:
From what I have read and understood, a clone will run simultaneously on all the nodes specified in the config. I do not see how a clone will run once a resource group is started. You can alternatively add the clone to each group and ensure that when either group starts, the clone runs with it.

On Aug 23, 2010, at 11:50 AM, Patrick Irvine wrote:
Hi,

I am setting up a Pacemaker/Corosync/Glusterfs HA cluster set. Pacemaker ver. 1.0.9.1. With Glusterfs I have 4 nodes serving replicated (RAID1) storage back-ends and up to 5 servers mounting the store. Without getting into the specifics of how Gluster works, simply put: as long as any one of the 4 back-end nodes is running, all of the 5 servers will be able to mount the store.
I have started setting up a testing cluster and have the following (crm configure show output):

node test1
node test2
node test3
node test4
primitive glfs ocf:cybersites:glusterfs \
    params volfile="repstore.vol" mount_dir="/home" \
    op monitor interval="10s" timeout="30"
primitive glfsd-1 ocf:cybersites:glusterfsd \
    params volfile="glfs.vol" \
    op monitor interval="10s" timeout="30" \
    meta target-role="Started"
primitive glfsd-1-IP ocf:heartbeat:IPaddr2 \
    params ip="192.168.5.221" nic="eth1" cidr_netmask="24" \
    op monitor interval="5s"
primitive glfsd-2 ocf:cybersites:glusterfsd \
    params volfile="glfs.vol" \
    op monitor interval="10s" timeout="30" \
    meta target-role="Started"
primitive glfsd-2-IP ocf:heartbeat:IPaddr2 \
    params ip="192.168.5.222" nic="eth1" cidr_netmask="24" \
    op monitor interval="5s" \
    meta target-role="Started"
primitive glfsd-3 ocf:cybersites:glusterfsd \
    params volfile="glfs.vol" \
    op monitor interval="10s" timeout="30" \
    meta target-role="Started"
primitive glfsd-3-IP ocf:heartbeat:IPaddr2 \
    params ip="192.168.5.223" nic="eth1" cidr_netmask="24" \
    op monitor interval="5s"
primitive glfsd-4 ocf:cybersites:glusterfsd \
    params volfile="glfs.vol" \
    op monitor interval="10s" timeout="30" \
    meta target-role="Started"
primitive glfsd-4-IP ocf:heartbeat:IPaddr2 \
    params ip="192.168.5.224" nic="eth1" cidr_netmask="24" \
    op monitor interval="5s"
group glfsd-1-GROUP glfsd-1-IP glfsd-1
group glfsd-2-GROUP glfsd-2-IP glfsd-2
group glfsd-3-GROUP glfsd-3-IP glfsd-3
group glfsd-4-GROUP glfsd-4-IP glfsd-4
clone clone-glfs glfs \
    meta clone-max="4" clone-node-max="1" target-role="Started"
location block-glfsd-1-GROUP-test2 glfsd-1-GROUP -inf: test2
location block-glfsd-1-GROUP-test3 glfsd-1-GROUP -inf: test3
location block-glfsd-1-GROUP-test4 glfsd-1-GROUP -inf: test4
location block-glfsd-2-GROUP-test1 glfsd-2-GROUP -inf: test1
location block-glfsd-2-GROUP-test3 glfsd-2-GROUP -inf: test3
location block-glfsd-2-GROUP-test4 glfsd-2-GROUP -inf: test4
location block-glfsd-3-GROUP-test1 glfsd-3-GROUP -inf: test1
location block-glfsd-3-GROUP-test2 glfsd-3-GROUP -inf: test2
location block-glfsd-3-GROUP-test4 glfsd-3-GROUP -inf: test4
location block-glfsd-4-GROUP-test1 glfsd-4-GROUP -inf: test1
location block-glfsd-4-GROUP-test2 glfsd-4-GROUP -inf: test2
location block-glfsd-4-GROUP-test3 glfsd-4-GROUP -inf: test3

Now I need a way of saying that clone-glfs can start once any of glfsd-1, glfsd-2, glfsd-3 or glfsd-4 has started. Any ideas? I have read the crm CLI document, as well as many iterations of Clusters from Scratch, etc. I just can't seem to find an answer. Can it be done?

Pat.
[Pacemaker] order constraint based on any one of many
Hi,

I am setting up a Pacemaker/Corosync/Glusterfs HA cluster set. Pacemaker ver. 1.0.9.1. With Glusterfs I have 4 nodes serving replicated (RAID1) storage back-ends and up to 5 servers mounting the store. Without getting into the specifics of how Gluster works, simply put: as long as any one of the 4 back-end nodes is running, all of the 5 servers will be able to mount the store.

I have started setting up a testing cluster and have the following (crm configure show output):

node test1
node test2
node test3
node test4
primitive glfs ocf:cybersites:glusterfs \
    params volfile="repstore.vol" mount_dir="/home" \
    op monitor interval="10s" timeout="30"
primitive glfsd-1 ocf:cybersites:glusterfsd \
    params volfile="glfs.vol" \
    op monitor interval="10s" timeout="30" \
    meta target-role="Started"
primitive glfsd-1-IP ocf:heartbeat:IPaddr2 \
    params ip="192.168.5.221" nic="eth1" cidr_netmask="24" \
    op monitor interval="5s"
primitive glfsd-2 ocf:cybersites:glusterfsd \
    params volfile="glfs.vol" \
    op monitor interval="10s" timeout="30" \
    meta target-role="Started"
primitive glfsd-2-IP ocf:heartbeat:IPaddr2 \
    params ip="192.168.5.222" nic="eth1" cidr_netmask="24" \
    op monitor interval="5s" \
    meta target-role="Started"
primitive glfsd-3 ocf:cybersites:glusterfsd \
    params volfile="glfs.vol" \
    op monitor interval="10s" timeout="30" \
    meta target-role="Started"
primitive glfsd-3-IP ocf:heartbeat:IPaddr2 \
    params ip="192.168.5.223" nic="eth1" cidr_netmask="24" \
    op monitor interval="5s"
primitive glfsd-4 ocf:cybersites:glusterfsd \
    params volfile="glfs.vol" \
    op monitor interval="10s" timeout="30" \
    meta target-role="Started"
primitive glfsd-4-IP ocf:heartbeat:IPaddr2 \
    params ip="192.168.5.224" nic="eth1" cidr_netmask="24" \
    op monitor interval="5s"
group glfsd-1-GROUP glfsd-1-IP glfsd-1
group glfsd-2-GROUP glfsd-2-IP glfsd-2
group glfsd-3-GROUP glfsd-3-IP glfsd-3
group glfsd-4-GROUP glfsd-4-IP glfsd-4
clone clone-glfs glfs \
    meta clone-max="4" clone-node-max="1" target-role="Started"
location block-glfsd-1-GROUP-test2 glfsd-1-GROUP -inf: test2
location block-glfsd-1-GROUP-test3 glfsd-1-GROUP -inf: test3
location block-glfsd-1-GROUP-test4 glfsd-1-GROUP -inf: test4
location block-glfsd-2-GROUP-test1 glfsd-2-GROUP -inf: test1
location block-glfsd-2-GROUP-test3 glfsd-2-GROUP -inf: test3
location block-glfsd-2-GROUP-test4 glfsd-2-GROUP -inf: test4
location block-glfsd-3-GROUP-test1 glfsd-3-GROUP -inf: test1
location block-glfsd-3-GROUP-test2 glfsd-3-GROUP -inf: test2
location block-glfsd-3-GROUP-test4 glfsd-3-GROUP -inf: test4
location block-glfsd-4-GROUP-test1 glfsd-4-GROUP -inf: test1
location block-glfsd-4-GROUP-test2 glfsd-4-GROUP -inf: test2
location block-glfsd-4-GROUP-test3 glfsd-4-GROUP -inf: test3

Now I need a way of saying that clone-glfs can start once any of glfsd-1, glfsd-2, glfsd-3 or glfsd-4 has started. Any ideas? I have read the crm CLI document, as well as many iterations of Clusters from Scratch, etc. I just can't seem to find an answer. Can it be done?

Pat.
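[Editor's note for readers finding this thread later: Pacemaker 1.1-era releases added ordered resource sets with require-all="false", which express exactly this "start B after any one of A1..A4" dependency without the master/slave workaround discussed above. A hedged sketch of the CIB XML, with illustrative ids (check the constraint syntax against your Pacemaker version before using it):]

```xml
<constraints>
  <!-- clone-glfs may start once ANY ONE of the four glfsd resources
       is active: require-all="false" relaxes the usual "all members"
       semantics of the first set, and sequential="false" means the
       glfsd members have no ordering among themselves. -->
  <rsc_order id="order-glfs-after-any-glfsd" kind="Mandatory">
    <resource_set id="set-any-glfsd" sequential="false" require-all="false">
      <resource_ref id="glfsd-1"/>
      <resource_ref id="glfsd-2"/>
      <resource_ref id="glfsd-3"/>
      <resource_ref id="glfsd-4"/>
    </resource_set>
    <resource_set id="set-glfs-clone">
      <resource_ref id="clone-glfs"/>
    </resource_set>
  </rsc_order>
</constraints>
```

[This would not have helped on the Pacemaker 1.0.9.1 in the thread, where the m/s hack was indeed the only option at the time.]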