Re: [Pacemaker] node can't join cluster after reboot

2012-11-05 Thread Patrick Irvine
…trying >=dev-libs/glib-2.34. Have you tried this already? Regards, Vlad. On Sun, 2012-11-04 at 08:45 -0800, Patrick Irvine wrote: Hey Guys, Just for the record, I just noticed this thread and it sounded familiar. I checked my Gentoo systems and I had to mask glib-2.32.4-r1 and use glib-2.30.3 in order to get corosync/pacemaker to work.
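On Gentoo, Vlad's suggestion amounts to keyword-unmasking the newer glib. A minimal sketch, assuming an amd64 system where glib-2.34 was still keyworded ~arch at the time; apart from the version atom, nothing here is taken from the original message:

    # /etc/portage/package.accept_keywords
    # accept the testing keyword for glib 2.34 and newer
    >=dev-libs/glib-2.34 ~amd64

followed by rebuilding the package, e.g. emerge --oneshot dev-libs/glib.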

Re: [Pacemaker] node can't join cluster after reboot

2012-11-04 Thread Patrick Irvine
Hey Guys, Just for the record, I just noticed this thread and it sounded familiar. I checked my Gentoo systems and I had to mask glib-2.32.4-r1 and use glib-2.30.3 in order to get corosync/pacemaker to work. I had the same problem. Nodes couldn't talk to each other. Sorry I didn't notice this thread sooner.
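A minimal sketch of the workaround described above, assuming a standard Portage layout (the package versions are the ones named in the message):

    # /etc/portage/package.mask
    # keep the glib release that broke corosync/pacemaker node communication
    =dev-libs/glib-2.32.4-r1

Then downgrade to the known-good version with emerge --oneshot =dev-libs/glib-2.30.3 and restart corosync/pacemaker.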

Re: [Pacemaker] order constraint based on any one of many

2010-09-02 Thread Patrick Irvine
-Original Message- From: Andrew Beekhof [mailto:and...@beekhof.net] Sent: Thursday, September 02, 2010 12:03 AM To: The Pacemaker cluster resource manager Subject: Re: [Pacemaker] order constraint based on any one of many On Fri, Aug 27, 2010 at 6:06 PM, Patrick Irvine wrote: >

Re: [Pacemaker] order constraint based on any one of many

2010-08-27 Thread Patrick Irvine
-Original Message- From: Andrew Beekhof [mailto:and...@beekhof.net] Sent: Friday, August 27, 2010 7:24 AM To: The Pacemaker cluster resource manager Subject: Re: [Pacemaker] order constraint based on any one of many On Tue, Aug 24, 2010 at 4:03 AM, Patrick Irvine wrote: > Hi Vis

Re: [Pacemaker] order constraint based on any one of many

2010-08-23 Thread Patrick Irvine
…do not see how a clone will run once a resource group is started. You can alternatively add the clone to each group and ensure that when either group starts, the clone runs with it. On Aug 23, 2010, at 11:50 AM, Patrick Irvine wrote: Hi, I am setting up a Pacemaker/Corosync/Glusterfs HA cluster
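In crm shell terms the advice above would look roughly like the following. This is a hedged sketch with hypothetical resource names (grp-a, grp-b, cl-storage); a group can only contain primitives, not a clone, so the closest expression is an advisory colocation plus ordering per group:

    # start the clone together with whichever group comes up
    colocation cl-with-grp-a 0: cl-storage grp-a
    order grp-a-then-cl 0: grp-a cl-storage
    colocation cl-with-grp-b 0: cl-storage grp-b
    order grp-b-then-cl 0: grp-b cl-storage

Score 0 keeps the constraints advisory, so neither group becomes a hard prerequisite for the clone.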

[Pacemaker] order constraint based on any one of many

2010-08-22 Thread Patrick Irvine
Hi, I am setting up a Pacemaker/Corosync/Glusterfs HA cluster set. Pacemaker ver. 1.0.9.1. With Glusterfs I have 4 nodes serving replicated (RAID1) storage back-ends and up to 5 servers mounting the store. Without getting into the specifics of how Gluster works, simply put, as long as any one of the 4 storage back-ends is up, the store stays available to the clients.
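For anyone finding this thread later: newer Pacemaker releases can express this directly with an ordered resource set whose require-all attribute is false, which turns the set into an "any one of" dependency; the 1.0.9.1 mentioned above predates that feature. A hedged sketch of the CIB XML, with hypothetical resource IDs (store-a through store-d for the back-end resources, client-mount for the mount):

    <rsc_order id="order-mount-after-any-store">
      <!-- sequential=false: no ordering among the stores themselves;
           require-all=false: any one started member satisfies the set -->
      <resource_set id="store-set" sequential="false" require-all="false">
        <resource_ref id="store-a"/>
        <resource_ref id="store-b"/>
        <resource_ref id="store-c"/>
        <resource_ref id="store-d"/>
      </resource_set>
      <resource_set id="mount-set">
        <resource_ref id="client-mount"/>
      </resource_set>
    </rsc_order>

Loaded with cibadmin, this lets client-mount start as soon as any one back-end resource is running.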