Hi Phil,
On Fri, Apr 8, 2011 at 11:13 AM, Phil Hunt wrote:
>
> Hi
>
> I have been playing with DRBD; that's cool.
>
> But I have two RHEL Linux VMs. They each have a boot device (20G) and a
> shared 200G iSCSI volume.
>
> I've played with ucarp and have the commands to make available/mount the
> disk and dismount the shared disk using vgchange/mount/umount, etc.
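For context, a minimal sketch of what such ucarp up/down handlers typically look like, assuming a volume group named shared_vg on the iSCSI LUN and a mount point /srv/data (both names are placeholders, not taken from the original mail):

```shell
# Hypothetical ucarp up/down handlers; "shared_vg" and /srv/data are
# assumptions for illustration only.
vip_up() {
    vgchange -ay shared_vg              # activate the shared volume group
    mount /dev/shared_vg/data /srv/data # mount the shared filesystem
}

vip_down() {
    umount /srv/data
    vgchange -an shared_vg              # deactivate so the peer can take over
}
```

ucarp would invoke the up handler when the node acquires the virtual IP and the down handler when it loses it.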
Hello,
As stated here:
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-intro-redundancy.html
"Pacemaker allowing several active/passive clusters to be combined and share
a common backup node." But how does one implement such a configuration? The
Clusters from Scratch manual doesn't cover it.
Just adding this as an FYI if anyone comes across it...
Not creating the logfile directory listed in corosync.conf will produce the
following log errors, and corosync will fail to start (this is with the
latest RPM-based builds from http://www.clusterlabs.org/rpm/epel-5/):
Apr 8 12:34:24 cvt
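As a concrete illustration of the fix, here is a hedged sketch that pulls the logfile path out of a corosync.conf-style logging block and derives the directory that must exist before corosync will start. The sample config contents and paths are assumptions for illustration, not the poster's actual file:

```shell
# Sketch only: write a sample logging block like the one in corosync.conf,
# extract the logfile path, and compute the directory to pre-create.
conf=$(mktemp)
cat > "$conf" <<'EOF'
logging {
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
}
EOF
# Match only the "logfile:" key (not "to_logfile:") and take its value.
logfile=$(awk '$1 == "logfile:" {print $2}' "$conf")
logdir=$(dirname "$logfile")
echo "$logdir"            # → /var/log/cluster
# mkdir -p "$logdir"      # run this as root on the real node
rm -f "$conf"
```

Pre-creating the directory (or pointing `logfile:` at one that exists) avoids the startup failure described above.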
Okey dokey, I've done some further troubleshooting and started again from
scratch on a new node. I'm performing this setup on a CentOS 5.5 node.
Here's an excerpt from my messages file taken after doing a "yum -y install
pacemaker corosync"
Apr 8 11:50:19 cvt-db-003 yum: Updated: bzip2-libs-1.0
Got it. Will have a look on Monday. Happy weekend!
On Fri, Apr 8, 2011 at 2:50 PM, wrote:
> -----Original Message-----
> From: Andrew Beekhof [mailto:and...@beekhof.net]
> Sent: 08 April 2011 08:15
> To: The Pacemaker cluster resource manager
> Cc: Darren Mansell
> Subject: Re: [Pacemaker] Help With Cluster Failure
On Fri, Apr 08, 2011 at 09:13:45AM +0200, Andrew Beekhof wrote:
> On Thu, Apr 7, 2011 at 11:48 PM, Colin Hines wrote:
> > I've recently followed the Clusters from Scratch v2 document for RHEL and
> > although my cluster works and fails over correctly using corosync, I have
> > the following error message coming up in my logs consistently, twice a
> > minute:
Hi,
during work on the move-XXX stuff I discovered this.
Regards
Holger
# HG changeset patch
# User Holger Teutsch
# Date 1302259903 -7200
# Branch mig
# Node ID caed31174dc966450a31da048b640201980870a8
# Parent 9451c288259b7b9fd6f32f5df01d47569e570c58
Low: lib/common/utils.c: Don't try to print
On Thu, 2011-04-07 at 12:33 +0200, Dejan Muhamedagic wrote:
> > New syntax:
> > ---
> >
> > crm_resource --move-from --resource myresource --node mynode
> >-> all resource variants: check whether active on mynode, then create
> > standby constraint
> >
> > crm_resource --move-from --
-----Original Message-----
From: Andrew Beekhof [mailto:and...@beekhof.net]
Sent: 08 April 2011 08:15
To: The Pacemaker cluster resource manager
Cc: Darren Mansell
Subject: Re: [Pacemaker] Help With Cluster Failure
On Thu, Apr 7, 2011 at 12:12 PM, wrote:
> Hi all.
>
> One of my clusters had a STONITH shoot-out last night and then refused to do
> anything but sit there from 0400 until 0735 after I’d been woken up to fix
> it.
>
> In the end, just a resource cleanup fixed it, which I don’t think shoul
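A resource cleanup of the kind mentioned above is typically done with crm_resource; a minimal sketch, where the resource name "myresource" is a placeholder rather than a name from this thread:

```shell
# Hedged sketch: clear a resource's failed-operation history so the cluster
# re-evaluates where it can run. "myresource" is a placeholder.
rsc="myresource"
cleanup_cmd="crm_resource --cleanup --resource $rsc"
echo "$cleanup_cmd"   # on a live cluster, run the command itself on any node
```

With the crm shell, `crm resource cleanup myresource` is equivalent.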