On Thu, Jun 10, 2010 at 9:22 PM, Maros Timko tim...@gmail.com wrote:
Hi all,
I know this has been asked here a number of times, but with no real
conclusive answer. All of the replies were to update Pacemaker and use the
ping RA.
Setup:
- simple symmetric 2-node DRBD-Xen cluster
- both nodes
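For reference, the ping-RA approach mentioned above as a minimal crm sketch
(the gateway IP 192.168.1.1 and the resource/group names are hypothetical;
adjust them to your setup):

  primitive p_ping ocf:pacemaker:ping \
      params host_list="192.168.1.1" multiplier="1000" dampen="30s" \
      op monitor interval="15s" timeout="20s"
  clone cl_ping p_ping
  # keep the (hypothetical) group g_services on a node that can reach the gateway
  location loc_need_ping g_services \
      rule -inf: not_defined pingd or pingd lte 0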
On Fri, Jun 11, 2010 at 12:59 PM, Koch, Sebastian
sebastian.k...@netzwerk.de wrote:
Hi,
Currently I am trying to deploy my already-running 2-node active/passive
LAMP cluster to physical machines. I ran into several problems while importing
the config and therefore often need to fully flush the
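If the configuration really has to be flushed before re-importing, a sketch
(assuming the crm shell and cibadmin are available, and that wiping the live
CIB is acceptable; the file name is hypothetical):

  cibadmin --erase --force            # wipe the whole live CIB
  # or, less drastic, remove resources and constraints only:
  crm configure erase
  # then load the prepared configuration again:
  crm configure load update /path/to/cluster.crm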
Hi,
On Fri, Jun 11, 2010 at 03:45:19PM +0100, Maros Timko wrote:
Hi all,
using the heartbeat stack. I have a system with one node offline:
Last updated: Fri Jun 11 13:52:40 2010
Stack: Heartbeat
Current DC: vsp7.example.com (ba6d6332-71dd-465b-a030-227bcd31a25f) -
partition
Hi Andrew,
Thank you for the comment.
More likely of the underlying messaging infrastructure, but I'll take a look.
Perhaps the default cib operation timeouts are too low for larger clusters.
I attached the log to the Bugzilla entry.
Hi.
I successfully used sbd stonith on the previous version of Pacemaker (SLES11).
When I installed SLES11 SP1 I found a new version of Pacemaker.
Everything was fine until I decided to check whether sbd fencing works, and
this is what I see. _What_ does this mean (see the last line of the log)?
...
Jun 14 11:29:40
On 2010-06-14T11:40:51, Aleksey Zholdak alek...@zholdak.com wrote:
I successfully used sbd stonith on the previous version of Pacemaker (SLES11).
When I installed SLES11 SP1 I found a new version of Pacemaker.
Everything was fine until I decided to check whether sbd fencing works,
and this is what I see: _What_
2010/6/12 Julio Gómez ju...@openinside.es
There is the error. Thanks.
Marco meant uncommenting these lines in your
/etc/apache2/apache2.conf:
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
and to
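For reference, the ocf:heartbeat:apache resource agent's monitor fetches that
server-status URL, so once the block above is active a quick local check
(a sketch, assuming the default URL) is:

  curl -s http://127.0.0.1/server-status | head
  # or
  wget -q -O - http://127.0.0.1/server-status | head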
I configured a sbd fencing device on the shared storage to prevent data
corruption. It works basically, but when I pull the network plugs on one node
to simulate a failure one of the nodes is fenced (not necessarily the one that
was unplugged). After the fenced node reboots it fences the other
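For context, a minimal sbd stonith configuration of the kind being discussed,
as a crm sketch (the device path and resource name are hypothetical):

  primitive stonith_sbd stonith:external/sbd \
      params sbd_device="/dev/mapper/sbd_partition" \
      op monitor interval="15s" timeout="60s"
  property stonith-enabled="true"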
Hi, developers and/or happy users of sbd!
Can anybody explain to me, more clearly than the official and (IMHO) outdated page
http://www.linux-ha.org/wiki/SBD_Fencing, the following:
what timeouts must I specify if my multipath needs 90 to 160 seconds to
switch off a dead path? The timeouts below are
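The timeouts in question live in the sbd on-disk header and are set when the
device is initialized; a sketch with values sized above a 90-160 s multipath
failover (the device path and the exact numbers are assumptions, not
recommendations):

  # -1 = watchdog timeout, -4 = msgwait timeout
  # (msgwait should exceed the worst-case multipath failover time)
  sbd -d /dev/mapper/sbd_disk -4 180 -1 90 create
  sbd -d /dev/mapper/sbd_disk dump      # verify what was written

  # the cluster-wide stonith-timeout should then be larger than msgwait:
  crm configure property stonith-timeout="220s"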
On Jun 7, 2010, at 8:04 AM, Vadym Chepkov wrote:
I filed bug 2435, glad to hear it's not me
Andrew closed this bug
(http://developerbugs.linux-foundation.org/show_bug.cgi?id=2435) as resolved,
but I respectfully disagree.
I will try to explain the problem again on this list.
Let's assume
Hi,
On Mon, Jun 14, 2010 at 02:26:57PM +0200, Oliver Heinz wrote:
I configured a sbd fencing device on the shared storage to prevent data
corruption. It works basically, but when I pull the network plugs on one node
to simulate a failure one of the nodes is fenced (not necessarily the one
Am Montag, 14. Juni 2010, um 16:43:54 schrieb Dejan Muhamedagic:
Hi,
On Mon, Jun 14, 2010 at 02:26:57PM +0200, Oliver Heinz wrote:
I configured a sbd fencing device on the shared storage to prevent data
corruption. It works basically, but when I pull the network plugs on one
node to
Hi,
On Mon, Jun 14, 2010 at 06:29:59PM +0200, Oliver Heinz wrote:
Am Montag, 14. Juni 2010, um 16:43:54 schrieb Dejan Muhamedagic:
Hi,
On Mon, Jun 14, 2010 at 02:26:57PM +0200, Oliver Heinz wrote:
I configured a sbd fencing device on the shared storage to prevent data
corruption. It
Date: Mon, 14 Jun 2010 08:13:59 +0200
From: Andrew Beekhof and...@beekhof.net
To: The Pacemaker cluster resource manager
pacemaker@oss.clusterlabs.org
Subject: Re: [Pacemaker] How to really deal with gateway restarts?
Message-ID:
Hi All,
We have this interesting problem I was hoping someone could shed some
light on. Basically, we have 2 servers acting as a pacemaker cluster
for DRBD and VirtualDomain (KVM) resources under CentOS 5.5.
As it is set up, if one node dies, the other node promotes the DRBD
devices to
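A typical crm layout for that kind of DRBD + VirtualDomain setup, as a sketch
(the DRBD resource name vm01 and the file paths are hypothetical):

  primitive p_drbd_vm01 ocf:linbit:drbd \
      params drbd_resource="vm01" \
      op monitor interval="29s" role="Master" \
      op monitor interval="31s" role="Slave"
  ms ms_drbd_vm01 p_drbd_vm01 \
      meta master-max="1" clone-max="2" notify="true"
  primitive p_vm01 ocf:heartbeat:VirtualDomain \
      params config="/etc/libvirt/qemu/vm01.xml" hypervisor="qemu:///system" \
      op monitor interval="30s" timeout="60s"
  colocation col_vm01_on_drbd inf: p_vm01 ms_drbd_vm01:Master
  order ord_drbd_before_vm01 inf: ms_drbd_vm01:promote p_vm01:start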
On Mon, Jun 14, 2010 at 4:37 PM, Erich Weiler wei...@soe.ucsc.edu wrote:
Hi All,
We have this interesting problem I was hoping someone could shed some light
on. Basically, we have 2 servers acting as a pacemaker cluster for DRBD and
VirtualDomain (KVM) resources under CentOS 5.5.
As it is