On Mon, Jun 14, 2010 at 4:22 PM, Vadym Chepkov wrote:
>
> On Jun 7, 2010, at 8:04 AM, Vadym Chepkov wrote:
>>
>> I filed bug 2435, glad to hear "it's not me"
>>
>
>
> Andrew closed this bug
> (http://developerbugs.linux-foundation.org/show_bug.cgi?id=2435) as resolved,
> but I respectfully disagree.
On Mon, Jun 14, 2010 at 9:26 PM, Maros Timko wrote:
>> Date: Mon, 14 Jun 2010 08:13:59 +0200
>> From: Andrew Beekhof
>> To: The Pacemaker cluster resource manager
>>
>> Subject: Re: [Pacemaker] How to really deal with gateway restarts?
>> Message-ID:
>>
>> Content-Type: text/plain
On Mon, Jun 14, 2010 at 4:37 PM, Erich Weiler wrote:
> Hi All,
>
> We have this interesting problem I was hoping someone could shed some light
> on. Basically, we have 2 servers acting as a pacemaker cluster for DRBD and
> VirtualDomain (KVM) resources under CentOS 5.5.
>
> As it is set up, if on
Hi All,
We have this interesting problem I was hoping someone could shed some
light on. Basically, we have 2 servers acting as a pacemaker cluster
for DRBD and VirtualDomain (KVM) resources under CentOS 5.5.
As it is set up, if one node dies, the other node promotes the DRBD
devices to "Mas
> Date: Mon, 14 Jun 2010 08:13:59 +0200
> From: Andrew Beekhof
> To: The Pacemaker cluster resource manager
>
> Subject: Re: [Pacemaker] How to really deal with gateway restarts?
> Message-ID:
>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Thu, Jun 10, 2010 at 9:22 PM, Mar
Hi,
On Mon, Jun 14, 2010 at 06:29:59PM +0200, Oliver Heinz wrote:
> On Monday, 14 June 2010 at 16:43:54, Dejan Muhamedagic wrote:
> > Hi,
> >
> > On Mon, Jun 14, 2010 at 02:26:57PM +0200, Oliver Heinz wrote:
> > > I configured a sbd fencing device on the shared storage to prevent data
> > > co
On Monday, 14 June 2010 at 16:43:54, Dejan Muhamedagic wrote:
> Hi,
>
> On Mon, Jun 14, 2010 at 02:26:57PM +0200, Oliver Heinz wrote:
> > I configured a sbd fencing device on the shared storage to prevent data
> > corruption. It works basically, but when I pull the network plugs on one
> > node
Hi,
On Mon, Jun 14, 2010 at 02:26:57PM +0200, Oliver Heinz wrote:
>
> I configured a sbd fencing device on the shared storage to prevent data
> corruption. It works basically, but when I pull the network plugs on one node
> to simulate a failure one of the nodes is fenced (not necessarily the o
On Jun 7, 2010, at 8:04 AM, Vadym Chepkov wrote:
>
> I filed bug 2435, glad to hear "it's not me"
>
Andrew closed this bug
(http://developerbugs.linux-foundation.org/show_bug.cgi?id=2435) as resolved,
but I respectfully disagree.
I will try to explain the problem again on this list.
Let's ass
Hi, developers and/or happy users of sbd!
Can anybody explain to me, more clearly than the official and (IMHO)
outdated page http://www.linux-ha.org/wiki/SBD_Fencing, the following:
Which timeouts must I specify if my multipath needs 90 to 160 seconds to
switch off the dead path... Timeouts below are m
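A rough sizing sketch for that case (the device path is hypothetical):
the msgwait timeout written to the sbd header (-4) must exceed the
worst-case multipath failover, and the watchdog timeout (-1) is
conventionally about half of msgwait:

    # 180 s msgwait > 160 s worst-case path failover; 90 s watchdog
    sbd -d /dev/mapper/sbd-disk -4 180 -1 90 create
    sbd -d /dev/mapper/sbd-disk dump    # verify the timeouts in the header

The cluster's stonith-timeout then has to be raised above msgwait so
fencing is not declared failed while the poison pill is still in flight.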
I configured a sbd fencing device on the shared storage to prevent data
corruption. It works basically, but when I pull the network plugs on one node
to simulate a failure one of the nodes is fenced (not necessarily the one that
was unplugged). After the fenced node reboots it fences the other
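For reference, the sbd stonith resource under discussion is typically
declared like this in the crm shell (device path hypothetical; one such
resource serves the whole cluster):

    crm configure primitive stonith-sbd stonith:external/sbd \
            params sbd_device="/dev/mapper/sbd-disk"
    crm configure property stonith-enabled="true"

On a two-node cluster no-quorum-policy=ignore is usually set as well,
and that is precisely what lets each isolated node consider itself
entitled to fence the other.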
2010/6/12 Julio Gómez
>
> There is the error. Thanks.
>
>
Marco meant uncommenting these lines in your
/etc/apache2/apache2.conf:
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
and to have this uncommented (default for RHEL 5 based ht
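Assuming mod_status is loaded, the effect of that block can be checked
locally; the apache OCF agent monitors through the same status URL:

    curl http://127.0.0.1/server-status

A 200 response with the status page means the handler is active.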
On 2010-06-14T11:40:51, Aleksey Zholdak wrote:
> I successfully used sbd stonith on the previous version of pacemaker (SLES11).
> When I installed SLES11 SP1, I found a new version of pacemaker.
> Everything was fine until I decided to check the work of sbd fencing,
> and what do I see: _What_ does this mean
Hi.
I successfully used sbd stonith on the previous version of pacemaker (SLES11).
When I installed SLES11 SP1, I found a new version of pacemaker.
Everything was fine until I decided to check the work of sbd fencing, and
what do I see: _What_ does this mean (see the last line of the log)?
...
Jun 14 11:29:40 sles
Hi Andrew,
Thank you for the comment.
> More likely of the underlying messaging infrastructure, but I'll take a look.
> Perhaps the default cib operation timeouts are too low for larger clusters.
>
> >
> > The log is attached to the following Bugzilla entry:
> > * http://developerbugs.linux-foundation.org/show_bu