Ben Timby wrote:
I am working with a 3 node cluster and using an IPaddr2 resource. The
resource is a clone, which implies the use of iptables CLUSTERIP for
load sharing. However, with three nodes it seems impossible to get
even load distribution on failure. Let me explain.
If I use three clones, when a node fails,
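For reference, a cloned IPaddr2 along these lines is what triggers the
iptables CLUSTERIP mode; a minimal crm-shell sketch, with the address,
netmask and hash method as assumptions:

    primitive cluster_ip ocf:heartbeat:IPaddr2 \
        params ip="192.168.1.100" cidr_netmask="24" \
               clusterip_hash="sourceip-sourceport" \
        op monitor interval="30s"
    clone cluster_ip_clone cluster_ip \
        meta globally-unique="true" clone-max="3" clone-node-max="3"

With globally-unique=true the agent creates three CLUSTERIP hash buckets,
and three buckets cannot be split evenly across the two survivors of a
node failure: one node necessarily answers for two of them, which is the
uneven distribution being described.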
Dominik,
As usual, you are right on the money. I should have caught that myself. Thank
you for catching that for me. What happened was that I used a different server
to compile DRBD and I had assumed that Nomen and Rubic (my test nodes) were on
the same kernel.
Moreover, I had also combined
Yep, it worked with the nodeps option. Some nice features in the new version!
thanks again!
Jason
2009/3/6 Jason Fitzpatrick
> Hi Yan
>
> thanks for the feedback
>
> I am running Fedora Core 10 in my test environment and RHEL 5.2 in
> production
>
> I will install without the deps and let you know how I get on
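For anyone following along, the nodeps route mentioned above is rpm's
dependency-check override; a minimal sketch, with the package file name
as an assumption:

    # Package file name is an assumption; use the RPM you actually built.
    rpm -Uvh --nodeps pacemaker-1.0.1-1.x86_64.rpm
    # Afterwards, check what the package declares as requirements:
    rpm -q --requires pacemaker | sort -u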
On Fri, Mar 6, 2009 at 19:00, Dejan Muhamedagic wrote:
> Hi,
>
> On Fri, Mar 06, 2009 at 06:53:37PM +0100, Andrew Beekhof wrote:
>> On Fri, Mar 6, 2009 at 13:27, Dejan Muhamedagic wrote:
>> > Hi,
>> >> Another option for such devices might be to use a Master/Slave and
>> >> only have the master do monitoring.
Hi,
On Fri, Mar 06, 2009 at 06:53:37PM +0100, Andrew Beekhof wrote:
> On Fri, Mar 6, 2009 at 13:27, Dejan Muhamedagic wrote:
> > Hi,
> >> Another option for such devices might be to use a Master/Slave and
> >> only have the master do monitoring.
> >> I wonder if the lrm hooks for stonith can handle this.
Neil,
Unfortunately, I was not able to receive your scripts, as they were filtered
by the mail servers. However, I have tried your suggestion, and Pacemaker
1.0.1 does work with DRBD 8.2. With that combination, I was able to add the
DRBD resource into a master-slave resource in Pacemaker. I
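A master-slave DRBD resource of the kind Jerome describes would look roughly
like this in crm shell syntax; the DRBD resource name r0 and the monitor
intervals are assumptions, and DRBD 8.2 uses the legacy ocf:heartbeat:drbd
agent:

    primitive drbd_r0 ocf:heartbeat:drbd \
        params drbd_resource="r0" \
        op monitor interval="59s" role="Master" \
        op monitor interval="60s" role="Slave"
    ms ms_drbd_r0 drbd_r0 \
        meta master-max="1" master-node-max="1" \
             clone-max="2" clone-node-max="1" notify="true"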
On Fri, Mar 6, 2009 at 13:27, Dejan Muhamedagic wrote:
> Hi,
>> Another option for such devices might be to use a Master/Slave and
>> only have the master do monitoring.
>> I wonder if the lrm hooks for stonith can handle this.
>
> Don't see any reason why they shouldn't.
Does raexecstonith suppo
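If it is supported, the configuration under discussion would presumably take
this crm-shell shape; the IPMI parameters are assumptions, and whether the
stonith resource class accepts a master/slave wrapper is precisely the open
question here:

    primitive st_node1 stonith:external/ipmi \
        params hostname="node1" ipaddr="10.0.0.1" \
               userid="admin" passwd="secret" interface="lan" \
        op monitor interval="60s" role="Master"
    ms ms_st_node1 st_node1 \
        meta master-max="1" clone-max="2"

The role="Master" qualifier restricts the recurring monitor to the promoted
instance, which is the "only the master does monitoring" part of the idea.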
My apologies, Brian. The thanks should have gone to you. Thanks,
Brian. :)
Regards,
jerome
-----Original Message-----
From: Jerome Yanga
Sent: Friday, March 06, 2009 9:35 AM
To: General Linux-HA mailing list
Subject: RE: [Linux-HA] Having issues with getting DRBD to work with Pacemaker
Thanks, Neil. However, the reason I wanted DRBD to start via Pacemaker is
that I want Pacemaker to manage the DRBD process and be able to migrate it
between the nodes.
jerome
-----Original Message-----
From: linux-ha-boun...@lists.linux-ha.org
[mailto:linux-ha-boun...@lists.linux-ha.or
Hi,
On Mon, Mar 02, 2009 at 10:33:40PM +0300, Benedict simon wrote:
> Dear All,
>
> I am a total novice to HA and would like to implement it in active/standby
> mode.
>
> I have the following hardware:
>
> 2 identical P4 machines, each with 2 network cards
>
> node 1:
> eth0 has ip address of 172
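For an active/standby pair on heartbeat v1, the classic starting point is
ha.cf plus haresources; a minimal sketch, with node names, interfaces and
the service IP as assumptions:

    # /etc/ha.d/ha.cf
    bcast eth1            # second NIC as the dedicated heartbeat link
    keepalive 2
    deadtime 30
    auto_failback off
    node node1
    node node2

    # /etc/ha.d/haresources -- node1 is the preferred active node
    node1 IPaddr::172.16.0.10/24/eth0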
Hi,
On Sat, Feb 28, 2009 at 09:53:24AM +0100, Andrew Beekhof wrote:
> On Fri, Feb 27, 2009 at 22:32, Andreas Mock wrote:
> >> -----Original Message-----
> >> From: "Andrew Beekhof"
> >> Sent: 16.02.09 11:23:53
> >> To: General Linux-HA mailing list
> >> Subject: Re: [Linux-HA] Genere
Hi,
On Wed, Feb 25, 2009 at 07:23:30AM +1100, David Pinkerton H wrote:
>
> Can anyone explain why, if I execute a cleanup of resources on
> the node where they are running, it takes 7 minutes before the
> next monitor operation is run? I was under the impression that all
> monitor operations should b
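For reference, the cleanup and a forced re-check can be driven from the
command line; a sketch, with the resource name as an assumption:

    # Clear the resource's operation history on the cluster (name assumed)
    crm_resource --cleanup --resource my_rsc
    # Ask the cluster to recheck resource state everywhere
    crm_resource --reprobe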
Hi,
I'm using heartbeat on a cluster of 2 nodes and stonith to avoid split
brain with external/ipmi:
heartbeat-stonith-2.1.4-0.11
heartbeat-2.1.4-0.11
I'm using heartbeat with crm off (version 1-like).
I have a question: if the nodes each see the other as unavailable, how can I
avoid having node-1 R
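Whatever the answer on the policy side, it helps to know exactly what the
external/ipmi plugin expects before wiring it into ha.cf; the stonith tool
can introspect it:

    # List compiled-in stonith plugin types
    stonith -L
    # Show the parameters the external/ipmi plugin expects,
    # i.e. what a stonith_host line in ha.cf must supply
    stonith -t external/ipmi -n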
Hi,
On Tue, Feb 24, 2009 at 04:56:47PM +0100, Imre Sandor wrote:
> Hi,
>
> I am having trouble with a heartbeat cluster.
> 1. Occasionally, I get timeouts on the IPaddr monitors (I have two IP
> addresses aliased on a single Ethernet)
> Feb 15 15:05:16 fhbmplb1 lrmd: [17670]: WARN: on_op_timeout
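If the monitors are genuinely slow rather than failing, one common
mitigation is to raise the operation timeout in the CIB; a sketch of the
<op> element inside the IPaddr primitive, with the id and values as
assumptions:

    <operations>
      <op id="ip1_mon" name="monitor" interval="10s" timeout="60s"/>
    </operations>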
Hi,
On Tue, Feb 24, 2009 at 11:28:06AM -0600, Kevin Harms wrote:
>
> Is there an easy way to build clients to interact with heartbeat 2.1.4?
> I'd like to build a client that can talk to multiple heartbeat clusters
> and verify the status of things, such as all resources are running, check
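One low-effort approach is scripting around crm_mon, which can emit a
one-shot status dump; a sketch, with host names as assumptions:

    # One-shot status from a node of each cluster (host names assumed)
    for host in clusterA-node1 clusterB-node1; do
        ssh "$host" crm_mon -1 -r
    done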
Hi,
On Tue, Feb 24, 2009 at 04:29:14PM +, Mark Watts wrote:
>
> On Tuesday 24 February 2009 16:11:25 Rick Ennis wrote:
> > I posted the "unintentional failover" message a week or so ago and no one
> > had any ideas. I think I've figured it out and thought I'd post back in
> > case it helps a
>
>
> >
> > just want to clear one thing here. Suppose I have a four-node cluster;
> > do I need to configure and run stonith-node1 (which keeps info about
> > node1) on all the other three nodes (i.e. node2, node3 and node4)?
>
> Not necessary. Running it on any one of the other nodes is
> enough.
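In Pacemaker/crm terms the same idea is usually expressed as one fencing
primitive per target plus a location rule that keeps it off its own victim;
a sketch, with all device parameters as assumptions:

    primitive st_node1 stonith:external/ipmi \
        params hostname="node1" ipaddr="10.0.0.1" \
               userid="admin" passwd="secret"
    location st_node1_placement st_node1 -inf: node1

The -inf rule only says node1 may never run its own fencing device; any
single one of node2, node3 or node4 can host it, which matches the answer
above.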
Hi Yan
thanks for the feedback
I am running Fedora Core 10 in my test environment and RHEL 5.2 in
production
I will install without the deps and let you know how I get on
Jason
2009/3/6 Yan Gao
> On Thu, 2009-03-05 at 23:36 +, Jason Fitzpatrick wrote:
> > Hi all
> >
> > I have just tried