Hi all,
the Linux-HA project is undergoing some changes, as you've noticed. Not
all of them have gone as well as expected, and it hasn't stabilized
yet.
Under the guidance of Alan, the project members have met and decided to
change the governance of the project in the future. This will be
Great news! Thanks, Lars.
2008/4/10 Lars Marowsky-Bree [EMAIL PROTECTED]:
High-Availability Linux Development List linux-ha-dev@lists.linux-ha.org
wrote:
Sebastian Reitenbach wrote:
High-Availability Linux Development List linux-ha-dev@lists.linux-ha.org
wrote:
Hi,
I want to construct a two-node cluster.
1. Firstly, all of the resources are started on the DC node.
2. When a critical resource error occurs, all of the
resources are moved to the other node.
3. All resources must run on only one node for their whole lifetime,
until an error occurs.
On Wed, Apr 9, 2008 at 4:16 PM, Dominik Klein [EMAIL PROTECTED] wrote:
Martin Knoblauch wrote:
Hi,
three questions on the failcount attribute. I am running 2.0.8, and yes
I know I should upgrade ... :-(
Good to know you know :)
a) Is it possible that the failcount for a
Hi,
On Thu, Apr 10, 2008 at 03:59:47PM +0800, [EMAIL PROTECTED] wrote:
Hi,
I want to construct a two-node cluster.
1. Firstly, all of the resources are started on the DC node.
You can't control which node gets elected the DC and you don't
really need to.
2. when some of critical resources
[EMAIL PROTECTED] sbin # ./crm_failcount -G -U isdl601 -r caebench.proc
name=fail-count-caebench.proc value=(null)
Error performing operation: The object/attribute does not exist
Is this intentional?
At least the normal behaviour.
in that version
Ah right. crm_failcount gives a
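For readers landing here from the archive, a short sketch of the crm_failcount invocations being discussed, using the node and resource names from the transcript above (adjust them for your own cluster). On 2.0.8 a failcount attribute that has never been set reads back as value=(null), as shown in the earlier mail.

```shell
# Query the failcount for a resource on a given node (heartbeat 2.x CRM):
crm_failcount -G -U isdl601 -r caebench.proc

# Reset (delete) the failcount once the underlying problem is fixed:
crm_failcount -D -U isdl601 -r caebench.proc

# Or set it to an explicit value, e.g. zero:
crm_failcount -v 0 -U isdl601 -r caebench.proc
```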
Hi
to run all resources on the same node, you could put them in a group.
Read http://wiki.linux-ha.org/ClusterInformationBase/ResourceGroups
If you want to decide which node the group is usually located at, you
need a rsc_location constraint. An example is also on that page.
To move the group
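As a rough illustration of Dominik's advice, a heartbeat 2 CIB fragment combining a group with a rsc_location constraint might look like the sketch below. All ids, resource names, the IP address, and the score are made-up examples, so treat this as a sketch rather than a drop-in config; the wiki page above has the authoritative examples.

```xml
<group id="grp_all">
  <primitive id="rsc_ip" class="ocf" provider="heartbeat" type="IPaddr2">
    <instance_attributes id="rsc_ip_ia">
      <attributes>
        <nvpair id="rsc_ip_addr" name="ip" value="192.168.0.10"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <primitive id="rsc_app" class="lsb" type="myapp"/>
</group>

<!-- prefer node-a for the whole group -->
<rsc_location id="loc_grp_all" rsc="grp_all">
  <rule id="loc_grp_all_rule" score="100">
    <expression id="loc_grp_all_expr" attribute="#uname"
                operation="eq" value="node-a"/>
  </rule>
</rsc_location>
```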
On Wednesday 09 April 2008 22:20:16 Lars Marowsky-Bree wrote:
On 2008-04-09T20:26:02, Bernd Schubert [EMAIL PROTECTED] wrote:
I still think there is another bug in heartbeat, though. There is simply
no reason for heartbeat to wait $deadtime on initial startup of the
heartbeat services, when
On Wed, Apr 09, 2008 at 06:34:39PM +0200, Lars Marowsky-Bree wrote:
On 2008-04-08T19:32:58, Bernd Schubert [EMAIL PROTECTED] wrote:
Hello,
I need to set a rather huge dead time of 1200s, but the initial dead time
is supposed to be 120s or less. However, heartbeat tries to be
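For context, the two timers under discussion are separate ha.cf directives: deadtime governs normal operation, initdead the window right after startup. A fragment with Bernd's values might look like the following (the keepalive and warntime values are invented for completeness); note that the complaint in this thread is precisely that heartbeat second-guesses an initdead smaller than deadtime, so 2.x may reject or adjust the combination shown.

```
# /etc/ha.d/ha.cf (fragment; dead times from the mail above)
keepalive 2       # interval between heartbeats (example value)
warntime  600     # warn when heartbeats are late (example value)
deadtime  1200    # runtime dead time the poster needs
initdead  120     # desired *initial* dead time -- heartbeat may
                  # refuse or adjust this when it is < deadtime
```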
Hi all,
how can I stop/start the monitoring of a resource?
I tried to use crm_resource, but did not manage to modify the disabled
value.
Regards.
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
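Since the original question mentions a "disabled" value: on heartbeat 2 each operation in the CIB can carry a disabled attribute, and one approach that may work is editing that attribute directly with cibadmin. Everything below is a hypothetical sketch -- the op id and interval are invented, and flag spellings can differ between 2.0.x releases, so check cibadmin --help and look up your real op id first with cibadmin -Q -o resources.

```shell
# Disable monitoring for one resource by flipping its monitor op
# (op id "op_myrsc_monitor" is made up -- use your own):
cibadmin -M -o resources -X \
  '<op id="op_myrsc_monitor" name="monitor" interval="10s" disabled="true"/>'

# Re-enable it later:
cibadmin -M -o resources -X \
  '<op id="op_myrsc_monitor" name="monitor" interval="10s" disabled="false"/>'
```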
Steinhauer Juergen wrote:
Hi!
I have the following problem with a two-node cluster:
I have two DRBD resources. On the node where drbd0 is master, a certain
resource group with different resources will be activated. On the node
where drbd1 is master, this will happen with another resource group.
Now I want that the
Hi,
On Wed, Apr 09, 2008 at 08:26:02PM +0200, Bernd Schubert wrote:
Hello Lars,
On Wednesday 09 April 2008 18:34:39 Lars Marowsky-Bree wrote:
On 2008-04-08T19:32:58, Bernd Schubert [EMAIL PROTECTED] wrote:
Hi,
On Thu, Apr 10, 2008 at 12:05:18PM +0200, Bernd Schubert wrote:
On Wednesday 09 April 2008 22:20:16 Lars Marowsky-Bree wrote:
On 2008-04-09T20:26:02, Bernd Schubert [EMAIL PROTECTED] wrote:
Hi all,
I've got my haresources file set up with two resources, one on node-a and
one on node-b. Similar to the common MySQL/Apache setup using drbd.
I don't want to set auto_failback on as I'd like to do it manually if one
node goes down.
So when node-a goes down, node-b takes both resources
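For reference, a haresources layout matching Matt's description (one resource set preferring each node, MySQL/Apache over drbd) might look like the sketch below; the device names, mount points, and service names are invented. With auto_failback off in ha.cf, resources stay where they landed after a failover until moved back by hand.

```
# /etc/ha.d/haresources (sketch; names are examples)
# Each line: preferred node, then the resources that prefer it.
node-a drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext3 mysql
node-b drbddisk::r1 Filesystem::/dev/drbd1::/var/www::ext3 apache
```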
On Thursday 10 April 2008 12:48:27 Lars Ellenberg wrote:
On Wed, Apr 09, 2008 at 06:34:39PM +0200, Lars Marowsky-Bree wrote:
On 2008-04-08T19:32:58, Bernd Schubert [EMAIL PROTECTED] wrote:
crm_resource -M -r $RESOURCE -H $HOST
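Expanding that one-liner (it applies to CRM-enabled heartbeat only, not to a v1 haresources setup like Matt's): -M migrates a resource by inserting a location constraint, which should be cleared again afterwards. Resource and host names below are placeholders.

```shell
# Migrate a resource to a chosen node (heartbeat 2 CRM):
crm_resource -M -r myresource -H node-b

# -M works by adding a location constraint pinning the resource;
# clear it again once you are done, or the resource stays pinned:
crm_resource -U -r myresource
```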
On Thu, Apr 10, 2008 at 9:00 AM, Matt [EMAIL PROTECTED] wrote:
Apologies! hb_standby is what I couldn't find.
Thanks,
Matt
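For anyone else searching the archive: hb_standby is the v1/haresources way to hand resources over manually. A sketch follows; the install path varies by distribution (/usr/share/heartbeat/ or /usr/lib/heartbeat/), so adjust as needed.

```shell
# Run on the node that should give up its resources.
# With auto_failback off, this is the manual failback step.
/usr/share/heartbeat/hb_standby        # hand over all resources held here

# Optionally scope the handover, e.g. only resources this node
# is not the preferred owner of:
/usr/share/heartbeat/hb_standby foreign
```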
-- Forwarded message --
From: Matt [EMAIL PROTECTED]
Date: 10 Apr 2008 16:00
Subject: manually fail back a resource
To: linux-ha@lists.linux-ha.org