Roland G. McIntosh wrote:
Dominik Klein wrote:
> With a failure stickiness of -30, you allow your group's resources to
> fail (400/30)=14 times. Is that what you want?
Although the default failure stickiness is -30, the group has a failure
stickiness of -100. I would like to fail over after 3
I think I have found my problem though: I hadn't added the resource
location constraint for pingd. I added this snippet to the CIB to constrain
the master-slave drbd resource to not run on a node with lost
connectivity and so far in my tests it seems to work:
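The snippet itself did not survive the archive; a typical constraint of this kind (the resource name ms-drbd0, the IDs, and the pingd attribute name are assumptions, not the poster's actual config) looks roughly like:

```xml
<!-- Hypothetical example: keep the master-slave resource off nodes
     where the pingd attribute is missing or zero (lost connectivity). -->
<rsc_location id="drbd-needs-connectivity" rsc="ms-drbd0">
  <rule id="drbd-needs-connectivity-rule" score="-INFINITY" boolean_op="or">
    <expression id="pingd-undefined" attribute="pingd" operation="not_defined"/>
    <expression id="pingd-zero" attribute="pingd" operation="lte" value="0"/>
  </rule>
</rsc_location>
```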
Slightly OT, but wit
Jason Erickson wrote:
The only part that i am confused about is where do you set the resource
score for a node?
Within the constraints section of the CIB.
Something like this:
attribute="#uname" operation="eq" value="node1"/>
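The surrounding markup got clipped above; a complete location constraint of that shape (IDs and the resource name my_resource are placeholders) would look something like:

```xml
<rsc_location id="prefer-node1" rsc="my_resource">
  <rule id="prefer-node1-rule" score="100">
    <expression id="prefer-node1-expr"
                attribute="#uname" operation="eq" value="node1"/>
  </rule>
</rsc_location>
```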
Hello guys,
this topic is interesting to me, but I still don't know which
new packages are required to get the same functionality as
with these old packages:
heartbeat 2.1.3-3
heartbeat-pils 2.1.3-3
heartbeat-stonith 2.1.3-3
Ok, what I know is that I do
Hello Chris,
there is no need to put the vlan logic into the resource agent. Just
configure the interface _before_ and use it _afterwards_. I have it
running for ages on two different machines and it just works.
Thomas
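For example (interface name, VLAN ID, and addresses are made up; vconfig was the usual tool on 2008-era systems):

```shell
# Create the VLAN sub-interface and give it its "base" IP before
# heartbeat starts; IPaddr2 then only adds the cluster alias on top.
vconfig add eth0 100
ifconfig eth0.100 192.168.100.1 netmask 255.255.255.0 up

# iproute2 equivalent:
# ip link add link eth0 name eth0.100 type vlan id 100
# ip addr add 192.168.100.1/24 dev eth0.100
# ip link set eth0.100 up
```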
___
Linux-HA mailing list
L
On Thu, Mar 27, 2008 at 2:53 AM, William Francis <[EMAIL PROTECTED]> wrote:
>
> I have a few resources which I want to put into a group, but I keep
> getting this error when I run:
>
> cibadmin -C -o resources -x test.xml
Call cib_create failed (-47): Update does not conform to the DTD in
> /usr
LS,
On both my 2.1.3 centos packaged cluster nodes the cibadmin -Q command
hangs. cibadmin -Q -l does respond.
The DC has the following message in the log:
Mar 26 16:46:17 mandelbrot cib: [12093]: info: cib_process_readwrite: We
are now in R/O mode
this was shortly after mistakenly giving the c
On Mar 27, 2008, at 3:41 AM, HIDEO YAMAUCHI wrote:
Hi,
This was fixed "recently" in pacemaker and is included in the current
release.
Basically we suppressed the error - since it's only important to the
CIB itself.
I tested it with the following setup.
* Heartbeat 2.1.3 - Release tar ball
*
On Mar 26, 2008, at 10:50 PM, Roland G. McIntosh wrote:
These are RHEL4 packages I built myself, and exist here:
http://rgm.nu/heartbeat/
Which version of heartbeat was installed when you built the pacemaker?
The one you just built (2.1.3-15.1) or an older version?
I can imagine this occurr
Hi Thomas,
On Thu, Mar 27, 2008 at 7:01 PM, Thomas Glanzmann <[EMAIL PROTECTED]> wrote:
> Hello Chris,
> there is no need to put the vlan logic into the resource agent. Just
> configure the interface _before_ and use it _afterwards_. I have it
> running for ages on two different machines and it
On Thu, Mar 27, 2008 at 2:29 AM, Chris Donovan <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I've been struggling to find some documentation (if any exists) on
> how to add static routes once the IPaddr2 resources have come online.
> I am using IPaddr2 for 802.1q tagging (VLAN trunks). I need static
Hi,
> This was the changeset:
> http://hg.clusterlabs.org/pacemaker/stable-0.6/rev/a608a625ec61
>
> But for some reason it doesn't appear to be working for you :-/
>
> If you file a bug I'll make sure I look into it
I confirmed that the source code was definitely changed.
(even changeset whic
Hi Andreas,
On Thu, Mar 27, 2008 at 8:55 PM, Andreas Kurz <[EMAIL PROTECTED]> wrote:
> On Thu, Mar 27, 2008 at 2:29 AM, Chris Donovan <[EMAIL PROTECTED]> wrote:
> > Hello,
> >
> > I've been struggling to find some documentation (if any exists) on
> > how to add static routes once the IPaddr
Hi,
On Thu, Mar 27, 2008 at 09:35:29AM +0100, Johan Hoeke wrote:
> LS,
>
> On both my 2.1.3 centos packaged cluster nodes the cibadmin -Q command
> hangs. cibadmin -Q -l does respond.
>
> The DC has the following message in the log:
>
> Mar 26 16:46:17 mandelbrot cib: [12093]: info: cib_process
Hi,
On Wed, Mar 26, 2008 at 04:48:31PM +0200, Szasz Tamas wrote:
> Hi list, what can I do when heartbeat (2.1.3) is started, but the
> resources are not started?
The logs should say why.
> The log files (ha-debug, ha-log) print these lines:
>
> heartbeat[12595]: 2008/03/26_16:18:48 info: Vers
Hi,
On Wed, Mar 26, 2008 at 05:12:57PM +0100, Gauthier DOUCHET wrote:
> Hello all,
>
> I'm using heartbeat 1.2.5-3 and DRBD on two Etch servers (in
> active/passive).
>
> I have a question about the configuration files of my services.
> Is it a nice idea to put my /etc/mysql (or any other config
Dejan Muhamedagic wrote:
> Hi,
>
> On Thu, Mar 27, 2008 at 09:35:29AM +0100, Johan Hoeke wrote:
>> LS,
>>
>> On both my 2.1.3 centos packaged cluster nodes the cibadmin -Q command
>> hangs. cibadmin -Q -l does respond.
>>
>> The DC has the following message in the log:
>>
>> Mar 26 16:46:17 mandel
> > You can write your own RA to set static routes. For your VLAN tagged
> > interfaces you can configure it like any other interface: the
> > interface with a "base" IP is configured by your OS, heartbeat
> > configures an alias to this interface.
>
> Hmm I'm not sure how to configu
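The "write your own RA" suggestion quoted above could be sketched like this (route, gateway, and device values are invented; a minimal heartbeat R1-style start/stop/status script, not a polished OCF agent):

```shell
#!/bin/sh
# Sketch of a heartbeat R1-style resource agent that manages one
# static route; ROUTE/GW/DEV are example values.
ROUTE="10.10.0.0/24"
GW="192.168.100.254"
DEV="eth0.100"

case "$1" in
  start)  ip route replace "$ROUTE" via "$GW" dev "$DEV" ;;
  stop)   ip route del "$ROUTE" via "$GW" dev "$DEV" 2>/dev/null ;;
  status) if ip route show "$ROUTE" | grep -q "$GW"; then
            echo "running"
          else
            echo "stopped"
          fi ;;
  *)      echo "usage: $0 {start|stop|status}"; exit 1 ;;
esac
```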
Hi,
On Wed, Mar 26, 2008 at 08:39:32PM +0100, Niels de Carpentier wrote:
> I've written a custom stonith agent, but it is not totally clear what the
> status command should do.
>
> Should it return the status of the stonith device, or the status of the
> host(s) controlled by the stonith device?
On Thu, Mar 27, 2008 at 11:57:52AM +0100, Johan Hoeke wrote:
> Dejan Muhamedagic wrote:
> > Hi,
> >
> > On Thu, Mar 27, 2008 at 09:35:29AM +0100, Johan Hoeke wrote:
> >> LS,
> >>
> >> On both my 2.1.3 centos packaged cluster nodes the cibadmin -Q command
> >> hangs. cibadmin -Q -l does respond.
>
2008/3/27, Dejan Muhamedagic <[EMAIL PROTECTED]>:
>
> Hi,
>
>
> On Wed, Mar 26, 2008 at 05:12:57PM +0100, Gauthier DOUCHET wrote:
> > Hello all,
> >
> > I'm using heartbeat 1.2.5-3 and DRBD on two Etch servers (in
> > active/passive).
> >
> > I have a question about the configuration files of my se
Hi list!
I'm using heartbeat on 2 active/active firewall systems; within the last
48 hours, coinciding with an uptime of 49 days and a few hours, all
servers have suffered the same problem: /var/log/heartbeat.log grows
until it fills the free space on the /var partition with messages like the attached file.
Fr
Dejan Muhamedagic wrote:
> On Thu, Mar 27, 2008 at 11:57:52AM +0100, Johan Hoeke wrote:
>> Dejan Muhamedagic wrote:
>>> Hi,
>>>
>>> On Thu, Mar 27, 2008 at 09:35:29AM +0100, Johan Hoeke wrote:
LS,
On both my 2.1.3 centos packaged cluster nodes the cibadmin -Q command
hangs. ciba
Hi Andreas,
> > Hmm I'm not sure how to configure the VLAN interface as you describe.
> > I am sure I need to build it as it's own device. It can't as far as I
> > know be a sub interface of an already existing interface. I think
> > what you are saying (please correct me) is that in ord
On Thu, Mar 27, 2008 at 01:14:13PM +0100, Johan Hoeke wrote:
> Dejan Muhamedagic wrote:
> > On Thu, Mar 27, 2008 at 11:57:52AM +0100, Johan Hoeke wrote:
> >> Dejan Muhamedagic wrote:
> >>> Hi,
> >>>
> >>> On Thu, Mar 27, 2008 at 09:35:29AM +0100, Johan Hoeke wrote:
> LS,
>
> On both
Hello,
I have read several postings in the mail archive about the external/ipmi
configuration but there are still some questions that bother me.
The last posting from Thomas: did this CIB configuration work with
your 2-node cluster?
I have to configure also 2 nodes and would like to use the i
Hi,
are there any other commands or tools besides crm_standby which can
be used for HA cluster node maintenance?
I wonder how you can shut down all the nodes of your cluster properly
without initiating failovers. What do you recommend in such a case?
I could run crm_standby on all nod
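One approach (node names are invented; some crm_standby versions want true/false rather than on/off) is to put every node into standby first, so resources stop cleanly before heartbeat is shut down:

```shell
# Put both nodes into standby, then shut heartbeat down
# without triggering a failover.
crm_standby -U node1 -v on
crm_standby -U node2 -v on
# ... do the maintenance, then bring them back:
crm_standby -U node1 -v off
crm_standby -U node2 -v off
```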
Hello Martin,
it is pure luck that I am so bored that I read this list, next time CC
me. :-)
> I have read several postings in the mail archive about the
> external/ipmi configuration but there are still some questions that
> bother me. The last posting from Thomas: did this cib-configuration
> w
Hello Danny,
> are there any other commands or tools besides crm_standby which can
> be used for HA cluster node maintenance? I wonder how you can
> shut down all the nodes of your cluster properly without initiating
> failovers. What do you recommend in such a case? I could run
> crm_stan
> -Original Message-
> From: General Linux-HA mailing list
> Sent: 27.03.08 16:32:20
> To: General Linux-HA mailing list
> Subject: [Linux-HA] HA maintenance mode
>
> I wonder how you can shutdown all the nodes of your cluster properly
> without initiating failovers. What do
We have a simple two node cluster that shares an IP and one resource
(exim). Connection is by serial link. The system works fine if we
power down the master or take it offline, but if the master experiences
a drive error that makes the resource unavailable, the failover never happens.
This has happen
Andreas Kurz wrote:
Groups must not contain master_slave or clone resources ... the other
way round is possible: master_slave/clone resources can contain
groups.
Andreas, thanks for the clarification on that point.
My next question would be, which is the better/more normal way to do it?
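For the clone-containing-a-group direction Andreas mentions, a sketch (IDs and the Dummy resource are placeholders, not a recommendation for your actual resources) might be:

```xml
<clone id="clone_grp">
  <group id="grp">
    <primitive id="grp_dummy" class="ocf" provider="heartbeat" type="Dummy">
      <operations>
        <op id="grp_dummy_mon" name="monitor" interval="30s"/>
      </operations>
    </primitive>
  </group>
</clone>
```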
On Thu, Mar 27, 2008 at 9:16 PM, William Francis <[EMAIL PROTECTED]> wrote:
> Andreas Kurz wrote:
> >
> > Groups must not contain master_slave or clone resources ... the other
> > way round is possible: master_slave/clone resources can contain
> > groups.
> >
> >
>
> Andreas, thanks for the
On Thu, Mar 27, 2008 at 2:37 PM, Chris Donovan <[EMAIL PROTECTED]> wrote:
> Hi Andreas,
>
>
> > > Hmm I'm not sure how to configure the VLAN interface as you describe.
> > > I am sure I need to build it as it's own device. It can't as far as I
> > > know be a sub interface of an already e