Please forgive the n00b question:
I've written a STONITH device script for systems that monitor their UPSes using
NUT. I think it might be of sufficient interest to include in the standard
Pacemaker distribution. What is the procedure for submitting such scripts?
I don't particularly want credit
Hi,
On Fri, Aug 27, 2010 at 05:24:21PM +0200, Bernd Schubert wrote:
> > > location l-pingdnet1-mds1 cl-pingdnet1 100: mds1
> > > location l-pingdnet1-mds2 cl-pingdnet1 100: mds2
> > > location l-pingdnet1-oss1 cl-pingdnet1 100: oss1
> > > location l-pingdnet1-oss2 cl-pingdnet1 100: oss2
> > > loca
> > location l-pingdnet1-mds1 cl-pingdnet1 100: mds1
> > location l-pingdnet1-mds2 cl-pingdnet1 100: mds2
> > location l-pingdnet1-oss1 cl-pingdnet1 100: oss1
> > location l-pingdnet1-oss2 cl-pingdnet1 100: oss2
> > location l-pingdnet1-oss3 cl-pingdnet1 100: oss3
> > location l-pingdnet1-oss4 cl-p
-----Original Message-----
From: Andrew Beekhof [mailto:and...@beekhof.net]
Sent: Friday, August 27, 2010 7:24 AM
To: The Pacemaker cluster resource manager
Subject: Re: [Pacemaker] order constraint based on any one of many
On Tue, Aug 24, 2010 at 4:03 AM, Patrick Irvine wrote:
> Hi Vishal & l
Sorry for the double post. While re-reading my own mail I found it: I
used the wrong host names in the location constraints :( That also explains
why it worked on another cluster.
Sorry for the noise,
Bernd
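Bernd's diagnosis (wrong host names in the location constraints) is easy to check: the node name at the end of each constraint must match what the cluster itself reports. A small sketch, assuming a standard Pacemaker install:

```
# Print the node names as the cluster knows them; the last field of each
# "location ... score: node" line must match one of these exactly:
crm_node -l
# uname -n on each node should agree with the names used in the CIB:
uname -n
```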
On Thursday, August 26, 2010, Bernd Schubert wrote:
> Hi all,
>
> I'm trying to s
On Tue, Aug 3, 2010 at 4:40 PM, Guillaume Chanaud
wrote:
> Hello,
> sorry for the delay; July is not the best month to get things
> working fast.
Neither is August :-)
>
> Here is the core dump file (55MB) :
> http://www.connecting-nature.com/corosync/core
> corosync version is 1.2.3
So
On Fri, Jul 30, 2010 at 8:38 AM, Thomas Guthmann wrote:
> Re,
>
>> [..] I can provide a hb_report if necessary.
> See the attached report for the simple config below. Note that I dumbly
> erased the conf before doing the report, but I've pasted it below.
Thanks, I'll hopefully get to this next week
On Tue, Aug 24, 2010 at 4:03 AM, Patrick Irvine wrote:
> Hi Vishal & list,
>
> Thanks for the info. Unfortunately that won't do, since this clone (glfs) is
> the actual mounting of the users' home directories and needs to be mounted
> whether the local glfsd (server) is running or not. I do think I
On Fri, Aug 13, 2010 at 5:43 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Fri, Aug 13, 2010 at 02:55:30PM +, Chris Picton wrote:
>> On Fri, 13 Aug 2010 14:37:18 +, Chris Picton wrote:
>>
>> >>> On Fri, Aug 13, 2010 at 01:44:28PM +, Chris Picton wrote: I have a
>> >>> drbd backed mysql ser
Tim Serong wrote:
On 8/27/2010 at 03:22 PM, Michael Smith wrote:
I have a pacemaker setup using the Xen resource agent and I've found
something weird during migration: if a VM is in the middle of
live-migrating from node 1 to node 2, and I stop the resource in crm,
pacemaker forgets about
On Thu, Aug 26, 2010 at 10:02 PM, Bernd Schubert
wrote:
> Hi all,
>
> I'm trying to start a pingd clone resource on an asymmetric cluster.
> I specified locations, but it still refuses to start pingd
>
> ===
> [r...@vrhel5-mds1 ha.d]# cat pingd.cib
> primitive pingdnet1 ocf:pacemaker:pingd
> \
>
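For context, a working configuration of this kind normally pairs the pingd clone with explicit per-node location constraints, since an asymmetric cluster (symmetric-cluster="false") runs nothing without one. A minimal crm-shell sketch: the resource and node names follow the thread, while host_list, dampen, and the monitor interval are assumptions:

```
primitive pingdnet1 ocf:pacemaker:pingd \
    params host_list="192.168.1.1" multiplier="100" dampen="5s" \
    op monitor interval="15s"
clone cl-pingdnet1 pingdnet1
location l-pingdnet1-mds1 cl-pingdnet1 100: mds1
location l-pingdnet1-mds2 cl-pingdnet1 100: mds2
property symmetric-cluster="false"
```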
Tim Serong wrote:
On 8/27/2010 at 03:37 PM, Michael Smith wrote:
I think I'd consider it a bug: I've disabled stonith, so dlm shouldn't
wait forever for a fence operation that isn't going to happen.
I reckon if you set the args parameter of your ocf:pacemaker:controld
resource to "-f 0 -
On Thu, Aug 26, 2010 at 4:42 PM, Ruiyuan Jiang wrote:
> Hi, Andrew
>
> Understood that. I am asking for any recommendation for storage management
> under Pacemaker.
Well if you have a SAN, and only want it mounted on one machine at a
time... then do you actually need any?
Even if so, then the "under
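If the SAN LUN only ever needs to be mounted on one node at a time, a plain ocf:heartbeat:Filesystem primitive (no cluster filesystem, no clvmd/dlm) is typically all the "storage management" required. A sketch; the device, mount point, and fstype are hypothetical:

```
primitive fs-san ocf:heartbeat:Filesystem \
    params device="/dev/mapper/san-lun0" directory="/data" fstype="ext3" \
    op monitor interval="20s" timeout="40s"
```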
On Tue, Aug 10, 2010 at 6:57 PM, Stepan, Troy wrote:
> Hi,
>
> I applied the changeset for Bug lf#2433 (No services should be stopped until
> probes finish) to pacemaker 1.0.7-4.1.
The PE is sufficiently complex that it's quite normal for backports
like this not to have the intended result.
It's q
On Mon, Aug 9, 2010 at 6:43 PM, bunkertor wrote:
> Hi to all!
> I have some problems with cluster and GFS2 mounting on iSCSI devices.
> I followed this guide
> "http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" with some changes
> depending on my environment; so the cluster should connect
2010/8/27 :
> Hi Andrew,
>
> I registered this problem on Bugzilla.
>
> * http://developerbugs.linux-foundation.org/show_bug.cgi?id=2476
Thanks, I'll follow up there.
(Sorry, still catching up on email from the last few weeks. Your
thread wasn't forgotten, I was just doing the "easy" stuff first
On 8/27/2010 at 03:22 PM, Michael Smith wrote:
> Hi,
>
> I have a pacemaker setup using the Xen resource agent and I've found
> something weird during migration: if a VM is in the middle of
> live-migrating from node 1 to node 2, and I stop the resource in crm,
> pacemaker forgets about
On 8/27/2010 at 03:37 PM, Michael Smith wrote:
> On Thu, 26 Aug 2010, Tim Serong wrote:
>
> > > for now I have stonith-enabled="false" in
> > > my CIB. Is there a way to make clvmd/dlm respect it?
> >
> > No. At least, I don't think so, and/or I hope not :)
>
> I think I'd consider
On 2010-08-27 11:08, jimbob palmer wrote:
>> Which means you're causing a service interruption when you don't need
>> to. Instead, your application could continue running on the same node,
>> DRBD will ensure that the application transparently writes to and reads
>> from the peer when it thinks it'
2010/8/27 Florian Haas :
> On 2010-08-27 10:31, jimbob palmer wrote:
>> 2010/8/27, Florian Haas :
>>> On 2010-08-26 16:43, jimbob palmer wrote:
>>>> How can I configure pacemaker to failover when the primary node goes
>>>> diskless?
>>>>
>>>> Many thanks.
>>>
>>> man drbd.conf
>>>
>>> Look for the l
On 2010-08-27 10:31, jimbob palmer wrote:
> 2010/8/27, Florian Haas :
>> On 2010-08-26 16:43, jimbob palmer wrote:
>>> How can I configure pacemaker to failover when the primary node goes
>>> diskless?
>>>
>>> Many thanks.
>>
>> man drbd.conf
>>
>> Look for the local-io-error handler and the on-io-e
2010/8/27, Florian Haas :
> On 2010-08-26 16:43, jimbob palmer wrote:
>> How can I configure pacemaker to failover when the primary node goes
>> diskless?
>>
>> Many thanks.
>
> man drbd.conf
>
> Look for the local-io-error handler and the on-io-error option.
>
> I doubt that it's a good idea to do
On 2010-08-26 16:43, jimbob palmer wrote:
> How can I configure pacemaker to failover when the primary node goes
> diskless?
>
> Many thanks.
man drbd.conf
Look for the local-io-error handler and the on-io-error option.
I doubt that it's a good idea to do this though; you're deliberately
foregoing
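The two knobs Florian points at live in drbd.conf's disk and handlers sections. A minimal sketch; the resource name and handler script path are assumptions, and whether escalating a local I/O error into a failover is wise depends on the setup, as the caveat above suggests:

```
resource r0 {
  disk {
    # detach: drop the backing device and continue diskless (the behaviour
    # being discussed); call-local-io-error: run the handler below instead
    on-io-error call-local-io-error;
  }
  handlers {
    # hypothetical script, e.g. one that demotes or fences the node so
    # Pacemaker fails the service over
    local-io-error "/usr/local/sbin/drbd-local-io-error.sh";
  }
}
```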
On Fri, Aug 27, 2010 at 3:03 AM, wrote:
> Hi Andrew,
>
> Thank you for comment.
>
>> Why not simply remove the if(was_processing_error) block?
>> Its just a summary message, the place that set was_processing_error
>> will also have logged an error.
>
> Do you mean that the following code should be removed?
On 26/08/2010 18:44, liang...@asc-csa.gc.ca wrote:
Hi,
Hi
I installed ipvsadm and ran
ipvsadm --start-daemon=master --mcast-interface=eth0
on the master node and
ipvsadm --start-daemon=backup --mcast-interface=eth0
on the backup node. But I still lost the FTP connection during the node swap.
Did you
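Two details often matter here: both sync daemons should share a --syncid, and FTP's separate data connection needs the ip_vs_ftp helper loaded on both directors. A hedged sketch; the interface and syncid values are assumptions:

```
# On the master director:
ipvsadm --start-daemon=master --mcast-interface=eth0 --syncid=1
# On the backup director:
ipvsadm --start-daemon=backup --mcast-interface=eth0 --syncid=1
# On both directors; FTP opens a secondary data connection LVS must track:
modprobe ip_vs_ftp
```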