Hello,
On Thu, Nov 7, 2013 at 1:38 PM, Jean-Francois Malouin <
jean-francois.malo...@bic.mni.mcgill.ca> wrote:
> ... the hardware that they dropped on my lap doesn't have
> IPMI and I will definitely require stonith.
>
> What would you recommend? A switchable PDU/power fencing?
>
>
Do you have sh
Hi Cherish,
On Wed, Dec 19, 2012 at 1:11 AM, bin chen wrote:
> Hi,all
> My cluster is pacemaker 1.1.7 + corosync 2.0. I have write a
> resource agent to manage the virtual machine.The RA supports
> start,stop,migrate_from,migrate_to,monitor.
> But when I try to migrate a running
Oops, I haven't have my coffee yet this morning... I see you've written
your own RA rather than using the existing ones, my apologies for the noise
on the list.
Mark
On Wed, Dec 19, 2012 at 9:08 AM, mark - pacemaker list <
m+pacema...@nerdish.us> wrote:
> Hi Cherish,
>
>
Hi Lars,
On Wed, Aug 15, 2012 at 3:33 AM, Lars Marowsky-Bree wrote:
> On 2012-07-28T17:30:34, mark - pacemaker list
> wrote:
>
> ...
> Note that sbd is being removed from cluster-glue and split into a
> separate package upstream nowadays, so RHT's decision me
This output kind of shows it all... I can configure the cluster, put nodes
in standby and back online, move resources from one to the other, etc., but
if any rule references a node in the configuration, then 'crm configure
verify' fails saying the node doesn't exist. This is a freshly-started
clus
Good afternoon,
Looking at the corosync configuration examples on the wiki (
http://www.clusterlabs.org/wiki/Initial_Configuration ) and in the Clusters
from Scratch document (
http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_sample_corosync_configuration.html),
there
ticed that I can't replicate the issue any longer
this morning. I guess it went away as soon as we rolled to August 1st.
Thank you,
Mark
> On Wed, Aug 1, 2012 at 2:51 PM, mark - pacemaker list
> wrote:
> > Hello,
> >
> > I suspect I've missed a dependenc
Hi Andreas,
On Wed, Aug 1, 2012 at 8:07 AM, Andreas Kurz wrote:
> On 08/01/2012 06:51 AM, mark - pacemaker list wrote:
> > Hello,
> >
> >...
>
> Pacemaker 1.1.7 is included in CentOS 6.3 no need to build it for
> yourself... or do you try to build latest git v
Hello,
I suspect I've missed a dependency somewhere in the build process, and I'm
hoping someone recognizes this as an easy fix. I've basically followed the
build guide on ClusterLabs in 'Clusters from Scratch v2', the build from
source section. The hosts are CentOS 6.3 x86_64. My only changes
Hello list,
If you're building cluster-glue from source, it builds sbd. However, If you
install cluster-glue, corosync, and pacemaker from official repos, there is
no sbd binary. The deb for cluster-glue in Debian is version 1.0.6 rather
than 1.0.5 and it has the sbd binary, so has it been decide
Hi Luca,
On Wed, Jun 20, 2012 at 6:38 AM, Luca Lesinigo wrote:
> ...
>
Andrew gave you the right answers for all of the above, I just wanted to
add something for this question:
> - what could happen if all SAS links between a single node and the storage
> stop working?
> (ie, storage array wo
On Tue, Apr 3, 2012 at 5:09 AM, Lars Marowsky-Bree wrote:
> On 2012-04-02T11:01:31, mark - pacemaker list
> wrote:
>
> > Debian's corosync/pacemaker scripts don't include a way to start SBD, you
> > have to work up something on your own to get it started prior to
Hi Lars,
On Mon, Apr 2, 2012 at 10:35 AM, Lars Marowsky-Bree wrote:
> On 2012-04-02T09:33:22, mark - pacemaker list
> wrote:
>
> > Hello,
> >
> > I'm just looking to verify that I'm understanding/configuring SBD
> > correctly. It works great in the
Hello,
I'm just looking to verify that I'm understanding/configuring SBD
correctly. It works great in the controlled cases where you unplug a node
from the network (it gets fenced via SBD) or remove its access to the
shared disk (the node suicides). However, In the event of a hardware
failure or
Hi,
On Wed, Mar 14, 2012 at 1:43 PM, Regendoerp, Achim <
achim.regendo...@galacoral.com> wrote:
> Hi,
>
> ** **
>
> Below is a cut out from the tcpdump run on both boxes. The tcpdump is the
> same on both boxes.
>
> The traffic only appears if I set the bindnetaddr in
> /etc/corosync/cor
Hello,
I have a pretty simple cluster running with three nodes, xen1, xen2, and
qnode (which runs in standby at all times and only exists for quorum).
This afternoon xen1 reset out of the blue. There is nothing in its logs,
in fact there's a gap from 15:37 to 15:47:
Feb 23 15:36:18 xen1 lrmd: [
Hi Dirk,
On Fri, Nov 25, 2011 at 6:05 AM, Hellemans Dirk D
wrote:
> Hello everyone,
>
> ** **
>
> I’ve been reading a lot lately about using Corosync/Openais in combination
> with Pacemaker: SuSe Linux documentation, Pacemaker & Linux-ha website,
> interesting blogs, mailinglists, etc. As I’
Hi,
On Mon, Oct 24, 2011 at 9:52 AM, Alan Robertson wrote:
> **
> Setting no-quorum-policy to ignore and disabling stonith is not a good
> idea. You're sort of inviting the cluster to do screwed up things.
>
>
>
Isn't "no-quorum-policy ignore" sort of required for a two-node cluster?
Without i
Hello,
I'm trying to get stonith via SBD working on Debian Squeeze. All of the
components seem to be there, and I've very carefully followed the guide at
http://www.linux-ha.org/wiki/SBD_Fencing . Where things seem to fall down is
that there's nothing at all in corosync's init script to start SBD
Hi Michael,
On Wed, Oct 5, 2011 at 2:16 AM, Michael Schwartzkopff <
mi...@schwartzkopff.org> wrote:
> > Hi,
> >
> > I know this is probably a simple request, but I'm coming up with nothing
> as
> > far as workable documenation for this. The only writeup I can find is a
> > guy doing ocfs2 on DRB
Hi,
I know this is probably a simple request, but I'm coming up with nothing as
far as workable documenation for this. The only writeup I can find is a guy
doing ocfs2 on DRBD, and he skipped ocfs2 in pacemaker, instead using
cluster.conf.
For my small setup, there's no DRBD in the equation, the
On Sat, Oct 1, 2011 at 5:32 AM, Miltiadis Koutsokeras <
m.koutsoke...@biovista.com> wrote:
> From the
> messages it seems like the manager is getting unexpected exit codes from
> the
> Apache resource. The server-status URL is accessible from 127.0.0.1 in both
> nodes.
>
>
Am I understanding corre
Hello again,
Replying to my own message with a "for the archives" post, my issue with
services being started concurrently after a node reboot came down to the
fact that I'm using the VirtualDomain RA, but by default CentOS 6.0 and
Scientific Linux 6.1 (and presumably RHEL6 as well) start libvirtd
Hello,
On Mon, Aug 22, 2011 at 2:55 AM, ihjaz Mohamed wrote:
> Hi,
>
> Has any one here come across this issue?.
>
>
Sorry for the delay, but I wanted to respond and let you know that I'm also
having this issue. I can pretty reliably kill a pretty simple cluster setup
by rebooting one of the no
Hello,
I'm trying to replicate a cluster I initially built for testing on CentOS
5.6, but with the fresher packages that come along with a 6.x release.
CentOS is still playing catch-up, so their 6.0 pacemaker packages are a bit
older. Based on that, I figured I'd try Scientific Linux 6.1 since i
Hi Steve,
On Thu, Aug 4, 2011 at 11:22 AM, Steven Dake wrote:
> ...
>
> Yes I will record if I can beat elluminate into submission.
>
> Regards
> -steve
>
>
Did you get to record this talk? I'd also love to see it, but it wasn't
possible for me to catch it live on Friday.
Thanks,
Mark
___
On Wed, Jun 15, 2011 at 4:20 PM, Dejan Muhamedagic wrote:
> On Wed, Jun 15, 2011 at 03:26:56PM -0500, mark - pacemaker list wrote:
> > On Wed, Jun 15, 2011 at 12:24 PM, imnotpc wrote:
> >
> > >
> > > What I was thinking is that the DC is never fenced
> >
&g
On Wed, Jun 15, 2011 at 12:24 PM, imnotpc wrote:
>
> What I was thinking is that the DC is never fenced
Is this actually the case? It would sure explain the one "gotcha" I've
never been able to work around in a three node cluster with stonith/SBD. If
you unplug the network cable from the DC (
Hi Kevin,
On Tue, May 24, 2011 at 9:12 AM, Kevin Stevenard wrote:
>
> Because by default on my asymmetric cluster I saw that the op monitor
> action is only executed on the node where the resource is currently running,
> and when a user start manually (not through the crm) the same resource on
>
Hi Phil,
On Wed, Apr 27, 2011 at 10:18 AM, Phil Hunt wrote:
>
> Using ocf:heartbeat:clustermon starts up a daemonized crm_mon with the
> follwing command:
>
> /usr/sbin/crm_mon -p /tmp/ClusterMon_ClusterMon.pid -d -i 15 -h
> /data/apache/www/html/crm_mon.html
>
> And it does, indeed write my cr
Hi Phil,
On Tue, Apr 19, 2011 at 3:36 PM, Phil Hunt wrote:
> Hi
> I have iscsid running, no iscsi.
Good. You don't want the system to auto-connect the iSCSI disks on
boot, pacemaker will do that for you.
>
>
>
> Here is the crm status:
>
> Last updated: Tue Apr 19 12:39:03 2011
>
Hello,
On Mon, Apr 11, 2011 at 11:11 AM, Andrew Beekhof wrote:
> On Mon, Apr 11, 2011 at 2:48 PM, Klaus Darilion
> wrote:
>>
>> Recently I got hit by running out of inodes due to too many files in
>> /var/lib/pengine.
>
> man pengine
>
> look for "-series-max"
There is no pengine man page in t
Hi Phil,
On Fri, Apr 8, 2011 at 11:13 AM, Phil Hunt wrote:
>
> Hi
>
> I have been playing with DRBD, thats cool
>
> But I have 2 VM RHEL linux boxes. They each have a boot device (20g) and a
> shared ISCSI 200G volume.
>
> I've played with ucarp and have the commands to make available/mount the
33 matches
Mail list logo