> > The message likely came from the resource agent calling crm_attribute
> > to set a node attribute. That message usually means the cluster isn't
> > running on that node, so it's highly suspect. The cib might have
> > crashed, which should be in the log as well. I'd look into that first.
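To rule that out quickly, it helps to check on the affected node whether the
cluster stack is up at all (a sketch; standard pacemaker/corosync tooling
assumed):

    # Is the cluster stack running on this node?
    systemctl status corosync pacemaker

    # One-shot cluster status; fails if pacemaker isn't reachable:
    crm_mon -1

If crm_mon can't connect, the cib (or the whole stack) is down on that node,
and the crm_attribute errors follow from that.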
>
>
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Ken Gaillot
> Sent: Wednesday, August 01, 2018 2:17 PM
> To: Cluster Labs - All topics related to open-source clustering welcomed
>
> Subject: Re: [ClusterLabs] Why Won't Resources Move?
>
On Wed, 2018-08-01 at 03:49 +, Eric Robinson wrote:
> I have what seems to be a healthy cluster, but I can’t get resources
> to move.
>
> Here’s what’s installed…
>
> [root@001db01a cluster]# yum list installed|egrep "pacem|coro"
> corosync.x86_64 2.4.3-2.el7_5.1
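(For reference, moving a resource by hand is done with `pcs resource move`,
which works by injecting a location constraint; a sketch with hypothetical
resource/node names:

    # Move the resource to the other node:
    pcs resource move p_mysql 001db01b

    # The move leaves a constraint behind; clear it once done:
    pcs resource clear p_mysql

If the cluster is healthy, `pcs status` should then show the resource on the
target node; if not, the pacemaker logs on the DC are the place to look.)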
Maybe you need to use `pcs resource update`
2018-08-01 22:26 GMT+02:00 Casey & Gina :
> How is the interval adjusted? Based on an example I found online, I
> thought `pcs resource op monitor interval=15m vmware_fence` should work,
> but after executing that `pcs config` still shows a monitor
On Wed, 2018-08-01 at 14:47 -0600, Casey & Gina wrote:
> Actually, is it even necessary at all? Based on my other E-mail to
> the list (Fence agent ends up stopped with no clear reason why), it
> seems that sometimes the monitor fails with an "unknown error",
> resulting in a cluster that won't
On Wed, 2018-08-01 at 13:43 -0600, Casey Allen Shobe wrote:
> Here is the corosync.log for the first host in the list at the
> indicated time. Not sure what it's doing or why - all cluster nodes
> were up and running the entire time...no fencing events.
>
> Jul 30 21:46:30 [3878] q-gp2-dbpg57-1
Actually, is it even necessary at all? Based on my other E-mail to the list
(Fence agent ends up stopped with no clear reason why), it seems that sometimes
the monitor fails with an "unknown error", resulting in a cluster that won't
fail over due to inability to fence. I tried looking at the
How is the interval adjusted? Based on an example I found online, I thought
`pcs resource op monitor interval=15m vmware_fence` should work, but after
executing that `pcs config` still shows a monitor interval of 60s.
Thank you,
--
Casey
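The syntax that does change an existing op is `pcs resource update` (for a
stonith device, `pcs stonith update` also works); a sketch using the resource
name from the thread:

    # Replace the monitor interval on the fence device:
    pcs stonith update vmware_fence op monitor interval=15m

    # Confirm the new op is in place:
    pcs config

The `pcs resource op` subcommand manages operation defaults and add/remove
rather than editing an operation in place, which is likely why the interval
appeared unchanged.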
> On 2018-07-31, at 9:11 AM, Casey Allen Shobe wrote:
Here is the corosync.log for the first host in the list at the indicated time.
Not sure what it's doing or why - all cluster nodes were up and running the
entire time...no fencing events.
Jul 30 21:46:30 [3878] q-gp2-dbpg57-1        cib:     info: cib_perform_op:
Diff: --- 0.700.4 2
Across our clusters, I see the fence agent stop working, with no apparent
reason. It looks like shown below. I've found that I can do a `pcs resource
cleanup vmware_fence` to cause it to start back up again in a few seconds, but
why is this happening and how can I prevent it?
vmware_fence
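Besides the manual cleanup, the retry can be automated by letting the failure
expire (a sketch; the timeout value is illustrative):

    # Clear the recorded failure so the fence agent starts again:
    pcs resource cleanup vmware_fence

    # Let failures age out so the cluster retries on its own:
    pcs resource update vmware_fence meta failure-timeout=60s

That only papers over the symptom, though; the root cause of the monitor's
"unknown error" still needs to come out of the logs.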
On Wed, 2018-08-01 at 15:26 +0200, Jan Pokorný wrote:
> Hello,
>
> On 01/08/18 13:46 +0100, lejeczek wrote:
> > is it possible to tell the cluster to exclude or ban resources from
> > running on a node which I'd like to add to the cluster? (as one command?)
> >
> > (or any other way that would assure
Hello,
On 01/08/18 13:46 +0100, lejeczek wrote:
> is it possible to tell the cluster to exclude or ban resources from
> running on a node which I'd like to add to the cluster? (as one command?)
>
> (or any other way that would assure that no resources would be moved to that
> node, in case cluster would
On 08/01/2018 12:17 PM, Ulrich Windl wrote:
> Hi!
>
> Reading the SBD manual page was a long-time frustrating process for me (it's
> incomplete and sometimes hard to read, or it's hard to find the information
> you are looking for), so I eventually started to document SBD somewhat better
> in
hi guys
is it possible to tell the cluster to exclude or ban resources from running
on a node which I'd like to add to the cluster? (as one command?)
(or any other way that would assure that no resources would be moved to
that node, in case the cluster decided for some reason that it was a good
idea)
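Assuming pcs, two common ways to achieve this (node and resource names are
hypothetical):

    # Put the node in standby as soon as it is added, so nothing is
    # scheduled on it:
    pcs cluster standby newnode1

    # Or ban individual resources from it with location constraints:
    pcs constraint location my_resource avoids newnode1

Standby is the closest thing to a single command covering all resources; the
constraint form is per-resource.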
Hi!
One thing I found out in the meantime is that hpwdt ("HP iLO2+ HW Watchdog
Timer") calls panic() in hpwdt_pretimeout(). However panic() never returns, and
so the notify_die() from do_nmi() never finishes. Possibly this never worked,
be it Xen or not.
The interesting thing is that the HP
Hi!
Reading the SBD manual page was a long-time frustrating process for me (it's
incomplete and sometimes hard to read, or it's hard to find the information you
are looking for), so I eventually started to document SBD somewhat better in
the manual page. Actually I was hoping to improve the
>>> Klaus Wenninger wrote on 01.08.2018 at 08:28 in
message <5149ea3c-3c14-57be-034e-4f1e0d8fb...@redhat.com>:
> On 08/01/2018 08:06 AM, Ulrich Windl wrote:
>> Hi Klaus,
>>
>> sorry for the late response, but in the meantime I found out some more
> facts:
> np
>> 1) Triggering of the