Hi!
So what is the correct scenario then?
Editing CIB and removing 'monitor' operation altogether with
making resource unmanaged?
Best regards,
Alexandr
On 30.05.2013 04:54, Andrew Beekhof wrote:
Yes, I made j
Andrew,
How should this be done?
Just removing 'op monitor interval="15" timeout="20"' from the
resource primitive?
On 24.05.2013 07:29, Andrew Beekhof wrote:
A better approach would have been to disable the recurring monitor - then the
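A sketch of what "disabling the recurring monitor" could look like with the crm shell; the resource name my_rsc is hypothetical, and the mechanism assumed here is Pacemaker's enabled="false" attribute on an operation definition:

    # Hedged sketch: mark the op disabled instead of deleting it,
    # so it can be re-enabled later without re-typing the timings.
    crm configure edit my_rsc
    # In the editor, change the operation line to:
    #   op monitor interval="15" timeout="20" enabled="false"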
Hi Andrew,
Did you set is-managed=false for the group or a resource in the group?
I'm assuming the latter - basically the cluster noticed your resource was not running anymore.
While it did not try to do anything to fix that resource, it did stop anything that needed
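The distinction Andrew is drawing might look like this in the crm shell (resource and group names are hypothetical):

    # Unmanage a single member of the group...
    crm resource unmanage my_service
    # ...or the whole group, via its meta attribute
    crm resource meta my_group set is-managed false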
Hi, All!
On one of my clusters I have resource groups; the second group depends
on the first resource in the first group. Today I needed to restart one
service from the first group (no dependencies other than the group), so
I made it unmanaged:
May 23 14:14:22 kennedy
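A minimal sketch of the layout described above, with hypothetical names: two groups, the second ordered after the first resource of the first group:

    group grp1 svc_a svc_b
    group grp2 svc_x svc_y
    order grp2-after-svc_a inf: svc_a grp2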
Hi!
I have a two-node cluster (virtual machines) with several resources and
shared storage.
When connectivity is lost (for some reason that still needs to be
debugged), here is what I get (I am skipping unrelated messages):
May 14 16:49:21 wcs2 corosync[27531]: [TOTEM ] The token was lost in
Apr 19 12:45:31 kennedy pacemakerd[17080]: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
On 19.04.2013 11:44, Rasto Levrinc wrote:
On Fri, Apr 19, 2013 at 9:11 AM, Alexandr A. Alexandrov wrote:
Hi Rasto,
Note that on RHEL 6/CentOS 6, you should run Pacemaker through CMAN
and not as a Corosync plugin
Not glad to hear that... We are using Pacemaker+Corosync
everywhere (SuSE, CentOS, OracleLinux servers).
Is there any way
Hi Andreas!
For this purpose I put resources into the 'unmanaged' state with 'crm
resource unmanage ' - and after that you can start/stop
pacemaker/corosync without interrupting running resources.
On 09.04.2013 11:44, Andreas Mock wrote:
What would be the right procedure to restart pacemaker
freeing
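One commonly used alternative is cluster-wide maintenance mode, so nothing is monitored or moved while the daemons restart. A sketch, assuming the crm shell and SysV-style init scripts (adapt service names to your distribution):

    crm configure property maintenance-mode=true
    service pacemaker stop
    service corosync stop
    # ... upgrade / reconfigure ...
    service corosync start
    service pacemaker start
    crm configure property maintenance-mode=false

Unlike per-resource unmanage, this freezes the whole cluster in one step and is easy to undo.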
Hi!
Before giving details of the errors, my question is: do you think a hybrid setup with different OSs (RHEL 5.x and RHEL 6.x) is possible?
Building from the latest sources on both servers should also resolve
this, I suppose.
Carlos,
Increasing corosync timeouts and 'monitor' action timeouts in
pacemaker might help, but do you have a separate leased network
connection for corosync? It is better to connect your servers
directly with a cross cable (to be independent of switches/network
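On the corosync side, the knobs usually involved live in the totem section of corosync.conf; the values below are purely illustrative, not recommendations:

    totem {
        version: 2
        token: 10000          # ms before a lost token is declared
        token_retransmits_before_loss_const: 10
        consensus: 12000      # must exceed token (>= 1.2 * token)
    }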
"Alexandr A. Alexandrov" wrote:
Lars, thanks for your answer.
So, what is an option? Build everything (1.1.8 etc.) from the current git
source?
Probably. Or ask for a replacement on EL - I guess qdisk or the new
fence_sanlock might be reasonably equivalent
David,
Exactly... Commenting out helped.
But I ended up with older source which compiles without modifying
source.
On 05.02.2013 20:41, David Vossel wrote:
In file included from utils.c:54:
../../include/crm/common/mainloop.h:60: error
sions/packages, if needed
Best regards,
Alexandr A. Alexandrov
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Getting started: http:
Lars,
Thanks, I will first try building everything from latest sources.
As I understand, fence_sanlock/qdisk are integrated into cman
infrastructure. I would prefer to stay with Pacemaker since that
is what we have been using for quite some time, and also w
I am not sure if the 1.1.7 release for el6 is built with support for the
fencing agents that cluster-glue provides.
Seems it's time for another round of alignment discussions this year ...
Regards,
Lars
Lars, thanks for your answer.
So, what is an option? Build everything (1.1.8 etc.)
Hi!
I am trying to implement SBD fencing in a two-node cluster (having
shared storage).
Everything is set up as described on linux-ha.org, SBD is working:
# sbd -d /dev/sdb1 list
0   wcs1    clear
1   wcs2    clear
However, when I try to create a stonith resource, I get:
Stack: openais
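For reference, a minimal SBD stonith resource might be defined like this in the crm shell - assuming the external/sbd agent from cluster-glue and the device path shown above:

    crm configure primitive stonith-sbd stonith:external/sbd \
        params sbd_device="/dev/sdb1"
    crm configure property stonith-enabled=true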