Hi
I think there is no easy command-line way to modify a timeout, unless you
can write a good filter script to be used on
the command line, such as:
crm -F configure filter "sed, or whatever ..."
Alain
On 28/07/2014 11:45, Dang Zhiqiang wrote:
thanks,
I know this method, but I want to modify it
Hi Mike,
I don't know why mysqld is missing, but mysqld_safe will not work under
Pacemaker, as it is "HA in itself", meaning that if you
stop the daemon, it will be automatically restarted on the same node,
completely by itself, without any Pacemaker order or even configuration.
And this leads o
Hi,
I'm fighting with the crm configure filter command to change all
migration-threshold values in the configuration:
crm -F configure filter \"sed '/threshold="1"/s/="1"/="0"/g'\"
does not change anything, and I've tried adding several \ around the ",
around the =, etc.; nothing works ...
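For what it's worth, the sed expression itself does the intended rewrite once it reaches sed intact; a minimal sketch for checking it outside crm (the primitive line below is a made-up example of what `crm configure show` output might look like):

```shell
# Test the sed program on a fabricated configuration line before feeding
# it to `crm configure filter` (resource name p0 is hypothetical).
echo 'primitive p0 Dummy meta migration-threshold="1"' \
  | sed '/threshold="1"/s/="1"/="0"/g'
# prints: primitive p0 Dummy meta migration-threshold="0"
```

If the expression works standalone, the remaining fight is only shell quoting; one workaround is to put the sed command line in a small executable script and pass the script name to `crm -F configure filter`, avoiding nested quotes entirely.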
Hi
Sorry to ask again about this problem, but does somebody have the answer?
Thanks
Alain
On 06/12/2013 08:57, Moullé Alain wrote:
Hi,
I've found a thread talking about this problem on 1.1.7, but at the
end, is the patch:
https://github.com/ClusterLabs/pacemaker/c
Hi,
I've found a thread talking about this problem on 1.1.7, but at the end,
is the patch:
https://github.com/ClusterLabs/pacemaker/commit/03f6105592281901cc10550b8ad19af4beb5f72f
sufficient and correct to solve the problem?
Thanks
Alain
On 03/12/2013 10:15, Moullé Alain wrote
Hi,
with : pacemaker-1.1.7-6 & corosync-1.4.1-15
On crm migrate, I'm randomly facing this problem:
... node1 daemon warning cib warning: cib_peer_callback: Discarding
cib_apply_diff message (342) from node2: not in our membership
whereas the node2 is healthy and always member of the cl
Hi
About "switching between rings", the information I had was that it is
dependent on the rrp mode,
and in the case where rrp mode is active, the information I had was that both
rings were used at
the same time ...
Is it right or wrong?
Thanks
Alain
On 17/10/2013 16:25, Digimer wrote:
On 17/10/
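For reference, rrp is declared in corosync.conf with an rrp_mode and one interface block per ring; a minimal sketch (all addresses are made up, not a recommendation):

```
totem {
    version: 2
    rrp_mode: active   # or "passive"; "active" sends on all rings at once
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0   # hypothetical first heartbeat network
        mcastaddr: 239.255.1.1
        mcastport: 5405
    }
    interface {
        ringnumber: 1
        bindnetaddr: 192.168.2.0   # hypothetical second, dedicated ring
        mcastaddr: 239.255.2.1
        mcastport: 5405
    }
}
```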
Hi Andrew,
thanks.
and when switching to the last option, 4:
For RHEL7+, option 4: corosync + cpg + quorumd + mcp
what will be the status of the two binaries to use?
Thanks
Alain
On 15/10/2013 22:40, Andrew Beekhof wrote:
On 16/10/2013, at 1:35 AM, Moullé Alain wrote:
OK, I was
:25:37, Moullé Alain wrote:
Hi Lars,
thanks a lot for information.
I'll try, but the documentation asks for the gfs2-cluster rpm installation, and
for now I don't find this rpm on RHEL6.4, and don't know
if it is still required ... but that is not on your side ;-)
gfs2-cluster? I t
ky-Bree wrote:
On 2013-10-15T14:15:50, Moullé Alain wrote:
in fact, I would like to know if someone has configured gfs2 under Pacemaker
with the dlm-controld and gfs-controld from the cman-3.0.12 rpm (so without
the dlm-controld.pcml and gfs-controld.pcml any more)?
And if it works fine
Hi,
in fact, I would like to know if someone has configured gfs2 under
Pacemaker with the dlm-controld and gfs-controld from the cman-3.0.12 rpm
(so without the dlm-controld.pcml and gfs-controld.pcml any more)?
And if it works fine with Pacemaker?
Thanks
Alain
On 11/10/2013 16:32, Moullé
Hi
I'm trying to configure gfs2 again under Pacemaker on RHEL6.4.
About the rpms to be installed, I thought I had to install both (from
the previous RHEL6):
dlm-pcmk-3.0.12-23.el6.x86_64.rpm
gfs-pcmk-3.0.12-23.el6.x86_64.rpm
but yum returns that both are obsoleted by cman when trying to install
cman-3.0.
Hi
Yes Lars, but in fact it seems that the problem is not in the monitoring
of SAN resources managed under Pacemaker, but in corosync's management of
heartbeat tokens during such IO loads on the SAN.
Regards
Alain
On 02/10/2013 15:08, Lars Marowsky-Bree wrote:
On 2013-10-02T13:40:16, Ulrich Windl
>>"There is one notable exception: If you have shared storage (SAN,
NAS, NFS), the cause of the slowness may be external to the systems
being monitored, thus fencing those will not improve the situation, most
likely."
Yes, this is exactly the case I'm facing ...
Alain
On 02/10/2013 13:40, Ulr
Thanks for all your first responses, but ...
I forgot to mention that it is a general case, not specifically with
drbd, which I have never used in my Pacemaker configuration.
And I usually set up two heartbeat networks in rrp mode, with at least
one of them completely dedicated to heartbeat,
so I don
Hi,
with the Pacemaker/corosync stack:
suppose that a node in an HA cluster is so loaded (IOs, etc.) for longer
than the heartbeat timeout value, but only temporarily; so loaded that
it can no longer even manage heartbeat tokens, and it is fenced because
it can't manage heartbeat tokens, whereas
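One knob often discussed for this situation is the totem token timeout in corosync.conf; a sketch only (the values are purely illustrative, not a recommendation):

```
totem {
    version: 2
    # A longer token timeout gives a temporarily overloaded node more time
    # to answer before it is declared dead and fenced (illustrative value).
    token: 10000
    # Retransmit attempts before the token is considered lost.
    token_retransmits_before_loss_const: 10
}
```

The trade-off is that a larger token timeout also delays detection of genuinely dead nodes.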
Hi,
sorry for the delay on this thread, I was unavailable for a few weeks, but
just FYI, I wanted to share some results I got a few weeks ago:
I've tried some tests on the configuration and start/stop of 500 Dummy
resources, and I got these time values:
1/ configuration with successive crm comm
Hello,
A simple question: is there a maximum number of resources (let's say
simple primitives) that Pacemaker can support, first at configuration
of resources via crm, and of course after configuration when Pacemaker
has to monitor all the primitives?
(more precisely, could we envisage
your own choice!
Alain
On 27/08/2013 10:20, Francis SOUYRI wrote:
Hello Alain,
Check this:
http://www.sebastien-han.fr/blog/2012/08/01/corosync-rrp-configuration/
Best regards.
Francis
On 08/27/2013 08:03 AM, Moullé Alain wrote:
Hi,
So what's the real difference between
nc.conf is not clear for me on this point)
Thanks
Alain
On 26/08/2013 16:58, Digimer wrote:
Nope, that just enables RRP (without it, the failure of ring 0 would
fail the cluster)
On 26/08/13 10:27, Moullé Alain wrote:
Hi,
sorry but I thought that if we set "rrp_mode" to "activ
Hi,
sorry, but I thought that if we set "rrp_mode" to "active", corosync
uses both rings "at the same time", doesn't it?
Alain
On 26/08/2013 15:53, Digimer wrote:
On 26/08/13 09:14, Francis SOUYRI wrote:
Hello,
Does Corosync work with eth and bond at the same time?
with the config bel
Hi,
I always have: default-resource-stickiness="5000"
Thanks
Alain
On 25/04/2013 10:52, fabian.herschel wrote:
> Hi Alain,
>
> could you double-check if the effect in your second test also happens when
> you set a stickiness/default-stickiness to something like 1000?
>
> In your case when N
Hi,
a behavior which is not clear to me:
1/ Let's say we have 2 nodes, node1 & node2, in the HA cluster, and 3
Dummy resources: resname1, resname2, resname3,
and the forbidden colocation set like this:
colocation forbidden-coloc-resname1-resname2 -inf: resname1 resname2
colocation forbidden-co
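For illustration, a full pairwise anti-colocation across the three resources could be written like this in crm syntax (the resource names come from the snippet above; the extra constraint names are illustrative, not taken from the original configuration):

```
# Pairwise -inf colocations keep each pair of resources on different nodes.
colocation forbidden-coloc-resname1-resname2 -inf: resname1 resname2
colocation forbidden-coloc-resname1-resname3 -inf: resname1 resname3
colocation forbidden-coloc-resname2-resname3 -inf: resname2 resname3
```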
Hi
Or perhaps you can also choose to set the meta parameter failure-timeout
to xx seconds, so that the resource could
migrate back after a failover in case of failure again ...
failure-timeout (default: 0=disabled): how many seconds to wait
before acting as if the failure has not occurred,
and p
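Setting it on a primitive might look like this in crm configure syntax (a sketch; the resource name and the values are illustrative):

```
primitive p_example ocf:heartbeat:Dummy \
    meta migration-threshold="3" failure-timeout="120s" \
    op monitor interval="30s"
```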
Hi,
I wonder if there is documentation somewhere on how to exploit such a
file, for example /var/lib/pengine/pe-input-890 from the original
zipped file
/var/lib/pengine/pe-input-890.bz2.
I mean, it seems that it is quite like a cib.xml or a mix of information, but
what can I get as interesti
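One common use of these files is replaying them through crm_simulate; a sketch under the assumption that crm_simulate is installed (option names as I recall them from pacemaker 1.1, so double-check against your man page):

```shell
# Keep the original archive, work on a decompressed copy.
bunzip2 -k /var/lib/pengine/pe-input-890.bz2
# Show what the policy engine would have decided from that snapshot,
# including placement scores.
crm_simulate --xml-file /var/lib/pengine/pe-input-890 --show-scores --simulate
```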
Hi Andrew,
that's fine for me even in two steps, but I don't recognize the command
to be used
to set
rsc.managed=false + rsc.op.enabled=false
Is it a special crm syntax?
Thanks again.
Alain
On 27/03/2013 10:00, Andrew Beekhof wrote:
> On Wed, Mar 27, 2013 at 6:30 PM,
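One way to drive such meta attributes from the shell (a sketch; p_example is a made-up resource name, and exact subcommand syntax varies between crmsh and pacemaker versions):

```
# Unmanage a resource (sets is-managed=false in its meta attributes):
crm resource unmanage p_example
# Equivalent low-level form with crm_resource:
crm_resource --resource p_example --meta \
    --set-parameter is-managed --parameter-value false
# Disabling an operation (op ... enabled=false) goes in the resource
# definition itself, e.g.:
#   op monitor interval="30s" enabled="false"
```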
ct read-only, but do change the state
> of a resource (which may be quite unexpected). One example is the RAID RA,
> which tries to re-add missing devices.
>
>
> Regards,
> Ulrich
>
>>>> Moullé Alain wrote on 27.03.2013 at 07:56 in
> message
> <51529820.
r 2013 16:25:54 +0100 Moullé Alain wrote:
>> I've tested two things:
>>
>> 1/ if we set maintenance-mode=true:
>>
>> all the configured resources become 'unmanaged', as displayed
>> with crm_mon
>> OK, start/stop are no more ac
Hi,
I've tested two things:
1/ if we set maintenance-mode=true:
all the configured resources become 'unmanaged', as displayed
with crm_mon;
OK, start/stop are no more accepted,
and it seems that resources are no longer monitored by
Pacemaker.
2/ if we target only one re
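Both cases can be reproduced from the crm shell along these lines (a sketch; p_example is a made-up resource name, and subcommand syntax may differ between crmsh versions):

```
# 1/ cluster-wide: stop managing AND monitoring all resources
crm configure property maintenance-mode=true
# 2/ per-resource: stop managing one resource only
crm resource meta p_example set is-managed false
```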
Hi
ooops, I made a mistake this morning; it should be:
group G1 B & C
[ order advisory D to I (if you need to launch only one at a time,
otherwise it is not needed) ]
order mandatory for each of D to I and group G1: G1 then D, G1 then E, etc.
colocations between each of D to I and group G1
Alai
Hi
If I understand well, I would have tried:
group G1 mandatory B & C
group G2 advisory D to I
order mandatory group G1 then G2
This should work if I understand your needs well.
Regards
Alain
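In crm configure syntax, the suggestion above might be spelled out like this (a sketch; the constraint ids are illustrative, the Mandatory: form is one way crmsh expresses a mandatory order, and older versions use a score such as inf: instead):

```
# B = filesystem, C = cluster IP (names from the thread below)
group G1 B C
# Mandatory ordering of each later resource after the group (illustrative):
order o_G1_then_D Mandatory: G1 D
order o_G1_then_E Mandatory: G1 E
# Keep a dependent resource with the group:
colocation c_D_with_G1 inf: D G1
```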
> In the simplest terms, we currently have resources:
>
> A = drbd
> B = filesystem
> C = cluster IP
>