The third, and possibly final, release candidate for Pacemaker 2.0.3 is
now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.3-rc3
If there are no serious issues found in this release, I will release it
as the final 2.0.3 in another week or so.
This fixes some
The second release candidate for Pacemaker 2.0.3 is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.3-rc2
This has minor bug fixes and documentation improvements compared to
rc1, especially in crm_mon. Two recent suggestions from this mailing
list were
Hi all,
I am happy to announce that source code for the first release candidate
for Pacemaker version 2.0.3 is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.3-rc1
Highlights previously discussed on this list include a dynamic cluster
recheck interval (you
Hi Team,
Thanks for your support,
We are facing an issue running PCS in a pod on OpenShift.
Scenario: we have two Zabbix VMs which are running in a PCS cluster. Now we
want that pod in the VM PCS cluster. We have already installed the packages and
started the pcsd service in the pod.
On Fri, 27 Sep 2019 12:14:09 -0500
Ken Gaillot wrote:
> On Fri, 2019-09-27 at 19:03 +0530, Shital A wrote:
> >
> >
> > On Tue, 24 Sep 2019, 22:20 Shital A,
> > wrote:
> > > Hello,
> > >
> > > We have setup active-passive cluster using streaming replication on
> > > Rhel7.5. We are testing
On Fri, 2019-09-27 at 19:03 +0530, Shital A wrote:
>
>
> On Tue, 24 Sep 2019, 22:20 Shital A,
> wrote:
> > Hello,
> >
> > We have setup active-passive cluster using streaming replication on
> > Rhel7.5. We are testing pacemaker for automated failover.
> > We are seeing below issues with the
On Tue, 24 Sep 2019, 22:20 Shital A, wrote:
> Hello,
>
> We have setup active-passive cluster using streaming replication on
> Rhel7.5. We are testing pacemaker for automated failover.
> We are seeing below issues with the setup :
>
> 1. When a failover is triggered when data is being added to
Hello,
We have set up an active-passive cluster using streaming replication on
RHEL 7.5. We are testing Pacemaker for automated failover.
We are seeing the following issues with the setup:
1. When a failover is triggered while data is being added to the primary by
killing the primary (killall -9 postgres), the
ovided for you (below) still
stand, don't expect any ClusterLabs rebranding-by-force of what
practically amounts to a dead project now.
Thanks for understanding.
And keep in mind, if I were you, I'd skip CMAN and RHEL 6 today.
> -Original Message-
> From: Jan Pokorný
> Sent: F
Thilak J
-Original Message-
From: Jan Pokorný
Sent: Friday, August 30, 2019 20:15
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Pacemaker 1.1.12 does not compile with CMAN Stack.
On 30/08/19 13:03 +, Somanath Jeeva wrote:
In Pacemaker 1.1.12 version try to compile with CMAN
sites (even for Publicly available)).
With Regards
Somanath Thilak J
-Original Message-
From: Jan Pokorný
Sent: Friday, August 30, 2019 20:15
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Pacemaker 1.1.12 does not compile with CMAN Stack.
On 30/08/19 13:03 +, Somanath Jeeva
019.
But let's assume there's a reason.
> but we are unable to achieve that .
>
> Source taken path :
> https://github.com/ClusterLabs/pacemaker/tree/Pacemaker-1.1.12
>
> After Extracting, we installed required dependencies as per
> README.markdown,
>
Hi Team ,
We are trying to compile Pacemaker version 1.1.12 with the CMAN stack, but we
are unable to achieve that.
Source taken from:
https://github.com/ClusterLabs/pacemaker/tree/Pacemaker-1.1.12
After extracting, we installed the required dependencies as per README.markdown,
## Installing from
On 27/08/19 15:27 +0200, Ulrich Windl wrote:
> Systemd thinks he's the boss, doing what he wants: today I noticed that all
> resources are run inside control group "pacemaker.service" like this:
> ├─pacemaker.service
> │ ├─ 26582 isredir-ML1: listening on 172.20.17.238/12503 (2/1)
> │ ├─
Hi!
Systemd thinks he's the boss, doing what he wants: today I noticed that all
resources are run inside control group "pacemaker.service" like this:
├─pacemaker.service
│ ├─ 26582 isredir-ML1: listening on 172.20.17.238/12503 (2/1)
│ ├─ 26601 /usr/bin/perl -w /usr/sbin/ldirectord
On Tue, Aug 20, 2019 at 1:03 AM Del Monaco, Andrea
wrote:
>
> Hi Users,
>
>
>
> As per title – do you know if there is some resource in pacemaker that allows
> a filesystem (md array) to be mounted and then run the quotaon command on it
Isn't quota information persistent, so it is enough to run
Hi Users,
As per title – do you know if there is some resource in pacemaker that allows a
filesystem (md array) to be mounted and then run the quotaon command on it if
the quota options are specified and if the FS is ext4?
If not, what would be the best way to proceed from this point on? I
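One approach worth considering (a sketch only, assuming the ocf:heartbeat:Filesystem agent is available; the device and paths below are placeholders, not from the thread) is to enable quota via ext4 mount options, so accounting starts at mount time rather than via a separate quotaon step:

```shell
# Mount the md array through the cluster with quota mount options enabled.
pcs resource create fs_md ocf:heartbeat:Filesystem \
    device=/dev/md0 directory=/srv/data fstype=ext4 \
    options=usrquota,grpquota
```

Whether an explicit quotaon is still required depends on the quota format in use, so treat this as a starting point rather than a complete answer.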
Gentle Reminder!!
On Wed, Jul 17, 2019 at 1:14 PM Rohit Saini
wrote:
> Gentle Reminder!!
>
> On Mon, Jul 15, 2019 at 12:10 PM Rohit Saini <
> rohitsaini111.fo...@gmail.com> wrote:
>
>> Hi All,
>>
>> I know pacemaker booth is being used for geographical redundancy.
>> Currently I am using
ordered]
> CRM_alert_status:
> A numerical code used by Pacemaker to represent the operation
> result (resource alerts only)
See
https://github.com/ClusterLabs/pacemaker/blob/Pacemaker-2.0.2/include/crm/services.h#L118-L129
> CRM_alert_desc:
> Detail about event. For node alerts, this is t
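For context, alerts like these are delivered by running a configured alert agent with event details passed in CRM_alert_* environment variables. A minimal file-logging agent could look like the sketch below (the variable names are the documented ones; the fallback log path is our assumption):

```shell
#!/bin/sh
# Minimal Pacemaker alert agent sketch: append each cluster event to a file.
# Pacemaker exports event details as CRM_alert_* environment variables; the
# recipient configured for the alert arrives in CRM_alert_recipient.
LOGFILE="${CRM_alert_recipient:-/tmp/crm_alerts.log}"
{
    printf '%s ' "$(date '+%F %T')"
    printf 'kind=%s ' "${CRM_alert_kind:-unknown}"
    printf 'node=%s ' "${CRM_alert_node:-}"
    printf 'desc=%s\n' "${CRM_alert_desc:-}"
} >> "$LOGFILE"
```

Such an agent would then be registered with something like `pcs alert create path=/path/to/agent.sh` and a recipient added with `pcs alert recipient add`.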
On Tue, 2019-07-16 at 13:53 +, Gershman, Vladimir wrote:
> Hi,
>
> Is there a list of all possible alerts/events that Pacemaker can
> send out? Preferably with criticality levels for the alerts (minor,
> major, critical).
I'm not sure whether you're using "alerts" in a general sense here,
Hi,
Is there a list of all possible alerts/events that Pacemaker can send out?
Preferably with criticality levels for the alerts (minor, major, critical).
Thank you,
Vlad
Equipment Management (EM) System Engineer
Gentle Reminder!!
On Mon, Jul 15, 2019 at 12:10 PM Rohit Saini
wrote:
> Hi All,
>
> I know pacemaker booth is being used for geographical redundancy.
> Currently I am using pacemaker/corosync for my local two-node redundancy.
> As I understand, booth needs at least 3 nodes to work correctly to
On 7/15/19 9:57 PM, Ken Gaillot wrote:
> On Mon, 2019-07-15 at 12:10 +0530, Rohit Saini wrote:
>> Hi All,
>>
>> I know pacemaker booth is being used for geographical redundancy.
>> Currently I am using pacemaker/corosync for my local two-node
>> redundancy.
>> As I understand, booth needs at least
On Mon, 2019-07-15 at 12:10 +0530, Rohit Saini wrote:
> Hi All,
>
> I know pacemaker booth is being used for geographical redundancy.
> Currently I am using pacemaker/corosync for my local two-node
> redundancy.
> As I understand, booth needs at least 3 nodes to work correctly to do
> the
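For reference, booth's third vote does not have to come from a full cluster node: a lightweight arbitrator is enough. A sketch of the setup with pcs (the IP addresses are placeholders, and the exact syntax may vary by pcs version):

```shell
# Two sites plus one arbitrator; the arbitrator only votes on ticket
# ownership and runs no cluster resources itself.
pcs booth setup sites 10.0.1.1 10.0.2.1 arbitrators 10.0.3.1
```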
On Thu, 2019-06-06 at 10:12 -0500, Ken Gaillot wrote:
>
While I appreciate brevity, this was my e-mail client eating a draft.
:-/
Source code for the Pacemaker 2.0.2 and 1.1.21 releases is now
available:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.2
https://github.
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users
ClusterLabs home: https://www.clusterlabs.org/
On Thu, 2019-05-30 at 23:39 +, Harvey Shepherd wrote:
> Hi All,
>
> I'm running Pacemaker 2.0.1 on a cluster containing two nodes; one
> master and one slave. I have a main master/slave resource
> (m_main_system), a group of resources that run in active-active mode
> (active_active - i.e. run
Hi All,
I'm running Pacemaker 2.0.1 on a cluster containing two nodes; one master and
one slave. I have a main master/slave resource (m_main_system), a group of
resources that run in active-active mode (active_active - i.e. run on both
nodes), and a group that runs in active-disabled mode
Source code for the third (and likely final) release candidate for
Pacemaker version 2.0.2 is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.2-rc3
This fixes regressions found in rc2. I expect this will become the
final release next week. For details
[forwarding to respective upstream list, this has little to do with
systemd, I suggest following up only there, detaching from systemd ML]
On 29/05/19 17:23 +0100, lejeczek wrote:
> something I was hoping one expert could shed bit more light onto - I
> have a pacemaker cluster composed of three
Source code for the second (and possibly final) release candidate for
Pacemaker version 2.0.2 is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.2-rc2
This fixes a few memory issues found in rc1. If no issues are found in
this one in a week or so, I'll
On 30/04/19 07:55 +0200, Ulrich Windl wrote:
Jan Pokorný wrote on 29.04.2019 at 17:22
in message <20190429152200.ga19...@redhat.com>:
>> On 29/04/19 14:58 +0200, Jan Pokorný wrote:
>>> On 29/04/19 08:20 +0200, Ulrich Windl wrote:
>>> Jan Pokorný wrote on 25.04.2019 at 18:49
st the realtime
corosync process! but allegedly, it is not too verbose if nothing
interesting happens unless set to be more verbose) since
Pacemaker-2.0.0:
https://github.com/ClusterLabs/pacemaker/commit/b8075c86d35f3d37b0cbac86a8c90f1ac1091c33
Great!
But we can do better for those who would p
On 29/04/19 08:20 +0200, Ulrich Windl wrote:
Jan Pokorný wrote on 25.04.2019 at 18:49
in message <20190425164946.gf23...@redhat.com>:
>> On 24/04/19 09:32 ‑0500, Ken Gaillot wrote:
>>> On Wed, 2019‑04‑24 at 16:08 +0200, wf...@niif.hu wrote:
Make install creates
On Thu, 2019-04-25 at 18:49 +0200, Jan Pokorný wrote:
> On 24/04/19 09:32 -0500, Ken Gaillot wrote:
> > On Wed, 2019-04-24 at 16:08 +0200, wf...@niif.hu wrote:
> > > Make install creates /var/log/pacemaker with mode 0770, owned by
> > > hacluster:haclient. However, if I create the directory as
>
On 24/04/19 09:32 -0500, Ken Gaillot wrote:
> On Wed, 2019-04-24 at 16:08 +0200, wf...@niif.hu wrote:
>> Make install creates /var/log/pacemaker with mode 0770, owned by
>> hacluster:haclient. However, if I create the directory as root:root
>> instead, pacemaker.log appears as hacluster:haclient
Source code for the first release candidate for Pacemaker version 2.0.2
is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.2-rc1
This is primarily a security release, with stricter two-way
authentication of inter-process communication. The most significant
On Wed, 2019-04-24 at 16:08 +0200, wf...@niif.hu wrote:
> Hi,
>
> Make install creates /var/log/pacemaker with mode 0770, owned by
> hacluster:haclient. However, if I create the directory as root:root
> instead, pacemaker.log appears as hacluster:haclient all the
> same. What
> breaks in this
Hi,
Make install creates /var/log/pacemaker with mode 0770, owned by
hacluster:haclient. However, if I create the directory as root:root
instead, pacemaker.log appears as hacluster:haclient all the same. What
breaks in this setup besides log rotation (which can be fixed by
removing the su
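The log-rotation breakage referred to here is logrotate's `su` directive, which makes logrotate switch to an unprivileged user before rotating and can fail when the parent directory is root-owned. A sketch of the relevant stanza (the surrounding options are assumptions, not the shipped file):

```shell
# /etc/logrotate.d/pacemaker (sketch, not the packaged config)
/var/log/pacemaker/pacemaker.log {
    missingok
    compress
    # logrotate drops to this user/group before rotating; with a
    # root:root /var/log/pacemaker this is the line to remove or adjust
    su hacluster haclient
}
```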
On 17/04/19 12:09 -0500, Ken Gaillot wrote:
> Without the patches, a mitigation is to prevent local user access to
> cluster nodes except for cluster administrators (which is the
> recommended and most common deployment model).
Not trying to artificially amplify the risk in response to the above,
in
environment variables to local users with permissions to access the
pacemaker log but not wherever the environment variables are set.
Pull requests patching these vulnerabilities for the master and 1.1
branches of pacemaker will be merged shortly:
https://github.com/ClusterLabs/pacemaker/pull/1749
https
andidate was a fix for a regression discovered in 2.0.0 regarding live
> migration (1.1 was not affected).
>
> 2.0.1:
>
> https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.1
>
>
> 1.1.20 (with selected backports from 2.0.1):
>
> https://github.com/Clu
was not affected).
2.0.1:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.1
1.1.20 (with selected backports from 2.0.1):
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-1.1.20
--
Ken Gaillot
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
Source code for the 5th (and likely final) release candidate for
Pacemaker version 2.0.1 is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.1-rc5
The only significant change is some refactoring to make the scheduler
regression tests pass again with glib
Ken Gaillot writes:
> On Mon, 2019-02-25 at 12:48 +0100, wf...@niif.hu wrote:
>
>> Ken Gaillot writes:
>>
>>> We should be getting close to final release.
>>
>> How close are we to the final release? I'm asking because the Debian
>> full freeze date is 2019-03-12 and migration requires 10
On Mon, 2019-02-25 at 12:48 +0100, wf...@niif.hu wrote:
> Ken Gaillot writes:
>
> > We should be getting close to final release.
>
> Hi Ken,
>
> How close are we to the final release? I'm asking because the Debian
> full freeze date is 2019-03-12 and migration requires 10 days now,
> thus
>
Ken Gaillot writes:
> We should be getting close to final release.
Hi Ken,
How close are we to the final release? I'm asking because the Debian
full freeze date is 2019-03-12 and migration requires 10 days now, thus
I'd have to upload any significant changes ASAP to catch Debian buster.
--
On Fri, 2019-02-15 at 08:55 +0800, ma.jinf...@zte.com.cn wrote:
> There is an issue where pacemaker doesn't schedule a resource which is in
> a docker container after docker is restarted, but the pacemaker cluster
> shows the resource as started; it seems to be a bug of pacemaker.
> I am very confused
There is an issue where pacemaker doesn't schedule a resource which is in a
docker container after docker is restarted, but the pacemaker cluster shows
the resource as started; it seems to be a bug of pacemaker.
I am very confused about what happened when pengine prints those logs (pengine:
notice:
ur witnessed with older versions of glib in the game.
> > >
> > > Our immediate response is to, at the very least, make the
> > > cts-scheduler regression suite (the only localhost one that was
> > > rendered broken with 52 tests out of 733 failed) skip tho
ession suite (the only localhost one that was
> > rendered broken with 52 tests out of 733 failed) skip those tests
> > where reliance on the exact order of hash-table-driven items was
> > sported, so it won't fail as a whole:
> >
> >
https://github.com/ClusterLabs/pacemaker
On 11/02/19 15:03 -0600, Ken Gaillot wrote:
> On Fri, 2019-02-01 at 08:10 +0100, Jan Pokorný wrote:
>> On 28/01/19 09:47 -0600, Ken Gaillot wrote:
>>> On Mon, 2019-01-28 at 18:04 +0530, Dileep V Nair wrote:
>>> Pacemaker can handle the clock jumping forward, but not backward.
>>
>> I am rather
to, at the very least, make the
> cts-scheduler regression suite (the only localhost one that was
> rendered broken with 52 tests out of 733 failed) skip those tests
> where reliance on the exact order of hash-table-driven items was
> sported, so it won't fail as a whole:
>
> https:
On Fri, 2019-02-01 at 08:10 +0100, Jan Pokorný wrote:
> On 28/01/19 09:47 -0600, Ken Gaillot wrote:
> > On Mon, 2019-01-28 at 18:04 +0530, Dileep V Nair wrote:
> > Pacemaker can handle the clock jumping forward, but not backward.
>
> I am rather surprised, are we not using monotonic time only,
On 28/01/19 09:47 -0600, Ken Gaillot wrote:
> On Mon, 2019-01-28 at 18:04 +0530, Dileep V Nair wrote:
> Pacemaker can handle the clock jumping forward, but not backward.
I am rather surprised, are we not using monotonic time only, then?
If so, why?
We shall not need any explicit time
On 30/01/19 11:07 -0600, Ken Gaillot wrote:
> For those on the bleeding edge, the newest versions of GCC and glib
> cause some issues. GCC 9 does stricter checking of print formats that
> required a few log message fixes in this release (i.e. using GCC 9 with
> the -Werror option will fail with
Source code for the fourth release candidate for Pacemaker version
2.0.1 is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.1-rc4
This candidate has a few more bug fixes. We should be getting close to
final release.
For those on the bleeding edge
aged Applications
+91 98450 22258 Mobile
dilen...@in.ibm.com
IBM Services
From: Ken Gaillot
To: Cluster Labs - All topics related to open-source clustering
welcomed
Date: 01/28/2019 09:18 PM
Subject: Re: [ClusterLabs] Pacemaker log showing time mismatch after
On Mon, 2019-01-28 at 18:04 +0530, Dileep V Nair wrote:
> Hi,
>
> I am seeing that there is a log entry showing Recheck Timer popped
> and the time in pacemaker.log went back in time. After sometime, the
> time issue Around the same time the resources also failed over (Slave
> became master). Do
Hi,
I am seeing that there is a log entry showing Recheck Timer popped
and the time in pacemaker.log went back in time. After some time, the time
issue Around the same time the resources also failed over (Slave became
master). Does anyone know why this behavior occurs?
Jan 23 01:16:48 [9383]
On 21/01/19 09:17 +0100, Ulrich Windl wrote:
> IMHO it's like in Perl: When relying the hash keys to be returned
> in any particular (or even stable) order, the idea is just broken!
> Either keep the keys in an extra array for ordering, or sort them
> in some way...
Exactly, IT silos lacking
d broken with 52 tests out of 733 failed) skip those tests
where reliance on the exact order of hash-table-driven items was
sported, so it won't fail as a whole:
https://github.com/ClusterLabs/pacemaker/pull/1677/commits/d76a2614ded697fb4adb117e5a6633008c31f60e
> Variations like
It was discovered that this release of the glib project slightly changed
some parameters of how the distribution of values within hash table
structures works, undermining pacemaker's hard (alas unfeasible) attempt
to turn this data type into a fully predictable entity.
Current impact is unknown beside
Source code for the first release candidate for Pacemaker version
1.1.20 is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-1.1.20-rc1
This release consists of backports from the Pacemaker 2.0.1 release (as
of rc3). For details, see the change log:
https
On Mon, 2019-01-14 at 11:48 +0100, wf...@niif.hu wrote:
> Hi,
>
> Recently I spent some time mapping the interrelations of the C header
> files constituting the Pacemaker API. In the end I decided they were
> so
> tightly interdependent that there was really no useful way to ship
> parts
> of
Hi,
Recently I spent some time mapping the interrelations of the C header
files constituting the Pacemaker API. In the end I decided they were so
tightly interdependent that there was really no useful way to ship parts
of the API separately, thus I did away with the separate lib*-dev Debian
Source code for the second release candidate for Pacemaker version
2.0.1 is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.1-rc2
This release fixes two regressions in rc1: a serious one related to
bundle recovery, and a minor one related to stonith_admin
Source code for the first release candidate for Pacemaker version 2.0.1
is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.1-rc1
This is primarily a bug fix release, but there are a few new features:
* SBD using a watchdog device may now be used for fencing
On Fri, 2018-11-16 at 16:33 +0800, ma.jinf...@zte.com.cn wrote:
> There is a problem in my program with pacemaker: pacemaker
> failed to restart a subprocess of the host if the container also
> uses a pacemaker cluster!
That might not be supportable with the current code. It's possible to
have a nested
There is a problem in my program with pacemaker: pacemaker failed to restart a
subprocess of the host if the container also uses a pacemaker cluster!
The environment is as follows:
1. corosync version 2.4.0, pacemaker version 1.1.16
2. three-node clusters, and the container also has a pacemaker cluster
On 09/11/18 13:24 +, Ian Underhill wrote:
> Yep all my pcs commands run on a live cluster. The design needs
> resources to respond in specific ways before moving on to other
> shutdown requests.
>
> So it seems that these pcs commands that run on different nodes at
> the same time, is the
On Thu, 2018-11-08 at 12:14 +, Ian Underhill wrote:
> seems this issue has been raised before, but has gone quiet, with no
> solution
>
> https://lists.clusterlabs.org/pipermail/users/2017-October/006544.htm
> l
In that case, something appeared to be explicitly re-enabling the
disabled
It seems this issue has been raised before, but has gone quiet, with no
solution:
https://lists.clusterlabs.org/pipermail/users/2017-October/006544.html
I know my resource agents successfully return the correct status to the
start/stop/monitor requests.
On Thu, Nov 8, 2018 at 11:40 AM Ian Underhill
Sometimes I'm seeing that a resource group that is in the process of being
disabled is auto-restarted by pacemaker.
When issuing the pcs disable command to disable different resource groups at
the same time (on different nodes, at the group level), the result is that
sometimes the resource is stopped
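One way to avoid racing concurrent stops is to serialize them and let pcs wait for each operation to complete. A sketch (the group names are placeholders; assumes a pcs version that supports --wait):

```shell
# Block until the cluster reports each group fully stopped before
# disabling the next one, instead of issuing the commands concurrently.
pcs resource disable group_a --wait
pcs resource disable group_b --wait
```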
Hi,
I have been using Pacemaker + PostgreSQL 9.4 for many years
without any issue. Recently, I set up another cluster with Pacemaker +
PostgreSQL 9.6 on CentOS 6. However, the cluster didn't seem to have a problem
setting the Slave's score from “-INFINITY” to “100”. When the
>If you build from source, you can apply the patch that fixes the issue
>to the 1.1.14 code base:
>https://github.com/ClusterLabs/pacemaker/commit/98457d1635db1222f93599b6021e662e766ce62d
[1]
Just applied the patch and now it works as expected. The unseen node is
only rebooted once o
nk about this workaround?
>
>
> The other solution would be updating pacemaker, but this 1.1.14 I
> have tested on many servers, and I don't want to take the risk to
> update to 1.1.15 and (maybe) have some other new issues...
>
> Thanks a lot!
> Cesar
If you build f
>
> P.S. If the issue is just a matter of timing when you're starting both
> nodes, you can start corosync on both nodes first, then start pacemaker
> on both nodes. That way pacemaker on each node will immediately see the
> other node's presence.
> --
Well, rebooting a server takes 2 minutes
On Wed, 2018-09-05 at 09:51 -0500, Ken Gaillot wrote:
> On Wed, 2018-09-05 at 16:38 +0200, Cesar Hernandez wrote:
> > Hi
> >
> > >
> > > Ah, this rings a bell. Despite having fenced the node, the
> > > cluster
> > > still considers the node unseen. That was a regression in 1.1.14
> > > that
> >
On Wed, 2018-09-05 at 16:38 +0200, Cesar Hernandez wrote:
> Hi
>
> >
> > Ah, this rings a bell. Despite having fenced the node, the cluster
> > still considers the node unseen. That was a regression in 1.1.14
> > that
> > was fixed in 1.1.15. :-(
> >
>
> Oh :( I'm using Pacemaker-1.1.14.
Hi
>
> Ah, this rings a bell. Despite having fenced the node, the cluster
> still considers the node unseen. That was a regression in 1.1.14 that
> was fixed in 1.1.15. :-(
>
Oh :( I'm using Pacemaker-1.1.14.
Do you know if these reboot retries are just run 3 times? All the tests I've
On Wed, 2018-09-05 at 13:31 +0200, Cesar Hernandez wrote:
> Hi
>
> >
> > The first fencing is legitimate -- the node hasn't been seen at
> > start-
> > up, and so needs to be fenced. The second fencing will be the one
> > of
> > interest. Also, look for the result of the first fencing.
>
> The
Hi
>
> The first fencing is legitimate -- the node hasn't been seen at start-
> up, and so needs to be fenced. The second fencing will be the one of
> interest. Also, look for the result of the first fencing.
The first fencing has finished with OK, as well as the other two fencing
operations.
On Fri, 2018-08-31 at 08:37 +0200, Cesar Hernandez wrote:
> Hi
>
> >
> >
> > Do you mean you have a custom fencing agent configured? If so,
> > check
> > the return value of each attempt. Pacemaker should request fencing
> > only
> > once as long as it succeeds (returns 0), but if the agent
Hi
>
>
> Do you mean you have a custom fencing agent configured? If so, check
> the return value of each attempt. Pacemaker should request fencing only
> once as long as it succeeds (returns 0), but if the agent fails
> (returns nonzero or times out), it will retry, even if the reboot
> worked
On Thu, 2018-08-30 at 17:24 +0200, Cesar Hernandez wrote:
> Hi
>
> I have a two-node corosync+pacemaker which, starting only one node,
> it fences the other node. It's ok as the default behaviour as the
> default "startup-fencing" is set to true.
> But, the other node is rebooted 3 times, and
Hi
I have a two-node corosync+pacemaker cluster which, when starting only one
node, fences the other node. That's OK as the default behaviour, since the
default "startup-fencing" is set to true.
But the other node is rebooted 3 times, and then the remaining node starts
resources and doesn't fence the node
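For the record, the property in question can be changed with pcs, and past fencing attempts can be reviewed afterwards; a sketch (disabling startup fencing is generally discouraged, as it trades safety for convenience):

```shell
pcs property set startup-fencing=false   # disables fencing of unseen nodes at startup
stonith_admin --history '*'              # review recorded fencing attempts on all nodes
```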
On Mon, 2018-08-13 at 18:13 +0200, FeldHost™ Admin wrote:
> Hello, thanks for the reply, so basically, can I leverage existing CLI
> tools and, for example, call crm node fence xyz?
Yes
>
> Best regards, Kristián Feldsam
> Tel.: +420 773 303 353, +421 944 137 535
> E-mail.: supp...@feldhost.cz
>
>
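Besides crm, the lower-level stonith_admin tool that ships with Pacemaker exposes the same operations; a sketch with a placeholder node name:

```shell
stonith_admin --reboot node2    # request that node2 be fenced (rebooted)
stonith_admin --confirm node2   # manually acknowledge that node2 is down
stonith_admin --history node2   # check the recorded fencing results
```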
Hello, thanks for the reply. So basically, can I leverage existing CLI tools
and, for example, call crm node fence xyz?
Best regards, Kristián Feldsam
Tel.: +420 773 303 353, +421 944 137 535
E-mail.: supp...@feldhost.cz
www.feldhost.cz - FeldHost™ – We tailor hosting services to you. Do you have
specific
On Sat, 2018-08-11 at 17:38 +0200, FeldHost™ Admin wrote:
> Hi all, I have question:
>
> We have a Corosync/Pacemaker cluster running for KVM virtualisation. VM
> instances are managed by external software (OpenNebula). To achieve
> automatic migration of running VMs from a failed node, the external sw
Hi all, I have question:
We have a Corosync/Pacemaker cluster running for KVM virtualisation. VM
instances are managed by external software (OpenNebula). To achieve automatic
migration of running VMs from a failed node, the external software needs to
fence the node and confirm that it was fenced successfully. When
Many thanks for all of the replies. Perhaps my choice of dummy resource names
was misleading, as our production resources aren’t really in a master / slave
relationship. Just in case it helps, here is what we want to achieve.
- only start resource B if resource A is already running.
- if both
On Wed, 2018-08-08 at 20:55 +0300, Andrei Borzenkov wrote:
> 08.08.2018 16:59, Ken Gaillot wrote:
> > On Wed, 2018-08-08 at 07:36 +0300, Andrei Borzenkov wrote:
> > > 06.08.2018 20:07, Devin A. Bougie wrote:
> > > > What is the best way to make sure pacemaker doesn’t attempt to
> > > > recover or
08.08.2018 16:59, Ken Gaillot wrote:
> On Wed, 2018-08-08 at 07:36 +0300, Andrei Borzenkov wrote:
>> 06.08.2018 20:07, Devin A. Bougie wrote:
>>> What is the best way to make sure pacemaker doesn’t attempt to
>>> recover or restart a resource if a resource it depends on is not
>>> started?
>>>
>>>
On Wed, 2018-08-08 at 07:36 +0300, Andrei Borzenkov wrote:
> 06.08.2018 20:07, Devin A. Bougie wrote:
> > What is the best way to make sure pacemaker doesn’t attempt to
> > recover or restart a resource if a resource it depends on is not
> > started?
> >
> > For example, we have two dummy
08.08.2018 07:36, Andrei Borzenkov wrote:
> 06.08.2018 20:07, Devin A. Bougie wrote:
>> What is the best way to make sure pacemaker doesn’t attempt to recover or
>> restart a resource if a resource it depends on is not started?
>>
>> For example, we have two dummy resources that simply sleep -
06.08.2018 20:07, Devin A. Bougie wrote:
> What is the best way to make sure pacemaker doesn’t attempt to recover or
> restart a resource if a resource it depends on is not started?
>
> For example, we have two dummy resources that simply sleep - master_sleep and
> slave_sleep. We then have a
What is the best way to make sure pacemaker doesn’t attempt to recover or
restart a resource if a resource it depends on is not started?
For example, we have two dummy resources that simply sleep - master_sleep and
slave_sleep. We then have a non-symmetrical ordering constraint that ensures
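A constraint of the kind described, written in pcs syntax with the resource names from the thread, might look like the following (a sketch, not the poster's actual configuration):

```shell
# One-way ordering: slave_sleep starts only after master_sleep is up;
# symmetrical=false keeps the reverse (stop) ordering from being implied.
pcs constraint order start master_sleep then start slave_sleep \
    kind=Mandatory symmetrical=false
# Colocate slave_sleep with master_sleep so it cannot run (and will not be
# restarted) anywhere master_sleep is not active.
pcs constraint colocation add slave_sleep with master_sleep INFINITY
```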
Source code for the final release of Pacemaker version 1.1.19 is
available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-1.1.19
This is a maintenance release that backports selected fixes and
features from the 2.0.0 version. The 1.1 series is no longer actively
maintained