Re: [ClusterLabs] custom resource agent FAILED (blocked)

2018-04-12 Thread emmanuel segura
The start function needs to start the resource when monitor doesn't return success. 2018-04-12 23:38 GMT+02:00 Bishoy Mikhael: > Hi All, > > I'm trying to create a resource agent to promote a standby HDFS namenode > to active when the virtual IP fails over to another node. >
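Emmanuel's point, sketched below in shell for illustration only: the start action should call the monitor function first and only do the promotion work when monitor does not already report success. The HDFS promotion command and service id are placeholders, not the poster's actual code.

    HDFSHA_start() {
        HDFSHA_monitor
        if [ $? -eq $OCF_SUCCESS ]; then
            # already running/active: start is a no-op
            return $OCF_SUCCESS
        fi
        # placeholder promotion step (assumed command, adjust to the real setup)
        su - hdfs -c "hdfs haadmin -transitionToActive nn1" || return $OCF_ERR_GENERIC
        return $OCF_SUCCESS
    }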

[ClusterLabs] Failing operations immediately when node is known to be down

2018-04-12 Thread Ryan Thomas
I’m trying to implement an HA solution which recovers very quickly when a node fails. In my configuration, when I reboot a node, I see in the logs that pacemaker realizes the node is down and decides to move all resources to the surviving node. To do this, it initiates a ‘stop’ operation on each
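No code was quoted in this preview; as a hedged illustration only, the usual way to let the surviving node take over quickly is to have fencing settle the failed node's state rather than waiting on its pending operations. The device agent and its options below are placeholders.

    # fencing device for node1 -- agent and options are placeholders
    pcs stonith create fence-node1 fence_ipmilan \
        pcmk_host_list=node1 ipaddr=10.0.0.1 login=admin passwd=secret
    pcs property set stonith-enabled=true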

[ClusterLabs] custom resource agent FAILED (blocked)

2018-04-12 Thread Bishoy Mikhael
Hi All, I'm trying to create a resource agent to promote a standby HDFS namenode to active when the virtual IP fails over to another node. I've taken the skeleton from the Dummy OCF agent. The modifications I've made to the Dummy agent are as follows: HDFSHA_start() { HDFSHA_monitor if [
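The preview cuts off inside HDFSHA_start. For readers unfamiliar with the Dummy skeleton the agent is built on, actions are dispatched roughly as below; the stop/usage/validate function names here are assumed from the Dummy naming convention, not taken from the poster's agent.

    case $__OCF_ACTION in
        meta-data)     meta_data; exit $OCF_SUCCESS;;
        start)         HDFSHA_start;;
        stop)          HDFSHA_stop;;
        monitor)       HDFSHA_monitor;;
        validate-all)  HDFSHA_validate;;
        usage|help)    HDFSHA_usage; exit $OCF_SUCCESS;;
        *)             HDFSHA_usage; exit $OCF_ERR_UNIMPLEMENTED;;
    esac
    exit $?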

Re: [ClusterLabs] Corosync 2.4.4 is available at corosync.org!

2018-04-12 Thread Ferenc Wágner
Jan Pokorný writes: > On 12/04/18 14:33 +0200, Jan Friesse wrote: > >> This release contains a lot of fixes, including a fix for >> CVE-2018-1084. > > Security-related updates would preferably provide more context. Absolutely, thanks for providing that! Looking at the git

Re: [ClusterLabs] Re: No slave is promoted to be master

2018-04-12 Thread Ken Gaillot
On Thu, 2018-04-12 at 07:29 +, 范国腾 wrote: > Hello, > > We use the following command to create the cluster. Node2 is always > the master when the cluster starts. Why does pacemaker not select > node1 as the default master? > How do we configure it if we want node1 to be the default master? You can
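The preview cuts Ken's answer off at "You can". One way a default-master preference is often expressed, shown here only as a hedged example (ms_pgsql is a placeholder for the master/slave resource id):

    # prefer promoting on node1 by giving the master role a higher score there
    pcs constraint location ms_pgsql rule role=master score=100 \#uname eq node1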

Re: [ClusterLabs] Corosync 2.4.4 is available at corosync.org!

2018-04-12 Thread Jan Pokorný
On 12/04/18 14:33 +0200, Jan Friesse wrote: > I am pleased to announce the latest maintenance release of Corosync > 2.4.4, available immediately from our website at > http://build.clusterlabs.org/corosync/releases/. > > This release contains a lot of fixes, including a fix for CVE-2018-1084.

[ClusterLabs] Corosync 2.4.4 is available at corosync.org!

2018-04-12 Thread Jan Friesse
I am pleased to announce the latest maintenance release of Corosync 2.4.4, available immediately from our website at http://build.clusterlabs.org/corosync/releases/. This release contains a lot of fixes, including a fix for CVE-2018-1084. Complete changelog for 2.4.4: Andrey Ter-Zakhariants (1):

Re: [ClusterLabs] No slave is promoted to be master

2018-04-12 Thread Jehan-Guillaume de Rorthais
Hi, On Thu, 12 Apr 2018 08:31:39 + 范国腾 wrote: > Thank you very much for helping check this issue. The information is in the > attachment. > > I have restarted the cluster after I sent my first email. Not sure if it > affects the checking of "the result of "crm_simulate
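For reference, the commands being asked about only read cluster state and can be run safely to capture the requested output:

    crm_simulate -sL   # allocation and promotion scores from the live CIB
    crm_mon -A1        # one-shot status including node attributes (master scores)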

Re: [ClusterLabs] Pacemake/Corosync good fit for embedded product?

2018-04-12 Thread Klaus Wenninger
On 04/12/2018 04:37 AM, David Hunt wrote: > Thanks Guys, > > Ideally I would like to have event-driven (rather than slower polled) > inputs into pacemaker to quickly trigger the failover. I assume > adding event-driven inputs to pacemaker isn't straightforward? If it > were possible to add event
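Until something event-driven exists, detection latency is bounded by the monitor interval, so a common stopgap is simply to poll faster. The resource name and values below are placeholders for illustration, not a recommendation from the thread:

    pcs resource update my_resource op monitor interval=2s timeout=10s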

Re: [ClusterLabs] No slave is promoted to be master

2018-04-12 Thread Jehan-Guillaume de Rorthais
Hi, On Thu, 12 Apr 2018 03:14:52 + 范国腾 wrote: > We have three nodes in the cluster. When the master postgres resource on one > node (db1) crashed and could not start any more, we expected one of the slave > nodes (db2, db3) to be promoted to master. But it does not happen. >

[ClusterLabs] Re: No slave is promoted to be master

2018-04-12 Thread 范国腾
Hello, We use the following commands to create the cluster. Node2 is always the master when the cluster starts. Why does pacemaker not select node1 as the default master? How do we configure it if we want node1 to be the default master? pcs cluster setup --name cluster_pgsql node1 node2 pcs resource