Re: [ClusterLabs] Galera 10.1 cluster

2017-01-10 Thread Hao QingFeng
On 2017-01-06 18:27, Oscar Segarra wrote: Hi, I get errors like the following: 2017-01-06 11:25:47 139902389713152 [ERROR] WSREP: failed to open gcomm backend connection: 131: invalid UUID: (FATAL) at gcomm/src/pc.cpp:PC():267 2017-01-06 11:25:47 139902389713152 [ERROR] WSREP:
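The "invalid UUID" failure from gcomm is commonly caused by a malformed wsrep_cluster_address value. As a hedged illustration (the address and node IPs below are placeholders, not taken from the thread), a minimal shell sketch that sanity-checks the gcomm:// string before it goes into the Galera config:

```shell
#!/bin/sh
# Sketch: validate a gcomm:// cluster address before using it as
# wsrep_cluster_address (the node IPs here are placeholders).
addr="gcomm://10.0.0.1,10.0.0.2,10.0.0.3"

case "$addr" in
  gcomm://*)
    # Strip the scheme and make sure at least one node address remains.
    nodes=${addr#gcomm://}
    if [ -n "$nodes" ]; then
      echo "address looks well-formed"
    else
      echo "empty node list"
    fi
    ;;
  *)
    echo "missing gcomm:// prefix"
    ;;
esac
```

This only checks the shape of the string; a value that passes can still fail at runtime if the listed peers are unreachable.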

Re: [ClusterLabs] No match for shutdown action on

2017-01-10 Thread Ken Gaillot
On 01/10/2017 11:38 AM, Denis Gribkov wrote: > Hi Everyone, > > When I run: > > # pcs resource cleanup resource_name > > I'm getting a block of messages in the log on the current DC node: > > Jan 10 18:12:13 node1 crmd[21635]: warning: No match for shutdown > action on node2 > Jan 10 18:12:13 node1

Re: [ClusterLabs] question about dc-deadtime

2017-01-10 Thread Ken Gaillot
On 01/10/2017 11:59 AM, Chris Walker wrote: > > On Mon, Jan 9, 2017 at 6:55 PM, Andrew Beekhof wrote: > > On Fri, Dec 16, 2016 at 8:52 AM, Chris Walker

Re: [ClusterLabs] question about dc-deadtime

2017-01-10 Thread Chris Walker
On Mon, Jan 9, 2017 at 6:55 PM, Andrew Beekhof wrote: > On Fri, Dec 16, 2016 at 8:52 AM, Chris Walker wrote: > > Thanks for your response Ken. I'm puzzled ... in my case nodes remain > > UNCLEAN (offline) until dc-deadtime expires, even when
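For reference, dc-deadtime is a Pacemaker cluster property controlling how long startup waits for contact from other nodes before electing a DC. A hedged sketch of inspecting and changing it with pcs (the 2min value is purely illustrative, not a recommendation from the thread):

```shell
# Show the current value (the default is 20s):
pcs property list --all | grep dc-deadtime

# Raise the wait before a DC election proceeds without absent nodes:
pcs property set dc-deadtime=2min
```

These commands assume a running pcs-managed cluster and are shown for orientation only.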

[ClusterLabs] No match for shutdown action on

2017-01-10 Thread Denis Gribkov
Hi Everyone, When I run: # pcs resource cleanup resource_name I'm getting a block of messages in the log on the current DC node: Jan 10 18:12:13 node1 crmd[21635]: warning: No match for shutdown action on node2 Jan 10 18:12:13 node1 crmd[21635]: warning: No match for shutdown action on node3 Jan
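For context, a sketch of the commands involved (resource_name, the node names, and the log path are placeholders; the log location varies by distribution):

```shell
# Clear the operation history (and recorded failures) of one resource;
# this triggers a re-probe of the resource on every node:
pcs resource cleanup resource_name

# Watch the DC's log for the resulting warnings:
grep "No match for shutdown" /var/log/messages
```

These are cluster-management commands and are shown for orientation, not as a reproducible test case.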

[ClusterLabs] resource-agents v4.0.0 rc1

2017-01-10 Thread Oyvind Albrigtsen
ClusterLabs is happy to announce resource-agents v4.0.0 rc1. Source code is available at: https://github.com/ClusterLabs/resource-agents/releases/tag/v4.0.0rc1 The most significant enhancements in this release are: - new resource agents: - garbd - awseip - awsvip - pgagent - CI: automatic

Re: [ClusterLabs] large cluster with corosync

2017-01-10 Thread Jan Friesse
Arne Jansen wrote: On 04.01.2017 13:52, Jan Friesse wrote: Variables you can try tweaking. - Definitely start with increasing totem.config (default 1000, you can try 1) what does that do? Haven't found it in corosync.conf(5) Sorry, typo. I meant totem.token. - If it doesn't
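The parameters under discussion live in the totem section of corosync.conf. A hedged fragment for orientation (the token value of 10000 is a placeholder illustrating "increase from the default 1000", since the exact suggested value is truncated in the archive; the join value of 1000 is the one named in the thread):

```
totem {
    # Token timeout in milliseconds (default 1000). Raising it makes
    # membership more tolerant of latency in large clusters.
    token: 10000

    # Join timeout in milliseconds (default 50); the thread suggests
    # 1000 may work for large clusters.
    join: 1000
}
```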

[ClusterLabs] Pacemaker kill does not cause node fault ???

2017-01-10 Thread Stefan Schloesser
Hi, I am currently testing a 2 node cluster under Ubuntu 16.04. The setup seems to be working ok including the STONITH. For test purposes I issued a "pkill -f pace" killing all pacemaker processes on one node. Result: The node is marked as "pending", all resources stay on it. If I manually
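A sketch of the test described, for a disposable test cluster only (crm_mon is the standard status tool; the "pending" observation is from the thread):

```shell
# On one node, kill all pacemaker daemons at once, as in the test:
pkill -f pace

# From the surviving node, check how the peer is reported; per the
# thread it shows as "pending" rather than being fenced immediately:
crm_mon -1
```

Whether the killed node is fenced depends on how its local processes respawn and on the fencing configuration, which is what the thread goes on to discuss.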

Re: [ClusterLabs] large cluster with corosync

2017-01-10 Thread Arne Jansen
On 04.01.2017 13:52, Jan Friesse wrote: Variables you can try tweaking. - Definitely start with increasing totem.config (default 1000, you can try 1) what does that do? Haven't found it in corosync.conf(5) - If it doesn't help, try increasing totem.join (default is 50, 1000 may work) and