Re: [ClusterLabs] Howto stonith in the case of any interface failure?

2019-10-09 Thread Andrei Borzenkov
On Wed, Oct 9, 2019 at 10:59 AM Kadlecsik József wrote: > > Hello, > > The nodes in our cluster have got backend and frontend interfaces: the former ones are for the storage and cluster (corosync) traffic and the latter ones are for the public services of KVM guests only. > > One of the nodes

[ClusterLabs] SBD with shared device - loss of both interconnect and shared device?

2019-10-09 Thread Andrei Borzenkov
What happens if both the interconnect and the shared device are lost by a node? I assume the node will reboot, correct? Now assuming (in a two-node cluster) the second node can still access the shared device, it will fence (via SBD) and continue takeover, right? If both nodes lose the shared device, both nodes will reboot and
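
For reference, the reboot-on-loss behaviour comes from SBD's watchdog: when sbd can no longer read its slot on the shared device, it typically stops feeding the hardware watchdog and the node resets. A minimal disk-based setup sketch, assuming pcs; /dev/sdX is a placeholder path, not from the thread:

    # Initialize SBD metadata on the shared device (placeholder path)
    sbd -d /dev/sdX create
    sbd -d /dev/sdX list                  # verify the node slots

    # /etc/sysconfig/sbd excerpt: the hardware watchdog is what actually
    # reboots a node once sbd stops petting it
    #   SBD_DEVICE="/dev/sdX"
    #   SBD_WATCHDOG_DEV="/dev/watchdog"

    # Register disk-based SBD fencing with Pacemaker
    pcs stonith create fence-sbd fence_sbd devices=/dev/sdX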

Re: [ClusterLabs] Howto stonith in the case of any interface failure?

2019-10-09 Thread Digimer
On 2019-10-09 3:58 a.m., Kadlecsik József wrote: > Hello, > > The nodes in our cluster have got backend and frontend interfaces: the former ones are for the storage and cluster (corosync) traffic and the latter ones are for the public services of KVM guests only. > > One of the nodes has
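
A common recommendation in this situation is power-based fencing through a path independent of both cluster networks, e.g. IPMI. A hedged sketch using pcs and fence_ipmilan; the node names, BMC addresses and credentials below are placeholders, not from the thread:

    # One fence device per node, talking to that node's BMC out of band
    pcs stonith create fence-node1 fence_ipmilan ip=10.0.0.101 \
        username=admin password=secret lanplus=1 pcmk_host_list=node1
    pcs stonith create fence-node2 fence_ipmilan ip=10.0.0.102 \
        username=admin password=secret lanplus=1 pcmk_host_list=node2

    # Keep each fence device away from the node it targets
    pcs constraint location fence-node1 avoids node1
    pcs constraint location fence-node2 avoids node2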

[ClusterLabs] change of the configuration of a resource which is part of a clone

2019-10-09 Thread Lentes, Bernd
Hi, I finally managed to find out how I can simulate configuration changes and see their results before committing them. OMG. That makes life much more relaxed. I need to change the configuration of a resource which is part of a group; the group is running as a clone on all nodes.
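
The simulate-before-commit workflow presumably refers to Pacemaker's shadow CIB; a minimal sketch, assuming the crm_shadow/crm_simulate tools that ship with Pacemaker, with placeholder shadow, resource and option names:

    # Work on a scratch copy of the CIB; this spawns a shell in which
    # cluster tools edit the copy, not the live cluster
    crm_shadow --create test

    # Change the resource inside the shadow (placeholder names)
    pcs resource update my-resource some_option=new_value

    # Show the actions the cluster would take with this configuration
    crm_simulate --simulate --live-check

    # Apply the shadow to the live CIB, or discard it
    crm_shadow --commit test        # or: crm_shadow --delete test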

Re: [ClusterLabs] Howto stonith in the case of any interface failure?

2019-10-09 Thread Kadlecsik József
On Wed, 9 Oct 2019, Ken Gaillot wrote: > > One of the nodes has got a failure ("watchdog: BUG: soft lockup - CPU#7 stuck for 23s"), which meant that the node could still process traffic on the backend interface but not on the frontend one. Thus the services became unavailable but the

Re: [ClusterLabs] Howto stonith in the case of any interface failure?

2019-10-09 Thread Kadlecsik József
Hi, On Wed, 9 Oct 2019, Jan Pokorný wrote: > On 09/10/19 09:58 +0200, Kadlecsik József wrote: > > The nodes in our cluster have got backend and frontend interfaces: the former ones are for the storage and cluster (corosync) traffic and the latter ones are for the public services of KVM

Re: [ClusterLabs] Howto stonith in the case of any interface failure?

2019-10-09 Thread Ken Gaillot
On Wed, 2019-10-09 at 09:58 +0200, Kadlecsik József wrote: > Hello, > > The nodes in our cluster have got backend and frontend interfaces: the former ones are for the storage and cluster (corosync) traffic and the latter ones are for the public services of KVM guests only. > > One of

Re: [ClusterLabs] [ClusterLabs Developers] FYI: looks like there are DNS glitches with clusterlabs.org subdomains

2019-10-09 Thread Ken Gaillot
Due to a mix-up, all of clusterlabs.org is currently without DNS service. :-( List mail may continue to work for a while as mail servers rely on DNS caches, so hopefully this reaches most of our subscribers. No estimate yet for when it will be recovered. On Wed, 2019-10-09 at 11:06 +0200, Jan

[ClusterLabs] announcement: schedule for resource-agents release 4.4.0

2019-10-09 Thread Oyvind Albrigtsen
Hi, This is a tentative schedule for resource-agents v4.4.0: 4.4.0-rc1: October 16. 4.4.0: October 23. I've modified the corresponding milestones at https://github.com/ClusterLabs/resource-agents/milestones If there's anything you think should be part of the release, please open an issue, a

Re: [ClusterLabs] Howto stonith in the case of any interface failure?

2019-10-09 Thread Jan Pokorný
On 09/10/19 09:58 +0200, Kadlecsik József wrote: > The nodes in our cluster have got backend and frontend interfaces: the former ones are for the storage and cluster (corosync) traffic and the latter ones are for the public services of KVM guests only. > > One of the nodes has got a failure

[ClusterLabs] Howto stonith in the case of any interface failure?

2019-10-09 Thread Kadlecsik József
Hello, The nodes in our cluster have got backend and frontend interfaces: the former ones are for the storage and cluster (corosync) traffic and the latter ones are for the public services of KVM guests only. One of the nodes has got a failure ("watchdog: BUG: soft lockup - CPU#7 stuck for
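
One standard building block for reacting to frontend loss is a cloned ocf:pacemaker:ping resource plus a location rule, which moves the guests off a node whose frontend stops answering (this migrates rather than fences; fencing on connectivity loss needs more on top). A minimal sketch, assuming pcs; the gateway address and resource names are placeholders:

    # Clone a connectivity monitor on every node; it pings a frontend-side
    # gateway and publishes the result as the node attribute "pingd"
    pcs resource create ping-front ocf:pacemaker:ping \
        host_list=192.0.2.1 multiplier=1000 dampen=5s clone

    # Ban a KVM guest from any node whose frontend connectivity is gone
    pcs constraint location my-kvm-guest rule score=-INFINITY \
        pingd lt 1 or not_defined pingd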