Re: [ClusterLabs] Why "Stop" action isn't called during failover?

2017-11-22 Thread Euronas Support
Thanks for the answer Ken,
The constraints are:

colocation vmgi_with_filesystem1 inf: vmgi filesystem1
colocation vmgi_with_libvirtd inf: vmgi cl_libvirtd
order vmgi_after_filesystem1 inf: filesystem1 vmgi
order vmgi_after_libvirtd inf: cl_libvirtd vmgi

On 20.11.2017 16:44:00 Ken Gaillot wrote:
> On Fri, 2017-11-10 at 11:15 +0200, Klecho wrote:
> > Hi List,
> > 
> > I have a VM that depends on its storage resource via constraints.
> > 
> > When the storage resource goes down, I'm observing the following:
> > 
> > (pacemaker 1.1.16 & corosync 2.4.2)
> > 
> > Nov 10 10:04:36 [1202] NODE-2 pengine: info: LogActions: Leave vm_lomem1 (Started NODE-2)
> > 
> > Filesystem(p_AA_Filesystem_Drive16)[2097324]: 2017/11/10_10:04:37 INFO: sending signal TERM to: libvirt+ 1160142 1 0 09:01 ? Sl 0:07 qemu-system-x86_64
> > 
> > 
> > The VM (VirtualDomain RA) gets killed without calling "Stop" RA
> > action.
> > 
> > Shouldn't the proper behaviour be to call "stop" for all related
> > resources in such cases?
> 
> Above, it's not Pacemaker that's killing the VM, it's the Filesystem
> resource itself.
> 
> When the Filesystem agent gets a stop request and is unable to
> unmount the filesystem, it can take further action according to its
> force_unmount option: "This option allows specifying how to handle
> processes that are currently accessing the mount directory ... Default
> value, kill processes accessing mount point".
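
For reference, force_unmount is a parameter of the ocf:heartbeat:Filesystem
agent, set on the primitive itself. A minimal sketch in crm shell syntax
(the device, mount point, and timeout are hypothetical, not from the
original configuration); force_unmount=false stops the agent from killing
processes, at the price of the stop action failing while the mount is busy:

# Hypothetical Filesystem primitive; force_unmount=false means the agent
# will not kill processes holding the mount point busy:
primitive filesystem1 ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/srv/vmstore" fstype="ext4" \
        force_unmount="false" \
    op stop timeout="120s" interval="0"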
> 
> What do the configuration for the resources and the constraints look
> like? Based on what you described, Pacemaker shouldn't try to stop the
> Filesystem resource until it has successfully stopped the VM.
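
Given the constraints quoted at the top of this message, the stop sequence
should be the reverse of the start sequence, i.e. vmgi should be stopped
before filesystem1 is unmounted. One way to check what Pacemaker actually
plans is to simulate a transition against the live CIB (a sketch;
crm_simulate ships with Pacemaker and can be run on any cluster node):

# Show the actions Pacemaker would schedule, in order, using the
# current live cluster state as input:
crm_simulate -SL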

-- 
EuroNAS GmbH
Germany:  +49 89 325 33 931

http://www.euronas.com
http://www.euronas.com/contact-us/

Ettaler Str. 3
82166 Gräfelfing / Munich
Germany

Registergericht : Amtsgericht München
Registernummer : HRB 181698
Umsatzsteuer-Identifikationsnummer (USt-IdNr.) : DE267136706




Re: [ClusterLabs] Instant service restart during failback

2017-04-21 Thread Euronas Support
It seems that replacing inf: with 0: in some colocation constraints fixes the
problem, but I still cannot understand why it worked for one node and not for
the other.
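
A minimal sketch of the change in crm shell syntax (the resource names are
hypothetical, not from the original configuration). With a score of inf: the
colocation is mandatory, so moving or recovering one resource forces a
stop/start of the other; with a score of 0: the constraint no longer mandates
placement, so recovery of one resource does not force a restart of the other:

# Before (mandatory): cl_serviceA must run wherever cl_serviceB runs
colocation serviceA_with_serviceB inf: cl_serviceA cl_serviceB
# After (score 0): co-location is no longer enforced, so node1
# rejoining does not force a restart of cl_serviceA on node2
colocation serviceA_with_serviceB 0: cl_serviceA cl_serviceB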

On 20.4.2017 12:16:02 Klechomir wrote:
> Hi Klaus,
> It would have been too easy if it were interleave.
> All my cloned resources have interleave=true, of course.
> What bothers me more is that the behaviour is asymmetrical.
> 
> Regards,
> Klecho
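
For reference, interleave is a clone meta attribute. A minimal sketch in crm
shell syntax (resource names are hypothetical); with interleave=true, an
instance of the dependent clone only waits for the other clone's instance on
its own node, rather than for every instance cluster-wide:

# Each cl_serviceB instance depends only on the local cl_serviceA
# instance, so instances on other nodes need not restart together:
clone cl_serviceA p_serviceA meta interleave=true
clone cl_serviceB p_serviceB meta interleave=true
order serviceB_after_serviceA inf: cl_serviceA cl_serviceB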
> 
> On 20.4.2017 10:43:29 Klaus Wenninger wrote:
> > On 04/20/2017 10:30 AM, Klechomir wrote:
> > > Hi List,
> > > Been investigating the following problem recently:
> > > 
> > > I have a two-node cluster with 4 cloned services (2 on top of 2) plus 1
> > > master/slave service on it (corosync + pacemaker 1.1.15).
> > > Failover works properly for both nodes, i.e. when one node is
> > > restarted/put into standby, the other properly takes over, but:
> > > 
> > > Every time node2 has been in standby/turned off and comes back,
> > > everything recovers properly.
> > > Every time node1 has been in standby/turned off and comes back, part
> > > of the cloned services on node2 are instantly restarted, in the
> > > same second that node1 reappears, without any apparent reason (only
> > > the stop/start messages in the debug log).
> > > 
> > > Is there some known possible reason for this?
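
When a restart is scheduled with no apparent reason in the logs, replaying
the policy engine input that produced the transition usually reveals the
trigger. A sketch (the pe-input file name is hypothetical; the actual files
are named in the pengine log messages on the DC node):

# Replay a saved policy-engine input and show the placement scores
# that led to each scheduled action:
crm_simulate -Ssx /var/lib/pacemaker/pengine/pe-input-123.bz2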
> > 
> > That triggers a déjà-vu feeling...
> > Did you have a similar issue a couple of weeks ago?
> > I remember in that particular case 'interleave=true' was not the
> > solution to the problem but maybe here ...
> > 
> > Regards,
> > Klaus
> > 
> > > Best regards,
> > > Klecho
> > > 
> 



___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org