Re: [ClusterLabs] Early VM resource migration

2015-12-17 Thread Klechomir
Hi Ken,

I've tried with and without colocation. The rule was:
colocation bla2 inf: VM_VM1 AA_Filesystem_CDrive1

In both cases, VM_VM1 tries to live migrate back to the node coming out of
standby while the cloned AA_Filesystem_CDrive1 isn't up on it yet.
Same result with Pacemaker 1.1.14-rc2.
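
For reference, the full set of constraints tested (crm shell syntax, both
taken from the config further down) was:

    colocation bla2 inf: VM_VM1 AA_Filesystem_CDrive1
    order VM_VM1_after_AA_Filesystem_CDrive1 inf: AA_Filesystem_CDrive1 VM_VM1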

Regards,

On 16.12.2015 11:08:35 Ken Gaillot wrote:
> On 12/16/2015 10:30 AM, Klechomir wrote:
> > On 16.12.2015 17:52, Ken Gaillot wrote:
> >> On 12/16/2015 02:09 AM, Klechomir wrote:
> >>> Hi list,
> >>> I have a cluster with VM resources on a cloned active-active storage.
> >>> 
> >>> VirtualDomain resource migrates properly during failover (node standby),
> >>> but tries to migrate back too early during failback, ignoring the
> >>> "order" constraint that tells it to start only when the cloned storage
> >>> is up. This causes an unnecessary VM restart.
> >>> 
> >>> Is there any way to make it wait until its storage resource is up?
> >> 
> >> Hi Klecho,
> >> 
> >> If you have an order constraint, the cluster will not try to start the
> >> VM until the storage resource agent returns success for its start. If
> >> the storage isn't fully up at that point, then the agent is faulty, and
> >> should be modified to wait until the storage is truly available before
> >> returning success.
> >> 
> >> If you post all your constraints, I can look for anything that might
> >> affect the behavior.
> > 
> > Thanks for the reply, Ken
> > 
> > Seems to me that the constraints for cloned resources act a bit
> > differently.
> > 
> > Here is my config:
> > 
> > primitive p_AA_Filesystem_CDrive1 ocf:heartbeat:Filesystem \
> > params device="/dev/CSD_CDrive1/AA_CDrive1" \
> > directory="/volumes/AA_CDrive1" fstype="ocfs2" options="rw,noatime"
> > primitive VM_VM1 ocf:heartbeat:VirtualDomain \
> > params config="/volumes/AA_CDrive1/VM_VM1/VM1.xml" \
> > hypervisor="qemu:///system" migration_transport="tcp" \
> > meta allow-migrate="true" target-role="Started"
> > clone AA_Filesystem_CDrive1 p_AA_Filesystem_CDrive1 \
> > meta interleave="true" resource-stickiness="0" target-role="Started"
> > order VM_VM1_after_AA_Filesystem_CDrive1 inf: AA_Filesystem_CDrive1 VM_VM1
> > 
> > Every time a node comes back from standby, the VM tries to live
> > migrate to it long before the filesystem is up.
> 
> In most cases (including this one), when you have an order constraint,
> you also need a colocation constraint.
> 
> colocation = two resources must be run on the same node
> 
> order = one resource must be started/stopped/whatever before another
> 
> Or you could use a group, which is essentially a shortcut for specifying
> colocation and order constraints for any sequence of resources.
> 

Re: [ClusterLabs] Early VM resource migration

2015-12-16 Thread Ken Gaillot
On 12/16/2015 10:30 AM, Klechomir wrote:
> On 16.12.2015 17:52, Ken Gaillot wrote:
>> On 12/16/2015 02:09 AM, Klechomir wrote:
>>> Hi list,
>>> I have a cluster with VM resources on a cloned active-active storage.
>>>
>>> VirtualDomain resource migrates properly during failover (node standby),
>>> but tries to migrate back too early during failback, ignoring the
>>> "order" constraint that tells it to start only when the cloned storage
>>> is up. This causes an unnecessary VM restart.
>>>
>>> Is there any way to make it wait until its storage resource is up?
>> Hi Klecho,
>>
>> If you have an order constraint, the cluster will not try to start the
>> VM until the storage resource agent returns success for its start. If
>> the storage isn't fully up at that point, then the agent is faulty, and
>> should be modified to wait until the storage is truly available before
>> returning success.
>>
>> If you post all your constraints, I can look for anything that might
>> affect the behavior.
> Thanks for the reply, Ken
> 
> Seems to me that the constraints for cloned resources act a bit
> differently.
> 
> Here is my config:
> 
> primitive p_AA_Filesystem_CDrive1 ocf:heartbeat:Filesystem \
> params device="/dev/CSD_CDrive1/AA_CDrive1" \
> directory="/volumes/AA_CDrive1" fstype="ocfs2" options="rw,noatime"
> primitive VM_VM1 ocf:heartbeat:VirtualDomain \
> params config="/volumes/AA_CDrive1/VM_VM1/VM1.xml" \
> hypervisor="qemu:///system" migration_transport="tcp" \
> meta allow-migrate="true" target-role="Started"
> clone AA_Filesystem_CDrive1 p_AA_Filesystem_CDrive1 \
> meta interleave="true" resource-stickiness="0" target-role="Started"
> order VM_VM1_after_AA_Filesystem_CDrive1 inf: AA_Filesystem_CDrive1 VM_VM1
> 
> Every time a node comes back from standby, the VM tries to live
> migrate to it long before the filesystem is up.

In most cases (including this one), when you have an order constraint,
you also need a colocation constraint.

colocation = two resources must be run on the same node

order = one resource must be started/stopped/whatever before another

Or you could use a group, which is essentially a shortcut for specifying
colocation and order constraints for any sequence of resources.
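
For example, with the config above, the missing colocation constraint would
look something like this (crm shell syntax; the constraint ID is
illustrative):

    colocation VM_VM1_with_AA_Filesystem_CDrive1 inf: VM_VM1 AA_Filesystem_CDrive1
    order VM_VM1_after_AA_Filesystem_CDrive1 inf: AA_Filesystem_CDrive1 VM_VM1

One caveat on the group shortcut: as far as I know, a group can only contain
primitives, so a clone like AA_Filesystem_CDrive1 can't be placed in a group;
the explicit colocation-plus-order pair is the way to express this here.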



Re: [ClusterLabs] Early VM resource migration

2015-12-16 Thread Ken Gaillot
On 12/16/2015 02:09 AM, Klechomir wrote:
> Hi list,
> I have a cluster with VM resources on a cloned active-active storage.
> 
> VirtualDomain resource migrates properly during failover (node standby),
> but tries to migrate back too early during failback, ignoring the
> "order" constraint that tells it to start only when the cloned storage
> is up. This causes an unnecessary VM restart.
> 
> Is there any way to make it wait until its storage resource is up?

Hi Klecho,

If you have an order constraint, the cluster will not try to start the
VM until the storage resource agent returns success for its start. If
the storage isn't fully up at that point, then the agent is faulty, and
should be modified to wait until the storage is truly available before
returning success.

If you post all your constraints, I can look for anything that might
affect the behavior.
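
To make that concrete: a start action should only return success once the
resource is genuinely usable. A minimal sketch of that shape for a
filesystem-type agent might be (illustrative only, not the actual
ocf:heartbeat:Filesystem code; do_mount and FS_DIR are hypothetical
placeholders):

    start() {
        do_mount || return $OCF_ERR_GENERIC   # hypothetical mount helper
        # Block until the mount is actually visible before reporting
        # success, so ordered resources never start against a
        # half-ready filesystem.
        while ! mountpoint -q "$FS_DIR"; do
            sleep 1
        done
        return $OCF_SUCCESS
    }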

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org