Re: [ClusterLabs] Colocation and ordering with live migration

2016-10-10 Thread Pavel Levshin


Thanks for all the suggestions. It is really odd to me that this use case, 
which is very basic for a simple virtualization cluster, is not described 
in every FAQ out there...


It appears that my setup is working correctly with non-symmetrical 
ordering constraints:


Ordering Constraints:

  start dlm-clone then start clvmd-clone (kind:Mandatory)

  start clvmd-clone then start cluster-config-clone (kind:Mandatory)

  start cluster-config-clone then start libvirtd-clone (kind:Mandatory)

  stop vm_smartbv2 then stop libvirtd-clone (kind:Mandatory) 
(non-symmetrical)


  stop vm_smartbv1 then stop libvirtd-clone (kind:Mandatory) 
(non-symmetrical)


  start libvirtd-clone then start vm_smartbv2 (kind:Optional) 
(non-symmetrical)


  start libvirtd-clone then start vm_smartbv1 (kind:Optional) 
(non-symmetrical)


Colocation Constraints:

  clvmd-clone with dlm-clone (score:INFINITY)

  cluster-config-clone with clvmd-clone (score:INFINITY)

  libvirtd-clone with cluster-config-clone (score:INFINITY)

  vm_smartbv1 with libvirtd-clone (score:INFINITY)

  vm_smartbv2 with libvirtd-clone (score:INFINITY)


This is strange, I could swear I've tried this before without success...
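
For reference, the pcs commands for the stop and start orderings above look 
roughly like this (a sketch only; exact option syntax may differ between pcs 
versions):

  # Mandatory, non-symmetrical: stop the VMs before stopping libvirtd,
  # without forcing the reverse ordering on start.
  pcs constraint order stop vm_smartbv1 then stop libvirtd-clone kind=Mandatory symmetrical=false
  pcs constraint order stop vm_smartbv2 then stop libvirtd-clone kind=Mandatory symmetrical=false

  # Optional, non-symmetrical: prefer starting libvirtd before the VMs
  # when both happen in the same transition.
  pcs constraint order start libvirtd-clone then start vm_smartbv1 kind=Optional symmetrical=false
  pcs constraint order start libvirtd-clone then start vm_smartbv2 kind=Optional symmetrical=false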


It might be possible to modify the VirtualDomain RA to include an additional 
monitor check that blocks the agent when libvirtd is not working. On the 
other hand, if VirtualDomain can monitor the VM state without libvirtd by 
looking at the emulator process, then an obvious extension would be to issue 
a forced stop simply by killing that process. At least this could save 
us a fencing.
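
Something along these lines, purely as a sketch (this is not the actual 
VirtualDomain code; DOMAIN_NAME, the helper names and the qemu process-matching 
pattern are just placeholders, and the standard OCF shell functions are assumed 
to be sourced):

  # Monitor that degrades gracefully when libvirtd is unavailable.
  monitor_without_libvirtd() {
      if state=$(virsh --connect qemu:///system domstate "$DOMAIN_NAME" 2>/dev/null); then
          # libvirtd answered: trust its view of the domain state
          [ "$state" = "running" ] && return $OCF_SUCCESS || return $OCF_NOT_RUNNING
      elif pgrep -f "qemu.*$DOMAIN_NAME" >/dev/null 2>&1; then
          # libvirtd did not answer, but the emulator process is still alive
          return $OCF_SUCCESS
      else
          return $OCF_NOT_RUNNING
      fi
  }

  # Forced stop that does not need libvirtd: kill the emulator process directly.
  force_stop_without_libvirtd() {
      pkill -TERM -f "qemu.*$DOMAIN_NAME"
      sleep 5
      pkill -KILL -f "qemu.*$DOMAIN_NAME" >/dev/null 2>&1 || true
      return $OCF_SUCCESS
  }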


Still, I do not understand why the optional constraint has no effect when 
both the VM and libvirtd are scheduled to stop and live migration is in 
place. It looks like a bug.



--
Pavel Levshin

10.10.2016 20:58, Klaus Wenninger:


On 10/10/2016 06:56 PM, Ken Gaillot wrote:

On 10/10/2016 10:21 AM, Klaus Wenninger wrote:

On 10/10/2016 04:54 PM, Ken Gaillot wrote:

On 10/10/2016 07:36 AM, Pavel Levshin wrote:

10.10.2016 15:11, Klaus Wenninger:

On 10/10/2016 02:00 PM, Pavel Levshin wrote:

10.10.2016 14:32, Klaus Wenninger:

Why are the order-constraints between libvirt & vms optional?

If they were mandatory, then all the virtual machines would be
restarted when libvirtd restarts. This is not desired nor needed. When
this happens, the node is fenced because it is unable to restart VM in
absence of working libvirtd.

Was guessing something like that ...
So let me reformulate my question:
Why does libvirtd have to be restarted?
If it is because of config-changes making it reloadable might be a
solution ...


Right, config changes come to my mind first of all. But sometimes a
service, including libvirtd, may fail unexpectedly. In this case I would
prefer to restart it without disturbing VirtualDomains, which will fail
eternally.

I think the mandatory colocation of VMs with libvirtd negates your goal.
If libvirtd stops, the VMs will have to stop anyway because they can't
be colocated with libvirtd. Making the colocation optional should fix that.


The question is, why the cluster does not obey optional constraint, when
both libvirtd and VM stop in a single transition?

If it truly is in the same transition, then it should be honored.

You have *mandatory* constraints for DLM -> CLVMd -> cluster-config ->
libvirtd, but only an *optional* constraint for libvirtd -> VMs.
Therefore, libvirtd will generally have to wait longer than the VMs to
be started.

It might help to add mandatory constraints for cluster-config -> VMs.
That way, they have the same requirements as libvirtd, and are more
likely to start in the same transition.

However I'm sure there are still problematic situations. What you want
is a simple idea, but a rather complex specification: "If rsc1 fails,
block any instances of this other RA on the same node."

It might be possible to come up with some node attribute magic to
enforce this. You'd need some custom RAs. I imagine something like one
RA that sets a node attribute, and another RA that checks it.

The setter would be grouped with libvirtd. Anytime that libvirtd starts,
the setter would set a node attribute on the local node. Anytime that
libvirtd stopped or failed, the setter would unset the attribute value.

The checker would simply monitor the attribute, and fail if the
attribute is unset. The group would have on-fail=block. So anytime the
attribute was unset, the VM would not be started or stopped. (There
would be no constraints between the two groups -- the checker RA would
take the place of constraints.)

In how far would that behave differently to just putting libvirtd
into this on-fail=block group? (apart from of course the
possibility to group the vms into more than one group ...)

You could stop or restart libvirtd without stopping the VMs. It would
cause a "failure" of the checker that would need to be cleaned later,
but the VMs wouldn't stop.

Ah, yes forgot about the manual restart case. Had already
turned that into a reload in my mind ;-)

Re: [ClusterLabs] Colocation and ordering with live migration

2016-10-10 Thread Klaus Wenninger
On 10/10/2016 06:56 PM, Ken Gaillot wrote:
> On 10/10/2016 10:21 AM, Klaus Wenninger wrote:
>> On 10/10/2016 04:54 PM, Ken Gaillot wrote:
>>> On 10/10/2016 07:36 AM, Pavel Levshin wrote:
>>>> 10.10.2016 15:11, Klaus Wenninger:
>>>>> On 10/10/2016 02:00 PM, Pavel Levshin wrote:
>>>>>> 10.10.2016 14:32, Klaus Wenninger:
>>>>>>> Why are the order-constraints between libvirt & vms optional?
>>>>>> If they were mandatory, then all the virtual machines would be
>>>>>> restarted when libvirtd restarts. This is not desired nor needed. When
>>>>>> this happens, the node is fenced because it is unable to restart VM in
>>>>>> absence of working libvirtd.
>>>>> Was guessing something like that ...
>>>>> So let me reformulate my question:
>>>>>    Why does libvirtd have to be restarted?
>>>>> If it is because of config-changes making it reloadable might be a
>>>>> solution ...
>>>>>
>>>> Right, config changes come to my mind first of all. But sometimes a
>>>> service, including libvirtd, may fail unexpectedly. In this case I would
>>>> prefer to restart it without disturbing VirtualDomains, which will fail
>>>> eternally.
>>> I think the mandatory colocation of VMs with libvirtd negates your goal.
>>> If libvirtd stops, the VMs will have to stop anyway because they can't
>>> be colocated with libvirtd. Making the colocation optional should fix that.
>>>
>>>> The question is, why the cluster does not obey optional constraint, when
>>>> both libvirtd and VM stop in a single transition?
>>> If it truly is in the same transition, then it should be honored.
>>>
>>> You have *mandatory* constraints for DLM -> CLVMd -> cluster-config ->
>>> libvirtd, but only an *optional* constraint for libvirtd -> VMs.
>>> Therefore, libvirtd will generally have to wait longer than the VMs to
>>> be started.
>>>
>>> It might help to add mandatory constraints for cluster-config -> VMs.
>>> That way, they have the same requirements as libvirtd, and are more
>>> likely to start in the same transition.
>>>
>>> However I'm sure there are still problematic situations. What you want
>>> is a simple idea, but a rather complex specification: "If rsc1 fails,
>>> block any instances of this other RA on the same node."
>>>
>>> It might be possible to come up with some node attribute magic to
>>> enforce this. You'd need some custom RAs. I imagine something like one
>>> RA that sets a node attribute, and another RA that checks it.
>>>
>>> The setter would be grouped with libvirtd. Anytime that libvirtd starts,
>>> the setter would set a node attribute on the local node. Anytime that
>>> libvirtd stopped or failed, the setter would unset the attribute value.
>>>
>>> The checker would simply monitor the attribute, and fail if the
>>> attribute is unset. The group would have on-fail=block. So anytime the
>>> the attribute was unset, the VM would not be started or stopped. (There
>>> would be no constraints between the two groups -- the checker RA would
>>> take the place of constraints.)
>> In how far would that behave differently to just putting libvirtd
>> into this on-fail=block group? (apart from of course the
>> possibility to group the vms into more than one group ...)
> You could stop or restart libvirtd without stopping the VMs. It would
> cause a "failure" of the checker that would need to be cleaned later,
> but the VMs wouldn't stop.

Ah, yes, I forgot about the manual restart case. I had already
turned that into a reload in my mind ;-)
As long as libvirtd is a systemd unit ... would a restart
via systemd create similar behavior?
But forget about it ... with pacemaker becoming able to receive
systemd events, we should probably not foster this
use case ;-)

>
>>> I haven't thought through all possible scenarios, but it seems feasible
>>> to me.
>>>
>>>> In my eyes, these services are bound by a HARD obvious colocation
>>>> constraint: VirtualDomain should never ever be touched in absence of
>>>> working libvirtd. Unfortunately, I cannot figure out a way to reflect
>>>> this constraint in the cluster.
>>>>
>>>>
>>>> -- 
>>>> Pavel Levshin


Re: [ClusterLabs] Colocation and ordering with live migration

2016-10-10 Thread Ken Gaillot
On 10/10/2016 10:21 AM, Klaus Wenninger wrote:
> On 10/10/2016 04:54 PM, Ken Gaillot wrote:
>> On 10/10/2016 07:36 AM, Pavel Levshin wrote:
>>> 10.10.2016 15:11, Klaus Wenninger:
>>>> On 10/10/2016 02:00 PM, Pavel Levshin wrote:
>>>>> 10.10.2016 14:32, Klaus Wenninger:
>>>>>> Why are the order-constraints between libvirt & vms optional?
>>>>> If they were mandatory, then all the virtual machines would be
>>>>> restarted when libvirtd restarts. This is not desired nor needed. When
>>>>> this happens, the node is fenced because it is unable to restart VM in
>>>>> absence of working libvirtd.
>>>> Was guessing something like that ...
>>>> So let me reformulate my question:
>>>>    Why does libvirtd have to be restarted?
>>>> If it is because of config-changes making it reloadable might be a
>>>> solution ...
>>>>
>>> Right, config changes come to my mind first of all. But sometimes a
>>> service, including libvirtd, may fail unexpectedly. In this case I would
>>> prefer to restart it without disturbing VirtualDomains, which will fail
>>> eternally.
>> I think the mandatory colocation of VMs with libvirtd negates your goal.
>> If libvirtd stops, the VMs will have to stop anyway because they can't
>> be colocated with libvirtd. Making the colocation optional should fix that.
>>
>>> The question is, why the cluster does not obey optional constraint, when
>>> both libvirtd and VM stop in a single transition?
>> If it truly is in the same transition, then it should be honored.
>>
>> You have *mandatory* constraints for DLM -> CLVMd -> cluster-config ->
>> libvirtd, but only an *optional* constraint for libvirtd -> VMs.
>> Therefore, libvirtd will generally have to wait longer than the VMs to
>> be started.
>>
>> It might help to add mandatory constraints for cluster-config -> VMs.
>> That way, they have the same requirements as libvirtd, and are more
>> likely to start in the same transition.
>>
>> However I'm sure there are still problematic situations. What you want
>> is a simple idea, but a rather complex specification: "If rsc1 fails,
>> block any instances of this other RA on the same node."
>>
>> It might be possible to come up with some node attribute magic to
>> enforce this. You'd need some custom RAs. I imagine something like one
>> RA that sets a node attribute, and another RA that checks it.
>>
>> The setter would be grouped with libvirtd. Anytime that libvirtd starts,
>> the setter would set a node attribute on the local node. Anytime that
>> libvirtd stopped or failed, the setter would unset the attribute value.
>>
>> The checker would simply monitor the attribute, and fail if the
>> attribute is unset. The group would have on-fail=block. So anytime the
>> the attribute was unset, the VM would not be started or stopped. (There
>> would be no constraints between the two groups -- the checker RA would
>> take the place of constraints.)
> 
> In how far would that behave differently to just putting libvirtd
> into this on-fail=block group? (apart from of course the
> possibility to group the vms into more than one group ...)

You could stop or restart libvirtd without stopping the VMs. It would
cause a "failure" of the checker that would need to be cleaned later,
but the VMs wouldn't stop.

>>
>> I haven't thought through all possible scenarios, but it seems feasible
>> to me.
>>
>>> In my eyes, these services are bound by a HARD obvious colocation
>>> constraint: VirtualDomain should never ever be touched in absence of
>>> working libvirtd. Unfortunately, I cannot figure out a way to reflect
>>> this constraint in the cluster.
>>>
>>>
>>> -- 
>>> Pavel Levshin



Re: [ClusterLabs] Colocation and ordering with live migration

2016-10-10 Thread Ken Gaillot
On 10/10/2016 07:36 AM, Pavel Levshin wrote:
> 10.10.2016 15:11, Klaus Wenninger:
>> On 10/10/2016 02:00 PM, Pavel Levshin wrote:
>>> 10.10.2016 14:32, Klaus Wenninger:
>>>> Why are the order-constraints between libvirt & vms optional?
>>> If they were mandatory, then all the virtual machines would be
>>> restarted when libvirtd restarts. This is not desired nor needed. When
>>> this happens, the node is fenced because it is unable to restart VM in
>>> absence of working libvirtd.
>> Was guessing something like that ...
>> So let me reformulate my question:
>>Why does libvirtd have to be restarted?
>> If it is because of config-changes making it reloadable might be a
>> solution ...
>>
> 
> Right, config changes come to my mind first of all. But sometimes a
> service, including libvirtd, may fail unexpectedly. In this case I would
> prefer to restart it without disturbing VirtualDomains, which will fail
> eternally.

I think the mandatory colocation of VMs with libvirtd negates your goal.
If libvirtd stops, the VMs will have to stop anyway because they can't
be colocated with libvirtd. Making the colocation optional should fix that.
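
For example, something like this (a sketch, untested) would make the colocation
advisory rather than mandatory by using a finite score instead of INFINITY:

  pcs constraint colocation add vm_smartbv1 with libvirtd-clone 500
  pcs constraint colocation add vm_smartbv2 with libvirtd-clone 500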

> The question is, why the cluster does not obey optional constraint, when
> both libvirtd and VM stop in a single transition?

If it truly is in the same transition, then it should be honored.

You have *mandatory* constraints for DLM -> CLVMd -> cluster-config ->
libvirtd, but only an *optional* constraint for libvirtd -> VMs.
Therefore, libvirtd will generally have to wait longer than the VMs to
be started.

It might help to add mandatory constraints for cluster-config -> VMs.
That way, they have the same requirements as libvirtd, and are more
likely to start in the same transition.

However I'm sure there are still problematic situations. What you want
is a simple idea, but a rather complex specification: "If rsc1 fails,
block any instances of this other RA on the same node."

It might be possible to come up with some node attribute magic to
enforce this. You'd need some custom RAs. I imagine something like one
RA that sets a node attribute, and another RA that checks it.

The setter would be grouped with libvirtd. Anytime that libvirtd starts,
the setter would set a node attribute on the local node. Anytime that
libvirtd stopped or failed, the setter would unset the attribute value.

The checker would simply monitor the attribute, and fail if the
attribute is unset. The group would have on-fail=block. So anytime the
attribute was unset, the VM would not be started or stopped. (There
would be no constraints between the two groups -- the checker RA would
take the place of constraints.)

I haven't thought through all possible scenarios, but it seems feasible
to me.
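
To make that concrete, the key actions of the two hypothetical agents could look
roughly like this (a sketch only; the attribute name, the helper names and the
OCF return-code variables are assumptions):

  # "Setter" agent, grouped with libvirtd: maintain a transient node attribute.
  setter_start() { attrd_updater -n libvirtd_alive -U 1; }
  setter_stop()  { attrd_updater -n libvirtd_alive -D;   }

  # "Checker" agent, grouped with the VM (the group has on-fail=block):
  # its monitor fails whenever the attribute is unset.
  checker_monitor() {
      # attrd_updater -Q prints something like: name="libvirtd_alive" host="node1" value="1"
      if attrd_updater -Q -n libvirtd_alive 2>/dev/null | grep -q 'value="1"'; then
          return $OCF_SUCCESS
      fi
      return $OCF_NOT_RUNNING
  }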

> In my eyes, these services are bound by a HARD obvious colocation
> constraint: VirtualDomain should never ever be touched in absence of
> working libvirtd. Unfortunately, I cannot figure out a way to reflect
> this constraint in the cluster.
> 
> 
> -- 
> Pavel Levshin



Re: [ClusterLabs] Colocation and ordering with live migration

2016-10-10 Thread Pavel Levshin

10.10.2016 15:11, Klaus Wenninger:

On 10/10/2016 02:00 PM, Pavel Levshin wrote:

10.10.2016 14:32, Klaus Wenninger:

Why are the order-constraints between libvirt & vms optional?

If they were mandatory, then all the virtual machines would be
restarted when libvirtd restarts. This is not desired nor needed. When
this happens, the node is fenced because it is unable to restart VM in
absence of working libvirtd.

Was guessing something like that ...
So let me reformulate my question:
   Why does libvirtd have to be restarted?
If it is because of config-changes making it reloadable might be a
solution ...



Right, config changes come to mind first of all. But sometimes a 
service, including libvirtd, may fail unexpectedly. In that case I would 
prefer to restart it without disturbing the VirtualDomains, which would 
otherwise keep failing forever.


The question is: why does the cluster not obey the optional constraint when 
both libvirtd and the VM stop in a single transition?


In my eyes, these services are bound by a HARD, obvious colocation 
constraint: a VirtualDomain should never ever be touched in the absence of 
a working libvirtd. Unfortunately, I cannot figure out a way to express 
this constraint in the cluster.



--
Pavel Levshin




Re: [ClusterLabs] Colocation and ordering with live migration

2016-10-10 Thread Klaus Wenninger
On 10/10/2016 02:00 PM, Pavel Levshin wrote:
>
> 10.10.2016 14:32, Klaus Wenninger:
>> Why are the order-constraints between libvirt & vms optional?
>
> If they were mandatory, then all the virtual machines would be
> restarted when libvirtd restarts. This is not desired nor needed. When
> this happens, the node is fenced because it is unable to restart VM in
> absence of working libvirtd.

I was guessing something like that ...
So let me reformulate my question:
  Why does libvirtd have to be restarted?
If it is because of config changes, making it reloadable might be a
solution ...

>
> You are right, when the constraint is mandatory, live migration works
> fine.
>
>
> -- 
> Pavel Levshin
>
>


Re: [ClusterLabs] Colocation and ordering with live migration

2016-10-10 Thread Pavel Levshin


10.10.2016 14:32, Klaus Wenninger:

Why are the order-constraints between libvirt & vms optional?


If they were mandatory, then all the virtual machines would be restarted 
when libvirtd restarts. This is neither desired nor needed. When this 
happens, the node is fenced because it is unable to restart a VM in the 
absence of a working libvirtd.


You are right: when the constraint is mandatory, live migration works fine.


--
Pavel Levshin




Re: [ClusterLabs] Colocation and ordering with live migration

2016-10-10 Thread Klaus Wenninger
On 10/10/2016 10:17 AM, Pavel Levshin wrote:
> Hello.
>
> We are trying to migrate our services to relatively fresh version of
> cluster software. It is RHEL 7 with pacemaker 1.1.13-10. I’ve faced a
> problem when live migration of virtual machines is allowed. In short,
> I need to manage libvirtd, and I cannot set proper ordering
> constraints to stop libvirtd only after all virtual machines are
> stopped. This leads to failing migration and STONITH when I try to
> shutdown one of the nodes.
>
> If live migration is disabled, then all works well.
Why are the order-constraints between libvirt & vms optional?
>
> Cluster config follows:
>
> ==
>
> Cluster Name: smartbvcluster
>
> Corosync Nodes:
>
> bvnode1 bvnode2
>
> Pacemaker Nodes:
>
> bvnode1 bvnode2
>
> Resources:
>
> Clone: dlm-clone
>
>   Meta Attrs: interleave=true ordered=true
>
>   Resource: dlm (class=ocf provider=pacemaker type=controld)
>
>Operations: start interval=0s timeout=90 (dlm-start-interval-0s)
>
>stop interval=0s timeout=100 (dlm-stop-interval-0s)
>
>monitor interval=30s (dlm-monitor-interval-30s)
>
> Clone: clvmd-clone
>
>   Meta Attrs: interleave=true ordered=true
>
>   Resource: clvmd (class=ocf provider=heartbeat type=clvm)
>
>Operations: start interval=0s timeout=90 (clvmd-start-interval-0s)
>
>stop interval=0s timeout=90 (clvmd-stop-interval-0s)
>
>monitor interval=30s (clvmd-monitor-interval-30s)
>
> Clone: cluster-config-clone
>
>   Meta Attrs: interleave=true
>
>   Resource: cluster-config (class=ocf provider=heartbeat type=Filesystem)
>
>Attributes: device=/dev/vg_bv_shared/cluster-config
> directory=/opt/cluster-config fstype=gfs2 options=noatime
>
>Operations: start interval=0s timeout=60
> (cluster-config-start-interval-0s)
>
>stop interval=0s timeout=60
> (cluster-config-stop-interval-0s)
>
>monitor interval=10s on-fail=fence OCF_CHECK_LEVEL=20
> (cluster-config-monitor-interval-10s)
>
> Resource: vm_smartbv1 (class=ocf provider=heartbeat type=VirtualDomain)
>
>   Attributes: config=/opt/cluster-config/libvirt/qemu/smartbv1.xml
> hypervisor=qemu:///system migration_transport=tcp
>
>   Meta Attrs: allow-migrate=true
>
>   Operations: start interval=0s timeout=90
> (vm_smartbv1-start-interval-0s)
>
>   stop interval=0s timeout=90 (vm_smartbv1-stop-interval-0s)
>
>   monitor interval=10 timeout=30
> (vm_smartbv1-monitor-interval-10)
>
> Resource: vm_smartbv2 (class=ocf provider=heartbeat type=VirtualDomain)
>
>   Attributes: config=/opt/cluster-config/libvirt/qemu/smartbv2.xml
> hypervisor=qemu:///system migration_transport=tcp
>
>   Meta Attrs: target-role=started allow-migrate=true
>
>   Operations: start interval=0s timeout=90
> (vm_smartbv2-start-interval-0s)
>
>   stop interval=0s timeout=90 (vm_smartbv2-stop-interval-0s)
>
>   monitor interval=10 timeout=30
> (vm_smartbv2-monitor-interval-10)
>
> Clone: libvirtd-clone
>
>   Meta Attrs: interleave=true
>
>   Resource: libvirtd (class=systemd type=libvirtd)
>
>Operations: monitor interval=60s (libvirtd-monitor-interval-60s)
>
> Stonith Devices:
>
> Resource: ilo.bvnode2 (class=stonith type=fence_ilo4)
>
>   Attributes: ipaddr=ilo.bvnode2 login=hacluster passwd=s
> pcmk_host_list=bvnode2 privlvl=operator
>
>   Operations: monitor interval=60s (ilo.bvnode2-monitor-interval-60s)
>
> Resource: ilo.bvnode1 (class=stonith type=fence_ilo4)
>
>   Attributes: ipaddr=ilo.bvnode1 login=hacluster passwd=s
> pcmk_host_list=bvnode1 privlvl=operator
>
>   Operations: monitor interval=60s (ilo.bvnode1-monitor-interval-60s)
>
> Fencing Levels:
>
> Node: bvnode1
>
>   Level 10 - ilo.bvnode1
>
> Node: bvnode2
>
>   Level 10 - ilo.bvnode2
>
> Location Constraints:
>
>   Resource: ilo.bvnode1
>
> Disabled on: bvnode1 (score:-INFINITY)
> (id:location-ilo.bvnode1-bvnode1--INFINITY)
>
>   Resource: ilo.bvnode2
>
> Disabled on: bvnode2 (score:-INFINITY)
> (id:location-ilo.bvnode2-bvnode2--INFINITY)
>
> Ordering Constraints:
>
>   start dlm-clone then start clvmd-clone (kind:Mandatory)
> (id:order-dlm-clone-clvmd-clone-mandatory)
>
>   start clvmd-clone then start cluster-config-clone (kind:Mandatory)
> (id:order-clvmd-clone-cluster-config-clone-mandatory)
>
>   start cluster-config-clone then start libvirtd-clone
> (kind:Mandatory) (id:order-cluster-config-clone-libvirtd-clone-mandatory)
>
>   start libvirtd-clone then start vm_smartbv2 (kind:Optional)
> (id:order-libvirtd-clone-vm_smartbv2-Optional)
>
>   start libvirtd-clone then start vm_smartbv1 (kind:Optional)
> (id:order-libvirtd-clone-vm_smartbv1-Optional)
>
> Colocation Constraints:
>
>   clvmd-clone with dlm-clone (score:INFINITY)
> (id:colocation-clvmd-clone-dlm-clone-INFINITY)
>
>   cluster-config-clone with clvmd-clone (score:INFINITY)
> (id:colocation-cluster-config-clone-clvmd-clone-INFINITY)
>
>   libvirtd-clone with cluster-config-clone (score:INFINITY)
>