Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-25 Thread Tutkowski, Mike
Thanks, Rafael – I should be able to take a look at this later today.

On 7/23/18, 6:58 AM, "Rafael Weingärtner"  wrote:

Hey Mike, PR created: https://github.com/apache/cloudstack/pull/2761
Can you take a look at it?

On Tue, Jul 17, 2018 at 4:35 PM, Tutkowski, Mike 
wrote:

> Correct, I happened to find it while testing a PR of mine targeted at
> master.
>
> > On Jul 17, 2018, at 1:30 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
> >
> > Correct. I do think the problem here is only in the release notes.
> >
> > Just to confirm, you found the problem while testing 4.12 (from master),
> > right?
> >
> > On Tue, Jul 17, 2018 at 4:22 PM, Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> > wrote:
> >
> >> Cool, if it’s just in master, then that makes it easier.
> >>
> >> Also, it means we did not have a process issue by introducing
> enhancement
> >> code in between release candidates.
> >>
> >> It would mean, however, that our documentation is a bit incorrect if, in
> >> fact, it states that that feature exists in 4.11.1.
> >>
> >>> On Jul 17, 2018, at 1:20 PM, Rafael Weingärtner <
> >> rafaelweingart...@gmail.com> wrote:
> >>>
> >>> Ok, thanks. I had the impression that we said it was backported to
> 4.11.
> >>>
> >>> I will get master and work on it then.
> >>>
> >>> On Tue, Jul 17, 2018 at 4:12 PM, Tutkowski, Mike <
> >> mike.tutkow...@netapp.com>
> >>> wrote:
> >>>
>  I only noticed it in master. The example code I was comparing it
> against
>  was from 4.11.0. I never checked against 4.11.1.
> 
> > On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner <
>  rafaelweingart...@gmail.com> wrote:
> >
> > Hey Mike, I got the branch 4.11 to start fixing the problem we
> >> discussed,
> > but I do not think my commit was backported to 4.11. I mean, I am at
> > "VirtualMachineManagerImpl" and the code is not here. I also checked
> >> the
> > commit (
> > https://github.com/apache/cloudstack/commit/
>  f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
> > that introduced those changes to master, and according to Github, it
> is
> > only in the master branch, and not in 4.11.
> >
> > I checked the "VirtualMachineManagerImpl" class at the Apache
> >> CloudStack
> > remote repository in the 4.11 branch, and as you can see, the code
> >> there
>  is
> > the “old”   one.
> > https://github.com/apache/cloudstack/blob/4.11/engine/
>  orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
> >
> > I got a little confused now. Did you detect the problem in 4.11 or in
> > master?
> >
> >
> > On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike <
>  mike.tutkow...@netapp.com
> >> wrote:
> >
> >> Another comment here: The part that is broken is if you try to let
> >> CloudStack pick the primary storage on the destination side. That
> code
>  no
> >> longer exists in 4.11.1.
> >>
> >> On 7/16/18, 9:24 PM, "Tutkowski, Mike" 
>  wrote:
> >>
> >>  To follow up on this a bit: Yes, you should be able to migrate a VM
> >> and its storage from one cluster to another today using non-managed
> >> (traditional) primary storage with XenServer (both the source and
> >> destination primary storages would be cluster scoped). However, that
> >> is
>  one
> >> of the features that was broken in 4.11.1 that we are discussing in
> >> this
> >> thread.
> >>
> >>  On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
> >> wrote:
> >>
> >>  For a bit of info on what managed storage is, please take a
> look
> >> at this document:
> >>
> >>  https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
> >> 20in%20CloudStack.docx?dl=0
> >>
> >>  The short answer is that you can have zone-wide managed storage
> >> (for XenServer, VMware, and KVM). However, there is no current
> >> zone-wide
> >> non-managed storage for XenServer.
> >>
> >>  On 7/16/18, 6:20 PM, "Yiping Zhang" 
> wrote:
> >>
> >>  I assume by "managed storage", you guys mean primary
>  storages,
> >> either zone -wide or cluster-wide.
> >>
> >>  For Xen hypervisor, ACS does not support "zone-wide"
> primary
> >> storage yet. Still, I can live migrate a VM with data disks between
> >> clusters with storage migration from web GUI, today.  So, your
> >> statement
> >> below does not reflect current behavior of the code.

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-23 Thread Rafael Weingärtner
Hey Mike, PR created: https://github.com/apache/cloudstack/pull/2761
Can you take a look at it?

On Tue, Jul 17, 2018 at 4:35 PM, Tutkowski, Mike 
wrote:

> Correct, I happened to find it while testing a PR of mine targeted at
> master.
>
> > On Jul 17, 2018, at 1:30 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
> >
> > Correct. I do think the problem here is only in the release notes.
> >
> > Just to confirm, you found the problem while testing 4.12 (from master),
> > right?
> >
> > On Tue, Jul 17, 2018 at 4:22 PM, Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> > wrote:
> >
> >> Cool, if it’s just in master, then that makes it easier.
> >>
> >> Also, it means we did not have a process issue by introducing
> enhancement
> >> code in between release candidates.
> >>
> >> It would mean, however, that our documentation is a bit incorrect if, in
> >> fact, it states that that feature exists in 4.11.1.
> >>
> >>> On Jul 17, 2018, at 1:20 PM, Rafael Weingärtner <
> >> rafaelweingart...@gmail.com> wrote:
> >>>
> >>> Ok, thanks. I had the impression that we said it was backported to
> 4.11.
> >>>
> >>> I will get master and work on it then.
> >>>
> >>> On Tue, Jul 17, 2018 at 4:12 PM, Tutkowski, Mike <
> >> mike.tutkow...@netapp.com>
> >>> wrote:
> >>>
>  I only noticed it in master. The example code I was comparing it
> against
>  was from 4.11.0. I never checked against 4.11.1.
> 
> > On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner <
>  rafaelweingart...@gmail.com> wrote:
> >
> > Hey Mike, I got the branch 4.11 to start fixing the problem we
> >> discussed,
> > but I do not think my commit was backported to 4.11. I mean, I am at
> > "VirtualMachineManagerImpl" and the code is not here. I also checked
> >> the
> > commit (
> > https://github.com/apache/cloudstack/commit/
>  f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
> > that introduced those changes to master, and according to Github, it
> is
> > only in the master branch, and not in 4.11.
> >
> > I checked the "VirtualMachineManagerImpl" class at the Apache
> >> CloudStack
> > remote repository in the 4.11 branch, and as you can see, the code
> >> there
>  is
> > the “old”   one.
> > https://github.com/apache/cloudstack/blob/4.11/engine/
>  orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
> >
> > I got a little confused now. Did you detect the problem in 4.11 or in
> > master?
> >
> >
> > On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike <
>  mike.tutkow...@netapp.com
> >> wrote:
> >
> >> Another comment here: The part that is broken is if you try to let
> >> CloudStack pick the primary storage on the destination side. That
> code
>  no
> >> longer exists in 4.11.1.
> >>
> >> On 7/16/18, 9:24 PM, "Tutkowski, Mike" 
>  wrote:
> >>
> >>  To follow up on this a bit: Yes, you should be able to migrate a VM
> >> and its storage from one cluster to another today using non-managed
> >> (traditional) primary storage with XenServer (both the source and
> >> destination primary storages would be cluster scoped). However, that
> >> is
>  one
> >> of the features that was broken in 4.11.1 that we are discussing in
> >> this
> >> thread.
> >>
> >>  On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
> >> wrote:
> >>
> >>  For a bit of info on what managed storage is, please take a
> look
> >> at this document:
> >>
> >>  https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
> >> 20in%20CloudStack.docx?dl=0
> >>
> >>  The short answer is that you can have zone-wide managed storage
> >> (for XenServer, VMware, and KVM). However, there is no current
> >> zone-wide
> >> non-managed storage for XenServer.
> >>
> >>  On 7/16/18, 6:20 PM, "Yiping Zhang" 
> wrote:
> >>
> >>  I assume by "managed storage", you guys mean primary
>  storages,
> >> either zone -wide or cluster-wide.
> >>
> >>  For Xen hypervisor, ACS does not support "zone-wide"
> primary
> >> storage yet. Still, I can live migrate a VM with data disks between
> >> clusters with storage migration from web GUI, today.  So, your
> >> statement
> >> below does not reflect current behavior of the code.
> >>
> >>
> >> - If I want to migrate a VM across clusters, but
>  if
> >> at least one of its
> >> volumes is placed in a cluster-wide managed
> >> storage, the migration is not
> >> allowed. Is that it?
> >>
> >>  [Mike] Correct
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >
> >
> > --
> > Rafael Weingärtner
> 
> >>>
> >>>
> >>>
> >>> --
> >>> Rafael Weingärtner
> >>
> >
> >
> >
> > --
> > Rafael Weingärtner
>



-- 
Rafael Weingärtner
Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Tutkowski, Mike
Correct, I happened to find it while testing a PR of mine targeted at master.

> On Jul 17, 2018, at 1:30 PM, Rafael Weingärtner  
> wrote:
> 
> Correct. I do think the problem here is only in the release notes.
> 
> Just to confirm, you found the problem while testing 4.12 (from master),
> right?
> 
> On Tue, Jul 17, 2018 at 4:22 PM, Tutkowski, Mike 
> wrote:
> 
>> Cool, if it’s just in master, then that makes it easier.
>> 
>> Also, it means we did not have a process issue by introducing enhancement
>> code in between release candidates.
>> 
>> It would mean, however, that our documentation is a bit incorrect if, in
>> fact, it states that that feature exists in 4.11.1.
>> 
>>> On Jul 17, 2018, at 1:20 PM, Rafael Weingärtner <
>> rafaelweingart...@gmail.com> wrote:
>>> 
>>> Ok, thanks. I had the impression that we said it was backported to 4.11.
>>> 
>>> I will get master and work on it then.
>>> 
>>> On Tue, Jul 17, 2018 at 4:12 PM, Tutkowski, Mike <
>> mike.tutkow...@netapp.com>
>>> wrote:
>>> 
 I only noticed it in master. The example code I was comparing it against
 was from 4.11.0. I never checked against 4.11.1.
 
> On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner <
 rafaelweingart...@gmail.com> wrote:
> 
> Hey Mike, I got the branch 4.11 to start fixing the problem we
>> discussed,
> but I do not think my commit was backported to 4.11. I mean, I am at
> "VirtualMachineManagerImpl" and the code is not here. I also checked
>> the
> commit (
> https://github.com/apache/cloudstack/commit/
 f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
> that introduced those changes to master, and according to Github, it is
> only in the master branch, and not in 4.11.
> 
> I checked the "VirtualMachineManagerImpl" class at the Apache
>> CloudStack
> remote repository in the 4.11 branch, and as you can see, the code
>> there
 is
> the “old”   one.
> https://github.com/apache/cloudstack/blob/4.11/engine/
 orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
> 
> I got a little confused now. Did you detect the problem in 4.11 or in
> master?
> 
> 
> On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike <
 mike.tutkow...@netapp.com
>> wrote:
> 
>> Another comment here: The part that is broken is if you try to let
>> CloudStack pick the primary storage on the destination side. That code
 no
>> longer exists in 4.11.1.
>> 
>> On 7/16/18, 9:24 PM, "Tutkowski, Mike" 
 wrote:
>> 
>>  To follow up on this a bit: Yes, you should be able to migrate a VM
>> and its storage from one cluster to another today using non-managed
>> (traditional) primary storage with XenServer (both the source and
>> destination primary storages would be cluster scoped). However, that
>> is
 one
>> of the features that was broken in 4.11.1 that we are discussing in
>> this
>> thread.
>> 
>>  On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
>> wrote:
>> 
>>  For a bit of info on what managed storage is, please take a look
>> at this document:
>> 
>>  https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
>> 20in%20CloudStack.docx?dl=0
>> 
>>  The short answer is that you can have zone-wide managed storage
>> (for XenServer, VMware, and KVM). However, there is no current
>> zone-wide
>> non-managed storage for XenServer.
>> 
>>  On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:
>> 
>>  I assume by "managed storage", you guys mean primary
 storages,
>> either zone -wide or cluster-wide.
>> 
>>  For Xen hypervisor, ACS does not support "zone-wide" primary
>> storage yet. Still, I can live migrate a VM with data disks between
>> clusters with storage migration from web GUI, today.  So, your
>> statement
>> below does not reflect current behavior of the code.
>> 
>> 
>> - If I want to migrate a VM across clusters, but
 if
>> at least one of its
>> volumes is placed in a cluster-wide managed
>> storage, the migration is not
>> allowed. Is that it?
>> 
>>  [Mike] Correct
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 
> 
> --
> Rafael Weingärtner
 
>>> 
>>> 
>>> 
>>> --
>>> Rafael Weingärtner
>> 
> 
> 
> 
> -- 
> Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Rafael Weingärtner
Correct. I do think the problem here is only in the release notes.

Just to confirm, you found the problem while testing 4.12 (from master),
right?

On Tue, Jul 17, 2018 at 4:22 PM, Tutkowski, Mike 
wrote:

> Cool, if it’s just in master, then that makes it easier.
>
> Also, it means we did not have a process issue by introducing enhancement
> code in between release candidates.
>
> It would mean, however, that our documentation is a bit incorrect if, in
> fact, it states that that feature exists in 4.11.1.
>
> > On Jul 17, 2018, at 1:20 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
> >
> > Ok, thanks. I had the impression that we said it was backported to 4.11.
> >
> > I will get master and work on it then.
> >
> > On Tue, Jul 17, 2018 at 4:12 PM, Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> > wrote:
> >
> >> I only noticed it in master. The example code I was comparing it against
> >> was from 4.11.0. I never checked against 4.11.1.
> >>
> >>> On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner <
> >> rafaelweingart...@gmail.com> wrote:
> >>>
> >>> Hey Mike, I got the branch 4.11 to start fixing the problem we
> discussed,
> >>> but I do not think my commit was backported to 4.11. I mean, I am at
> >>> "VirtualMachineManagerImpl" and the code is not here. I also checked
> the
> >>> commit (
> >>> https://github.com/apache/cloudstack/commit/
> >> f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
> >>> that introduced those changes to master, and according to Github, it is
> >>> only in the master branch, and not in 4.11.
> >>>
> >>> I checked the "VirtualMachineManagerImpl" class at the Apache
> CloudStack
> >>> remote repository in the 4.11 branch, and as you can see, the code
> there
> >> is
> >>> the “old”   one.
> >>> https://github.com/apache/cloudstack/blob/4.11/engine/
> >> orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
> >>>
> >>> I got a little confused now. Did you detect the problem in 4.11 or in
> >>> master?
> >>>
> >>>
> >>> On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike <
> >> mike.tutkow...@netapp.com
>  wrote:
> >>>
>  Another comment here: The part that is broken is if you try to let
>  CloudStack pick the primary storage on the destination side. That code
> >> no
>  longer exists in 4.11.1.
> 
>  On 7/16/18, 9:24 PM, "Tutkowski, Mike" 
> >> wrote:
> 
>    To follow up on this a bit: Yes, you should be able to migrate a VM
>  and its storage from one cluster to another today using non-managed
>  (traditional) primary storage with XenServer (both the source and
>  destination primary storages would be cluster scoped). However, that
> is
> >> one
>  of the features that was broken in 4.11.1 that we are discussing in
> this
>  thread.
> 
>    On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
>  wrote:
> 
>    For a bit of info on what managed storage is, please take a look
>  at this document:
> 
>    https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
>  20in%20CloudStack.docx?dl=0
> 
>    The short answer is that you can have zone-wide managed storage
>  (for XenServer, VMware, and KVM). However, there is no current
> zone-wide
>  non-managed storage for XenServer.
> 
>    On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:
> 
>    I assume by "managed storage", you guys mean primary
> >> storages,
>  either zone -wide or cluster-wide.
> 
>    For Xen hypervisor, ACS does not support "zone-wide" primary
>  storage yet. Still, I can live migrate a VM with data disks between
>  clusters with storage migration from web GUI, today.  So, your
> statement
>  below does not reflect current behavior of the code.
> 
> 
>   - If I want to migrate a VM across clusters, but
> >> if
>  at least one of its
>   volumes is placed in a cluster-wide managed
>  storage, the migration is not
>   allowed. Is that it?
> 
>    [Mike] Correct
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> >>>
> >>>
> >>> --
> >>> Rafael Weingärtner
> >>
> >
> >
> >
> > --
> > Rafael Weingärtner
>



-- 
Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Tutkowski, Mike
Cool, if it’s just in master, then that makes it easier.

Also, it means we did not have a process issue by introducing enhancement code 
in between release candidates.

It would mean, however, that our documentation is a bit incorrect if, in fact, 
it states that that feature exists in 4.11.1.

> On Jul 17, 2018, at 1:20 PM, Rafael Weingärtner  
> wrote:
> 
> Ok, thanks. I had the impression that we said it was backported to 4.11.
> 
> I will get master and work on it then.
> 
> On Tue, Jul 17, 2018 at 4:12 PM, Tutkowski, Mike 
> wrote:
> 
>> I only noticed it in master. The example code I was comparing it against
>> was from 4.11.0. I never checked against 4.11.1.
>> 
>>> On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner <
>> rafaelweingart...@gmail.com> wrote:
>>> 
>>> Hey Mike, I got the branch 4.11 to start fixing the problem we discussed,
>>> but I do not think my commit was backported to 4.11. I mean, I am at
>>> "VirtualMachineManagerImpl" and the code is not here. I also checked the
>>> commit (
>>> https://github.com/apache/cloudstack/commit/
>> f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
>>> that introduced those changes to master, and according to Github, it is
>>> only in the master branch, and not in 4.11.
>>> 
>>> I checked the "VirtualMachineManagerImpl" class at the Apache CloudStack
>>> remote repository in the 4.11 branch, and as you can see, the code there
>> is
>>> the “old”   one.
>>> https://github.com/apache/cloudstack/blob/4.11/engine/
>> orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
>>> 
>>> I got a little confused now. Did you detect the problem in 4.11 or in
>>> master?
>>> 
>>> 
>>> On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike <
>> mike.tutkow...@netapp.com
 wrote:
>>> 
 Another comment here: The part that is broken is if you try to let
 CloudStack pick the primary storage on the destination side. That code
>> no
 longer exists in 4.11.1.
 
 On 7/16/18, 9:24 PM, "Tutkowski, Mike" 
>> wrote:
 
   To follow up on this a bit: Yes, you should be able to migrate a VM
 and its storage from one cluster to another today using non-managed
 (traditional) primary storage with XenServer (both the source and
 destination primary storages would be cluster scoped). However, that is
>> one
 of the features that was broken in 4.11.1 that we are discussing in this
 thread.
 
   On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
 wrote:
 
   For a bit of info on what managed storage is, please take a look
 at this document:
 
   https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
 20in%20CloudStack.docx?dl=0
 
   The short answer is that you can have zone-wide managed storage
 (for XenServer, VMware, and KVM). However, there is no current zone-wide
 non-managed storage for XenServer.
 
   On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:
 
   I assume by "managed storage", you guys mean primary
>> storages,
 either zone -wide or cluster-wide.
 
   For Xen hypervisor, ACS does not support "zone-wide" primary
 storage yet. Still, I can live migrate a VM with data disks between
 clusters with storage migration from web GUI, today.  So, your statement
 below does not reflect current behavior of the code.
 
 
  - If I want to migrate a VM across clusters, but
>> if
 at least one of its
  volumes is placed in a cluster-wide managed
 storage, the migration is not
  allowed. Is that it?
 
   [Mike] Correct
 
 
 
 
 
 
 
 
 
 
 
>>> 
>>> 
>>> --
>>> Rafael Weingärtner
>> 
> 
> 
> 
> -- 
> Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Rafael Weingärtner
Ok, thanks. I had the impression that we said it was backported to 4.11.

I will get master and work on it then.

On Tue, Jul 17, 2018 at 4:12 PM, Tutkowski, Mike 
wrote:

> I only noticed it in master. The example code I was comparing it against
> was from 4.11.0. I never checked against 4.11.1.
>
> > On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner <
> rafaelweingart...@gmail.com> wrote:
> >
> > Hey Mike, I got the branch 4.11 to start fixing the problem we discussed,
> > but I do not think my commit was backported to 4.11. I mean, I am at
> > "VirtualMachineManagerImpl" and the code is not here. I also checked the
> > commit (
> > https://github.com/apache/cloudstack/commit/
> f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
> > that introduced those changes to master, and according to Github, it is
> > only in the master branch, and not in 4.11.
> >
> > I checked the "VirtualMachineManagerImpl" class at the Apache CloudStack
> > remote repository in the 4.11 branch, and as you can see, the code there
> is
> > the “old”   one.
> > https://github.com/apache/cloudstack/blob/4.11/engine/
> orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
> >
> > I got a little confused now. Did you detect the problem in 4.11 or in
> > master?
> >
> >
> > On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike <
> mike.tutkow...@netapp.com
> >> wrote:
> >
> >> Another comment here: The part that is broken is if you try to let
> >> CloudStack pick the primary storage on the destination side. That code
> no
> >> longer exists in 4.11.1.
> >>
> >> On 7/16/18, 9:24 PM, "Tutkowski, Mike" 
> wrote:
> >>
> >>To follow up on this a bit: Yes, you should be able to migrate a VM
> >> and its storage from one cluster to another today using non-managed
> >> (traditional) primary storage with XenServer (both the source and
> >> destination primary storages would be cluster scoped). However, that is
> one
> >> of the features that was broken in 4.11.1 that we are discussing in this
> >> thread.
> >>
> >>On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
> >> wrote:
> >>
> >>For a bit of info on what managed storage is, please take a look
> >> at this document:
> >>
> >>https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
> >> 20in%20CloudStack.docx?dl=0
> >>
> >>The short answer is that you can have zone-wide managed storage
> >> (for XenServer, VMware, and KVM). However, there is no current zone-wide
> >> non-managed storage for XenServer.
> >>
> >>On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:
> >>
> >>I assume by "managed storage", you guys mean primary
> storages,
> >> either zone -wide or cluster-wide.
> >>
> >>For Xen hypervisor, ACS does not support "zone-wide" primary
> >> storage yet. Still, I can live migrate a VM with data disks between
> >> clusters with storage migration from web GUI, today.  So, your statement
> >> below does not reflect current behavior of the code.
> >>
> >>
> >>   - If I want to migrate a VM across clusters, but
> if
> >> at least one of its
> >>   volumes is placed in a cluster-wide managed
> >> storage, the migration is not
> >>   allowed. Is that it?
> >>
> >>[Mike] Correct
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >
> >
> > --
> > Rafael Weingärtner
>



-- 
Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Yiping Zhang
Hi, Mike, Rafael:

Thanks for the clarification of what "managed storage" is and working on fixing 
the broken bits.

Yiping

On 7/16/18, 8:28 PM, "Tutkowski, Mike"  wrote:

Another comment here: The part that is broken is if you try to let 
CloudStack pick the primary storage on the destination side. That code no 
longer exists in 4.11.1.

On 7/16/18, 9:24 PM, "Tutkowski, Mike"  wrote:

To follow up on this a bit: Yes, you should be able to migrate a VM and 
its storage from one cluster to another today using non-managed (traditional) 
primary storage with XenServer (both the source and destination primary 
storages would be cluster scoped). However, that is one of the features that 
was broken in 4.11.1 that we are discussing in this thread.

On 7/16/18, 9:20 PM, "Tutkowski, Mike"  
wrote:

For a bit of info on what managed storage is, please take a look at 
this document:


https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%20in%20CloudStack.docx?dl=0

The short answer is that you can have zone-wide managed storage 
(for XenServer, VMware, and KVM). However, there is no current zone-wide 
non-managed storage for XenServer.

On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:

I assume by "managed storage", you guys mean primary storages, 
either zone -wide or cluster-wide.

For Xen hypervisor, ACS does not support "zone-wide" primary 
storage yet. Still, I can live migrate a VM with data disks between clusters 
with storage migration from web GUI, today.  So, your statement below does not 
reflect current behavior of the code.


   - If I want to migrate a VM across clusters, but if 
at least one of its
   volumes is placed in a cluster-wide managed storage, 
the migration is not
   allowed. Is that it?

[Mike] Correct














Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Tutkowski, Mike
I only noticed it in master. The example code I was comparing it against was 
from 4.11.0. I never checked against 4.11.1.

> On Jul 17, 2018, at 1:02 PM, Rafael Weingärtner  
> wrote:
> 
> Hey Mike, I got the branch 4.11 to start fixing the problem we discussed,
> but I do not think my commit was backported to 4.11. I mean, I am at
> "VirtualMachineManagerImpl" and the code is not here. I also checked the
> commit (
> https://github.com/apache/cloudstack/commit/f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
> that introduced those changes to master, and according to Github, it is
> only in the master branch, and not in 4.11.
> 
> I checked the "VirtualMachineManagerImpl" class at the Apache CloudStack
> remote repository in the 4.11 branch, and as you can see, the code there is
> the “old”   one.
> https://github.com/apache/cloudstack/blob/4.11/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
> 
> I got a little confused now. Did you detect the problem in 4.11 or in
> master?
> 
> 
> On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike > wrote:
> 
>> Another comment here: The part that is broken is if you try to let
>> CloudStack pick the primary storage on the destination side. That code no
>> longer exists in 4.11.1.
>> 
>> On 7/16/18, 9:24 PM, "Tutkowski, Mike"  wrote:
>> 
>>To follow up on this a bit: Yes, you should be able to migrate a VM
>> and its storage from one cluster to another today using non-managed
>> (traditional) primary storage with XenServer (both the source and
>> destination primary storages would be cluster scoped). However, that is one
>> of the features that was broken in 4.11.1 that we are discussing in this
>> thread.
>> 
>>On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
>> wrote:
>> 
>>For a bit of info on what managed storage is, please take a look
>> at this document:
>> 
>>https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
>> 20in%20CloudStack.docx?dl=0
>> 
>>The short answer is that you can have zone-wide managed storage
>> (for XenServer, VMware, and KVM). However, there is no current zone-wide
>> non-managed storage for XenServer.
>> 
>>On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:
>> 
>>I assume by "managed storage", you guys mean primary storages,
>> either zone -wide or cluster-wide.
>> 
>>For Xen hypervisor, ACS does not support "zone-wide" primary
>> storage yet. Still, I can live migrate a VM with data disks between
>> clusters with storage migration from web GUI, today.  So, your statement
>> below does not reflect current behavior of the code.
>> 
>> 
>>   - If I want to migrate a VM across clusters, but if
>> at least one of its
>>   volumes is placed in a cluster-wide managed
>> storage, the migration is not
>>   allowed. Is that it?
>> 
>>[Mike] Correct
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 
> 
> -- 
> Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-17 Thread Rafael Weingärtner
Hey Mike, I got the branch 4.11 to start fixing the problem we discussed,
but I do not think my commit was backported to 4.11. I mean, I am at
"VirtualMachineManagerImpl" and the code is not here. I also checked the
commit (
https://github.com/apache/cloudstack/commit/f2efbcececb3cfb06a51e5d3a2e77417c19c667f)
that introduced those changes to master, and according to Github, it is
only in the master branch, and not in 4.11.

I checked the "VirtualMachineManagerImpl" class at the Apache CloudStack
remote repository in the 4.11 branch, and as you can see, the code there is
the “old” one.
https://github.com/apache/cloudstack/blob/4.11/engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java

I got a little confused now. Did you detect the problem in 4.11 or in
master?


On Tue, Jul 17, 2018 at 12:27 AM, Tutkowski, Mike  wrote:

> Another comment here: The part that is broken is if you try to let
> CloudStack pick the primary storage on the destination side. That code no
> longer exists in 4.11.1.
>
> On 7/16/18, 9:24 PM, "Tutkowski, Mike"  wrote:
>
> To follow up on this a bit: Yes, you should be able to migrate a VM
> and its storage from one cluster to another today using non-managed
> (traditional) primary storage with XenServer (both the source and
> destination primary storages would be cluster scoped). However, that is one
> of the features that was broken in 4.11.1 that we are discussing in this
> thread.
>
> On 7/16/18, 9:20 PM, "Tutkowski, Mike" 
> wrote:
>
> For a bit of info on what managed storage is, please take a look
> at this document:
>
> https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%
> 20in%20CloudStack.docx?dl=0
>
> The short answer is that you can have zone-wide managed storage
> (for XenServer, VMware, and KVM). However, there is no current zone-wide
> non-managed storage for XenServer.
>
> On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:
>
> I assume by "managed storage", you guys mean primary storages,
> either zone -wide or cluster-wide.
>
> For Xen hypervisor, ACS does not support "zone-wide" primary
> storage yet. Still, I can live migrate a VM with data disks between
> clusters with storage migration from web GUI, today.  So, your statement
> below does not reflect current behavior of the code.
>
>
>- If I want to migrate a VM across clusters, but if
> at least one of its
>volumes is placed in a cluster-wide managed
> storage, the migration is not
>allowed. Is that it?
>
> [Mike] Correct
>
>
>
>
>
>
>
>
>
>
>


-- 
Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
Another comment here: The part that is broken is if you try to let CloudStack 
pick the primary storage on the destination side. That code no longer exists in 
4.11.1.

On 7/16/18, 9:24 PM, "Tutkowski, Mike"  wrote:

To follow up on this a bit: Yes, you should be able to migrate a VM and its 
storage from one cluster to another today using non-managed (traditional) 
primary storage with XenServer (both the source and destination primary 
storages would be cluster scoped). However, that is one of the features that 
was broken in 4.11.1 that we are discussing in this thread.

On 7/16/18, 9:20 PM, "Tutkowski, Mike"  wrote:

For a bit of info on what managed storage is, please take a look at 
this document:


https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%20in%20CloudStack.docx?dl=0

The short answer is that you can have zone-wide managed storage (for 
XenServer, VMware, and KVM). However, there is no current zone-wide non-managed 
storage for XenServer.

On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:

I assume by "managed storage", you guys mean primary storages, 
either zone -wide or cluster-wide.

For Xen hypervisor, ACS does not support "zone-wide" primary 
storage yet. Still, I can live migrate a VM with data disks between clusters 
with storage migration from web GUI, today.  So, your statement below does not 
reflect current behavior of the code.


   - If I want to migrate a VM across clusters, but if at 
least one of its
   volumes is placed in a cluster-wide managed storage, the 
migration is not
   allowed. Is that it?

[Mike] Correct












Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
To follow up on this a bit: Yes, you should be able to migrate a VM and its 
storage from one cluster to another today using non-managed (traditional) 
primary storage with XenServer (both the source and destination primary 
storages would be cluster scoped). However, that is one of the features that 
was broken in 4.11.1 that we are discussing in this thread.

On 7/16/18, 9:20 PM, "Tutkowski, Mike"  wrote:

For a bit of info on what managed storage is, please take a look at this 
document:


https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%20in%20CloudStack.docx?dl=0

The short answer is that you can have zone-wide managed storage (for 
XenServer, VMware, and KVM). However, there is no current zone-wide non-managed 
storage for XenServer.

On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:

I assume by "managed storage", you guys mean primary storages, either 
zone -wide or cluster-wide.

For Xen hypervisor, ACS does not support "zone-wide" primary storage 
yet. Still, I can live migrate a VM with data disks between clusters with 
storage migration from web GUI, today.  So, your statement below does not 
reflect current behavior of the code.


   - If I want to migrate a VM across clusters, but if at least 
one of its
   volumes is placed in a cluster-wide managed storage, the 
migration is not
   allowed. Is that it?

[Mike] Correct










Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
For a bit of info on what managed storage is, please take a look at this 
document:

https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%20in%20CloudStack.docx?dl=0

The short answer is that you can have zone-wide managed storage (for XenServer, 
VMware, and KVM). However, there is no current zone-wide non-managed storage 
for XenServer.

On 7/16/18, 6:20 PM, "Yiping Zhang"  wrote:

I assume by "managed storage", you guys mean primary storages, either zone 
-wide or cluster-wide.

For Xen hypervisor, ACS does not support "zone-wide" primary storage yet. 
Still, I can live migrate a VM with data disks between clusters with storage 
migration from web GUI, today.  So, your statement below does not reflect 
current behavior of the code.


   - If I want to migrate a VM across clusters, but if at least one 
of its
   volumes is placed in a cluster-wide managed storage, the 
migration is not
   allowed. Is that it?

[Mike] Correct








Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Yiping Zhang
I assume by "managed storage", you guys mean primary storages, either zone-wide 
or cluster-wide.

For Xen hypervisor, ACS does not support "zone-wide" primary storage yet. 
Still, I can live migrate a VM with data disks between clusters with storage 
migration from web GUI, today.  So, your statement below does not reflect 
current behavior of the code.


   - If I want to migrate a VM across clusters, but if at least one of 
its
   volumes is placed in a cluster-wide managed storage, the migration 
is not
   allowed. Is that it?

[Mike] Correct






Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Rafael Weingärtner
Awesome! Thanks for your inputs. I will work on them, and as soon as I have
something I will ping you.

On Mon, Jul 16, 2018 at 8:26 PM, Tutkowski, Mike 
wrote:

> Yeah, I just meant that was a workaround. As you pointed out, that
> workaround doesn’t make use of the migrateVirtualMachineWithVolume API
> command, though.
>
> On 7/16/18, 5:23 PM, "Rafael Weingärtner" 
> wrote:
>
> Thanks for the answers Mike. I will not be able to do it today, but I
> will
> manage to do it this week. There is only one last doubt.
>
> [Mike] At least for KVM, you can shut the VM down and perform an
> offline
> migration
> of the volume from managed storage to non-managed storage. It’s
> possible we
> may
> support such a similar behavior with other hypervisor types in the
> future.
>
> [Rafael] I guess that we can shut down XenServer VMs and then migrate
> the
> volumes later, right? However, the method in question here
> (migrateVirtualMachineWithVolume) is not supposed to execute such
> steps, is
> it?
>
>
> On Mon, Jul 16, 2018 at 8:17 PM, Tutkowski, Mike <
> mike.tutkow...@netapp.com>
> wrote:
>
> >- So, managed storage can be cluster and zone wide. Is that
> correct?
> >
> > [Mike] Correct
> >
> >- If I want to migrate a VM across clusters, but if at least
> one of
> > its
> >volumes is placed in a cluster-wide managed storage, the
> migration
> > is not
> >allowed. Is that it?
> >
> > [Mike] Correct
> >
> >- A volume placed in managed storage can never (at least not
> using
> > this
> >migrateWithVolume method) be migrated out of the storage pool
> it
> > resides.
> >is this statement right? Do you have alternative/other
> execution
> > flow
> >regarding this scenario?
> >
> > [Mike] At least for KVM, you can shut the VM down and perform an
> offline
> > migration
> > of the volume from managed storage to non-managed storage. It’s
> possible
> > we may
> > support such a similar behavior with other hypervisor types in the
> future.
> >
> >- When migrating a VM that does not have volumes in managed
> > storage, it
> >should be possible to migrate it cross clusters. Therefore, we
> > should try
> >to use the volume allocators to find a suitable storage pool
> for its
> >volumes in the target cluster
> >
> > [Mike] It’s OK here if one or more of the volumes is on managed
> storage.
> > The “trick” is
> > that it needs to be on zone-wide managed storage that is visible to
> both
> > the source and
> > destination compute clusters. You cannot specify a new storage pool
> for
> > any of these volumes
> > (each must remain on its current, zone-wide primary storage).
> >
> > If you can add these new constraints into the code, I can review them
> > later. I’m a bit
> > pressed for time this week, so it might not be possible to do so
> right
> > away. Thanks!
> >
> > On 7/16/18, 3:52 PM, "Rafael Weingärtner" <
> rafaelweingart...@gmail.com>
> > wrote:
> >
> > Thanks for your feedback Mike. I actually did not want to change
> this
> > “migrateVirtualMachineWithVolume” API method. Everything
> started when
> > we
> > wanted to create a feature to allow volume placement overrides.
> This
> > means,
> > allowing root admins to place/migrate the volume to a storage
> pool that
> > might not be “allowed” (according to its current disk offering).
> This
> > feature was later expanded to allow changing the disk offering
> while
> > executing a storage migration (this means allowing changes on
> volume’s
> > QoS). Thus, creating a mechanism within ACS to allow disk
> offerings
> > replacement (as opposed to DB intervention, which was the way it
> was
> > being
> > done so far). The rationale behind these extensions/enhancement
> is
> > that the
> > root admins are wise/experts (at least we expect them to be).
> > Therefore,
> > they know what they are doing when overriding or replacing a disk
> > offering
> > of a user.
> >
> > So, why am I changing this “migrateVirtualMachineWithVolume” API
> > method?
> > When we allowed that override procedure, it broke the migration
> of VMs
> > that
> > had volumes initially placed in NFS and then replaced (via
> override) in
> > local storage. It had something to do with the way ACS was
> detecting
> > if the
> > VM has a local storage. Then, when I went to the method to fix
> it; it
> > was
> very convoluted to read and understand. Therefore, I re-wrote, and I missed
> your use case. I am sorry for that. Moreover, I do intend to keep with
>   

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
Yeah, I just meant that was a workaround. As you pointed out, that workaround 
doesn’t make use of the migrateVirtualMachineWithVolume API command, though.

On 7/16/18, 5:23 PM, "Rafael Weingärtner"  wrote:

Thanks for the answers Mike. I will not be able to do it today, but I will
manage to do it this week. There is only one last doubt.

[Mike] At least for KVM, you can shut the VM down and perform an offline
migration
of the volume from managed storage to non-managed storage. It’s possible we
may
support such a similar behavior with other hypervisor types in the future.

[Rafael] I guess that we can shut down XenServer VMs and then migrate the
volumes later, right? However, the method in question here
(migrateVirtualMachineWithVolume) is not supposed to execute such steps, is
it?


On Mon, Jul 16, 2018 at 8:17 PM, Tutkowski, Mike 
wrote:

>- So, managed storage can be cluster and zone wide. Is that correct?
>
> [Mike] Correct
>
>- If I want to migrate a VM across clusters, but if at least one of
> its
>volumes is placed in a cluster-wide managed storage, the migration
> is not
>allowed. Is that it?
>
> [Mike] Correct
>
>- A volume placed in managed storage can never (at least not using
> this
>migrateWithVolume method) be migrated out of the storage pool it
> resides.
>is this statement right? Do you have alternative/other execution
> flow
>regarding this scenario?
>
> [Mike] At least for KVM, you can shut the VM down and perform an offline
> migration
> of the volume from managed storage to non-managed storage. It’s possible
> we may
> support such a similar behavior with other hypervisor types in the future.
>
>- When migrating a VM that does not have volumes in managed
> storage, it
>should be possible to migrate it cross clusters. Therefore, we
> should try
>to use the volume allocators to find a suitable storage pool for its
>volumes in the target cluster
>
> [Mike] It’s OK here if one or more of the volumes is on managed storage.
> The “trick” is
> that it needs to be on zone-wide managed storage that is visible to both
> the source and
> destination compute clusters. You cannot specify a new storage pool for
> any of these volumes
> (each must remain on its current, zone-wide primary storage).
>
> If you can add these new constraints into the code, I can review them
> later. I’m a bit
> pressed for time this week, so it might not be possible to do so right
> away. Thanks!
>
> On 7/16/18, 3:52 PM, "Rafael Weingärtner" 
> wrote:
>
> Thanks for your feedback Mike. I actually did not want to change this
> “migrateVirtualMachineWithVolume” API method. Everything started when
> we
> wanted to create a feature to allow volume placement overrides. This
> means,
> allowing root admins to place/migrate the volume to a storage pool that
> might not be “allowed” (according to its current disk offering). This
> feature was later expanded to allow changing the disk offering while
> executing a storage migration (this means allowing changes on volume’s
> QoS). Thus, creating a mechanism within ACS to allow disk offerings
> replacement (as opposed to DB intervention, which was the way it was
> being
> done so far). The rationale behind these extensions/enhancement is
> that the
> root admins are wise/experts (at least we expect them to be).
> Therefore,
> they know what they are doing when overriding or replacing a disk
> offering
> of a user.
>
> So, why am I changing this “migrateVirtualMachineWithVolume” API
> method?
> When we allowed that override procedure, it broke the migration of VMs
> that
> had volumes initially placed in NFS and then replaced (via override) in
> local storage. It had something to do with the way ACS was detecting
> if the
> VM has a local storage. Then, when I went to the method to fix it; it
> was
> very convoluted to read and understand. Therefore, I re-wrote, and I
> missed
> your use case. I am sorry for that. Moreover, I do intend to keep with
> the
> current code, as we already have other features developed on top of
> it, and
> this code is well documented and unit tested. It is only a matter of
> adding
> your requirement there.
>
> Now, let’s fix the problem. I will not point code here. I only want to
> understand the idea for now.
>
>- So, managed storage can be cluster and zone wide. Is that correct?
>- If I want to migrate 

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Rafael Weingärtner
Thanks for the answers Mike. I will not be able to do it today, but I will
manage to do it this week. There is only one last doubt.

[Mike] At least for KVM, you can shut the VM down and perform an offline
migration
of the volume from managed storage to non-managed storage. It’s possible we
may
support such a similar behavior with other hypervisor types in the future.

[Rafael] I guess that we can shut down XenServer VMs and then migrate the
volumes later, right? However, the method in question here
(migrateVirtualMachineWithVolume) is not supposed to execute such steps, is
it?


On Mon, Jul 16, 2018 at 8:17 PM, Tutkowski, Mike 
wrote:

>- So, managed storage can be cluster and zone wide. Is that correct?
>
> [Mike] Correct
>
>- If I want to migrate a VM across clusters, but if at least one of
> its
>volumes is placed in a cluster-wide managed storage, the migration
> is not
>allowed. Is that it?
>
> [Mike] Correct
>
>- A volume placed in managed storage can never (at least not using
> this
>migrateWithVolume method) be migrated out of the storage pool it
> resides.
>is this statement right? Do you have alternative/other execution
> flow
>regarding this scenario?
>
> [Mike] At least for KVM, you can shut the VM down and perform an offline
> migration
> of the volume from managed storage to non-managed storage. It’s possible
> we may
> support such a similar behavior with other hypervisor types in the future.
>
>- When migrating a VM that does not have volumes in managed
> storage, it
>should be possible to migrate it cross clusters. Therefore, we
> should try
>to use the volume allocators to find a suitable storage pool for its
>volumes in the target cluster
>
> [Mike] It’s OK here if one or more of the volumes is on managed storage.
> The “trick” is
> that it needs to be on zone-wide managed storage that is visible to both
> the source and
> destination compute clusters. You cannot specify a new storage pool for
> any of these volumes
> (each must remain on its current, zone-wide primary storage).
>
> If you can add these new constraints into the code, I can review them
> later. I’m a bit
> pressed for time this week, so it might not be possible to do so right
> away. Thanks!
>
> On 7/16/18, 3:52 PM, "Rafael Weingärtner" 
> wrote:
>
> Thanks for your feedback Mike. I actually did not want to change this
> “migrateVirtualMachineWithVolume” API method. Everything started when
> we
> wanted to create a feature to allow volume placement overrides. This
> means,
> allowing root admins to place/migrate the volume to a storage pool that
> might not be “allowed” (according to its current disk offering). This
> feature was later expanded to allow changing the disk offering while
> executing a storage migration (this means allowing changes on volume’s
> QoS). Thus, creating a mechanism within ACS to allow disk offerings
> replacement (as opposed to DB intervention, which was the way it was
> being
> done so far). The rationale behind these extensions/enhancement is
> that the
> root admins are wise/experts (at least we expect them to be).
> Therefore,
> they know what they are doing when overriding or replacing a disk
> offering
> of a user.
>
> So, why am I changing this “migrateVirtualMachineWithVolume” API
> method?
> When we allowed that override procedure, it broke the migration of VMs
> that
> had volumes initially placed in NFS and then replaced (via override) in
> local storage. It had something to do with the way ACS was detecting
> if the
> VM has a local storage. Then, when I went to the method to fix it; it
> was
> very convoluted to read and understand. Therefore, I re-wrote, and I
> missed
> your use case. I am sorry for that. Moreover, I do intend to keep with
> the
> current code, as we already have other features developed on top of
> it, and
> this code is well documented and unit tested. It is only a matter of
> adding
> your requirement there.
>
> Now, let’s fix the problem. I will not point code here. I only want to
> understand the idea for now.
>
>- So, managed storage can be cluster and zone wide. Is that correct?
>- If I want to migrate a VM across clusters, but if at least one of
> its
>volumes is placed in a cluster-wide managed storage, the migration
> is not
>allowed. Is that it?
>- A volume placed in managed storage can never (at least not using
> this
>migrateWithVolume method) be migrated out of the storage pool it
> resides.
>is this statement right? Do you have alternative/other execution
> flow
>regarding this scenario?
>- When migrating a VM that does not have volumes in managed
> storage, it
>should be possible to migrate it cross clusters. Therefore, we
> should try
>to use the volume allocators to 

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
   - So, managed storage can be cluster and zone wide. Is that correct?

[Mike] Correct

   - If I want to migrate a VM across clusters, but if at least one of its
   volumes is placed in a cluster-wide managed storage, the migration is not
   allowed. Is that it?

[Mike] Correct

   - A volume placed in managed storage can never (at least not using this
   migrateWithVolume method) be migrated out of the storage pool it resides.
   is this statement right? Do you have alternative/other execution flow
   regarding this scenario?

[Mike] At least for KVM, you can shut the VM down and perform an offline 
migration 
of the volume from managed storage to non-managed storage. It’s possible we may 
support such a similar behavior with other hypervisor types in the future.

   - When migrating a VM that does not have volumes in managed storage, it
   should be possible to migrate it cross clusters. Therefore, we should try
   to use the volume allocators to find a suitable storage pool for its
   volumes in the target cluster

[Mike] It’s OK here if one or more of the volumes is on managed storage. The 
“trick” is 
that it needs to be on zone-wide managed storage that is visible to both the 
source and 
destination compute clusters. You cannot specify a new storage pool for any of 
these volumes 
(each must remain on its current, zone-wide primary storage).
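
To pull the rules above together, here is a minimal sketch of the per-volume decision; 
the class, enum, and boolean flags are illustrative stand-ins, not CloudStack's actual 
code:

// Hypothetical sketch of the per-volume rules summarized above; MigrationCheck,
// Decision, and the boolean flags are illustrative, not CloudStack's real API.
public final class MigrationCheck {

    /** Outcome for one volume of a VM being migrated to a host in another cluster. */
    public enum Decision { KEEP_CURRENT_POOL, PICK_POOL_VIA_ALLOCATORS, DISALLOWED }

    public static Decision checkVolume(boolean onManagedStorage,
                                       boolean poolIsZoneWide,
                                       boolean newPoolRequested) {
        if (onManagedStorage) {
            // Cluster-wide managed storage blocks cross-cluster migration entirely.
            if (!poolIsZoneWide) {
                return Decision.DISALLOWED;
            }
            // Zone-wide managed storage is visible to both clusters, but the volume
            // must stay on its current pool; a new pool cannot be specified for it.
            return newPoolRequested ? Decision.DISALLOWED : Decision.KEEP_CURRENT_POOL;
        }
        // Non-managed volumes: let the storage allocators pick a suitable pool in
        // the destination cluster (or honor an explicitly requested pool).
        return Decision.PICK_POOL_VIA_ALLOCATORS;
    }
}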

If you can add these new constraints into the code, I can review them later. 
I’m a bit 
pressed for time this week, so it might not be possible to do so right away. 
Thanks!

On 7/16/18, 3:52 PM, "Rafael Weingärtner"  wrote:

Thanks for your feedback Mike. I actually did not want to change this
“migrateVirtualMachineWithVolume” API method. Everything started when we
wanted to create a feature to allow volume placement overrides. This means,
allowing root admins to place/migrate the volume to a storage pool that
might not be “allowed” (according to its current disk offering). This
feature was later expanded to allow changing the disk offering while
executing a storage migration (this means allowing changes on volume’s
QoS). Thus, creating a mechanism within ACS to allow disk offerings
replacement (as opposed to DB intervention, which was the way it was being
done so far). The rationale behind these extensions/enhancement is that the
root admins are wise/experts (at least we expect them to be). Therefore,
they know what they are doing when overriding or replacing a disk offering
of a user.

So, why am I changing this “migrateVirtualMachineWithVolume” API method?
When we allowed that override procedure, it broke the migration of VMs that
had volumes initially placed in NFS and then replaced (via override) in
local storage. It had something to do with the way ACS was detecting if the
VM has a local storage. Then, when I went to the method to fix it; it was
very convoluted to read and understand. Therefore, I re-wrote, and I missed
your use case. I am sorry for that. Moreover, I do intend to keep with the
current code, as we already have other features developed on top of it, and
this code is well documented and unit tested. It is only a matter of adding
your requirement there.

Now, let’s fix the problem. I will not point code here. I only want to
understand the idea for now.

   - So, managed storage can be cluster and zone wide. Is that correct?
   - If I want to migrate a VM across clusters, but if at least one of its
   volumes is placed in a cluster-wide managed storage, the migration is not
   allowed. Is that it?
   - A volume placed in managed storage can never (at least not using this
   migrateWithVolume method) be migrated out of the storage pool it resides.
   is this statement right? Do you have alternative/other execution flow
   regarding this scenario?
   - When migrating a VM that does not have volumes in managed storage, it
   should be possible to migrate it cross clusters. Therefore, we should try
   to use the volume allocators to find a suitable storage pool for its
   volumes in the target cluster

Are these all of the use cases that were left behind?

On Mon, Jul 16, 2018 at 5:36 PM, Tutkowski, Mike 
wrote:

> For your feature, Rafael, are you trying to support the migration of a VM
> that has local storage from one cluster to another or is intra-cluster
> migration of local storage sufficient?
>
> There is the migrateVolume API (you can pass in “live migrate” parameter):
>
> http://cloudstack.apache.org/api/apidocs-4.11/apis/migrateVolume.html
>
> There is also the migrateVirtualMachineWithVolume (one or more volumes).
> This is especially useful for moving a VM with its storage from one cluster
> to another:
>
> 
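
As a rough illustration of how these two commands are composed: the endpoint and UUIDs 
below are placeholders, the migrateto[] map syntax is an assumption to verify against 
the apidocs linked above, and real requests must also be signed with the caller's API 
key and secret.

// Sketch only: building the two calls as unsigned query strings.
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class MigrationApiExamples {

    public static void main(String[] args) {
        // Volume migration to a specific pool (migrateVolume); livemigrate=false
        // would correspond to the offline case discussed earlier in the thread.
        String migrateVolume = "command=migrateVolume"
                + "&volumeid=" + enc("volume-uuid")
                + "&storageid=" + enc("dest-pool-uuid")
                + "&livemigrate=true"
                + "&response=json";

        // VM plus storage migration across clusters (migrateVirtualMachineWithVolume).
        // Leaving a volume out of the migrateto[] map asks CloudStack to pick the
        // destination pool itself (the code path this thread reports as broken).
        String migrateVmWithVolume = "command=migrateVirtualMachineWithVolume"
                + "&virtualmachineid=" + enc("vm-uuid")
                + "&hostid=" + enc("dest-host-uuid")
                + "&migrateto[0].volume=" + enc("root-volume-uuid")
                + "&migrateto[0].pool=" + enc("dest-cluster-pool-uuid")
                + "&response=json";

        String endpoint = "http://mgmt-server:8080/client/api";  // placeholder
        System.out.println(endpoint + "?" + migrateVolume);
        System.out.println(endpoint + "?" + migrateVmWithVolume);
    }

    private static String enc(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }
}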

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
I think I understand the confusion here.

Rafael’s code was put into 4.11.1, but not into the initial release candidate 
(RC). In fact, the most recent version of 4.11 that has been released is 
4.11.1. Somehow Rafael’s code (which is an enhancement) was merged into 4.11 
during the RC process. This is why my automated tests did not find it. I ran 
them against 4.11.1 RC1 and his code was put in after the first RC.

It looks like we had a bit of a process issue during the RC process as only bug 
fixes should be going into the next RC.

In any event, this means the documentation (at least in this regard) should be 
fine for 4.11.1. Also, no 4.11.2 (or 4.11.3) has been publicly released. We 
seem to have been getting those confused with RCs in our e-mail chain here.

On 7/16/18, 1:46 PM, "Yiping Zhang"  wrote:

Why is it listed as fixed in 4.11.1.0 in the release notes, if the code only 
exists in 4.11.2?



On 7/16/18, 12:43 PM, "Tutkowski, Mike"  wrote:

OK, as Rafael noted, looks like it’s in 4.11.2. My regression tests 
were run against 4.11.1. I thought we only allowed bug fixes when going to a 
new RC, but it appears we are not strictly enforcing that rule.

On 7/16/18, 1:40 PM, "Tutkowski, Mike"  
wrote:

When I ran my suite of tests on 4.11.1, I did not encounter this 
issue. Also, looking at the code now, it appears this new code is first in 4.12.

On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:


Is this code already in ACS 4.11.1.0? 

CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to 
release note here, 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/ja/master/fixed_issues.html,
 but in the JIRA ticket itself, the "fixed version/s" field says 4.12.

We are using XenServer clusters with shared NFS storages and I 
am about to migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between 
clusters a lot, this is going to be a blocker for us.  Someone please confirm.

Thanks

Yiping


On 7/14/18, 11:20 PM, "Tutkowski, Mike" 
 wrote:

Hi,

While running managed-storage regression tests tonight, I 
noticed a problem that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the 
migration of a virtual disk that’s on local storage to shared storage. In the 
process of enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer 
possible. It also seems that the managed-storage logic has been dropped for 
some reason in the new implementation.

Rafael – It seems that you worked on this feature. Would 
you be able to look into this and create a PR?

Thanks,
Mike












Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Rafael Weingärtner
Thanks for your feedback Mike. I actually did not want to change this
“migrateVirtualMachineWithVolume” API method. Everything started when we
wanted to create a feature to allow volume placement overrides. This means
allowing root admins to place/migrate a volume to a storage pool that
might not be “allowed” (according to its current disk offering). This
feature was later expanded to allow changing the disk offering while
executing a storage migration (which means allowing changes to the volume’s
QoS). Thus, we created a mechanism within ACS to allow disk offering
replacement (as opposed to DB intervention, which was the way it was being
done so far). The rationale behind these extensions/enhancements is that
root admins are wise/experts (at least we expect them to be). Therefore,
they know what they are doing when overriding or replacing a user’s disk
offering.

So, why am I changing this “migrateVirtualMachineWithVolume” API method?
When we allowed that override procedure, it broke the migration of VMs that
had volumes initially placed on NFS and then placed (via override) on local
storage. It had something to do with the way ACS was detecting whether the
VM uses local storage. Then, when I went to the method to fix it, it was
very convoluted to read and understand. Therefore, I re-wrote it, and I missed
your use case. I am sorry for that. Moreover, I do intend to keep the
current code, as we already have other features developed on top of it, and
this code is well documented and unit tested. It is only a matter of adding
your requirement there.

Now, let’s fix the problem. I will not point to code here; I only want to
understand the idea for now.

   - So, managed storage can be cluster- or zone-wide. Is that correct?
   - If I want to migrate a VM across clusters, but at least one of its
   volumes is placed on cluster-wide managed storage, the migration is not
   allowed. Is that it?
   - A volume placed on managed storage can never (at least not using this
   migrateWithVolume method) be migrated out of the storage pool it resides in.
   Is this statement right? Do you have an alternative/other execution flow
   for this scenario?
   - When migrating a VM that does not have volumes on managed storage, it
   should be possible to migrate it across clusters. Therefore, we should try
   to use the volume allocators to find a suitable storage pool for its
   volumes in the target cluster.

Are these all of the use cases that were left behind?
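
As a rough illustration of how those four rules might read in code, here is a 
hypothetical helper (not code from the PR); it only reuses the DAOs, ScopeType 
values, and StoragePoolVO accessors such as getScope(), getClusterId(), and 
isManaged() that the snippets elsewhere in this thread already rely on:

private void validateManagedVolumeForMigration(VolumeVO volume, Host targetHost) {
    StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());
    if (!currentPool.isManaged()) {
        // Non-managed volumes can be handed to the storage pool allocators,
        // which will look for a suitable pool in the target cluster.
        return;
    }
    // A managed volume stays on its current pool; across clusters that only works
    // if the pool is zone-wide. Cluster-wide managed storage blocks the move.
    boolean zoneWide = ScopeType.ZONE.equals(currentPool.getScope());
    boolean sameCluster = currentPool.getClusterId() != null
            && currentPool.getClusterId().equals(targetHost.getClusterId());
    if (!zoneWide && !sameCluster) {
        throw new CloudRuntimeException(String.format(
                "Volume [%s] is on cluster-wide managed storage and cannot follow the VM to another cluster.",
                volume.getUuid()));
    }
}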

On Mon, Jul 16, 2018 at 5:36 PM, Tutkowski, Mike 
wrote:

> For your feature, Rafael, are you trying to support the migration of a VM
> that has local storage from one cluster to another or is intra-cluster
> migration of local storage sufficient?
>
> There is the migrateVolume API (you can pass in “live migrate” parameter):
>
> http://cloudstack.apache.org/api/apidocs-4.11/apis/migrateVolume.html
>
> There is also the migrateVirtualMachineWithVolume (one or more volumes).
> This is especially useful for moving a VM with its storage from one cluster
> to another:
>
> http://cloudstack.apache.org/api/apidocs-4.11/apis/
> migrateVirtualMachineWithVolume.html
>
> On 7/16/18, 2:20 PM, "Tutkowski, Mike"  wrote:
>
> Actually, I think I answered both of your questions with these two
> prior e-mails. Please let me know if you need further clarification. Thanks!
>
> On 7/16/18, 2:17 PM, "Tutkowski, Mike" 
> wrote:
>
> Allow me to correct what I said here:
>
> “If getDefaultMappingOfVolumesAndStoragePoolForMigration is
> invoked, we silently ignore the (faulty) input (which is a new storage
> pool) from the user and keep the volume in its same managed storage pool
> (the user may wonder why it wasn’t migrated if they don’t get an error
> message back telling them this is not allowed).”
>
> I should have said the following:
>
> If getDefaultMappingOfVolumesAndStoragePoolForMigration is
> invoked on a VM that is using managed storage that is only at the cluster
> level (managed storage can be at either the zone or cluster level) and we
> are trying to migrate the VM from one cluster to another, this operation
> should fail (as the old code detects). The new code tries to keep the
> volume in the same storage pool (but that storage pool will not be visible
> to the hosts in the destination compute cluster).
>
> On 7/16/18, 2:10 PM, "Tutkowski, Mike" 
> wrote:
>
> Let me answer the questions in two separate e-mails.
>
> This answer deals with what you wrote about this code:
>
> > if (destPool.getId() == currentPool.getId()) {
> > volumeToPoolObjectMap.put(volume, currentPool);
> > } else {
> >  throw new CloudRuntimeException("Currently, a
> volume on managed
> > storage can only be 'migrated' to itself.");
> > }
> >
>
> The code above is invoked if the user tries to 

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
For your feature, Rafael, are you trying to support the migration of a VM that 
has local storage from one cluster to another or is intra-cluster migration of 
local storage sufficient?

There is the migrateVolume API (you can pass in “live migrate” parameter):

http://cloudstack.apache.org/api/apidocs-4.11/apis/migrateVolume.html

There is also the migrateVirtualMachineWithVolume (one or more volumes). This 
is especially useful for moving a VM with its storage from one cluster to 
another:

http://cloudstack.apache.org/api/apidocs-4.11/apis/migrateVirtualMachineWithVolume.html

On 7/16/18, 2:20 PM, "Tutkowski, Mike"  wrote:

Actually, I think I answered both of your questions with these two prior 
e-mails. Please let me know if you need further clarification. Thanks!

On 7/16/18, 2:17 PM, "Tutkowski, Mike"  wrote:

Allow me to correct what I said here:

“If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we 
silently ignore the (faulty) input (which is a new storage pool) from the user 
and keep the volume in its same managed storage pool (the user may wonder why 
it wasn’t migrated if they don’t get an error message back telling them this is 
not allowed).”

I should have said the following:

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked on a 
VM that is using managed storage that is only at the cluster level (managed 
storage can be at either the zone or cluster level) and we are trying to 
migrate the VM from one cluster to another, this operation should fail (as the 
old code detects). The new code tries to keep the volume in the same storage 
pool (but that storage pool will not be visible to the hosts in the destination 
compute cluster).

On 7/16/18, 2:10 PM, "Tutkowski, Mike"  
wrote:

Let me answer the questions in two separate e-mails.

This answer deals with what you wrote about this code:

> if (destPool.getId() == currentPool.getId()) {
> volumeToPoolObjectMap.put(volume, currentPool);
> } else {
>  throw new CloudRuntimeException("Currently, a volume on 
managed
> storage can only be 'migrated' to itself.");
> }
>

The code above is invoked if the user tries to migrate a volume 
that’s on managed storage to another storage pool. At present, such volumes can 
be migrated when a VM is migrated from one compute cluster to another, but 
those volumes have to remain on the same managed storage.

Here’s an example:

Let’s say VM_1 is in Cluster_1. VM_1 has a root (or data) disk on 
managed storage. We try to migrate the VM from Cluster_1 to Cluster_2 and 
specify a new storage pool for the volume. This case should fail. To make it 
work, you need to either 1) not specify a new storage pool or 2) specify the 
same storage pool the volume is already in. If the managed storage in question 
is zone wide, then it can be used from both Cluster_1 and Cluster_2.

The new code might call 
getDefaultMappingOfVolumesAndStoragePoolForMigration (if no storage pools at 
all are passed in to the API) or it might call 
createMappingVolumeAndStoragePoolEnteredByUser.

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, 
we silently ignore the (faulty) input (which is a new storage pool) from the 
user and keep the volume in its same managed storage pool (the user may wonder 
why it wasn’t migrated if they don’t get an error message back telling them 
this is not allowed).

If createMappingVolumeAndStoragePoolEnteredByUser is invoked, we 
seem to have a bigger problem (code is below):

I do not believe you are required to pass in a new storage pool for 
each and every volume of the VM. If the VM has, say, three volumes, you may 
only try to migrate two of the volumes to new storage pools. This logic seems 
to assume if you want to migrate one of the VM’s volumes, then you necessarily 
want to migrate all of the VM’s volumes. I believe it’s possible for targetPool 
to come back null and later throw a NullPointerException. The old code walks 
through each volume of the VM and checks if there is a new storage pool 
specified for it. If so, do one thing; else, do something else.

private Map<Volume, StoragePool> createMappingVolumeAndStoragePoolEnteredByUser(VirtualMachineProfile profile, Host host, Map<Long, Long> volumeToPool) {
    Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<>();
    for (Long volumeId : volumeToPool.keySet()) {
        VolumeVO volume = _volsDao.findById(volumeId);

        Long poolId = volumeToPool.get(volumeId);
        StoragePoolVO 

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
Actually, I think I answered both of your questions with these two prior 
e-mails. Please let me know if you need further clarification. Thanks!

On 7/16/18, 2:17 PM, "Tutkowski, Mike"  wrote:

Allow me to correct what I said here:

“If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we 
silently ignore the (faulty) input (which is a new storage pool) from the user 
and keep the volume in its same managed storage pool (the user may wonder why 
it wasn’t migrated if they don’t get an error message back telling them this is 
not allowed).”

I should have said the following:

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked on a VM 
that is using managed storage that is only at the cluster level (managed 
storage can be at either the zone or cluster level) and we are trying to 
migrate the VM from one cluster to another, this operation should fail (as the 
old code detects). The new code tries to keep the volume in the same storage 
pool (but that storage pool will not be visible to the hosts in the destination 
compute cluster).

On 7/16/18, 2:10 PM, "Tutkowski, Mike"  wrote:

Let me answer the questions in two separate e-mails.

This answer deals with what you wrote about this code:

> if (destPool.getId() == currentPool.getId()) {
> volumeToPoolObjectMap.put(volume, currentPool);
> } else {
>  throw new CloudRuntimeException("Currently, a volume on 
managed
> storage can only be 'migrated' to itself.");
> }
>

The code above is invoked if the user tries to migrate a volume that’s 
on managed storage to another storage pool. At present, such volumes can be 
migrated when a VM is migrated from one compute cluster to another, but those 
volumes have to remain on the same managed storage.

Here’s an example:

Let’s say VM_1 is in Cluster_1. VM_1 has a root (or data) disk on 
managed storage. We try to migrate the VM from Cluster_1 to Cluster_2 and 
specify a new storage pool for the volume. This case should fail. To make it 
work, you need to either 1) not specify a new storage pool or 2) specify the 
same storage pool the volume is already in. If the managed storage in question 
is zone wide, then it can be used from both Cluster_1 and Cluster_2.

The new code might call 
getDefaultMappingOfVolumesAndStoragePoolForMigration (if no storage pools at 
all are passed in to the API) or it might call 
createMappingVolumeAndStoragePoolEnteredByUser.

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we 
silently ignore the (faulty) input (which is a new storage pool) from the user 
and keep the volume in its same managed storage pool (the user may wonder why 
it wasn’t migrated if they don’t get an error message back telling them this is 
not allowed).

If createMappingVolumeAndStoragePoolEnteredByUser is invoked, we seem 
to have a bigger problem (code is below):

I do not believe you are required to pass in a new storage pool for 
each and every volume of the VM. If the VM has, say, three volumes, you may 
only try to migrate two of the volumes to new storage pools. This logic seems 
to assume if you want to migrate one of the VM’s volumes, then you necessarily 
want to migrate all of the VM’s volumes. I believe it’s possible for targetPool 
to come back null and later throw a NullPointerException. The old code walks 
through each volume of the VM and checks if there is a new storage pool 
specified for it. If so, do one thing; else, do something else.

private Map<Volume, StoragePool> createMappingVolumeAndStoragePoolEnteredByUser(VirtualMachineProfile profile, Host host, Map<Long, Long> volumeToPool) {
    Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<>();
    for (Long volumeId : volumeToPool.keySet()) {
        VolumeVO volume = _volsDao.findById(volumeId);

        Long poolId = volumeToPool.get(volumeId);
        StoragePoolVO targetPool = _storagePoolDao.findById(poolId);
        StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());

        if (_poolHostDao.findByPoolHost(targetPool.getId(), host.getId()) == null) {
            throw new CloudRuntimeException(String.format("Cannot migrate the volume [%s] to the storage pool [%s] while migrating VM [%s] to target host [%s]. The host does not have access to the storage pool entered.", volume.getUuid(), targetPool.getUuid(), profile.getUuid(), host.getUuid()));
        }
        if (currentPool.getId() == targetPool.getId()) {
            s_logger.info(String.format("The volume [%s] is already allocated in storage pool [%s].", volume.getUuid(), targetPool.getUuid()));
 

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
Allow me to correct what I said here:

“If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we 
silently ignore the (faulty) input (which is a new storage pool) from the user 
and keep the volume in its same managed storage pool (the user may wonder why 
it wasn’t migrated if they don’t get an error message back telling them this is 
not allowed).”

I should have said the following:

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked on a VM that 
is using managed storage that is only at the cluster level (managed storage can 
be at either the zone or cluster level) and we are trying to migrate the VM 
from one cluster to another, this operation should fail (as the old code 
detects). The new code tries to keep the volume in the same storage pool (but 
that storage pool will not be visible to the hosts in the destination compute 
cluster).

On 7/16/18, 2:10 PM, "Tutkowski, Mike"  wrote:

Let me answer the questions in two separate e-mails.

This answer deals with what you wrote about this code:

> if (destPool.getId() == currentPool.getId()) {
> volumeToPoolObjectMap.put(volume, currentPool);
> } else {
>  throw new CloudRuntimeException("Currently, a volume on managed
> storage can only be 'migrated' to itself.");
> }
>

The code above is invoked if the user tries to migrate a volume that’s on 
managed storage to another storage pool. At present, such volumes can be 
migrated when a VM is migrated from one compute cluster to another, but those 
volumes have to remain on the same managed storage.

Here’s an example:

Let’s say VM_1 is in Cluster_1. VM_1 has a root (or data) disk on managed 
storage. We try to migrate the VM from Cluster_1 to Cluster_2 and specify a new 
storage pool for the volume. This case should fail. To make it work, you need 
to either 1) not specify a new storage pool or 2) specify the same storage pool 
the volume is already in. If the managed storage in question is zone wide, then 
it can be used from both Cluster_1 and Cluster_2.

The new code might call 
getDefaultMappingOfVolumesAndStoragePoolForMigration (if no storage pools at 
all are passed in to the API) or it might call 
createMappingVolumeAndStoragePoolEnteredByUser.

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we 
silently ignore the (faulty) input (which is a new storage pool) from the user 
and keep the volume in its same managed storage pool (the user may wonder why 
it wasn’t migrated if they don’t get an error message back telling them this is 
not allowed).

If createMappingVolumeAndStoragePoolEnteredByUser is invoked, we seem to 
have a bigger problem (code is below):

I do not believe you are required to pass in a new storage pool for each 
and every volume of the VM. If the VM has, say, three volumes, you may only try 
to migrate two of the volumes to new storage pools. This logic seems to assume 
if you want to migrate one of the VM’s volumes, then you necessarily want to 
migrate all of the VM’s volumes. I believe it’s possible for targetPool to come 
back null and later throw a NullPointerException. The old code walks through 
each volume of the VM and checks if there is a new storage pool specified for 
it. If so, do one thing; else, do something else.

private Map<Volume, StoragePool> createMappingVolumeAndStoragePoolEnteredByUser(VirtualMachineProfile profile, Host host, Map<Long, Long> volumeToPool) {
    Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<>();
    for (Long volumeId : volumeToPool.keySet()) {
        VolumeVO volume = _volsDao.findById(volumeId);

        Long poolId = volumeToPool.get(volumeId);
        StoragePoolVO targetPool = _storagePoolDao.findById(poolId);
        StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());

        if (_poolHostDao.findByPoolHost(targetPool.getId(), host.getId()) == null) {
            throw new CloudRuntimeException(String.format("Cannot migrate the volume [%s] to the storage pool [%s] while migrating VM [%s] to target host [%s]. The host does not have access to the storage pool entered.", volume.getUuid(), targetPool.getUuid(), profile.getUuid(), host.getUuid()));
        }
        if (currentPool.getId() == targetPool.getId()) {
            s_logger.info(String.format("The volume [%s] is already allocated in storage pool [%s].", volume.getUuid(), targetPool.getUuid()));
        }
        volumeToPoolObjectMap.put(volume, targetPool);
    }
    return volumeToPoolObjectMap;
}

On 7/16/18, 5:13 AM, "Rafael Weingärtner"  
wrote:

Ok, I see what happened there with the cross-cluster migration. When I re-did
the code I did not have this case in mind. Therefore, in the old code, I was
not seeing this use case 

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
Let me answer the questions in two separate e-mails.

This answer deals with what you wrote about this code:

> if (destPool.getId() == currentPool.getId()) {
> volumeToPoolObjectMap.put(volume, currentPool);
> } else {
>  throw new CloudRuntimeException("Currently, a volume on managed
> storage can only be 'migrated' to itself.");
> }
>

The code above is invoked if the user tries to migrate a volume that’s on 
managed storage to another storage pool. At present, such volumes can be 
migrated when a VM is migrated from one compute cluster to another, but those 
volumes have to remain on the same managed storage.

Here’s an example:

Let’s say VM_1 is in Cluster_1. VM_1 has a root (or data) disk on managed 
storage. We try to migrate the VM from Cluster_1 to Cluster_2 and specify a new 
storage pool for the volume. This case should fail. To make it work, you need 
to either 1) not specify a new storage pool or 2) specify the same storage pool 
the volume is already in. If the managed storage in question is zone wide, then 
it can be used from both Cluster_1 and Cluster_2.
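
To make that rule concrete, here is a minimal sketch of the volume-to-pool 
mapping this code path receives for VM_1’s move (illustrative only; 
managedVolume stands in for the VolumeVO that lives on managed storage):

// Illustrative only: for a volume on managed storage, the mapping must either be
// omitted or point back at the pool the volume is already on.
Map<Long, Long> volumeToPool = new HashMap<>();

// Option 1: leave managedVolume out of the map entirely (let it stay where it is).
// Option 2: explicitly map it to the pool it already resides on.
volumeToPool.put(managedVolume.getId(), managedVolume.getPoolId());

// Mapping it to any other pool should fail, and a cross-cluster move only works at
// all if that managed pool is zone-wide (visible from both Cluster_1 and Cluster_2).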

The new code might call getDefaultMappingOfVolumesAndStoragePoolForMigration 
(if no storage pools at all are passed in to the API) or it might call 
createMappingVolumeAndStoragePoolEnteredByUser.

If getDefaultMappingOfVolumesAndStoragePoolForMigration is invoked, we silently 
ignore the (faulty) input (which is a new storage pool) from the user and keep 
the volume in its same managed storage pool (the user may wonder why it wasn’t 
migrated if they don’t get an error message back telling them this is not 
allowed).

If createMappingVolumeAndStoragePoolEnteredByUser is invoked, we seem to have a 
bigger problem (code is below):

I do not believe you are required to pass in a new storage pool for each and 
every volume of the VM. If the VM has, say, three volumes, you may only try to 
migrate two of the volumes to new storage pools. This logic seems to assume if 
you want to migrate one of the VM’s volumes, then you necessarily want to 
migrate all of the VM’s volumes. I believe it’s possible for targetPool to come 
back null and later throw a NullPointerException. The old code walks through 
each volume of the VM and checks if there is a new storage pool specified for 
it. If so, do one thing; else, do something else.

private Map<Volume, StoragePool> createMappingVolumeAndStoragePoolEnteredByUser(VirtualMachineProfile profile, Host host, Map<Long, Long> volumeToPool) {
    Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<>();
    for (Long volumeId : volumeToPool.keySet()) {
        VolumeVO volume = _volsDao.findById(volumeId);

        Long poolId = volumeToPool.get(volumeId);
        StoragePoolVO targetPool = _storagePoolDao.findById(poolId);
        StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());

        if (_poolHostDao.findByPoolHost(targetPool.getId(), host.getId()) == null) {
            throw new CloudRuntimeException(String.format("Cannot migrate the volume [%s] to the storage pool [%s] while migrating VM [%s] to target host [%s]. The host does not have access to the storage pool entered.", volume.getUuid(), targetPool.getUuid(), profile.getUuid(), host.getUuid()));
        }
        if (currentPool.getId() == targetPool.getId()) {
            s_logger.info(String.format("The volume [%s] is already allocated in storage pool [%s].", volume.getUuid(), targetPool.getUuid()));
        }
        volumeToPoolObjectMap.put(volume, targetPool);
    }
    return volumeToPoolObjectMap;
}
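
A sketch of the kind of null-safe handling described above (purely illustrative, 
not the actual patch): it iterates over the VM’s volumes the way the old code 
did and reuses the same DAOs, so volumes the user did not mention are handled 
explicitly instead of producing a null targetPool:

List<VolumeVO> allVolumes = _volsDao.findUsableVolumesForInstance(profile.getId());
for (VolumeVO volume : allVolumes) {
    Long requestedPoolId = volumeToPool.get(volume.getId());
    StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());

    if (requestedPoolId == null) {
        // No pool was entered for this volume; do not assume the user wanted to move it.
        // (A real fix would still need the cross-cluster handling discussed in this
        // thread rather than blindly keeping the current pool.)
        volumeToPoolObjectMap.put(volume, currentPool);
        continue;
    }

    StoragePoolVO targetPool = _storagePoolDao.findById(requestedPoolId);
    if (targetPool == null) {
        throw new CloudRuntimeException(String.format(
                "Storage pool [%s] entered for volume [%s] does not exist.",
                requestedPoolId, volume.getUuid()));
    }
    volumeToPoolObjectMap.put(volume, targetPool);
}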

On 7/16/18, 5:13 AM, "Rafael Weingärtner"  wrote:

Ok, I see what happened there with the migration to cluster. When I re-did
the code I did not have this case. And therefore, in the old code, I was
not seeing this use case (convoluted code, lack of documentation, and so
on; we all know the story). I will fix it.

Regarding the managed storage issue, can you describe the “special
handling” you need?

Are you talking about this:

> if (destPool.getId() == currentPool.getId()) {
> volumeToPoolObjectMap.put(volume, currentPool);
> } else {
>  throw new CloudRuntimeException("Currently, a volume on managed
> storage can only be 'migrated' to itself.");
> }
>


That is a simple validation, right? A validation to throw an exception if
the user tries to migrate the volume to some other storage pool. Is that
it? If that is the case, the default method
“getDefaultMappingOfVolumesAndStoragePoolForMigration” already takes care
of this. Meaning that it will not try to move the volume to another storage
pool.

On the other hand, we need to add a validation in the
“createMappingVolumeAndStoragePoolEnteredByUser” method then.
I will wait for your feedback before starting to code. Thanks for spotting
this issue.

On Sun, Jul 15, 2018 at 

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Rafael Weingärtner
Probably a lack of attention from the people writing the release notes. It
happens with manual processes.

On Mon, Jul 16, 2018 at 4:46 PM, Yiping Zhang  wrote:

> Why is it listed as fixed in 4.11.1.0 in the release notes, if the code
> only exists in 4.11.2?
>
>
>
> On 7/16/18, 12:43 PM, "Tutkowski, Mike" 
> wrote:
>
> OK, as Rafael noted, looks like it’s in 4.11.2. My regression tests
> were run against 4.11.1. I thought we only allowed bug fixes when going to
> a new RC, but it appears we are not strictly enforcing that rule.
>
> On 7/16/18, 1:40 PM, "Tutkowski, Mike" 
> wrote:
>
> When I ran my suite of tests on 4.11.1, I did not encounter this
> issue. Also, looking at the code now, it appears this new code is first in
> 4.12.
>
> On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:
>
>
> Is this code already in ACS 4.11.1.0?
>
> CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to
> release note here, http://docs.cloudstack.apache.org/projects/cloudstack-
> release-notes/ja/master/fixed_issues.html, but in the JIRA ticket itself,
> the "fixed version/s" field says 4.12.
>
> We are using XenServer clusters with shared NFS storages and I
> am about to migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between
> clusters a lot, this is going to be a blocker for us.  Someone please
> confirm.
>
> Thanks
>
> Yiping
>
>
> On 7/14/18, 11:20 PM, "Tutkowski, Mike" <
> mike.tutkow...@netapp.com> wrote:
>
> Hi,
>
> While running managed-storage regression tests tonight, I
> noticed a problem that is not related to managed storage.
>
> CLOUDSTACK-10240 is a ticket asking that we allow the
> migration of a virtual disk that’s on local storage to shared storage. In
> the process of enabling this feature, the VirtualMachineManagerImpl.
> getPoolListForVolumesForMigration method was re-written in a way that
> completely breaks at least one use case: Migrating a VM across compute
> clusters (at least supported in XenServer). If, say, a virtual disk resides
> on shared storage in the source compute cluster, we must be able to copy
> this virtual disk to shared storage in the destination compute cluster.
>
> As the code is currently written, this is no longer
> possible. It also seems that the managed-storage logic has been dropped for
> some reason in the new implementation.
>
> Rafael – It seems that you worked on this feature. Would
> you be able to look into this and create a PR?
>
> Thanks,
> Mike
>
>
>
>
>
>
>
>
>


-- 
Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
I should be able to do so soon. I’ve been in meetings all day. I’ll try to 
investigate this before my next meeting starts.

On 7/16/18, 1:45 PM, "Rafael Weingärtner"  wrote:

Yes, that is what happened. I also followed this principle. That is why I
created the PR against master, but I think people are not following this.

Mike, can you provide me some feedback regarding those two inquiries? Then,
we can fix this quickly.

On Mon, Jul 16, 2018 at 4:42 PM, Tutkowski, Mike 
wrote:

> OK, as Rafael noted, looks like it’s in 4.11.2. My regression tests were
> run against 4.11.1. I thought we only allowed bug fixes when going to a 
new
> RC, but it appears we are not strictly enforcing that rule.
>
> On 7/16/18, 1:40 PM, "Tutkowski, Mike"  wrote:
>
> When I ran my suite of tests on 4.11.1, I did not encounter this
> issue. Also, looking at the code now, it appears this new code is first in
> 4.12.
>
> On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:
>
>
> Is this code already in ACS 4.11.1.0?
>
> CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to
> release note here, http://docs.cloudstack.apache.org/projects/cloudstack-
> release-notes/ja/master/fixed_issues.html, but in the JIRA ticket itself,
> the "fixed version/s" field says 4.12.
>
> We are using XenServer clusters with shared NFS storages and I am
> about to migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between
> clusters a lot, this is going to be a blocker for us.  Someone please
> confirm.
>
> Thanks
>
> Yiping
>
>
> On 7/14/18, 11:20 PM, "Tutkowski, Mike" 

> wrote:
>
> Hi,
>
> While running managed-storage regression tests tonight, I
> noticed a problem that is not related to managed storage.
>
> CLOUDSTACK-10240 is a ticket asking that we allow the
> migration of a virtual disk that’s on local storage to shared storage. In
> the process of enabling this feature, the VirtualMachineManagerImpl.
> getPoolListForVolumesForMigration method was re-written in a way that
> completely breaks at least one use case: Migrating a VM across compute
> clusters (at least supported in XenServer). If, say, a virtual disk 
resides
> on shared storage in the source compute cluster, we must be able to copy
> this virtual disk to shared storage in the destination compute cluster.
>
> As the code is currently written, this is no longer possible.
> It also seems that the managed-storage logic has been dropped for some
> reason in the new implementation.
>
> Rafael – It seems that you worked on this feature. Would you
> be able to look into this and create a PR?
>
> Thanks,
> Mike
>
>
>
>
>
>
>


-- 
Rafael Weingärtner




Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Yiping Zhang
Why is it listed as fixed in 4.11.1.0 in the release notes, if the code only 
exists in 4.11.2?



On 7/16/18, 12:43 PM, "Tutkowski, Mike"  wrote:

OK, as Rafael noted, looks like it’s in 4.11.2. My regression tests were 
run against 4.11.1. I thought we only allowed bug fixes when going to a new RC, 
but it appears we are not strictly enforcing that rule.

On 7/16/18, 1:40 PM, "Tutkowski, Mike"  wrote:

When I ran my suite of tests on 4.11.1, I did not encounter this issue. 
Also, looking at the code now, it appears this new code is first in 4.12.

On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:


Is this code already in ACS 4.11.1.0? 

CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to 
release note here, 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/ja/master/fixed_issues.html,
 but in the JIRA ticket itself, the "fixed version/s" field says 4.12.

We are using XenServer clusters with shared NFS storages and I am 
about to migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between 
clusters a lot, this is going to be a blocker for us.  Someone please confirm.

Thanks

Yiping


On 7/14/18, 11:20 PM, "Tutkowski, Mike"  
wrote:

Hi,

While running managed-storage regression tests tonight, I 
noticed a problem that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration 
of a virtual disk that’s on local storage to shared storage. In the process of 
enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. 
It also seems that the managed-storage logic has been dropped for some reason 
in the new implementation.

Rafael – It seems that you worked on this feature. Would you be 
able to look into this and create a PR?

Thanks,
Mike










Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Rafael Weingärtner
Yes, that is what happened. I also followed this principle. That is why I
created the PR against master, but I think people are not following this.

Mike, can you provide me some feedback regarding those two inquiries? Then,
we can fix this quickly.

On Mon, Jul 16, 2018 at 4:42 PM, Tutkowski, Mike 
wrote:

> OK, as Rafael noted, looks like it’s in 4.11.2. My regression tests were
> run against 4.11.1. I thought we only allowed bug fixes when going to a new
> RC, but it appears we are not strictly enforcing that rule.
>
> On 7/16/18, 1:40 PM, "Tutkowski, Mike"  wrote:
>
> When I ran my suite of tests on 4.11.1, I did not encounter this
> issue. Also, looking at the code now, it appears this new code is first in
> 4.12.
>
> On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:
>
>
> Is this code already in ACS 4.11.1.0?
>
> CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to
> release note here, http://docs.cloudstack.apache.org/projects/cloudstack-
> release-notes/ja/master/fixed_issues.html, but in the JIRA ticket itself,
> the "fixed version/s" field says 4.12.
>
> We are using XenServer clusters with shared NFS storages and I am
> about to migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between
> clusters a lot, this is going to be a blocker for us.  Someone please
> confirm.
>
> Thanks
>
> Yiping
>
>
> On 7/14/18, 11:20 PM, "Tutkowski, Mike" 
> wrote:
>
> Hi,
>
> While running managed-storage regression tests tonight, I
> noticed a problem that is not related to managed storage.
>
> CLOUDSTACK-10240 is a ticket asking that we allow the
> migration of a virtual disk that’s on local storage to shared storage. In
> the process of enabling this feature, the VirtualMachineManagerImpl.
> getPoolListForVolumesForMigration method was re-written in a way that
> completely breaks at least one use case: Migrating a VM across compute
> clusters (at least supported in XenServer). If, say, a virtual disk resides
> on shared storage in the source compute cluster, we must be able to copy
> this virtual disk to shared storage in the destination compute cluster.
>
> As the code is currently written, this is no longer possible.
> It also seems that the managed-storage logic has been dropped for some
> reason in the new implementation.
>
> Rafael – It seems that you worked on this feature. Would you
> be able to look into this and create a PR?
>
> Thanks,
> Mike
>
>
>
>
>
>
>


-- 
Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
OK, as Rafael noted, looks like it’s in 4.11.2. My regression tests were run 
against 4.11.1. I thought we only allowed bug fixes when going to a new RC, but 
it appears we are not strictly enforcing that rule.

On 7/16/18, 1:40 PM, "Tutkowski, Mike"  wrote:

When I ran my suite of tests on 4.11.1, I did not encounter this issue. 
Also, looking at the code now, it appears this new code is first in 4.12.

On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:


Is this code already in ACS 4.11.1.0? 

CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to release 
note here, 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/ja/master/fixed_issues.html,
 but in the JIRA ticket itself, the "fixed version/s" field says 4.12.

We are using XenServer clusters with shared NFS storages and I am about 
to migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between clusters a 
lot, this is going to be a blocker for us.  Someone please confirm.

Thanks

Yiping


On 7/14/18, 11:20 PM, "Tutkowski, Mike"  
wrote:

Hi,

While running managed-storage regression tests tonight, I noticed a 
problem that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration of 
a virtual disk that’s on local storage to shared storage. In the process of 
enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. It 
also seems that the managed-storage logic has been dropped for some reason in 
the new implementation.

Rafael – It seems that you worked on this feature. Would you be 
able to look into this and create a PR?

Thanks,
Mike








Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Tutkowski, Mike
When I ran my suite of tests on 4.11.1, I did not encounter this issue. Also, 
looking at the code now, it appears this new code is first in 4.12.

On 7/16/18, 1:36 PM, "Yiping Zhang"  wrote:


Is this code already in ACS 4.11.1.0? 

CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to release note 
here, 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/ja/master/fixed_issues.html,
 but in the JIRA ticket itself, the "fixed version/s" field says 4.12.

We are using XenServer clusters with shared NFS storages and I am about to 
migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between clusters a lot, 
this is going to be a blocker for us.  Someone please confirm.

Thanks

Yiping


On 7/14/18, 11:20 PM, "Tutkowski, Mike"  wrote:

Hi,

While running managed-storage regression tests tonight, I noticed a 
problem that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration of a 
virtual disk that’s on local storage to shared storage. In the process of 
enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. It also 
seems that the managed-storage logic has been dropped for some reason in the 
new implementation.

Rafael – It seems that you worked on this feature. Would you be able to 
look into this and create a PR?

Thanks,
Mike






Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Rafael Weingärtner
It is in 4.11.2.0.
It was first added to 4.12, and then backported. I am now waiting for
Mike's feedback to proceed with the fix for the problem he found.

On Mon, Jul 16, 2018 at 4:36 PM, Yiping Zhang  wrote:

>
> Is this code already in ACS 4.11.1.0?
>
> CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to release note
> here, http://docs.cloudstack.apache.org/projects/cloudstack-
> release-notes/ja/master/fixed_issues.html, but in the JIRA ticket itself,
> the "fixed version/s" field says 4.12.
>
> We are using XenServer clusters with shared NFS storages and I am about to
> migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VM between clusters a
> lot, this is going to be a blocker for us.  Someone please confirm.
>
> Thanks
>
> Yiping
>
>
> On 7/14/18, 11:20 PM, "Tutkowski, Mike" 
> wrote:
>
> Hi,
>
> While running managed-storage regression tests tonight, I noticed a
> problem that is not related to managed storage.
>
> CLOUDSTACK-10240 is a ticket asking that we allow the migration of a
> virtual disk that’s on local storage to shared storage. In the process of
> enabling this feature, the VirtualMachineManagerImpl.
> getPoolListForVolumesForMigration method was re-written in a way that
> completely breaks at least one use case: Migrating a VM across compute
> clusters (at least supported in XenServer). If, say, a virtual disk resides
> on shared storage in the source compute cluster, we must be able to copy
> this virtual disk to shared storage in the destination compute cluster.
>
> As the code is currently written, this is no longer possible. It also
> seems that the managed-storage logic has been dropped for some reason in
> the new implementation.
>
> Rafael – It seems that you worked on this feature. Would you be able
> to look into this and create a PR?
>
> Thanks,
> Mike
>
>
>


-- 
Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Yiping Zhang

Is this code already in ACS 4.11.1.0? 

CLOUDSTACK-10240 is listed as fixed in 4.11.1.0, according to release note 
here, 
http://docs.cloudstack.apache.org/projects/cloudstack-release-notes/ja/master/fixed_issues.html,
 but in the JIRA ticket itself, the "fixed version/s" field says 4.12.

We are using XenServer clusters with shared NFS storage, and I am about to 
migrate to ACS 4.11.1.0 from 4.9.3.0.  Since we move VMs between clusters a lot, 
this is going to be a blocker for us.  Someone please confirm.

Thanks

Yiping


On 7/14/18, 11:20 PM, "Tutkowski, Mike"  wrote:

Hi,

While running managed-storage regression tests tonight, I noticed a problem 
that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration of a 
virtual disk that’s on local storage to shared storage. In the process of 
enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. It also seems 
that the managed-storage logic has been dropped for some reason in the new 
implementation.

Rafael – It seems that you worked on this feature. Would you be able to 
look into this and create a PR?

Thanks,
Mike




Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-16 Thread Rafael Weingärtner
Ok, I see what happened there with the cross-cluster migration. When I re-did
the code I did not have this case in mind. Therefore, in the old code, I was
not seeing this use case (convoluted code, lack of documentation, and so
on; we all know the story). I will fix it.

Regarding the managed storage issue, can you describe the “special
handling” you need?

Are you talking about this:

> if (destPool.getId() == currentPool.getId()) {
> volumeToPoolObjectMap.put(volume, currentPool);
> } else {
>  throw new CloudRuntimeException("Currently, a volume on managed
> storage can only be 'migrated' to itself.");
> }
>


That is a simple validation, right? A validation to throw an exception if
the user tries to migrate the volume to some other storage pool. Is that
it? If that is the case, the default method
“getDefaultMappingOfVolumesAndStoragePoolForMigration” already takes care
of this. Meaning that it will not try to move the volume to another storage
pool.

On the other hand, we need to add a validation in the
“createMappingVolumeAndStoragePoolEnteredByUser” method then.
I will wait for your feedback before starting to code. Thanks for spotting
this issue.
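
The validation being discussed could look roughly like the following inside the 
per-volume loop of “createMappingVolumeAndStoragePoolEnteredByUser” (a sketch 
only, not the actual patch; it assumes StoragePoolVO#isManaged() is the right 
way to detect managed storage):

// Sketch only: reject an explicit request to move a managed-storage volume to a
// different pool, mirroring what the old getPoolListForVolumesForMigration enforced.
StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());
StoragePoolVO targetPool = _storagePoolDao.findById(poolId);

if (currentPool.isManaged() && currentPool.getId() != targetPool.getId()) {
    throw new CloudRuntimeException(String.format(
            "Volume [%s] is on managed storage and can currently only be 'migrated' to the pool it already resides on [%s].",
            volume.getUuid(), currentPool.getUuid()));
}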

On Sun, Jul 15, 2018 at 9:11 PM, Tutkowski, Mike 
wrote:

> Hi Rafael,
>
> Thanks for your time on this.
>
> Here is an example where the new code deviates from the old code in a
> critical fashion (code right below is new):
>
> private Map<Volume, StoragePool> getDefaultMappingOfVolumesAndStoragePoolForMigration(VirtualMachineProfile profile, Host targetHost) {
>     Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<>();
>     List<VolumeVO> allVolumes = _volsDao.findUsableVolumesForInstance(profile.getId());
>     for (VolumeVO volume : allVolumes) {
>         StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());
>         if (ScopeType.HOST.equals(currentPool.getScope())) {
>             createVolumeToStoragePoolMappingIfNeeded(profile, targetHost, volumeToPoolObjectMap, volume, currentPool);
>         } else {
>             volumeToPoolObjectMap.put(volume, currentPool);
>         }
>     }
>     return volumeToPoolObjectMap;
> }
>
> What happens in the new code (above) is if the user didn’t pass in a
> storage pool to migrate the virtual disk to (but the VM is being migrated
> to a new cluster), this code just assigns the virtual disk to its current
> storage pool (which is not going to be visible to any of the hosts in the
> new compute cluster).
>
> In the old code (I’m looking at 4.11.3 here), you could look around line
> 2337 for the following code (in the VirtualMachineManagerImpl.
> getPoolListForVolumesForMigration method):
>
> // Find a suitable pool for the volume. Call the storage pool allocator to find the list of pools.
>
> final DiskProfile diskProfile = new DiskProfile(volume, diskOffering, profile.getHypervisorType());
> final DataCenterDeployment plan = new DataCenterDeployment(host.getDataCenterId(), host.getPodId(), host.getClusterId(), host.getId(), null, null);
>
> final List<StoragePool> poolList = new ArrayList<>();
> final ExcludeList avoid = new ExcludeList();
>
> for (final StoragePoolAllocator allocator : _storagePoolAllocators) {
>     final List<StoragePool> poolListFromAllocator = allocator.allocateToPool(diskProfile, profile, plan, avoid, StoragePoolAllocator.RETURN_UPTO_ALL);
>
>     if (poolListFromAllocator != null && !poolListFromAllocator.isEmpty()) {
>         poolList.addAll(poolListFromAllocator);
>     }
> }
>
> This old code would find an applicable storage pool in the destination
> cluster (one that can be seen by the hosts in that compute cluster).
>
> I think the main error in the new logic is the assumption that a VM can
> only be migrated to a host in the same compute cluster. For XenServer
> (perhaps for other hypervisor types?), we support cross-cluster VM
> migration.
>
> The other issue I noticed is that there is no logic in the new code that
> checks for managed-storage use cases. If you look in the
> VirtualMachineManagerImpl.getPoolListForVolumesForMigration method in the
> old code, there is special handling for managed storage. I don’t see this
> reproduced in the new logic.
>
> I sympathize with your point that all tests passed yet this issue was not
> caught. Unfortunately, I suspect we have fairly low coverage from automated
> tests on CloudStack. If we ever did get to a high percentage of automated
> test coverage, we might be able to spin up new releases more frequently. As
> things stand today, however, there are probably many untested use cases
> when it comes to our automated suite of tests.
>
> Thanks again!
> Mike
>
> On 7/15/18, 4:19 PM, "Rafael Weingärtner" 
> wrote:
>
> Mike, are you able to pin-point 

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-15 Thread Tutkowski, Mike
Hi Rafael,

Thanks for your time on this.

Here is an example where the new code deviates from the old code in a critical 
fashion (code right below is new):

private Map<Volume, StoragePool> getDefaultMappingOfVolumesAndStoragePoolForMigration(VirtualMachineProfile profile, Host targetHost) {
    Map<Volume, StoragePool> volumeToPoolObjectMap = new HashMap<>();
    List<VolumeVO> allVolumes = _volsDao.findUsableVolumesForInstance(profile.getId());
    for (VolumeVO volume : allVolumes) {
        StoragePoolVO currentPool = _storagePoolDao.findById(volume.getPoolId());
        if (ScopeType.HOST.equals(currentPool.getScope())) {
            createVolumeToStoragePoolMappingIfNeeded(profile, targetHost, volumeToPoolObjectMap, volume, currentPool);
        } else {
            volumeToPoolObjectMap.put(volume, currentPool);
        }
    }
    return volumeToPoolObjectMap;
}

What happens in the new code (above) is if the user didn’t pass in a storage 
pool to migrate the virtual disk to (but the VM is being migrated to a new 
cluster), this code just assigns the virtual disk to its current storage pool 
(which is not going to be visible to any of the hosts in the new compute 
cluster).

In the old code (I’m looking at 4.11.3 here), you could look around line 2337 
for the following code (in the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method):

// Find a suitable pool for the volume. Call the storage pool allocator to find the list of pools.

final DiskProfile diskProfile = new DiskProfile(volume, diskOffering, profile.getHypervisorType());
final DataCenterDeployment plan = new DataCenterDeployment(host.getDataCenterId(), host.getPodId(), host.getClusterId(), host.getId(), null, null);

final List<StoragePool> poolList = new ArrayList<>();
final ExcludeList avoid = new ExcludeList();

for (final StoragePoolAllocator allocator : _storagePoolAllocators) {
    final List<StoragePool> poolListFromAllocator = allocator.allocateToPool(diskProfile, profile, plan, avoid, StoragePoolAllocator.RETURN_UPTO_ALL);

    if (poolListFromAllocator != null && !poolListFromAllocator.isEmpty()) {
        poolList.addAll(poolListFromAllocator);
    }
}

This old code would find an applicable storage pool in the destination cluster 
(one that can be seen by the hosts in that compute cluster).
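
Putting the two snippets together, the missing branch might look roughly like 
this (illustrative only, not the actual fix; it assumes the disk offering is 
looked up via something like _diskOfferingDao and simply reuses the allocator 
flow from the old code above):

// Illustrative only: keep the volume in place when its pool is visible from the
// target host's cluster; otherwise fall back to the storage pool allocators.
boolean poolVisibleToTargetHost = ScopeType.ZONE.equals(currentPool.getScope())
        || (currentPool.getClusterId() != null
            && currentPool.getClusterId().equals(targetHost.getClusterId()));

if (poolVisibleToTargetHost) {
    volumeToPoolObjectMap.put(volume, currentPool);
} else {
    DiskOfferingVO diskOffering = _diskOfferingDao.findById(volume.getDiskOfferingId());
    DiskProfile diskProfile = new DiskProfile(volume, diskOffering, profile.getHypervisorType());
    DataCenterDeployment plan = new DataCenterDeployment(targetHost.getDataCenterId(),
            targetHost.getPodId(), targetHost.getClusterId(), targetHost.getId(), null, null);
    ExcludeList avoid = new ExcludeList();

    for (StoragePoolAllocator allocator : _storagePoolAllocators) {
        List<StoragePool> poolsSeenByTargetHost = allocator.allocateToPool(diskProfile, profile, plan, avoid, StoragePoolAllocator.RETURN_UPTO_ALL);
        if (poolsSeenByTargetHost != null && !poolsSeenByTargetHost.isEmpty()) {
            volumeToPoolObjectMap.put(volume, poolsSeenByTargetHost.get(0));
            break;
        }
    }
}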

I think the main error in the new logic is the assumption that a VM can only be 
migrated to a host in the same compute cluster. For XenServer (perhaps for 
other hypervisor types?), we support cross-cluster VM migration.

The other issue I noticed is that there is no logic in the new code that checks 
for managed-storage use cases. If you look in the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method in the old 
code, there is special handling for managed storage. I don’t see this 
reproduced in the new logic.

I sympathize with your point that all tests passed yet this issue was not 
caught. Unfortunately, I suspect we have fairly low coverage from automated 
tests on CloudStack. If we ever did get to a high percentage of automated test 
coverage, we might be able to spin up new releases more frequently. As things 
stand today, however, there are probably many untested use cases when it comes 
to our automated suite of tests.

Thanks again!
Mike

On 7/15/18, 4:19 PM, "Rafael Weingärtner"  wrote:

Mike, are you able to pin-point in the old/replaced code the bit that was
handling your use case? I took the utmost care not to break anything.
Also, isn't your test case in the ACS integration test suite? In theory,
all tests passed when we merged the PR.

I sure can take a look at it. Can you detail your use case? I mean, the
high-level execution flow: what API methods you call, what you expected to
happen, and what is happening today.

On Sun, Jul 15, 2018 at 3:25 AM, Tutkowski, Mike 
wrote:

> It looks like this is the problematic PR:
>
> https://github.com/apache/cloudstack/pull/2425/
>
> On 7/15/18, 12:20 AM, "Tutkowski, Mike"  wrote:
>
> Hi,
>
> While running managed-storage regression tests tonight, I noticed a
> problem that is not related to managed storage.
>
> CLOUDSTACK-10240 is a ticket asking that we allow the migration of a
> virtual disk that’s on local storage to shared storage. In the process of
> enabling this feature, the VirtualMachineManagerImpl.
> getPoolListForVolumesForMigration method was re-written in a way that
> completely breaks at least one use case: Migrating a VM across compute
> clusters (at least supported in XenServer). If, say, a virtual disk 
resides
> on shared storage in the source compute cluster, we must 

Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-15 Thread Rafael Weingärtner
Mike, are you able to pin-point in the old/replaced code the bit that was
handling your use case? I took the utmost care not to break anything.
Also, isn't your test case in the ACS integration test suite? In theory,
all tests passed when we merged the PR.

I sure can take a look at it. Can you detail your use case? I mean, the
high-level execution flow: what API methods you call, what you expected to
happen, and what is happening today.

On Sun, Jul 15, 2018 at 3:25 AM, Tutkowski, Mike 
wrote:

> It looks like this is the problematic PR:
>
> https://github.com/apache/cloudstack/pull/2425/
>
> On 7/15/18, 12:20 AM, "Tutkowski, Mike"  wrote:
>
> Hi,
>
> While running managed-storage regression tests tonight, I noticed a
> problem that is not related to managed storage.
>
> CLOUDSTACK-10240 is a ticket asking that we allow the migration of a
> virtual disk that’s on local storage to shared storage. In the process of
> enabling this feature, the VirtualMachineManagerImpl.
> getPoolListForVolumesForMigration method was re-written in a way that
> completely breaks at least one use case: Migrating a VM across compute
> clusters (at least supported in XenServer). If, say, a virtual disk resides
> on shared storage in the source compute cluster, we must be able to copy
> this virtual disk to shared storage in the destination compute cluster.
>
> As the code is currently written, this is no longer possible. It also
> seems that the managed-storage logic has been dropped for some reason in
> the new implementation.
>
> Rafael – It seems that you worked on this feature. Would you be able
> to look into this and create a PR?
>
> Thanks,
> Mike
>
>
>


-- 
Rafael Weingärtner


Re: Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-15 Thread Tutkowski, Mike
It looks like this is the problematic PR:

https://github.com/apache/cloudstack/pull/2425/

On 7/15/18, 12:20 AM, "Tutkowski, Mike"  wrote:

Hi,

While running managed-storage regression tests tonight, I noticed a problem 
that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration of a 
virtual disk that’s on local storage to shared storage. In the process of 
enabling this feature, the 
VirtualMachineManagerImpl.getPoolListForVolumesForMigration method was 
re-written in a way that completely breaks at least one use case: Migrating a 
VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. It also seems 
that the managed-storage logic has been dropped for some reason in the new 
implementation.

Rafael – It seems that you worked on this feature. Would you be able to 
look into this and create a PR?

Thanks,
Mike




Problem with CLOUDSTACK-10240 (Cannot migrate local volume to shared storage)

2018-07-15 Thread Tutkowski, Mike
Hi,

While running managed-storage regression tests tonight, I noticed a problem 
that is not related to managed storage.

CLOUDSTACK-10240 is a ticket asking that we allow the migration of a virtual 
disk that’s on local storage to shared storage. In the process of enabling this 
feature, the VirtualMachineManagerImpl.getPoolListForVolumesForMigration method 
was re-written in a way that completely breaks at least one use case: Migrating 
a VM across compute clusters (at least supported in XenServer). If, say, a 
virtual disk resides on shared storage in the source compute cluster, we must 
be able to copy this virtual disk to shared storage in the destination compute 
cluster.

As the code is currently written, this is no longer possible. It also seems 
that the managed-storage logic has been dropped for some reason in the new 
implementation.

Rafael – It seems that you worked on this feature. Would you be able to look 
into this and create a PR?

Thanks,
Mike