Another comment here: the part that is broken is letting CloudStack pick the 
primary storage on the destination side for you. That code no longer exists in 
4.11.1.
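To make the distinction concrete: the path that reportedly still works is the one where the caller names the destination primary storage explicitly via the migrateVirtualMachineWithVolume API, rather than relying on CloudStack to choose a pool. The sketch below only composes such a request as a query string; the UUIDs are placeholders and the indexed map-parameter syntax should be verified against the API reference for your CloudStack version:

```shell
# Sketch of a migrateVirtualMachineWithVolume call that names the destination
# primary storage explicitly instead of letting CloudStack pick it.
# All UUIDs are placeholders, not real resources.
VM_ID="placeholder-vm-uuid"
HOST_ID="placeholder-host-uuid"    # a host in the destination cluster
VOL_ID="placeholder-volume-uuid"   # one of the VM's volumes
POOL_ID="placeholder-pool-uuid"    # a cluster-scoped pool in the destination cluster

# CloudStack map parameters are sent as indexed key/value pairs:
QUERY="command=migrateVirtualMachineWithVolume"
QUERY="${QUERY}&virtualmachineid=${VM_ID}"
QUERY="${QUERY}&hostid=${HOST_ID}"
QUERY="${QUERY}&migrateto[0].volume=${VOL_ID}"
QUERY="${QUERY}&migrateto[0].pool=${POOL_ID}"

echo "${QUERY}"
```

The same mapping can be built in the web GUI by choosing a specific destination pool per volume; the regression under discussion only affects the automatic pool-selection path.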

On 7/16/18, 9:24 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:

    To follow up on this a bit: yes, you should be able to migrate a VM and its 
storage from one cluster to another today using non-managed (traditional) 
primary storage with XenServer (both the source and destination primary 
storages would be cluster-scoped). However, that is one of the features broken 
in 4.11.1 that we are discussing in this thread.
    
    On 7/16/18, 9:20 PM, "Tutkowski, Mike" <mike.tutkow...@netapp.com> wrote:
    
        For a bit of info on what managed storage is, please take a look at 
this document:
        
        
https://www.dropbox.com/s/wwz2bjpra9ykk5w/SolidFire%20in%20CloudStack.docx?dl=0
        
        The short answer is that you can have zone-wide managed storage (for 
XenServer, VMware, and KVM). However, there is currently no zone-wide 
non-managed storage for XenServer.
        
        On 7/16/18, 6:20 PM, "Yiping Zhang" <yzh...@marketo.com> wrote:
        
            I assume by "managed storage" you guys mean primary storage, 
either zone-wide or cluster-wide.
            
            For the XenServer hypervisor, ACS does not support zone-wide 
primary storage yet. Still, I can live-migrate a VM with data disks between 
clusters, with storage migration, from the web GUI today. So your statement 
below does not reflect the current behavior of the code.
            
                
                       - If I want to migrate a VM across clusters, but if at 
least one of its
                       volumes is placed in a cluster-wide managed storage, the 
migration is not
                       allowed. Is that it?
                
                [Mike] Correct