Thanks Prakash,
Let me check the SM.log.
On Fri, Mar 17, 2017 at 4:41 AM, Prakash Sharma wrote:
I faintly remember that XS 6.2 had a bug where, if the destination SR is
full, volume migration used to fail, and that used to leave the volume in a
very vulnerable situation where the garbage collector would kick in and
remove those volumes.
You can go through SM.log on the XenServer and verify what might have happened.
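For anyone following along, something like this is how I'd scan SM.log for garbage-collector or delete activity on the affected VDI. The UUID below is a placeholder (substitute the volume's path value from the CloudStack database), and SM.log lives in /var/log on XenServer 6.x:

```shell
# Placeholder VDI UUID -- replace with the missing volume's path value.
VDI_UUID="12345678-abcd-ef00-0000-000000000000"
# SM.log location on XenServer 6.x; override via SM_LOG if rotated/copied.
LOG="${SM_LOG:-/var/log/SM.log}"

# Look for GC/delete entries mentioning that VDI, in either order.
grep -iE "(gc|delete|remove).*${VDI_UUID}|${VDI_UUID}.*(gc|delete|remove)" "$LOG" 2>/dev/null \
  || echo "no matching GC/delete entries for ${VDI_UUID} in ${LOG}"
```

Rotated logs (SM.log.1, SM.log.*.gz) are worth checking too if the migration happened a while ago.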
Hi Tejas,
You may want to read this email thread that I initiated in the past around the
same problem. I mainly moved the volumes with the VMs running.
https://mail-archive.com/users@cloudstack.apache.org/msg20650.html
What is the path column value in the volumes table for the missing volumes?
That value represents the VDI UUID on the primary storage.
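If it helps: on XenServer that path value is the VDI UUID, and on a file-based SR (NFS/EXT) the on-disk file is named <uuid>.vhd under the SR mount. A rough check across all mounted SRs, assuming the default /var/run/sr-mount location (the UUID below is a placeholder):

```shell
# Placeholder -- substitute the volumes.path value for the missing volume.
VOLUME_PATH="12345678-abcd-ef00-0000-000000000000"
# Default mount point for file-based SRs on XenServer.
SR_MOUNT_ROOT="${SR_MOUNT_ROOT:-/var/run/sr-mount}"

# Check every mounted SR for a VHD file named after that UUID.
found=0
for sr in "${SR_MOUNT_ROOT}"/*/; do
    if [ -f "${sr}${VOLUME_PATH}.vhd" ]; then
        echo "found: ${sr}${VOLUME_PATH}.vhd"
        found=1
    fi
done
[ "$found" -eq 1 ] || echo "no VHD named ${VOLUME_PATH}.vhd on any SR under ${SR_MOUNT_ROOT}"
```

For LVM-based SRs the VDI is a logical volume rather than a .vhd file, so `lvs` on the host would be the equivalent check.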
How did you migrate your VM? Did you do VM live migration with storage
migration, or stop the VM, migrate the disks, then start the VM on the new
storage?
My situation was the first case: the VM was running during migration, so we
knew the disks were available on the new storage. The problem was that if we
stopped the VM, the database still pointed to the old storage and the VM could
not be started.
Hi,
I searched a few documents, but all of them mention that the VHD will be
available on the destination storage (while the database will still point to
the source storage).
In our case the VHD file is missing from both the source and the destination
storage.
This is a known issue, and it was fixed no later than version 4.5.2. If you
search this list for the subject line “corrupt DB after VM live migration with
storage migration”, you should see some past discussions of it.
Good luck!
Yiping
On 3/15/17, 10:51 AM, "Tejas Sheth" wrote:
Hi,
We have XenServer 6.2 with CloudStack 4.3.1. We had initiated a data
disk migration from one primary storage to another. After the operation we
were not able to start the VM.
When we checked the XenServer storage, we were not able to find the VHD
on any of the primary storage.
It's a