Re: VM Data and root VHD missing after storage migration

2017-03-19 Thread Tejas Sheth
Thanks Prakash,

   Let me check the SM.log.





Re: VM Data and root VHD missing after storage migration

2017-03-16 Thread Prakash Sharma
I faintly remember that XS 6.2 had a bug where, if the destination SR was
full, volume migration would fail and leave the volume in a very vulnerable
state, in which the garbage collector could kick in and remove it.

You can go through SM.log on the XenServer host and verify what might have happened.
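The kind of SM.log check suggested here could look like the sketch below. The log lines are invented for illustration, and the exact format of /var/log/SM.log on a real XenServer host will differ, but garbage-collector activity is typically logged with an SMGC prefix that you can grep for (on the host, run the grep against /var/log/SM.log and its rotated copies instead of the sample file):

```shell
# Build a tiny sample log so the grep is demonstrable; the contents are
# illustrative only, not a real SM.log excerpt.
cat > /tmp/SM.log.sample <<'EOF'
Mar 15 10:02:11 SMGC: [1234] Found 1 VDIs for deletion
Mar 15 10:02:12 SMGC: [1234] Deleting unlinked VDI *0a1b2c3d-4e5f-6789-abcd-ef0123456789
Mar 15 10:02:13 SM: [1234] lock: released /var/lock/sm/...
EOF

# Look for garbage-collector entries and any deletion activity.
grep -E 'SMGC|[Dd]elet' /tmp/SM.log.sample
```

If the grep turns up a "Deleting unlinked VDI" line with your VDI uuid shortly after the failed migration, that would support the failed-migration-then-GC theory.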




Re: VM Data and root VHD missing after storage migration

2017-03-16 Thread Makrand
Hi Tejas,

You may want to read this email thread that I started in the past about the
same problem. I mainly moved the volumes while the VMs were running.

https://mail-archive.com/users@cloudstack.apache.org/msg20650.html

What is the value of the path column in the volume table for the missing
volumes? That value is the VDI uuid from when the volume was on the old storage.
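To make that path-to-file mapping concrete, here is a minimal sketch assuming a file-based (NFS/EXT) SR. Both uuids are hypothetical placeholders, and the query in the comment assumes the usual cloud.volumes columns; substitute your own values:

```shell
# The volumes.path value in the cloud DB is the VDI uuid on the SR, e.g.:
#   mysql -u cloud -p -e "SELECT id,name,path,state,removed FROM cloud.volumes WHERE instance_id=<vm_id>;"
VDI_UUID="0a1b2c3d-4e5f-6789-abcd-ef0123456789"   # hypothetical path value
SR_UUID="99998888-7777-6666-5555-444433332222"    # hypothetical SR uuid

# On a file-based SR the corresponding VHD file would live here:
VHD_FILE="/var/run/sr-mount/${SR_UUID}/${VDI_UUID}.vhd"
echo "$VHD_FILE"
```

If `ls` on that path fails on every SR, the file really is gone from storage rather than merely mispointed in the DB.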



--
Makrand


On Thu, Mar 16, 2017 at 11:30 PM, Yiping Zhang  wrote:

> How did you migrate your VM?  Did you do a VM live migration together with
> storage migration, or stop the VM, migrate the disks, then start the VM on
> the new storage?
>
> My situation was the first case: the VM was running during the migration, so
> we knew the disks were available on the new storage. The problem was that if
> we stopped those migrated VMs, they could not be restarted due to DB
> corruption, hence the need to fix the DB to point to the correct volumes.
>
> If you migrated your VM as in the second case, then your volume migration
> step failed.  You should find the relevant log entries about volume
> migration; hopefully they give you more information.
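The DB fix Yiping mentions (repointing a volume row at its new storage) could be sketched as below. The table and column names follow the usual cloud schema, but every id and uuid here is a hypothetical placeholder; verify the values against your own database and take a backup before running anything:

```shell
# Hypothetical values: the new primary storage pool id, the VDI uuid on the
# new SR, and the CloudStack volume row id. None of these are real.
NEW_POOL_ID=5
NEW_VDI_UUID="0a1b2c3d-4e5f-6789-abcd-ef0123456789"
VOLUME_ID=42

# Build the UPDATE statement for review before feeding it to mysql.
SQL="UPDATE cloud.volumes SET pool_id=${NEW_POOL_ID}, path='${NEW_VDI_UUID}' WHERE id=${VOLUME_ID};"
echo "$SQL"   # review, then run via: mysql -u cloud -p
```

Echoing the statement first, instead of piping it straight into mysql, gives a chance to sanity-check the row id and uuid against `xe vdi-list` output.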


Re: VM Data and root VHD missing after storage migration

2017-03-16 Thread Tejas Sheth
Hi,

   I searched a few documents, but all of them mention that the VHD will be
available on the destination storage (while the database still points to the
source storage).

  In our case the VHD file is missing from both the source and the destination storage.


VM Data and root VHD missing after storage migration

2017-03-15 Thread Tejas Sheth
Hi,

  We have XenServer 6.2 with CloudStack 4.3.1. We initiated a data disk
migration from one primary storage to another; after the operation we were
not able to start the VM.

  When we checked the XenServer storage, we could not find the VHD on any of
the primary storages.

  This is strange behavior that I have seen for the first time. Since
production data is missing, we need to identify the root cause.

NOTE: Even the root disk is not available on any of the storages, hence we
are not able to power on the VM at all.

  We have checked the database; metadata about the root and data disks is
still present there.

  The following error was captured when we started the VM with the data disk:
2017-03-09 11:49:26,964 WARN [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-203:ctx-ff911f73) Catch Exception: class
com.xensource.xenapi.Types$UuidInvalid due to The uuid you supplied was
invalid.
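Since the UuidInvalid error means xapi no longer knows the VDI uuid that CloudStack supplied, one way to double-check every SR for the file is sketched below. A temporary directory stands in for /var/run/sr-mount so the sketch is runnable anywhere, and the uuid is a hypothetical placeholder; on the real host, point the find at /var/run/sr-mount and also query xapi directly:

```shell
# Hypothetical VDI uuid (use the value from cloud.volumes.path).
VDI_UUID="0a1b2c3d-4e5f-6789-abcd-ef0123456789"

# Simulated SR mount tree; on a real XenServer host use /var/run/sr-mount.
SR_ROOT=$(mktemp -d)
mkdir -p "$SR_ROOT/sr-a" "$SR_ROOT/sr-b"
touch "$SR_ROOT/sr-a/${VDI_UUID}.vhd"     # pretend the VHD survived on one SR

# Search every SR directory for the VHD file.
find "$SR_ROOT" -name "${VDI_UUID}*"

# On the pool master you would also ask xapi whether the VDI record exists:
#   xe vdi-list uuid=${VDI_UUID} params=uuid,name-label,sr-uuid
```

If both the file search and `xe vdi-list` come back empty on every SR, the SM.log garbage-collector check suggested earlier in the thread is the next place to look.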

Thanks