= new findings / it's working now =
I cloned the faulty system in order to play with it, and the cloned VM *boots
with no problem at all*. So there's clearly an issue with moving between
NFS and iSCSI SDs with a snapshot. I have both
I'm still wondering why oVirt moved an image and left it with errors, but
reported the move as successful in the GUI. If you think it's related to
the snapshot, it's really strange, as it's the first time I've seen this odd
behavior; I never got an issue like this when moving images+snaps.
on the other sid
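(If it helps to check whether the moved volume itself is damaged: for a qcow2
volume, qemu-img can verify the file structure. A minimal sketch, with the
path as a placeholder in the block-SD layout shown below:)

# verify the qcow2 metadata of the moved volume (path is illustrative)
qemu-img check /rhev/data-center/mnt/blockSD/<sd_uuid>/images/<img_uuid>/<vol_uuid>
# "No errors were found" means the qcow2 structure itself is intact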
Hi Nir, thanks for the reply, here's the output:

*(BASE)*
[root@node02 ~]# qemu-img info --backing-chain /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/52532d05-970e-4643-9774-96c31796062c
image: /rhev/data-center/mnt/blockSD/cec63cf0-9311-488d-b1fa-99c4405e8379/images/65ec515e-0aae-4fe6-a561-387929c7fb4d/52532d05-970e-4643-9774-96c31796062c
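(For comparison, an intact chain normally prints one stanza per volume,
roughly like the sketch below; the formats, sizes and backing path here are
made-up placeholders, not the real values from this system:)

image: .../images/<img_uuid>/<leaf_uuid>
file format: qcow2
virtual size: 1.1T (1181116006400 bytes)
disk size: 0
backing file: ../<img_uuid>/<base_uuid>

image: .../images/<img_uuid>/<base_uuid>
file format: raw
virtual size: 1.1T (1181116006400 bytes)
disk size: 0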
On Mon, May 14, 2018 at 5:19 PM Juan Pablo wrote:
ok, so I'm confirming that the image is wrong somehow:
with no snapshot, from inside the VM the disk size is reported as 750G.
with a snapshot, from inside the VM the disk size is reported as 1100G.
Both have no partitions on them, so I guess oVirt migrated the structure of
the 750G disk onto a 1100G disk. Any ideas?
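(One way to pin that down is to compare what the host presents with what the
guest kernel sees; a sketch, assuming the disk is vda in the guest and the
path is the volume's path on the host:)

# on the host: the virtual size qemu presents for the volume
qemu-img info /rhev/data-center/mnt/blockSD/<sd_uuid>/images/<img_uuid>/<vol_uuid> | grep 'virtual size'
# inside the guest: the size the kernel sees, in bytes
blockdev --getsize64 /dev/vda
# 750G = 805306368000 bytes, 1100G = 1181116006400 bytes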
2 clues:
- the original size of the disk was 750G, and it was extended a month ago to
1100G. The system rebooted fine several times and took the new size with
no problems.
- I ran fdisk from a CentOS 7 rescue CD and '/dev/vda' reported 750G. Then
I took a snapshot of the disk to play with recovery too
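(On a block SD the volume is an LVM LV; in oVirt's usual layout the VG is
named after the storage-domain UUID and the LV after the volume UUID, so the
size actually allocated can be read with lvs. Using the UUIDs quoted earlier
in this thread, and assuming that layout:)

lvs -o lv_name,lv_size,lv_tags cec63cf0-9311-488d-b1fa-99c4405e8379
# or just the one volume:
lvs cec63cf0-9311-488d-b1fa-99c4405e8379/52532d05-970e-4643-9774-96c31796062c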
I removed the auto-snapshot and still no luck. No bootable disk found. =(
Ideas?
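(From the rescue CD it may also be worth checking whether a boot sector is
present at all; a minimal sketch, assuming the disk shows up as /dev/vda:)

# dump the first sector; a BIOS-bootable MBR ends with the signature 55 aa
dd if=/dev/vda bs=512 count=1 2>/dev/null | hexdump -C | tail -n 2
# no 55 aa at the end of the sector would explain "no bootable disk found"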
2018-05-13 12:26 GMT-03:00 Juan Pablo :
Benny, thanks for your reply:
ok, so the first step is removing the snapshot. Then what
do you suggest?
2018-05-12 15:23 GMT-03:00 Nir Soffer :
On Sat, 12 May 2018, 11:32 Benny Zlotnik wrote:

> Using the auto-generated snapshot is generally a bad idea as it's
> inconsistent,

What do you mean by inconsistent?

> you should remove it before moving further
Using the auto-generated snapshot is generally a bad idea as it's
inconsistent; you should remove it before moving further.
On Fri, May 11, 2018 at 7:25 PM, Juan Pablo wrote:
I rebooted it with no luck, then I used the auto-gen snapshot, same luck.
Attaching the logs in gdrive.
Thanks in advance.
2018-05-11 12:50 GMT-03:00 Benny Zlotnik :
I see here a failed attempt:

2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-67) [bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd] EVENT_ID: USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-aut
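(The correlation id in brackets can be used to pull the whole flow of that
failed move out of the engine log, assuming the default log location:)

grep 'bd8eeb1d-f49a-4f91-a521-e0f31b4a7cbd' /var/log/ovirt-engine/engine.log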
Can you provide the logs, engine and vdsm?
Did you perform a live migration (the VM is running) or a cold one?
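(Assuming a standard install, the usual locations for those logs:)

# on the engine machine
/var/log/ovirt-engine/engine.log
# on the host that performed the move
/var/log/vdsm/vdsm.log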
On Fri, May 11, 2018 at 2:49 PM, Juan Pablo wrote:
> Hi! I'm struggling with an ongoing problem:
> after migrating a VM's disk from an iSCSI domain to an NFS domain, with
> oVirt reporting the migrati