On 25/05/2018, 12:29, "Tomáš Golembiovský" <tgole...@redhat.com> wrote:

    Hi,
    
    On Fri, 25 May 2018 08:11:13 +0000
    "Vrgotic, Marko" <m.vrgo...@activevideo.com> wrote:
    
    > Dear Nir, Arik and Richard,
    > 
    > I hope the discussion will continue somewhere I am able to join, at least as a watcher.
    
    please open a bug on VDSM. This is something we need to deal with during
    import -- or at least prevent users from importing.
       [Marko] Where? Email to users@ovirt.org? Do you need me to provide more information than what is in this email?
                       If possible, go with "deal with" instead of just preventing. My team very much enjoys the oVirt platform and its functionality, and we would love to see it grow further, internally and externally.
    
    > I have not seen any communication since Nir’s proposal. Please, if possible, allow me to track the direction in which you are leaning.
    > 
    > In the meantime, as experienced engineers, do you have any suggestions on how I could work around the current problem?
    > 
    > Rebasing the image using qemu-img to remove the backing file did not help (the VM was able to start, but reported No Boot Device), and I now think that is due to the image having a functional dependency on the base image.
    
    What do you mean by rebasing? On which backing image did you rebase it?
       [Marko] It was using unsafe mode - I just wanted to see what results I would get if I removed the backing file (an inexperienced move). This resulted in the VM being able to start, but ending up at No Bootable Device.
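For context, the unsafe rebase can be reproduced on scratch images (all paths below are temporary, not the production disks): `rebase -u` rewrites only the qcow2 header pointer and copies no data from the base, which matches the No Bootable Device symptom.

```shell
# Scratch-image sketch of the unsafe rebase (assumed temp paths, not the
# production disks). Requires qemu-img from the qemu-img/qemu-utils package.
command -v qemu-img >/dev/null 2>&1 || exit 0
cd "$(mktemp -d)"
qemu-img create -f qcow2 base.qcow2 1G
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
# -u edits only the qcow2 header; no data is copied from base.qcow2,
# so any cluster still held only by the base is lost to the guest
qemu-img rebase -u -b "" overlay.qcow2
qemu-img info overlay.qcow2   # the "backing file" line is now gone
```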

    I'm not too familiar with OpenStack, but I'd suggest doing a 'qemu-img convert' on the disk in OpenStack to squash the backing chain into a new (and complete) image, then assign this new disk to your VM and import it into oVirt.
        [Marko] Thank you. We will test it and check the result.
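Tomas's suggestion can be sketched on scratch images (temporary paths; in practice the source would be the actual instance disk on the Nova compute node): unlike the unsafe rebase, `qemu-img convert` reads through the whole backing chain and writes one standalone image.

```shell
# Flattening sketch on scratch images (assumed temp paths). Requires
# qemu-img. In practice the source is the instance disk on the compute
# node, and flat.qcow2 is the standalone disk you import into oVirt.
command -v qemu-img >/dev/null 2>&1 || exit 0
cd "$(mktemp -d)"
qemu-img create -f qcow2 base.qcow2 1G
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
# convert reads through the chain (base + overlay) and writes a
# complete qcow2 image with no backing-file reference
qemu-img convert -O qcow2 overlay.qcow2 flat.qcow2
qemu-img info flat.qcow2   # standalone: no "backing file" line
```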
    
        Tomas
    
    > 
    > Like with VMs in oVirt, where a template cannot be deleted while VMs created from that template still exist.
    > 
    > Please advise
    > 
    > — — —
    > Met vriendelijke groet / Best regards,
    > 
    > Marko Vrgotic
    > Sr. System Engineer
    > ActiveVideo
    > 
    > Tel. +31 (0)35 677 4131
    > email: m.vrgo...@activevideo.com
    > skype: av.mvrgotic.se
    > www.activevideo.com
    > ________________________________
    > From: Vrgotic, Marko
    > Sent: Thursday, May 24, 2018 5:26:30 PM
    > To: Nir Soffer
    > Cc: users@ovirt.org; Richard W.M. Jones; Arik Hadas
    > Subject: Re: [ovirt-users] Libvirt ERROR cannot access backing file after 
importing VM from OpenStack
    > 
    > Dear Nir,
    > 
    > I believe I understand now. The imported image is not a base image; it requires a backing file to work properly.
    > 
    > Maybe a silly move, but I tried to work around the problem by rebasing the image to remove the backing file dependency; it is clear now why I then saw “no bootable device found” when booting the imported VM.
    > 
    > I support your suggestion to solve the import by either importing the complete chain or recreating the image so that it is independent of the former chain.
    > 
    > If you decide to go this way, please let me know which issue to track and whether you need any more data from me.
    > 
    > I still need to solve the problem of moving 200+ VMs to oVirt.
    > 
    > Kindly awaiting further updates.
    > 
    > — — —
    > Met vriendelijke groet / Best regards,
    > 
    > Marko Vrgotic
    > Sr. System Engineer
    > ActiveVideo
    > 
    > Tel. +31 (0)35 677 4131
    > email: m.vrgo...@activevideo.com
    > skype: av.mvrgotic.se
    > www.activevideo.com
    > ________________________________
    > From: Nir Soffer <nsof...@redhat.com>
    > Sent: Thursday, May 24, 2018 5:13:47 PM
    > To: Vrgotic, Marko
    > Cc: users@ovirt.org; Richard W.M. Jones; Arik Hadas
    > Subject: Re: [ovirt-users] Libvirt ERROR cannot access backing file after 
importing VM from OpenStack
    > 
    > On Thu, May 24, 2018 at 6:06 PM Vrgotic, Marko <m.vrgo...@activevideo.com> wrote:
    > Dear Nir,
    > 
    > Thank you for quick reply.
    > 
    > OK, why will it not work?
    > 
    > Because the image has a backing file which is not accessible to oVirt.
    > 
    > I used qemu+tcp connection, via import method through engine admin UI.
    > 
    > The image was imported and converted according to the logs, but the invalid “backing file” entry remained.
    > 
    > Also, I used the same method before, connecting to a plain “libvirt kvm” host; the import and conversion went smoothly, with no backing file.
    > 
    > The image format is qcow2, which is supported by oVirt.
    > 
    > What am I missing? Should I use different method?
    > 
    > I guess this is not a problem on your side, but a bug on our side.
    > 
    > Either we should block the operation that cannot work, or fix the process
    > so we don't refer to a non-existing image.
    > 
    > When importing we have 2 options:
    > 
    > - import the entire chain: import all images in the chain, convert
    >  each image to an oVirt volume, and update the backing file of each layer
    > to point to the oVirt image.
    > 
    > - import the current state of the image into a new image, using either raw
    > or qcow2, but without any backing file.
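To illustrate what "the entire chain" means for the first option, `qemu-img info --backing-chain` prints one record per image in the chain (scratch paths; this is only a sketch of what an importer would have to copy and re-link):

```shell
# Chain-inspection sketch on scratch images (assumed temp paths).
# Requires qemu-img. Option 1 above would have to copy and re-link every
# image listed here; option 2 collapses them into one via convert.
command -v qemu-img >/dev/null 2>&1 || exit 0
cd "$(mktemp -d)"
qemu-img create -f qcow2 base.qcow2 1G
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 top.qcow2
# prints one info record for top.qcow2 and one for base.qcow2
qemu-img info --backing-chain top.qcow2
```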
    > 
    > Arik, do you know why we create qcow2 file with invalid backing file?
    > 
    > Nir
    > 
    > 
    > Kindly awaiting your reply.
    > 
    > — — —
    > Met vriendelijke groet / Best regards,
    > 
    > Marko Vrgotic
    > Sr. System Engineer
    > ActiveVideo
    > 
    > Tel. +31 (0)35 677 4131
    > email: m.vrgo...@activevideo.com
    > skype: av.mvrgotic.se
    > www.activevideo.com
    > ________________________________
    > From: Nir Soffer <nsof...@redhat.com>
    > Sent: Thursday, May 24, 2018 4:09:40 PM
    > To: Vrgotic, Marko
    > Cc: users@ovirt.org; Richard W.M. Jones; Arik Hadas
    > Subject: Re: [ovirt-users] Libvirt ERROR cannot access backing file after 
importing VM from OpenStack
    > 
    > 
    > 
    > On Thu, May 24, 2018 at 5:05 PM Vrgotic, Marko <m.vrgo...@activevideo.com> wrote:
    > 
    > Dear oVirt team,
    > 
    > 
    > 
    > When trying to start the imported VM, it fails with the following message:
    > 
    > 
    > 
    > ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ForkJoinPool-1-worker-2) [] EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 
is down with error. Exit message: Cannot access backing file 
'/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of 
storage file 
'/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba'
 (as uid:107, gid:107): No such file or directory.
    > 
    > 
    > 
    > Platform details:
    > 
    > Ovirt SHE
    > 
    > Version 4.2.2.6-1.el7.centos
    > 
    > GlusterFS, unmanaged by oVirt.
    > 
    > 
    > 
    > According to the log files, the VM was imported and converted from OpenStack successfully (one WARN, related to a different MAC address):
    > 
    > 2018-05-24 12:03:31,028+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand]
 (default task-29) [cc5931a2-1af5-4d65-b0b3-362588db9d3f] FINISH, 
GetVmsNamesFromExternalProviderVDSCommand, return: [VM [instance-0001f94c], VM 
[instance-00078f6a], VM [instance-00000814], VM [instance-0001f9ac], VM 
[instance-000001ff], VM [instance-0001f718], VM [instance-00000673], VM 
[instance-0001ecf2], VM [instance-00078d38]], log id: 7f178a5e
    > 
    > 2018-05-24 12:48:33,722+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsNamesFromExternalProviderVDSCommand]
 (default task-8) [103d56e1-7449-4853-ae50-48ee94d43d77] FINISH, 
GetVmsNamesFromExternalProviderVDSCommand, return: [VM [instance-0001f94c], VM 
[instance-00078f6a], VM [instance-00000814], VM [instance-0001f9ac], VM 
[instance-000001ff], VM [instance-0001f718], VM [instance-00000673], VM 
[instance-0001ecf2], VM [instance-00078d38]], log id: 3aa178c5
    > 
    > 2018-05-24 12:48:47,291+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand]
 (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] START, 
GetVmsFullInfoFromExternalProviderVDSCommand(HostName = aws-ovhv-01, 
GetVmsFromExternalProviderParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d',
 url='qemu+tcp://root@172.19.0.12/system', 
username='null', originType='KVM', namesOfVms='[instance-00000673]'}), log id: 
4c445109
    > 
    > 2018-05-24 12:48:47,318+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVmsFullInfoFromExternalProviderVDSCommand]
 (default task-17) [4bf555c7-9d64-4ecc-b059-8a60a4b27bdd] FINISH, 
GetVmsFullInfoFromExternalProviderVDSCommand, return: [VM [instance-00000673]], 
log id: 4c445109
    > 
    > 2018-05-24 12:49:20,466+02 INFO  
[org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] 
(default task-41) [14edb003-b4a0-4355-b3de-da2b68774fe3] Lock Acquired to 
object 'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME, 
1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
    > 
    > 2018-05-24 12:49:20,586+02 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-653408) 
[14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID: MAC_ADDRESS_IS_EXTERNAL(925), 
VM instance-00000673 has MAC address(es) fa:16:3e:74:18:50, which is/are out of 
its MAC pool definitions.
    > 
    > 2018-05-24 12:49:21,021+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-653408) 
[14edb003-b4a0-4355-b3de-da2b68774fe3] EVENT_ID: 
IMPORTEXPORT_STARTING_IMPORT_VM(1,165), Starting to import Vm instance-00000673 
to Data Center AVEUNL, Cluster AWSEUOPS
    > 
    > 2018-05-24 12:49:28,816+02 INFO  
[org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] 
(EE-ManagedThreadFactory-engine-Thread-653407) [] Lock freed to object 
'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME, 
1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]', sharedLocks=''}'
    > 
    > 2018-05-24 12:49:28,911+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConvertVmVDSCommand] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-2) [2047673e] START, 
ConvertVmVDSCommand(HostName = aws-ovhv-01, 
ConvertVmVDSParameters:{hostId='cbabe1e8-9e7f-4c4b-be9c-49154953564d', 
 url='qemu+tcp://root@172.19.0.12/system', 
username='null', vmId='1f0b608f-7cfc-4b27-a876-b5d8073011a1', 
vmName='instance-00000673', 
storageDomainId='2607c265-248c-40ad-b020-f3756454839e', 
storagePoolId='5a5de92c-0120-0167-03cb-00000000038a', virtioIsoPath='null', 
compatVersion='null', Disk0='816ac00f-ba98-4827-b5c8-42a8ba496089'}), log id: 
53408517
    > 
    > 2018-05-24 12:49:29,010+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-2) [2047673e] EVENT_ID: 
IMPORTEXPORT_STARTING_CONVERT_VM(1,193), Starting to convert Vm 
instance-00000673
    > 
    > 2018-05-24 12:52:57,982+02 INFO  
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-16) 
[df1d5f72-eb17-46e4-9946-20ca9809b54c] Failed to Acquire Lock to object 
'EngineLock:{exclusiveLocks='[instance-00000673=VM_NAME]', 
sharedLocks='[1f0b608f-7cfc-4b27-a876-b5d8073011a1=VM]'}'
    > 
    > 2018-05-24 12:59:24,575+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-20) [2047673e] EVENT_ID: 
IMPORTEXPORT_IMPORT_VM(1,152), Vm instance-00000673 was imported successfully 
to Data Center AVEUNL, Cluster AWSEUOPS
    > 
    > 
    > 
    > Then trying to start the VM fails with the following messages:
    > 
    > 2018-05-24 13:00:32,085+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-653729) [] EVENT_ID: 
USER_STARTED_VM(153), VM instance-00000673 was started by admin@internal-authz 
(Host: aws-ovhv-06).
    > 
    > 2018-05-24 13:00:33,417+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-9) [] VM 
'1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) moved from 
'WaitForLaunch' --> 'Down'
    > 
    > 2018-05-24 13:00:33,436+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(ForkJoinPool-1-worker-9) [] EVENT_ID: VM_DOWN_ERROR(119), VM instance-00000673 
is down with error. Exit message: Cannot access backing file 
'/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce' of 
storage file 
'/rhev/data-center/mnt/glusterSD/aws-gfs-01.awesome.lan:_gv0__he/2607c265-248c-40ad-b020-f3756454839e/images/816ac00f-ba98-4827-b5c8-42a8ba496089/8ecfcd5b-db67-4c23-9869-0e20d7553aba'
 (as uid:107, gid:107): No such file or directory.
    > 
    > 2018-05-24 13:00:33,437+02 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-9) [] add VM 
'1f0b608f-7cfc-4b27-a876-b5d8073011a1'(instance-00000673) to rerun treatment
    > 
    > 2018-05-24 13:00:33,455+02 WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-653732) [] EVENT_ID: 
USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM instance-00000673 on Host 
aws-ovhv-06.
    > 
    > 2018-05-24 13:00:33,460+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-653732) [] EVENT_ID: 
USER_FAILED_RUN_VM(54), Failed to run VM instance-00000673  (User: 
admin@internal-authz).
    > 
    > 
    > 
    > Checking on the Gluster volume, the directory and files exist and the permissions are in order:
    > 
    > 
    > 
    > [root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]
    > 
    > -rw-rw----.  1 vdsm kvm  14G May 24 12:59 
8ecfcd5b-db67-4c23-9869-0e20d7553aba
    > 
    > -rw-rw----.  1 vdsm kvm 1.0M May 24 12:49 
8ecfcd5b-db67-4c23-9869-0e20d7553aba.lease
    > 
    > -rw-r--r--.  1 vdsm kvm  310 May 24 12:49 
8ecfcd5b-db67-4c23-9869-0e20d7553aba.meta
    > 
    > 
    > 
    > Then I checked the image info and noticed that the backing file entry points to a non-existing location, which does not and should not exist on oVirt hosts:
    > 
    > 
    > 
    > [root@aws-ovhv-01 816ac00f-ba98-4827-b5c8-42a8ba496089]# qemu-img info 
8ecfcd5b-db67-4c23-9869-0e20d7553aba
    > 
    > image: 8ecfcd5b-db67-4c23-9869-0e20d7553aba
    > 
    > file format: qcow2
    > 
    > virtual size: 160G (171798691840 bytes)
    > 
    > disk size: 14G
    > 
    > cluster_size: 65536
    > 
    > backing file: 
/var/lib/nova/instances/_base/2f4f8c5fc11bb83bcab03f4c829ddda4da8c0bce
    > 
    > Format specific information:
    > 
    >     compat: 1.1
    > 
    >     lazy refcounts: false
    > 
    >     refcount bits: 16
    > 
    >     corrupt: false
    > 
    > 
    > 
    > Can somebody advise me how to fix or address this, as I need to import 200+ VMs from OpenStack to oVirt?
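Given the scale of the migration, a pre-import triage pass over the source disks might help. A sketch (the `check_backing` helper name and the Nova path in the usage note are assumptions; it needs qemu-img and python3):

```shell
# Triage sketch for a bulk migration. check_backing is a hypothetical
# helper; the Nova path in the usage note is an assumption. Requires
# qemu-img and python3. It reports qcow2 disks whose recorded backing
# file no longer resolves on this host.
check_backing() {    # usage: check_backing <directory of disk images>
    for disk in "$1"/*; do
        # qemu-img info does not open the backing file itself, so this
        # works even when the backing file is already missing
        bf=$(qemu-img info --output=json "$disk" 2>/dev/null |
             python3 -c 'import json,sys; print(json.load(sys.stdin).get("backing-filename",""))' 2>/dev/null)
        if [ -n "$bf" ] && [ ! -e "$bf" ]; then
            echo "dangling backing file: $disk -> $bf"
        fi
    done
}
# e.g. on a compute node: check_backing /var/lib/nova/instances/<uuid>
```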
    > 
    > Surely this qcow2 file will not work in oVirt.
    > 
    > I wonder how you did the import?
    > 
    > Nir
    > 
    
    
    -- 
    Tomáš Golembiovský <tgole...@redhat.com>
    

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
