That cleaned up the qcow2 image and qemu-img check now reports it as clean, but I still 
cannot start the VM; it fails with "Cannot prepare illegal volume".

Is there some metadata somewhere that needs to be cleaned or reset?
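
A quick note on that error, in case it helps: "Cannot prepare illegal volume" usually means 
the volume is still flagged LEGALITY=ILLEGAL in its vdsm metadata (a failed merge leaves it 
that way), and qemu-img only repairs the qcow2 file itself, not that flag. A rough sketch of 
how the flag could be confirmed on the SPM host is below; it assumes vdsClient is still 
available on 4.1, uses the domain/image/volume UUIDs from the error paths quoted further down 
in this thread, and leaves <spUUID> as a placeholder for the data-center (storage pool) UUID:

# Run on the SPM host; argument order is sdUUID spUUID imgUUID volUUID.
# <spUUID> is a placeholder; the other UUIDs come from the qemu-img error path below.
$ vdsClient -s 0 getVolumeInfo 024109d5-ea84-47ed-87e5-1c8681fdd177 <spUUID> \
    f7dea7bd-046c-4923-b5a5-d0c1201607fc ac540314-989d-42c2-9e7e-3907eedbe27f
# If the output shows "legality = ILLEGAL", repairing the qcow2 alone will
# likely not be enough to make the volume startable again.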




---- On Tue, 26 Feb 2019 16:25:22 +0000 Benny Zlotnik <bzlot...@redhat.com> 
wrote ----




It's because the VM is down; you can activate the LV manually with

$ lvchange -a y vgname/lvname



Remember to deactivate it again afterwards.
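
For example, with the VG and LV from the lvdisplay output quoted below, and assuming the 
volume is qcow2 as in the check output later in this thread, the full cycle would look 
roughly like:

$ lvchange -a y 70205101-c6b1-4034-a9a2-e559897273bc/74d27dd2-3887-4833-9ce3-5925dbd551cc
$ qemu-img check -f qcow2 /dev/70205101-c6b1-4034-a9a2-e559897273bc/74d27dd2-3887-4833-9ce3-5925dbd551cc
$ lvchange -a n 70205101-c6b1-4034-a9a2-e559897273bc/74d27dd2-3887-4833-9ce3-5925dbd551cc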






On Tue, Feb 26, 2019 at 6:15 PM Alan G <alan+ov...@griff.me.uk> wrote:







I tried that initially, but I'm not sure how to access the image on block storage. The LV 
is marked as NOT available in lvdisplay.



  --- Logical volume ---
  LV Path                /dev/70205101-c6b1-4034-a9a2-e559897273bc/74d27dd2-3887-4833-9ce3-5925dbd551cc
  LV Name                74d27dd2-3887-4833-9ce3-5925dbd551cc
  VG Name                70205101-c6b1-4034-a9a2-e559897273bc
  LV UUID                svAB48-Rgnd-0V2A-2O07-Z2Ic-4zfO-XyJiFo
  LV Write Access        read/write
  LV Creation host, time nyc-ovirt-01.redacted.com, 2018-05-15 12:02:41 +0000
  LV Status              NOT available
  LV Size                14.00 GiB
  Current LE             112
  Segments               9
  Allocation             inherit
  Read ahead sectors     auto





---- On Tue, 26 Feb 2019 15:57:47 +0000 Benny Zlotnik <bzlot...@redhat.com> wrote ----




I haven't found anything other than the leaks issue; you can try to run

$ qemu-img check -r leaks <img>

(make sure to have the image backed up first)
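
One way to do both the backup and the repair on the volume from the traceback further down, 
sketched under the assumption that the VM is down, the LV has been activated (see the 
lvchange note above), and that /backup is only an example destination with enough free space:

$ VOL=/dev/024109d5-ea84-47ed-87e5-1c8681fdd177/ac540314-989d-42c2-9e7e-3907eedbe27f
$ dd if=$VOL of=/backup/ac540314-989d-42c2-9e7e-3907eedbe27f.bak bs=1M   # raw copy of the whole LV first
$ qemu-img check -r leaks -f qcow2 $VOL                                  # repair the leaked clusters
$ qemu-img check -f qcow2 $VOL                                           # re-check; exit status 0 means no leaks remain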




On Tue, Feb 26, 2019 at 5:40 PM Alan G <alan+ov...@griff.me.uk> wrote:








Logs are attached. The first error from snapshot deletion is at 2019-02-26 
13:27:11,877Z in the engine log.





---- On Tue, 26 Feb 2019 15:11:39 +0000 Benny Zlotnik <bzlot...@redhat.com> wrote ----




Can you provide full vdsm & engine logs?



On Tue, Feb 26, 2019 at 5:10 PM Alan G <alan+ov...@griff.me.uk> wrote:








Hi,





I performed the following:



1. Shut down the VM.

2. Take a snapshot.

3. Create a clone from the snapshot.

4. Start the clone. The clone starts fine.

5. Attempt to delete the snapshot from the original VM; this fails.

6. Attempt to start the original VM; this fails with "Bad volume specification".



This was logged in VDSM during the snapshot deletion attempt.



2019-02-26 13:27:10,907+0000 ERROR (tasks/3) [storage.TaskManager.Task] (Task='67577e64-f29d-4c47-a38f-e54b905cae03') Unexpected error (task:872)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 879, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 333, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1892, in finalizeMerge
    merge.finalize(subchainInfo)
  File "/usr/share/vdsm/storage/merge.py", line 271, in finalize
    optimal_size = subchain.base_vol.optimal_size()
  File "/usr/share/vdsm/storage/blockVolume.py", line 440, in optimal_size
    check = qemuimg.check(self.getVolumePath(), qemuimg.FORMAT.QCOW2)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 157, in check
    out = _run_cmd(cmd)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 426, in _run_cmd
    raise QImgError(cmd, rc, out, err)
QImgError: cmd=['/usr/bin/qemu-img', 'check', '--output', 'json', '-f', 'qcow2', '/rhev/data-center/mnt/blockSD/024109d5-ea84-47ed-87e5-1c8681fdd177/images/f7dea7bd-046c-4923-b5a5-d0c1201607fc/ac540314-989d-42c2-9e7e-3907eedbe27f'], ecode=3, stdout={
    "image-end-offset": 52210892800,
    "total-clusters": 1638400,
    "check-errors": 0,
    "leaks": 323,
    "leaks-fixed": 0,
    "allocated-clusters": 795890,
    "filename": "/rhev/data-center/mnt/blockSD/024109d5-ea84-47ed-87e5-1c8681fdd177/images/f7dea7bd-046c-4923-b5a5-d0c1201607fc/ac540314-989d-42c2-9e7e-3907eedbe27f",
    "format": "qcow2",
    "fragmented-clusters": 692941
}
, stderr=Leaked cluster 81919 refcount=1 reference=0
Leaked cluster 81920 refcount=1 reference=0
Leaked cluster 81921 refcount=1 reference=0
etc.



Is there any way to fix these leaked clusters? 



Running oVirt 4.1.9 with FC block storage.



Thanks,



Alan





_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3JYJEOK3JYSEDYZ2WRU7ULJUWV4RO5EG/
