I recently decided to move a volume group off a mixed-device (SATA+USB) md raid mirror after encountering the "bio too big" bug (which can apparently cause data corruption, yet is classified as WONTFIX! See https://bugzilla.redhat.com/show_bug.cgi?id=498162 or http://lmgtfy.com/?q=bio+too+big+wontfix).
The sequence I used was:

1. Fail the USB drive and remove it from the raid mirror (mdadm).
2. Repartition the USB drive (parted).
3. Create a Physical Volume on the USB drive and add it to the Volume Group (lvm pvcreate & vgextend).
4. Move the Volume Group off the raid mirror (lvm pvmove).
5. Remove the raid mirror from the Volume Group and stop it being a Physical Volume (lvm vgreduce & pvremove).
6. Stop the raid mirror (mdadm).
7. Repartition the SATA drive (parted).
8. Create a new SATA-based Physical Volume and add it to the Volume Group (lvm pvcreate & vgextend).
9. Move the Volume Group off the USB-based Physical Volume (pvmove).

It was at this point (after a few hours) that the system 'hung' and was totally unresponsive. After powering down and rebooting, I attempted to abort the pvmove (pvmove --abort), but the system locked up again and I rebooted a second time. This time the abort appeared to succeed, but when I started the pvmove again I got a message about ignoring the locked volume 'pvmove0'. I proceeded with the remaining step:

10. Remove the USB-based Physical Volume from the Volume Group and stop it being a Physical Volume (lvm vgreduce & pvremove).

The system appears to be running OK, but there is a zero-length, locked Logical Volume which I can't remove.
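For reference, the sequence above corresponds roughly to the commands below. This is only a sketch: the device names (/dev/sdb for the USB drive, /dev/sda for the SATA drive, /dev/md0 for the mirror) are placeholders, not necessarily what I actually typed; only the VG name 'raid' is real.

```shell
# 1. Fail the USB member and remove it from the mirror
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# 2. Repartition the USB drive
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary 1MiB 100%

# 3. Make the new partition a PV and add it to the VG
pvcreate /dev/sdb1
vgextend raid /dev/sdb1

# 4. Move all allocated extents off the degraded mirror
pvmove /dev/md0

# 5. Drop the mirror from the VG and stop it being a PV
vgreduce raid /dev/md0
pvremove /dev/md0

# 6. Stop the mirror
mdadm --stop /dev/md0

# 7-8. Repartition the SATA drive, make it a PV, add it to the VG
parted /dev/sda mklabel msdos
parted /dev/sda mkpart primary 1MiB 100%
pvcreate /dev/sda1
vgextend raid /dev/sda1

# 9. Move the extents off the USB PV (this is where the hang occurred)
pvmove /dev/sdb1
```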
# lvs -a raid/pvmove0
  LV        VG   Attr   LSize Origin Snap%  Move Log Copy%  Convert
  [pvmove0] raid p-C---     0

# lvdisplay -a -v raid/pvmove0
    Using logical volume(s) on command line
  --- Logical volume ---
  LV Name                /dev/raid/pvmove0
  VG Name                raid
  LV UUID                2aWJ0g-B5P8-6bAz-zKlk-f4dQ-nwn8-80n12v
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                0
  Current LE             0
  Segments               0
  Allocation             contiguous
  Read ahead sectors     auto

# tail -n11 /etc/lvm/backup/raid
	pvmove0 {
		id = "2aWJ0g-B5P8-6bAz-zKlk-f4dQ-nwn8-80n12v"
		status = ["READ", "WRITE", "PVMOVE", "LOCKED"]
		flags = []
		allocation_policy = "contiguous"
		segment_count = 0
	}
}
}

The volume is locked (as the backup metadata above shows), so "lvremove --force raid/pvmove0" simply complains "Can't remove locked LV pvmove0", while "pvmove --abort" (which would normally unlock it) does nothing because no pvmove is in progress. So the question (as in the subject) is: how do I unlock and/or remove a locked volume?

Thanks.
-- Derek.
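For what it's worth, the only workaround I've seen suggested (untested by me, hence this mail) is to hand-edit a copy of the metadata backup, removing the pvmove0 section and its PVMOVE/LOCKED status, and restore it with vgcfgrestore. A sketch, assuming the VG name 'raid' from above and working on a copy so the original backup file is untouched:

```shell
# Work on a copy of the current metadata backup
cp /etc/lvm/backup/raid /tmp/raid.edited

# Hand-edit /tmp/raid.edited: delete the entire "pvmove0 { ... }"
# section shown above (and any stray PVMOVE/LOCKED status entries).

# Restore the edited metadata into the VG; --force may be required
# when leftover pvmove state is present in the current metadata.
vgcfgrestore --force -f /tmp/raid.edited raid

# Verify that [pvmove0] no longer appears
lvs -a raid
```

I'd appreciate confirmation from someone who knows whether this is safe before I try it.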