OK. I decided this is probably impossible (or at least very difficult) since
no-one replied or even commented after over 12 hours, so I took the
alternative route of creating a new Volume Group, copying and deleting (if
possible) all the Logical Volumes in the 'corrupt' VG, rebooting using the
new VG, then deleting the old VG and moving everything to the right Physical
Volumes.
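
Roughly, the plan is the following; the device, VG and LV names below are
just placeholders rather than my actual layout, and the sizes obviously
have to match the originals:

  pvcreate /dev/sdc1                  # spare partition for the temporary VG
  vgcreate newvg /dev/sdc1
  lvcreate -L 20G -n root newvg       # one new LV for each LV in the old VG
  dd if=/dev/raid/root of=/dev/newvg/root bs=4M   # with the source LV unmounted
  lvremove raid/root                  # delete the original if LVM lets me
  # ...repeat for each LV, fix /etc/fstab and the bootloader, reboot on the
  # new VG, then vgremove the old VG and pvmove onto the intended PVs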

Of course, I'd still like to know if a damaged pvmove0 (pvmove, locked)
volume can be unlocked and deleted when 'pvmove --abort' does not do the
job.
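
(The only approach I can think of, which I haven't dared to try, is to
hand-edit a copy of the metadata backup, delete the whole pvmove0 section,
and restore it, something like:

  cp /etc/lvm/backup/raid /tmp/raid-edited
  # remove the pvmove0 { ... } block from /tmp/raid-edited by hand
  vgcfgrestore -f /tmp/raid-edited raid

but I have no idea whether that is safe, so I'd rather hear from someone
who knows the metadata format before risking it.)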

On 24 June 2011 11:28, Derek Dongray <de...@inverchapel.me.uk> wrote:

> I recently decided to move a volume group off of a mixed device (SATA+USB)
> md raid mirror after encountering the "bio too big" bug (which apparently is
> likely to cause data corruption, but is classified as WONTFIX! See
> https://bugzilla.redhat.com/show_bug.cgi?id=498162 or
> http://lmgtfy.com/?q=bio+too+big+wontfix).
>
> The sequence I started to use was as follows (a rough sketch of the actual
> commands appears below the list):
>
> 1. Fail USB drive and remove from the raid mirror (mdadm).
> 2. Repartition USB drive (parted).
> 3. Create Physical Volume on USB drive and add to Volume Group (lvm
> pvcreate & vgextend).
> 4. Move Volume Group off of raid mirror (lvm pvmove).
> 5. Remove raid mirror from Volume Group and stop it being a Physical
> Volume (lvm vgreduce & pvremove).
> 6. Stop raid mirror (mdadm).
> 7. Repartition SATA drive (parted).
> 8. Create new SATA-based Physical Volume and add to Volume Group (lvm
> pvcreate & vgextend).
> 9. Move the Volume Group off the USB-based Physical Volume (pvmove).
>
> It was at this point (after a few hours) that the system 'hung' and was
> totally unresponsive. After powering down and rebooting, I attempted to
> abort the pvmove (pvmove --abort) but the system locked up again and I
> rebooted a second time. This time, the abort appeared to succeed, but when I
> started the pvmove again I got a message about ignoring the locked volume
> 'pvmove0'. I proceeded with the remaining step:
>
> 10. Remove USB-based Physical Volume from the Volume Group and stop it
> being a Physical Volume (lvm vgreduce & pvremove)
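>
> As a rough sketch, the commands behind the steps above were along these
> lines; the device names (/dev/md0 for the mirror, /dev/sdb1 for the USB
> partition, /dev/sda2 for the new SATA partition) are illustrative rather
> than my exact layout:
>
>   mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # step 1
>   parted /dev/sdb                                      # step 2: repartition
>   pvcreate /dev/sdb1 && vgextend raid /dev/sdb1        # step 3
>   pvmove /dev/md0                                      # step 4
>   vgreduce raid /dev/md0 && pvremove /dev/md0          # step 5
>   mdadm --stop /dev/md0                                # step 6
>   parted /dev/sda                                      # step 7: repartition
>   pvcreate /dev/sda2 && vgextend raid /dev/sda2        # step 8
>   pvmove /dev/sdb1                                     # step 9 (hung here)
>   vgreduce raid /dev/sdb1 && pvremove /dev/sdb1        # step 10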
>
> The system appears to be running OK, but there is a zero-length, locked
> Logical Volume which I can't remove.
>
> # lvs -a raid/pvmove0
>   LV        VG   Attr   LSize Origin Snap%  Move Log Copy%  Convert
>   [pvmove0] raid p-C---    0
>
> # lvdisplay -a -v raid/pvmove0
>     Using logical volume(s) on command line
>   --- Logical volume ---
>   LV Name                /dev/raid/pvmove0
>   VG Name                raid
>   LV UUID                2aWJ0g-B5P8-6bAz-zKlk-f4dQ-nwn8-80n12v
>   LV Write Access        read/write
>   LV Status              NOT available
>   LV Size                0
>   Current LE             0
>   Segments               0
>   Allocation             contiguous
>   Read ahead sectors     auto
>
> # tail -n11 /etc/lvm/backup/raid
>                 pvmove0 {
>                         id = "2aWJ0g-B5P8-6bAz-zKlk-f4dQ-nwn8-80n12v"
>                         status = ["READ", "WRITE", "PVMOVE", "LOCKED"]
>                         flags = []
>                         allocation_policy = "contiguous"
>                         segment_count = 0
>                 }
>         }
> }
>
> The volume is locked (as shown in the metadata backup above), so
> "lvremove --force raid/pvmove0" simply complains "Can't remove locked LV
> pvmove0", while "pvmove --abort" (which would normally unlock it) does
> nothing because no pvmove is in progress.
>
> So the question (as in the subject) is how do I unlock and/or remove a
> locked volume?
>
> Thanks.
> --
> Derek.
>
>



-- 
Derek.
