Public bug reported:

Binary package hint: lvm2

I have a RAID10 md array on which I have a single PV configured.
This was created with a simple pvcreate on the md device.
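
For reference, the setup was roughly as follows. The md device node and the
LV name below are illustrative (I don't have the exact history to hand); only
the VG name raid10_vg is taken from my config:

  # rough reconstruction of the original setup; names partly guessed
  $ pvcreate /dev/md0
  $ vgcreate raid10_vg /dev/md0
  $ lvcreate -n data_lv -L 100G raid10_vg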

I extended a logical volume to the maximum size of the PV, and this appeared
to succeed.
I then tried to live-resize the underlying ext4 filesystem to fill the LV.
This brought the computer down, after first reporting that the ext4
filesystem was already as big as possible.
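
The commands were essentially of this form (exact sizes not recorded; the LV
name is again illustrative):

  # grow the LV into all remaining free extents on the PV
  $ lvextend -l +100%FREE /dev/raid10_vg/data_lv
  # then grow the mounted ext4 filesystem online to fill the LV
  $ resize2fs /dev/raid10_vg/data_lv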

On reboot, I found that the LV was reported as suspended (seen via cat
/etc/lvm/backup/raid10_vg).
dmesg showed that the device-mapper layer had noticed that the PV wanted more
space than the underlying device had available, and failed, leaving the LV
suspended.
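
For anyone hitting the same thing, this is roughly how I confirmed the state
(exact output not reproduced here):

  # device-mapper view of the volumes
  $ dmsetup info -c
  # LVM view; a suspended LV shows up in the lv_attr state character
  $ lvs -o lv_name,vg_name,lv_attr,lv_size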

Needless to say, there were no useful error messages on screen, and only
hours of digging through logs and Google uncovered the problem.

Fixed by resizing the LV down to a size smaller than both the actual capacity
of the RAID10 media and the (larger) size the PV claimed to have.
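
Roughly what the fix looked like; the size and LV name are illustrative, and
the reduce is only safe here because the filesystem had never actually been
grown into the bogus space (and the new size is still at least as large as
the filesystem):

  # shrink the over-extended LV back below the real RAID10 capacity
  $ lvreduce -L 900G /dev/raid10_vg/data_lv
  # reactivate the LV
  $ lvchange -ay /dev/raid10_vg/data_lv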

I am also getting "open file descriptor" messages whenever I run LVM
commands. These continue even now that everything is working again, and I
never saw them before.

I've no idea how to diagnose the problem, and I don't intend to repeat it! I
wonder, though, whether there is a GiB/GB unit clash, or whether the LVM
layer assumes a block size that differs from the disk's block size. The md
device will have a certain stride width according to the RAID stripe size,
but I don't think I set it up with a non-standard block size.
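
If it were a unit clash, the error would be in the region of 7.4%, since a
"gigabyte" is 2^30 bytes in one convention and 10^9 in the other. For an
illustrative figure of 931 "G":

  $ echo $(( 931 * 1024 * 1024 * 1024 ))   # 931 GiB in bytes
  999653638144
  $ echo $(( 931 * 1000 * 1000 * 1000 ))   # 931 GB in bytes
  931000000000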

LVM really should be more robust. If I can manually check in five different
places how many bytes are available, why isn't the code doing the same check
before wildly over-allocating space?
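
For the record, these are the kinds of cross-checks I mean; asking every
layer for raw bytes sidesteps any GiB/GB ambiguity (md device name
illustrative):

  # real capacity of the md device, in bytes
  $ blockdev --getsize64 /dev/md0
  $ cat /proc/mdstat
  # what LVM believes, in bytes
  $ pvs --units b -o pv_name,pv_size,pv_free
  $ vgs --units b -o vg_name,vg_size,vg_free
  $ lvs --units b -o lv_name,lv_size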

** Affects: lvm2 (Ubuntu)
     Importance: Undecided
         Status: New

-- 
lvresize created LV bigger than the media
https://bugs.launchpad.net/bugs/500536