I've tried, but I've been unable to reproduce this.  I'm not entirely
sure my environment is equivalent, though, so let me explain what I
did; if you have suggestions for other things to try, I can give them a
shot.

I created a brand-new x86_64 KVM VM with a 40 GB disk and 512 MB of
RAM.  I grabbed the Lucid Beta 1 64-bit server ISO and did a fresh
install.  When it came time to partition the disk, I created one VG on
the PV, then created 6 LVs on the VG:

root -> /
home -> /home
opt -> /opt
tmp -> /tmp
var -> /var
varlog -> /var/log

with various sizes ranging from about 5 GB to 10 GB apiece.  Everything
installed and booted perfectly fine.  No hang, all filesystems mounted
correctly.  In fact, boot was so blazingly fast I blinked and it was
done.
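For reference, a layout like this can also be reproduced by hand with the
standard LVM tools rather than through the installer.  The device path,
VG name, and sizes below are illustrative assumptions, not the exact
values from my install:

```shell
# Illustrative sketch only: build a similar six-LV layout by hand.
# /dev/vda5, the VG name "lucid", and the 5G size are assumed values.
sudo pvcreate /dev/vda5            # mark the partition as an LVM PV
sudo vgcreate lucid /dev/vda5      # one VG on the PV
for lv in root home opt tmp var varlog; do
    sudo lvcreate -L 5G -n "$lv" lucid   # one LV per mount point
done
sudo lvs lucid                     # list the six new logical volumes
```

Each LV then gets a filesystem and an /etc/fstab entry for its mount
point (/, /home, /opt, /tmp, /var, /var/log), which is what the
installer's partitioner does behind the scenes.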

I updated all packages and rebooted about 10 times.  I never saw a hang
or a failure to mount any partition, and boot never took longer than a
second or two.  I added --debug to mountall as in orgoj's comment #26;
mountall-stderr.log remained empty, and mountall-stdout.log showed no
indication of problems (on the contrary, it looked quite reasonable).
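For anyone else who wants to capture the same logs, the --debug flag can
be added to mountall's upstart job.  This is a sketch under the
assumption that Lucid's job file is /etc/init/mountall.conf with an
"exec mountall --daemon ..." line; verify your exec line before editing,
and redirect stdout/stderr to log files as described in comment #26:

```shell
# Assumption: the upstart job is /etc/init/mountall.conf and its exec
# line starts with "exec mountall --daemon"; check before running this.
sudo cp /etc/init/mountall.conf /etc/init/mountall.conf.bak
sudo sed -i 's/exec mountall --daemon/exec mountall --debug --daemon/' \
    /etc/init/mountall.conf
grep '^exec' /etc/init/mountall.conf   # confirm the edit took effect
```

After the next reboot, mountall-stdout.log and mountall-stderr.log
(captured per comment #26) should contain the debug output.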

Is this a reasonable test of the reported issue?  Is there anything else
I can try to get a better reproduction of the bug?

-- 
multiple LVM volumes not mounted in Lucid
https://bugs.launchpad.net/bugs/527666
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs