[Expired for lvm2 (Ubuntu) because there has been no activity for 60
days.]
** Changed in: lvm2 (Ubuntu)
Status: Incomplete => Expired
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/83832
Title:
  [feisty] mounting LVM root broken
--
I'm not seeing enough information here to proceed. Is this still
happening to anyone with a currently supported release?
** Changed in: lvm2 (Ubuntu)
Status: Confirmed => Incomplete
--
Most likely same as bug 147216
Thanks for reporting this bug and any supporting documentation. Since this bug
provides enough information for a developer to begin work, I'm going to
mark it as Confirmed and let them handle it from here. Thanks for taking the
time to make Ubuntu better!
BugSquad
I had this problem today when I upgraded an old Red Hat 9 system. This
was already using LVM for all the file systems except /boot. I wanted to
keep as many of my data file systems as possible but have a clean Ubuntu
server install, so I just reformatted the main install locations (/ /var
/tmp /usr) but left everything else untouched.
So an upgrade to Gutsy did not fix it, and I still had to do break=mount
with lvm vgscan && lvm vgchange -a y to be able to boot. But then I
noticed that it was an lvm1 volume and converted it to lvm2 with lvm
vgconvert... that appeared to fix it for both Gutsy and Feisty (although
I can't guarantee that).
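For anyone else in the same boat, the conversion step looks roughly like
this from the break=mount initramfs shell (a sketch; assumes the volume
group is called vg00, adjust to your setup):

  # find the volume groups
  lvm vgscan
  # rewrite the metadata in place from lvm1 to lvm2 format
  # (the logical volumes should be inactive during the conversion)
  lvm vgconvert -M2 vg00
  # then activate and continue booting
  lvm vgchange -a y vg00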
Had the same problem. Updated from edgy to feisty (was running a
custom-built 2.6.18 kernel, but with the initramfs generated by the
stock edgy process). After the update, the system got stuck on kernel
messages.
Using lilo here (x86-64; for some reason I had problems with grub
originally and lilo just works), with L[...]
Sorry... I fixed my problem... though it was likely the same problem
(symptoms and workaround were the same), I had forgotten that I had
moved / to a separate disk when I ran into problems with LVM in the
past... unfortunately I hadn't changed the kopt=root line in my menu.lst
to reflect the new location.
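For reference, the line in question is the commented kopt= entry in
/boot/grub/menu.lst that update-grub parses when it regenerates the
kernel stanzas; a sketch, with vg00-root standing in for the actual new
device:

  # /boot/grub/menu.lst -- the '# kopt=' line is read by update-grub
  # even though it looks commented out:
  # kopt=root=/dev/mapper/vg00-root ro

  # after editing, regenerate the boot entries:
  sudo update-grub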
Same story here.
This machine was dist-upgraded from dapper -> edgy -> feisty...
everything worked fine in edgy AFAIK... I never reboot the server unless
I do a kernel update, which I avoid because I need to recompile IVTV
modules and all of that crap for my mythbackend.
I was able to get the machine [...]
I am also having this problem. Thanks for the bug report BTW, I was able
to boot my server finally with the info here. I have to boot with
break=mount, then do the lvm vgscan / lvm vgchange -ay, then CTRL-D, and
bootup goes fine. If I don't break on the LILO boot, it freezes after
assembling the RAID arrays.
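For the archives, the whole workaround boils down to this (roughly; at
the LILO prompt, type the image label followed by the extra option,
e.g. "Linux break=mount"):

  # with break=mount on the kernel command line, an initramfs
  # (busybox) shell appears; in it:
  lvm vgscan          # rescan for volume groups
  lvm vgchange -a y   # activate all logical volumes
  # press CTRL-D to leave the shell and let the boot continue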
Oops.
libdevmapper1.02 -> 2:1.02.08-1ubuntu10
--
Is there any progress on this? I have the exact same situation, where I
need to supply break=mount and lvm vgscan / lvm vgchange -a y to boot. I
saw this on an edgy->feisty upgraded machine, then did a clean install
of feisty elsewhere (both have /boot as a 'normal' partition and lvm for
everything else).
Apologies, my typo (as the quote from the boot log indicates).
The boot line is root=/dev/mapper/vg00-root, and the box is still broken
into panic-stricken non-bootability; I'd still be grateful for further
assistance in getting it to boot again.
-jonathan
--
On Sat, Feb 17, 2007 at 01:32:42PM -0000, jh wrote:
> root=/dev/mapper/vg00/root break=premount
You should most likely change that to /dev/mapper/vg00-root. I'm not
saying that will fix all of it, but it will definitely not work without
that change.
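For anyone confused by the two spellings: device-mapper joins the VG
and LV names with a dash under /dev/mapper (doubling any dash that is
part of a name itself), while the slash form lives under /dev/<vg>/:

  /dev/mapper/vg00-root   # the device-mapper node
  /dev/vg00/root          # LVM's symlink to the same device
  # a VG named "my-vg" with LV "root" would be /dev/mapper/my--vg-root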
Cheers.
--
I think I've been hit by something like this too. Having also thought
that the raid1 stuff was fixed, I dist-upgraded last night, shut down,
and today find the box is broken beyond my ability even to get an
initramfs shell.
With the following (which booted on Friday without any break= option):
root=/dev/mapper/vg00/root [...]
I think I have fixed the bug that causes the assembly of the RAID
arrays in degraded mode, in mdadm_2.5.6-7ubuntu4 which I have just
uploaded.
Please let me know if it works ...
Thanks,
Ian.
--
Today I managed to boot this system again, so here are some updates:
* the mount error (invalid argument) was caused by a missing "-t reiserfs" (or
"-t ext2" for my /boot partition)
* with "break=premount", the raid array was correctly assembled twice (out
of 4 tries), in which case the LVM volumes [...]
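In other words, from the break=premount shell the mounts need the
filesystem type spelled out; a sketch assuming the layout above (the
device names are stand-ins for whatever your system uses):

  # busybox mount in the initramfs does not reliably guess the
  # filesystem type, so pass -t explicitly:
  mount -t reiserfs /dev/mapper/vg00-root /root
  mount -t ext2 /dev/sda1 /root/boot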
Just to let you know, recreating the RAID array in the same order with
the same options scared me but did work. Everything seems to be "fine";
I am just done playing with it for now, waiting for your analysis of the
initramfs to do more tests. Sorry for tonight's flood :)
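For the archives, "recreating in the same order with the same options"
means something like the following (device names and options are
examples only; they must match the original mdadm --create exactly, or
the data is gone):

  # double-check what the superblocks say first:
  mdadm --examine /dev/sda2 /dev/sdb2
  # then recreate the array over the same devices, in the same order:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2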
--
OK, now I have a problem :/
I wanted to recreate the initramfs outside of a 64-bit install, so I
fired up an edgy desktop CD: it does not support RAID/LVM anymore.
The dapper CD worked fine, and I recreated the initramfs. Upon reboot,
it is still broken.
The actual problem is that I tried to add other d[...]
Creating the block devices by hand did not help.
And the major/minor numbers for /dev/sda* are the same in the initramfs
as in my working system.
This looks pretty annoying :/
This initramfs was created in a chroot; maybe the problem lies there?
My working system is a 64-bit install, while the broken [...]
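For completeness, the chroot rebuild was along these lines (the mount
point and kernel version are examples; assumes the broken system's root
is mounted on /mnt):

  mount --bind /dev /mnt/dev
  mount --bind /proc /mnt/proc
  mount --bind /sys /mnt/sys
  # regenerate the initramfs for the installed kernel:
  chroot /mnt update-initramfs -u -k 2.6.20-15-generic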
ii  lvm-common  1.5.20ubuntu11    The Logical Volume Manager for Linux (common files)
ii  lvm2        2.02.06-2ubuntu8  The Linux Logical Volume Manager
ii  udev        [...]