** Summary changed:
- boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot)
missing device tables at mount time
+ LVM based boot fails on degraded raid due to missing device tables at premount
** Summary changed:
- LVM based boot fails on degraded raid due to missing device tables at premount
attached dmesg output for normal boot up to premount
** Attachment added: "dmesg_upto_premount_normal.txt"
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1351528/+attachment/4168674/+files/dmesg_upto_premount_normal.txt
--
The primary difference that I see for degraded boot is this error just ahead of
activating volume group 'dataone':
'watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'' (err) 'device-mapper: reload ioctl on failed: Invalid argument'
and also after that point the 55-dm and 56-lvm rules d
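For anyone stuck at the same point, the usual manual recovery (a sketch, assuming the volume group name 'dataone' from above; not part of the original report) is to activate the VG by hand at the initramfs prompt and continue the boot:
  (initramfs) lvm vgscan            # rescan for volume groups
  (initramfs) lvm vgchange -a y     # activate them, creating the device tables
  (initramfs) exit                  # resume booting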
attached dmesg output for degraded boot up to premount
** Attachment added: "dmesg_upto_premount_degraded.txt"
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1351528/+attachment/4168673/+files/dmesg_upto_premount_degraded.txt
--
adding the kernel parameter lvmwait=2 also works around this issue
(source of the idea:
http://serverfault.com/questions/567579/boot-failure-with-root-on-md-raid1-lvm-udev-event-timing)
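A sketch of making that workaround persistent via the standard Ubuntu GRUB mechanism (append to whatever options are already set; this is an illustration, not part of the original comment):
  # /etc/default/grub
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash lvmwait=2"
  # then regenerate grub.cfg:
  sudo update-grub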
--
** Summary changed:
- boot fails on raid (md raid1) + LVM (combined / + /boot) + degraded
+ boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot)
missing device tables at mount time
--
here is an interesting find:
replacing the missing disk (mdadm --add /dev/md0 /dev/sdb1) and waiting for the
sync to complete leads to a properly booting system.
the system continued to boot even after I --failed and --removed the second disk.
I could not return the system to its original boot fail state u
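For reference, that sequence amounts to the following (device names as in the comment; run as root, watching /proc/mdstat to know when the resync is done):
  mdadm --add /dev/md0 /dev/sdb1       # re-add the missing member
  cat /proc/mdstat                     # wait until the resync completes
  mdadm /dev/md0 --fail /dev/sdb1      # then fail it again...
  mdadm /dev/md0 --remove /dev/sdb1    # ...and remove it from the array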
Public bug reported:
Trusty installation is combined root + /boot within LVM on top of an mdraid
(metadata 1.x) RAID1 with one missing disk (degraded).
[method: basically create the setup in a shell first, then point the installer
at the LVM; at the end of the install, create a chroot and add the mdadm package]
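A rough sketch of that pre-install setup (the device names, size, and LV name here are illustrative assumptions; only the VG name 'dataone' appears elsewhere in the report):
  # create a RAID1 that is degraded from the start (second member 'missing')
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.2 /dev/sda1 missing
  # layer LVM on top and carve out the combined / + /boot root volume
  pvcreate /dev/md0
  vgcreate dataone /dev/md0
  lvcreate -L 20G -n root dataone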
boot fails wi
Most people applied the fixed package manually and moved on over a year
ago. Are you really surprised?
I'm not here to debate whether it is right or wrong to report a bug,
or provide info regarding a bug, and then not be involved every step of
the way. People have the right to contribute or
nutznboltz, please post information that pertains to the bug. All the
extra stuff you are tossing in serves no purpose.
Thanks.
--
Clint Byrum: Thanks for the explanation.
Could it be something as simple as this: network-interface.conf emits
net-device-up for loopback, mountall-net.conf fires in turn and then
fails, and then mountall (or plymouth?) enforces a boot wait?
--
I can't speak for Maverick but the lucid code comments in
mountall-net.conf state: "Send mountall the USR1 signal to inform it to
try network filesystems again." and it is told to do this on the event
"net-device-up". Does that event always occur when the network is fully
configured (whether static
@Tim: I agree with this policy and perhaps mountall is doing a better
job now at adhering to it.
I have a problem where informational messages and prompts are not
showing up on the console. This isn't as big of a deal when you are
aware of the issue; you said yourself that you went through the source
>I have just tested what happens with Debian Etch if there are wrong (missing)
>devices in /etc/fstab: the system doesn't boot; there is a problem
> with fsck ("the superblock could not be read..."). Same behavior on
> Hardy. I think that if users really want to boot with
> missing parts, they should explici
Thanks. I will try to provide feedback on your package hopefully
tonight.
I also like Erik Reuter's idea of a timeout, which I think would be a
viable alternative in the event the consensus was that BOOTWAIT should
be the default. This is assuming that prompt/error messages were
corrected to disp
Mathieu Alorent: my guess would be that there would be resistance to
this change, as I think the goal is to empower the user to correct a
situation without having to resort to technical recovery procedures or
find out about the problem after the fact. It is good that it is
configurable via fstab,
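For context, the per-entry control being discussed looks something like this in /etc/fstab (an illustrative entry; nobootwait is the mountall-era Ubuntu option that tells boot not to block on a missing device):
  # <device>   <mountpoint>  <type>  <options>             <dump> <pass>
  /dev/sdc1    /data         ext3    defaults,nobootwait   0      2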
Mathieu Alorent: I don't see a description on your PPA as to what you
implemented. Does your patch allow you to press "S" to skip more than
just the first missing drive?
From what I can see (including other bugs) not everyone is seeing the
interactive prompt or errors when this hang issue occurs.
I'm booting some test installs that use vmdks that point to raw devices.
It just so happened that one non-crucial drive, /dev/sdc, was not
available and the boot process would hang. The twist is I don't see any
error message or prompt for input like this bug suggests. If it wasn't
for this bug report I
I had this issue on Lucid and have since reverted to noauto, so I cannot
confirm that it still occurs today. I can't test this right now but you
might want to try this (assuming the Maverick file is similar): in
/etc/init/mountall-net.conf change "start on net-device-up" to "start on
started networking"
Any updates on this issue?
I use `startx` and have a working keyboard in the console outside of X.
I'm getting messages in the Xorg log similar to those shown in Raphael
Jolivet's post. In an attempt to address this I generated a basic config
using sudo X -configure and then had to add the Option "AutoAdd
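Presumably the option being referred to is AutoAddDevices (my assumption; the comment is cut off), which would sit in the ServerFlags section of xorg.conf:
  Section "ServerFlags"
      # assumed completion of the truncated option above
      Option "AutoAddDevices" "false"
  EndSection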
Perhaps the description should become "grub2 fails to boot or install
when an LVM snapshot exists"
--
I did not clarify previously, but the LVM physical volume is indeed an
mdraid root mirror.
# pvs
  PV         VG    Fmt  Attr PSize  PFree
  /dev/md0   mypv  lvm2 a-   69.24g 31.24g
--
I believe I have the same issue, although I had hard locked before the
reboot and thus interpreted my boot failure as "needing to reinstall the
bootloader" and as having nothing to do with the snapshot I had made
earlier. My specific filesystem is full root + boot LVM ext3. The reason
I mention that specifi
This message is in /usr/lib/nautilus/extensions-2.0/libnautilus-share-
extension.so, which is part of the gnome-user-share package [Description:
User level public file sharing via WebDAV or ObexFTP].
To see what's in this package, use: apt-file list gnome-user-share
my workaround, being that I don't
is this bug related or perhaps the core issue?
https://bugs.launchpad.net/ubuntu/+source/apt-setup/+bug/316618
--
I don't know whether that means it would become available in the
proposed updates for intrepid or not. I don't enable that repo. Also, I
think we all got notified that a guy testing the new udev in jaunty
still has this problem with libsensors. Since I have 3 machines with
this issue I have implem
Sony Optiarc DVD RW AD-7170S (SATA) drive, ASUS P5B-Deluxe WiFi-AP,
ubuntu-8.10-server-amd64.iso burned three times; of those 3 times, the
image was downloaded twice.
The "Install the base system" dialog eventually indicates "Updating
available list of packages" (I believe) and I am presented with the sa
I don't see a separate bug for when mythtv-backend fails while setting up devices:
http://pastebin.com/m4e2553f9
Stopping and starting udev and then a dpkg-reconfigure mythtv-backend
prompted me about creating the devices, which succeeded this time around.
I imagine this will revert after a reboot and fai
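The recovery steps described amount to roughly this (sysvinit-style service handling assumed for that era):
  sudo /etc/init.d/udev stop
  sudo /etc/init.d/udev start
  sudo dpkg-reconfigure mythtv-backend   # device creation prompt succeeded this time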