[Bug 1351528] Re: LVM based boot fails on degraded raid due to missing device tables at premount
** Summary changed:

- boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot) missing device tables at mount time
+ LVM based boot fails on degraded raid due to missing device tables at premount

** Summary changed:

- LVM based boot fails on degraded raid due to missing device tables at premount
+ boot fails for LVM on degraded raid due to missing device tables at premount

** Description changed:

- Trusty installation is combined root + /boot within LVM on top of mdraid (type 1.x) RAID1 with one missing disk (degraded).
- [method: basically create the setup in shell first then point install at the lvm. at end of install create a chroot and add mdadm pkg]
+ This is a Trusty installation with combined root + /boot within LVM on top of mdraid (type 1.x) RAID1. Raid1 was built with one missing disk (degraded).
+ [method: basically create raid/VG/LV setup in shell first then point installer at the lvm. At the end of the install create a chroot, add the mdadm pkg, and update-initramfs before reboot.]

- boot fails with the following messages:
- Incrementally starting RAID arrays...
- mdadm: CREATE user root not found
- mdadm: CREATE group disk not found
- Incrementally starting RAID arrays...
+ The boot process fails with the following messages:
+ Incrementally starting RAID arrays...
+ mdadm: CREATE user root not found
+ mdadm: CREATE group disk not found
+ Incrementally starting RAID arrays...

and slowly repeats the above at this point.
workaround:
- add break=premount to grub kernel line entry
- - for continue visibility of boot output also remove quiet, splash and possibly set gxmode 640x480
+ - for continued visibility of text boot output also remove quiet, splash and possibly set gfxmode 640x480

now @ initramfs prompt:

mdadm --detail /dev/md0 should indicate a state of clean, degraded, and that the array is started, so this part is ok.

lvm lvs output attributes are as follows: -wi-d (instead of the expected -wi-a). Per the lvs manpage this means device tables are missing (device mapper?).

FIX: simply run lvm vgchange -ay and exit the initramfs. This will lead to a booting system.

--
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1351528

Title:
  boot fails for LVM on degraded raid due to missing device tables at premount

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1351528/+subscriptions

--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
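For anyone decoding the lvs attr string mentioned above: the State flag is the 5th character of the string. A tiny sketch of the lookup (the character meanings are taken from the lv_attr table in the lvs manpage; the helper function name itself is made up, not part of any tool):

```shell
# decode_lv_state: print the meaning of the 5th (State) character of an
# lvs attr string, per the lv_attr table in the lvs manpage.
decode_lv_state() {
    case "$(printf '%s' "$1" | cut -c5)" in
        a) echo "active" ;;
        s) echo "suspended" ;;
        d) echo "mapped device present without tables" ;;
        i) echo "mapped device present with inactive table" ;;
        *) echo "unknown" ;;
    esac
}

decode_lv_state "-wi-a"   # the healthy case: active
decode_lv_state "-wi-d"   # the failure seen in this bug: device present without tables
```

So the `-wi-d` seen at the initramfs prompt maps directly to "mapped device present without tables", which is why `lvm vgchange -ay` (which loads the tables and activates the LVs) gets the system booting.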
[Bug 1351528] Re: boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot) missing device tables at mount time
attached dmesg output for degraded boot up to premount

** Attachment added: dmesg_upto_premount_degraded.txt
   https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1351528/+attachment/4168673/+files/dmesg_upto_premount_degraded.txt
[Bug 1351528] Re: boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot) missing device tables at mount time
The primary difference that I see for the degraded boot is this error just ahead of activating volume group 'dataone':

'watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y''(err) ' device-mapper: reload ioctl on  failed: Invalid argument'

Also, after that point the 55-dm and 56-lvm udev rules do not fire and create the device setup the way they do in the _normal.txt log. If anyone does compare these logs, you can search for the volume group name 'dataone' in both files to see what I am referring to.
[Bug 1351528] Re: boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot) missing device tables at mount time
attached dmesg output for normal boot up to premount

** Attachment added: dmesg_upto_premount_normal.txt
   https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1351528/+attachment/4168674/+files/dmesg_upto_premount_normal.txt
[Bug 1351528] Re: boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot) missing device tables at mount time
** Summary changed:

- boot fails on raid (md raid1) + LVM (combined / + /boot) + degraded
+ boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot) missing device tables at mount time
[Bug 1351528] Re: boot fails on degraded raid (mdraid) due to LVM root (combined / + /boot) missing device tables at mount time
adding the kernel parameter lvmwait=2 also works around this issue (source of the idea: http://serverfault.com/questions/567579/boot-failure-with-root-on-md-raid1-lvm-udev-event-timing)
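For anyone wanting to try this workaround persistently: on Trusty the parameter would normally go on the default kernel command line in /etc/default/grub, followed by regenerating the config. A sketch (the existing quiet/splash values are assumed; lvmwait=2 is the value reported above):

```
# /etc/default/grub (fragment) -- append lvmwait to the kernel command line,
# then regenerate grub.cfg with: sudo update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash lvmwait=2"
```

For a one-off test, the same parameter can be typed onto the linux line from the grub edit screen (the 'e' key) instead.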
[Bug 1351528] [NEW] boot fails on raid (md raid1) + LVM (combined / + /boot) + degraded
Public bug reported:

Trusty installation is combined root + /boot within LVM on top of mdraid (type 1.x) RAID1 with one missing disk (degraded).
[method: basically create the setup in shell first then point the installer at the lvm. at end of install create a chroot and add mdadm pkg]

boot fails with the following messages:

Incrementally starting RAID arrays...
mdadm: CREATE user root not found
mdadm: CREATE group disk not found
Incrementally starting RAID arrays...

and slowly repeats the above at this point.

workaround:
- add break=premount to grub kernel line entry
- for continued visibility of boot output also remove quiet, splash and possibly set gfxmode 640x480

now @ initramfs prompt:

mdadm --detail /dev/md0 should indicate a state of clean, degraded, and that the array is started, so this part is ok.

lvm lvs output attributes are as follows: -wi-d (instead of the expected -wi-a). Per the lvs manpage this means device tables are missing (device mapper?).

FIX: simply run lvm vgchange -ay and exit the initramfs. This will lead to a booting system.

** Affects: lvm2 (Ubuntu)
   Importance: Undecided
   Status: New

** Tags: boot degraded lvm raid

** Package changed: cryptsetup (Ubuntu) => lvm2 (Ubuntu)
[Bug 1351528] Re: boot fails on raid (md raid1) + LVM (combined / + /boot) + degraded
here is an interesting find: replacing the missing disk (mdadm --add /dev/md0 /dev/sdb1) and waiting for the sync to complete leads to a properly booting system. The system continued to boot even after I --failed and --removed the second disk. I could not return the system to its original boot-fail state until I zeroed the superblock on the second disk.

some additional messages that I had not seen before (after boot is failing again):

device-mapper: table: 252:0: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table
device-mapper: table: 252:1: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table

(then repeats the above once more)
[Bug 563895] Re: grub2 fails to boot or install when an LVM snapshot exists
Most people applied the fixed package manually and moved on over a year ago. Are you really surprised?

I'm not here to debate whether it is right or wrong to report a bug, or provide info regarding a bug, and then not be involved every step of the way. People have the right to contribute or not contribute as they see fit. Your attitude is not going to affect this fact in a positive way.

All you have really achieved here is made me pull my name off the CC list for this bug. I did it because the latest updates you've made are the equivalent of spam. Maybe you need to take a step back and realize how little control you actually have on this matter.

--
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/563895

Title:
  grub2 fails to boot or install when an LVM snapshot exists

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/563895/+subscriptions
[Bug 563895] Re: grub2 fails to boot or install when an LVM snapshot exists
nutznboltz please post information that pertains to the bug. All the extra stuff you are tossing in serves no purpose. Thanks.
[Bug 586022] Re: nfs mounts happen before network is up
Clint Byrum: Thanks for the explanation.

Could it be something as simple as: network-interface.conf emits net-device-up for loopback, in turn mountall-net.conf fires then fails, and then mountall (or plymouth??) enforces a boot wait?

--
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/586022

Title:
  nfs mounts happen before network is up
[Bug 586022] Re: nfs mounts happen before network is up
I can't speak for Maverick, but the Lucid code comment in mountall-net.conf states: "Send mountall the USR1 signal to inform it to try network filesystems again", and it is told to do this on the event net-device-up. Does that event always occur when the network is fully configured (whether static or dhcp)? The error "mount.nfs: DNS resolution failed for nfshost: Name or service not known" could indicate that it is not configured. I never got around to testing whether "start on started networking" would work.
[Bug 571444] Re: Boot hangs and unable to continue when automount disk in fstab is not available (Off or Disconnected)
> I have just tested what happens with Debian Etch if there are wrong (missing) devices in /etc/fstab: the system doesn't boot, problem with fsck (the superblock could not be read...). Same behavior on Hardy. I think that if users really want to boot with missing parts, they should explicitly add nobootwait to the concerned fstab lines.

I don't see it as that simple. Yes, root is one thing and your argument applies there, but hanging because of external or extraneous drives, coupled with not seeing the error/prompt text and possibly not having the option to interact, is another problem.

> * missing splash in grub2 conf IS a bug, that doesn't let users see why their system won't boot

To me, specifically requiring splash to see the error/prompt text is either a design flaw or a bug. I expect text mode (read: console) to work before splash does. I manually set my systems to boot text mode.

--
Boot hangs and unable to continue when automount disk in fstab is not available (Off or Disconnected)
https://bugs.launchpad.net/bugs/571444
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
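For readers who have not met the option: nobootwait is set per filesystem in the fstab mount options. A hypothetical example line (the device and mount point are made up for illustration):

```
# /etc/fstab (fragment) -- nobootwait tells mountall not to hold the boot
# waiting for this non-critical filesystem
/dev/sdc1  /mnt/external  ext3  defaults,nobootwait  0  2
```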
[Bug 571444] Re: Boot hangs and unable to continue when automount disk in fstab is not available (Off or Disconnected)
@Tim: I agree with this policy, and perhaps mountall is doing a better job now at adhering to it. My problem is that informational messages and prompts are not showing up on the console. This isn't as big of a deal when you are aware of the issue; you said yourself that you went through the source code to learn of nobootwait. Other users could search the output text, if it shows up, and then learn what they can do about it.
[Bug 571444] Re: Boot hangs and unable to continue when automount disk in fstab is not available (Off or Disconnected)
Thanks. I will try to provide feedback on your package, hopefully tonight. I also like Erik Reuter's idea of a timeout, which I think would be a viable alternative in the event the consensus was that BOOTWAIT should be the default. This is assuming that prompt/error messages were corrected to display in all scenarios. This would also accommodate server configurations.
[Bug 571444] Re: Boot hangs and unable to continue when automount disk in fstab is not available (Off or Disconnected)
Mathieu Alorent: I don't see a description on your ppa as to what you implemented. Does your patch allow you to press S to skip more than just the first missing drive?

From what I can see (including other bugs) not everyone is seeing the interactive prompt or errors when this hang issue occurs. This may be due to things like fb/video drivers loading around that time, which typically clear the screen, or not using splash; I'm not sure. This is also a major issue on servers with remote-only access, where interactive prompts would not be desired anyway.

It seems to me, at least in this state, that mountall's default functionality should actually be NOBOOTWAIT, and those that want to take advantage of this functionality could specify BOOTWAIT manually.
[Bug 571444] Re: Boot hangs and unable to continue when automount disk in fstab is not available (Off or Disconnected)
Mathieu Alorent: my guess would be that there would be resistance to this change, as I think the goal is to empower the user to correct a situation without having to resort to technical recovery procedures or find out about the problem after the fact. It is good that it is configurable via fstab, but my opinion stands that the *better* default at this point in time would be NOBOOTWAIT, if feasible.
[Bug 571444] Re: Boot hangs and unable to continue when automount disk in fstab is not available (Off or Disconnected)
I'm booting some test installs that use vmdks that point to raw devices. It just so happened one non-crucial drive /dev/sdc was not available and the boot process would hang. The twist is I don't see any error message or prompt for input like this bug suggests. If it wasn't for this bug report I would not have learned that I needed to press 'S' to finish booting.
[Bug 586022] Re: nfs mounts happen before network is up
I had this issue on Lucid and had since reverted to noauto, so I cannot confirm that it still occurs today. I can't test this right now, but you might want to try this (assuming the Maverick file is similar): in /etc/init/mountall-net.conf change "start on net-device-up" to "start on started networking".
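To make the suggested edit concrete, the job stanza would end up looking roughly like this (a sketch of the Lucid-era upstart job; only the start condition changes, and the script body is elided since I don't have the shipped file in front of me):

```
# /etc/init/mountall-net.conf (sketch)
description "Mount network filesystems"

start on started networking    # was: start on net-device-up

task
script
    # Send mountall the USR1 signal to inform it to try network
    # filesystems again
    ...
end script
```

The idea is that "started networking" fires once the networking job has finished, rather than on the first net-device-up event, which can be the loopback interface.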
[Bug 582201] Re: [lucid] USB Keyboard not working (ie unable to input anything) in X
Any updates on this issue?

I use `startx` and have a working keyboard in the console outside of X. I'm getting similar messages in the Xorg log as shown in Raphael Jolivet's post. In an attempt to address this I generated a basic config using `sudo X -configure` and then had to add the `Option "AutoAddDevices" "False"` line to my xorg.conf before my mouse would work. My keyboard is still not working.

I started out on Lucid beta, remember updating back in June after release, and since then I can't use this install. I've also tried enabling Legacy USB support and `dpkg-reconfigure console-setup` (just in case).

--
[lucid] USB Keyboard not working (ie unable to input anything) in X
https://bugs.launchpad.net/bugs/582201
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
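For anyone reproducing the workaround above: the option goes in the ServerFlags section of xorg.conf. With it set to False the server stops hotplug-adding input devices via the event interface, so input devices must then be declared manually in the config (the fragment below is a sketch):

```
# /etc/X11/xorg.conf (fragment)
Section "ServerFlags"
    Option "AutoAddDevices" "False"
EndSection
```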
[Bug 563895] Re: grub-install fails when LVM snapshot exists
Perhaps the description should become "grub2 fails to boot or install when an LVM snapshot exists".
[Bug 563895] Re: Disk not found when booting mdadm RAID1 with snapshotted lvm volume
I believe I have the same issue, although I had hard locked before the reboot and thus interpreted my boot failure as needing to reinstall the bootloader, and nothing to do with the snapshot I had made earlier.

My specific filesystem is full root + boot LVM ext3. The reason I mention that specifically is that I have not tested whether this issue also persists with a separate /boot. I basically couldn't get grub to (re)install and experienced the symptoms described here: https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/528670

I finally found out that while a snapshot of root was active there was an extra /dev/mapper/lvgname-lvname-real device. The key part being "real" here, I suppose. I'm pretty new to LVM, but my impression was that with whatever manipulation was going on for copy-on-write functionality, it could be causing grub to get confused. With a snapshot active, blkid output would now show 2 devices with the exact same UUID.

The moment I killed the snapshot, deleted device.map and ran nothing more than `grub-install /dev/sda`, it completed normally, generated grub.cfg entries loading all the proper modules (raid mdraid lvm ext2) as well as populating a proper /boot/grub/core.img.
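The duplicated-UUID symptom is easy to check for mechanically. A small sketch (the helper name and sample lines are hypothetical; the -real device naming matches what I described above):

```shell
# find_dup_uuids: read blkid-style lines on stdin and print any UUID
# token that appears on more than one device.
find_dup_uuids() {
    grep -o 'UUID="[^"]*"' | sort | uniq -d
}

# Sample input modeled on the active-snapshot situation:
printf '%s\n' \
  '/dev/mapper/vg-root: UUID="8a22bb23-aaaa" TYPE="ext3"' \
  '/dev/mapper/vg-root-real: UUID="8a22bb23-aaaa" TYPE="ext3"' \
  '/dev/md0: UUID="deadbeef-0001" TYPE="LVM2_member"' \
  | find_dup_uuids
# prints: UUID="8a22bb23-aaaa"
```

Piping real `blkid` output through such a filter while a snapshot is active should show the root UUID twice, which matches the confusion grub-install ran into.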
[Bug 563895] Re: Disk not found when booting mdadm RAID1 with snapshotted lvm volume
I did not clarify previously, but the LVM physical volume is indeed a mdraid root mirror.

# pvs
  PV        VG    Fmt  Attr PSize  PFree
  /dev/md0  mypv  lvm2 a-   69.24g 31.24g
[Bug 532101] Re: You can receive files over Bluetooth into this folder always visible on XDG_DOWNLOAD_DIR folder, even without a Bluetooth device
This message is in /usr/lib/nautilus/extensions-2.0/libnautilus-share-extension.so, which is part of the gnome-user-share package [Description: User level public file sharing via WebDAV or ObexFTP].

To know what's in this package use: apt-file list gnome-user-share

My workaround, given that I don't require these features, is to remove gnome-user-share and pkill nautilus.

--
You can receive files over Bluetooth into this folder always visible on XDG_DOWNLOAD_DIR folder, even without a Bluetooth device
https://bugs.launchpad.net/bugs/532101
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
[Bug 270461] Re: Alpha-5 alternate installer fails
is this bug related or perhaps the core issue? https://bugs.launchpad.net/ubuntu/+source/apt-setup/+bug/316618

--
Alpha-5 alternate installer fails
https://bugs.launchpad.net/bugs/270461
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
[Bug 253786] Re: /dev/.static/dev is mounted ro in intrepid
I don't know if it means that it would become available in the proposed updates for intrepid or not. I don't enable that repo. Also, I think we all got notified that a guy testing the new udev in jaunty still has this problem with libsensors.

Since I have 3 machines with this issue I have implemented what I think Peter Cordes was hinting at by modifying /etc/init.d/checkroot.sh for the time being:

after:
	#
	# Remove /lib/init/rw/rootdev if we created it.
	#
	rm -f /lib/init/rw/rootdev

added:
	echo UDEV /dev/.static/dev REMOUNT RW HACK
	mount -o remount,rw /dev /dev/.static/dev

after reboot:

$ cat /proc/mounts | grep /dev
udev /dev tmpfs rw,mode=755 0 0
/dev/disk/by-uuid/8a22bb23-7295-4efc-9d0a-a0b6c25dba27 / ext3 rw,relatime,errors=remount-ro,data=ordered 0 0
/dev/disk/by-uuid/8a22bb23-7295-4efc-9d0a-a0b6c25dba27 /dev/.static/dev ext3 rw,errors=remount-ro,data=ordered 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,gid=5,mode=620 0 0

hopefully this is sane and sets aside the symptom long enough for things to get sorted out the way they should be.

--
/dev/.static/dev is mounted ro in intrepid
https://bugs.launchpad.net/bugs/253786
You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu.
[Bug 270461] Re: Alpha-5 alternate installer fails
Sony Optiarc DVD RW AD-7170S (sata) drive, ASUS P5B-Deluxe WIFI/AP, ubuntu-8.10-server-amd64.iso burned three times; of those 3 times, the image was downloaded twice.

The "Install the base system" dialog eventually indicates "Updating available list of packages" (I believe) and I am presented with the same media change dialog, and the only option is to reset.

I had no issues installing 8.10 desktop amd64 on this same setup, and right now what I'm actually doing is trying to install to a thumb drive while all my local hard drives are unplugged (protection). I'm wondering now if having the raid rom on the controller enabled with no other drives is triggering it, but who knows if a perl script is dumping errors.
[Bug 253786] Re: /dev/.static/dev is mounted ro in intrepid
I don't see a separate bug for when mythtv-backend fails while setting up devices: http://pastebin.com/m4e2553f9

Stopping and starting udev and then a dpkg-reconfigure mythtv-backend prompted me about creating the devices, which succeeded this time around. I imagine this will revert after a reboot and fail with the next install that needs to create devices.

** Attachment added: mythtv-backend_dev_failure.txt
   http://launchpadlibrarian.net/20730750/mythtv-backend_dev_failure.txt