seems or seemed to be a general issue with dpkg or udev or something
(search bugs with rm: cannot remove)
--
irda-utils error during installation
https://bugs.launchpad.net/bugs/317528
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
--
seems or seemed to be a general issue with dpkg or udev or something
(search bugs with rm: cannot remove)
--
intrepid - rm: cannot remove `md0-': Read-only file system
https://bugs.launchpad.net/bugs/270043
Maybe this is actually a kernel / module issue. Are those
controllers/disks also so slow with other OS?
Still valid with current releases?
** Summary changed:
- Intrepid ubuntu server won't boot RAID1
+ disk detection is really slow with some hardware (timeout: shell drops)
** Also affects: linux
Concerning RAID degradation: as RAID arrays may take minutes until they come up,
while regular disks are quick, this should be handled nicely:
NOTICE: /dev/mdX required for the root filesystem didn't get up
within the last 10 seconds.
We continue to wait up to a total of xxx seconds
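A hedged sketch of what such graceful waiting could look like in an initramfs-style script (the wait_for_dev helper and its messages are illustrative assumptions, not Ubuntu's actual code):

```shell
#!/bin/sh
# Illustrative sketch only: poll for the root array with a total grace
# period instead of dropping to the BusyBox shell after 10 seconds.

wait_for_dev() {
    dev="$1"; total="$2"; waited=0
    while [ ! -e "$dev" ]; do
        if [ "$waited" -ge "$total" ]; then
            echo "NOTICE: $dev required for the root filesystem didn't come up within $total seconds" >&2
            return 1
        fi
        sleep 1
        waited=$((waited + 1))
    done
    echo "$dev came up after ${waited}s"
}

# /dev/null always exists, so this demonstration returns immediately.
wait_for_dev /dev/null 10
```

In the real boot path the total timeout would be tunable, and failure would fall through to the degraded-raid handling rather than straight to the BusyBox shell.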
** Description changed:
I have installed Intrepid server i386 beta on a Dell PowerEdge 600SC.
Everything seemed to install fine but upon boot it always drops into the
BusyBox shell. The RAID is *not* degraded. At the BusyBox prompt if I
type 'exit' it will proceed to boot normally and
9.10 here
$ ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 26 2010-03-28 11:35 0a. -> ../../mapper/vg-root
lrwxrwxrwx 1 root root 10 2010-03-28 11:35 1e. -> ../../sda1
lrwxrwxrwx 1 root root 23 2010-03-28 11:35 35. -> ../../mapper/vg-home
lrwxrwxrwx
Well, this made it into the 9.10 release notes.
** Changed in: mdadm (Ubuntu)
Status: New => Confirmed
--
Ubuntu asks me what I want to do even if I set BOOT_DEGRADED=true in Ubuntu 8.10
https://bugs.launchpad.net/bugs/291488
BOOT_DEGRADED=YES actually does not work in most cases because of Bug
#251164 boot impossible due to missing initramfs failure hook / event
driven initramfs, see also https://wiki.ubuntu.com/ReliableRaid
--
Ubuntu asks me what I want to do even if I set BOOT_DEGRADED=true in Ubuntu 8.10
is this still an issue with recent releases?
** Summary changed:
- Problem booting raid 1 since upgrading to intrepid (uuid missing)
+ /dev/disk/by-uuid not set up reliably
** Description changed:
+ ( since upgrading to intrepid )
+
When booting, the system cannot find /dev/disk/by-uuid
sound just like Bug #298481
** Changed in: mdadm (Ubuntu)
Status: New => Incomplete
--
Software raid hangs on boot after upgrade to 8.10
https://bugs.launchpad.net/bugs/302061
--
** Summary changed:
- mdadm crashed with SIGSEGV in __libc_start_main() after grow interrupted
+ shipped old mdadm version crashes with SIGSEGV in __libc_start_main() after
grow interrupted
--
shipped old mdadm version crashes with SIGSEGV in __libc_start_main() after
grow interrupted
** Summary changed:
- mdadm segfault on rebuild of raid5
+ shipped (old) mdadm version segfaults on rebuild of raid5
** Changed in: mdadm (Ubuntu)
Status: New => Fix Released
--
shipped (old) mdadm version segfaults on rebuild of raid5
https://bugs.launchpad.net/bugs/320950
*** This bug is a duplicate of bug 251164 ***
https://bugs.launchpad.net/bugs/251164
This is Bug #251164 boot impossible due to missing initramfs failure
hook / event driven initramfs
** This bug has been marked a duplicate of bug 251164
boot impossible due to missing initramfs failure
This should be no issue with UUID-based raid assembly, which makes
mdadm.conf ARRAY definition maintenance obsolete.
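For illustration, UUID/udev-based assembly boils down to a udev rule along these lines (paraphrased; not a verbatim copy of the shipped mdadm rules file):

```
# incrementally add each raid member as udev sees it; the array identity
# comes from the member's superblock UUID, so mdadm.conf needs no ARRAY lines
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```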
** Summary changed:
- MD/LVM boot broken
+ [-UUIDudev] MD/LVM boot broken
--
[-UUIDudev] MD/LVM boot broken
https://bugs.launchpad.net/bugs/328892
This should be no issue with UUID-based raid assembly that makes
mdadm.conf ARRAY definition maintenance obsolete. Bug #158918
** Summary changed:
- HOMEHOST system does not work inside initrd
+ [-UUIDudev] HOMEHOST system does not work inside initrd
** Changed in: mdadm (Ubuntu)
Status:
UUID-based raid assembly: Bug #158918
--
[-UUIDudev] MD/LVM boot broken
https://bugs.launchpad.net/bugs/328892
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
Is this still an issue with recent releases?
Maybe related to udev state, or the mdadm --incremental map not being
transitioned from the initramfs to the root system?
** Changed in: mdadm (Ubuntu)
Status: New => Incomplete
--
race condition finding devices for RAID group
** Changed in: mdadm (Ubuntu)
Status: New => Fix Released
--
mdadm docs include broken ostenfeld.dk link
https://bugs.launchpad.net/bugs/344556
--
With mdadm --zero-superblock you can remove the md info from the partition
(prior to deleting the partition table entry).
Otherwise, dd zeros onto the partition, or try the --force option of
mkfs/gparted.
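As a hedged sketch (device names are placeholders; these commands destroy metadata, so double-check the target before running anything):

```
# stop the old array if it is still assembled (md0 is a placeholder)
mdadm --stop /dev/md0
# wipe the md superblock so tools no longer detect a stale array
mdadm --zero-superblock /dev/sdXN
# fallback: overwrite the start of the partition with zeros
dd if=/dev/zero of=/dev/sdXN bs=1M count=10
```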
** Summary changed:
- Old raid array blocks format of new partition
+ old superblocks can prevent
May be related to other bug reports about segfaults occurring with older mdadm
versions.
Is this still an issue with recent releases?
** Changed in: mdadm (Ubuntu)
Status: New => Incomplete
--
harsh crashes during mdadm rebuild [SCARING!!!]
https://bugs.launchpad.net/bugs/344748
*** This bug is a duplicate of bug 251164 ***
https://bugs.launchpad.net/bugs/251164
This would be fixed with Bug #251164 boot impossible due to missing
initramfs failure hook / event driven initramfs
** Description changed:
- My linux system isn't booting anymore.
+ My linux system isn't
** Also affects: mdadm (Ubuntu)
Importance: Undecided
Status: New
** Changed in: mdadm (Ubuntu)
Status: New => Confirmed
--
boot impossible due to missing initramfs failure hook / event driven initramfs
https://bugs.launchpad.net/bugs/251164
non-root filesystems that get degraded on power down are actually never
explicitly run (Bug #259145 degraded non-root raids are not run on
boot.)
However, mdadm's init-premount watches *all* ARRAYs stated in the
initramfs' mdadm.conf and indiscriminately executes mdadm --assemble
--scan --run,
Public bug reported:
Binary package hint: mdadm
mdadm's init-premount watches all ARRAYS stated in initramfs' mdadm.conf
and indiscriminately executes mdadm --assemble --scan --run, starting
*all* arrays that are present with enough redundancy.
This is not good since some collateral arrays may
Err, bad, since some collateral arrays may be available only incompletely
(merely due to the early stage during boot), and the method used is not
designed for later member additions the way mdadm --incremental is. (Bug
#550634)
--
[-UUIDudev] Raid 1 configuration failed to start on boot
can you check the info in the superblocks of /dev/md/2?
** Changed in: mdadm (Ubuntu)
Status: New => Incomplete
** Summary changed:
- Raid 1 configuration failed to start on boot
+ [-UUIDudev] Raid 1 configuration failed to start on boot
--
[-UUIDudev] Raid 1 configuration failed to
UUID-based raid assembly that makes mdadm.conf maintenance obsolete
fixes this.
** Summary changed:
- bad mdadm.conf on jaunty upgrade
+ [-UUIDudev] bad mdadm.conf on jaunty upgrade
--
[-UUIDudev] bad mdadm.conf on jaunty upgrade
https://bugs.launchpad.net/bugs/366204
UUID-based raid assembly that makes mdadm.conf maintenance obsolete
fixes this.
** Summary changed:
- 9.04 upgrade lose my Raid1
+ [-UUIDudev] 9.04 upgrade lose my Raid1
** Changed in: mdadm (Ubuntu)
Status: New => Confirmed
--
[-UUIDudev] 9.04 upgrade lose my Raid1
As long as the proper mdadm --incremental mode command to start a single
array is not available (Bug #251646) a workaround needs to be used:
mdadm --remove incomplete-md-device arbitrary-member-device-of-incomplete-array
mdadm --incremental --run arbitrary-member-device-of-incomplete-array
--
** Description changed:
Binary package hint: util-linux
After the member is opened as a luks device, it is booted from instead of
the md device, while the raid remains inactive.
I first noticed /proc/mdstat reported the root raid as inactive
(although the system seemed to run fine!).
confirmed from Bug #541058 and #136252
** Description changed:
Binary package hint: mdadm
mdadm --incremental will save state in a map file under
/var/run/mdadm/map.
But in initramfs and early boot this directory does not exist.
The state is then saved in
maybe Bug #550131
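A minimal sketch of the workaround implied here, assuming the fix is simply to create the map directory early (the demo path /tmp/demo-mdadm-map stands in for /var/run/mdadm so the sketch runs without root):

```shell
#!/bin/sh
# Create the directory mdadm --incremental needs for its map file before
# any assembly runs. In a real initramfs hook MAP_DIR would be
# /var/run/mdadm, the path discussed in this report.
MAP_DIR="${MAP_DIR:-/tmp/demo-mdadm-map}"
mkdir -p "$MAP_DIR"
echo "map dir ready: $MAP_DIR"
```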
--
/dev/disk/by-uuid not set up reliably (sometimes boot is failing)
https://bugs.launchpad.net/bugs/298481
--
is this still an issue with recent releases?
** Changed in: mdadm (Ubuntu)
Status: Confirmed => Incomplete
** Changed in: mdadm (Ubuntu)
Status: Incomplete => Fix Released
--
[apport] mdadm crashed with SIGSEGV
https://bugs.launchpad.net/bugs/96543
The last line needs to be appended to /etc/esmtprc, of course:
#echo mda=\'/usr/bin/formail -a \"Date: \`date -R\`\" \| /usr/bin/procmail -d %T\' >> /etc/esmtprc
--
mdadm monitor feature broken, not depending on local MTA/MDA or using
wall/notify-send
https://bugs.launchpad.net/bugs/535417
Nice you found that out, Sami!
Are your arrays tagged with your hostname? From Bug #252345 mdadm
--incremental seems to block members with non-matching homehost tag and
does not seem to have an option to disable that. (Or comprehend the
--auto-update-homehost option as a workaround with the
@Sami, with that directory created in initramfs, your setup is now
working without any mdadm.conf ARRAY lines, right?
--
[karmic] mdadm.conf w/o ARRAY lines but udev/mdadm not assembling arrays.
https://bugs.launchpad.net/bugs/136252
@Sami (newly subscribed): Some previous comments relate to you.
--
[karmic] mdadm.conf w/o ARRAY lines but udev/mdadm not assembling arrays.
https://bugs.launchpad.net/bugs/136252
--
Note that starting all incomplete arrays indiscriminately is not a good
idea in general.
i.e. on boot, start only selected arrays degraded that are known to be
needed.
--
--incremental --scan --run does not start anything
https://bugs.launchpad.net/bugs/244808
*** This bug is a duplicate of bug 136252 ***
https://bugs.launchpad.net/bugs/136252
** This bug has been marked a duplicate of bug 136252
[karmic] mdadm.conf w/o ARRAY lines but udev/mdadm not assembling arrays.
--
boot fails: mdadm not looking for UUIDs but hostname in superblocks
Adopting from #226484 (now a duplicate of this one):
This is a bug with mdadm --incremental not doing hotplug, because it
looks for permission to do so in mdadm.conf.
On hotplug systems mdadm.conf should not have to contain any specific
references, to make things clear and for backwards
** Summary changed:
- [karmic] mdadm.conf w/o ARRAY lines but udev/mdadm not assembling arrays.
+ [karmic] mdadm.conf w/o ARRAY lines but udev/mdadm not assembling arrays.
(boot fails)
** Description changed:
Binary package hint: mdadm
Hi,
I could not boot from my /dev/md1 -
Public bug reported:
Binary package hint: gksu
During login seahorse/gdm/pam? seems to open login.keyring, but gksu always
saves passwords to default.keyring.
Either gksu needs to ask which keyring to use, or seahorse/gdm/pam?
needs to be able to open secondary keyrings on login. (possibly
This is with the radeon driver used here, not a dupe of #290704.
--
fast-user-switch-applet crashes system on usage
https://bugs.launchpad.net/bugs/352056
--
here is a Xorg log of the crash
** Attachment added: Xorg.0.log.old
http://launchpadlibrarian.net/42160315/Xorg.0.log.old
--
fast-user-switch-applet crashes system on usage
https://bugs.launchpad.net/bugs/352056
It crashed when switching back to the first user: I used the
fast-user-switch-applet as user2 and then selected user1 (the already
active user) from the gdm screen.
--
fast-user-switch-applet crashes system on usage
https://bugs.launchpad.net/bugs/352056
Could you maybe post the mdadm.conf and try what happens if you
(temporarily) change the homehost of the array members to something
different, to make sure which problem got solved.
- mdadm.conf "DEVICE partitions" has mdadm consider any partition available to
the system.
- mdadm.conf ARRAY identification
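Put together, a minimal mdadm.conf using these two mechanisms would look like this (the UUID value is a made-up example):

```
# have mdadm consider any partition available to the system
DEVICE partitions
# identify the array by superblock UUID rather than member device names
ARRAY /dev/md0 UUID=0e9cd8df:9d7b0ef1:29e1ad8a:2e6c9f44
```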
** Summary changed:
- pam_umask.so missing in common-session
+ pam_umask.so missing in common-account
--
pam_umask.so missing in common-account
https://bugs.launchpad.net/bugs/253096
--
** Description changed:
+ The pam_umask.so module determines the umask (from system and user
+ config files) and sets it for users accordingly.
- pam_umask.so determines the umask (from system and user config files (see man
page)) and sets it accordingly.
+ The umask itself should not be set
** Description changed:
The pam_umask.so module determines the umask (from system and user
config files) and sets it for users accordingly.
+ from /etc/login.defs:
+ # the use of pam_umask is recommended as the solution which
+ # catches all these cases on PAM-enabled systems.
+
The
Forget comment #7.
Properly fixing this issue of no central, consistent and tunable umask
setting in Debian and Ubuntu systems is now only a matter of adding the
line "session optional pam_umask.so usergroups" to /etc/pam.d/common-
account.
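The proposed one-line addition to /etc/pam.d/common-account would look like this (pam_umask's usergroups option relaxes the group bits for users whose primary group is their private user group):

```
# determine and set the umask centrally for every account
session optional pam_umask.so usergroups
```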
Thanks to pam_umask and its inclusion, and all the
** Description changed:
Current behaviour is:
- The users group exists but when setting up a (set group ID) groupdirectory
(i.e. /home/group/users) it does not work as expected, because the users group
is not populated (empty).
+ The users group exists but is not populated (empty). When
** Description changed:
+ This is broken behavior (bug not wish).
+
Current behaviour is:
The users group exists but is not populated (empty). When setting up a
(set group ID) group directory (i.e. /home/group/users) users can not
collaborate on files in that directory.
- Changed
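The setgid group directory the description refers to can be sketched like this (the demo path stands in for /home/group/users; chgrp is skipped silently where no "users" group exists):

```shell
#!/bin/sh
# Demo of a set-group-ID collaboration directory.
DIR="${DIR:-/tmp/demo-shared-dir}"
mkdir -p "$DIR"
chgrp users "$DIR" 2>/dev/null || true   # real setup: the group must exist
chmod 2775 "$DIR"    # setgid bit: files created inside inherit the group
stat -c '%a %n' "$DIR"
```

With the users group populated, every member can then create and share files under that directory with consistent group ownership.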
Could you consider the adduser patch for building packages that can be
tested/used (independently from g-s-t)?
--
patch: makes adduser.conf a symlink to a profile (provides switchable profile
feature to all frontends)
https://bugs.launchpad.net/bugs/489136
As this problem was introduced by commenting out EXTRA_GROUPS in
adduser.conf completely instead of just removing the device groups
handled by g-s-t, please set importance back to bug.
--
users are not added to users group
https://bugs.launchpad.net/bugs/253103
Public bug reported:
Binary package hint: gnome-system-tools
Current behaviour is:
The users group exists but is not populated (empty). When setting up a (set
group ID) group directory (i.e. /home/group/users) users can not collaborate on
files in that directory.
As long as g-s-t is
*** This bug is a duplicate of bug 549117 ***
https://bugs.launchpad.net/bugs/549117
Attaching a simple patch that targets adduser. Please consider it for
quick inclusion for the next release to stop creating broken user
accounts.
* Re-enables EXTRA_GROUPS=users.
* It's not proposing a new
Public bug reported:
Binary package hint: adduser
Current behaviour is:
The users group exists but is not populated (empty). When setting up a (set
group ID) group directory (i.e. /home/group/users) users can not collaborate on
files in that directory.
(This was originally broken because
Attaching a simple patch that targets adduser. Please consider it for quick
inclusion for the next release to stop creating broken user accounts.
* Re-enables EXTRA_GROUPS=users.
* It's not proposing a new behaviour (change) but re-enables the standard (and
prior) behaviour.
* If an admin does not
*** This bug is a duplicate of bug 549117 ***
https://bugs.launchpad.net/bugs/549117
** This bug has been marked a duplicate of bug 549117
users are not added to users group (empty, broken behaviour)
--
users are not added to users group
https://bugs.launchpad.net/bugs/253103
Forgot to mention: the reason I changed it to common-account is that
/etc/pam.d/sudo does not include common-session. This might be OK if
sudo is not considered to open a session, but having a different umask
when using sudo would again not be expected and would lead to wrong
permissions etc.
Maybe it is
Public bug reported:
Binary package hint: sudo
The pam configuration seems to be incomplete in the sense that no
common-session modules are included in sudo's pam config.
(i.e. things like pam_umask and pam_env don't work with sudo)
/etc/pam.d/sudo should contain
@include
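A sketch of what the report asks for, following the Debian/Ubuntu common-* include convention (the exact set of includes here is an assumption, not the shipped file):

```
#%PAM-1.0
@include common-auth
@include common-account
@include common-session
```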
Sudo seems to be patched to parse /etc/environment itself. With sudo correctly
implementing pam sessions, that hack can be reverted.
http://www.sudo.ws/bugs/show_bug.cgi?id=83
** Bug watch added: Sudo Bugzilla #83
http://www.sudo.ws/bugs/show_bug.cgi?id=83
--
common-session pam configuration
** Summary changed:
- sudo does not use /etc/environment on interactive logins (through pam_env)
+ sudo does not read /etc/environment on interactive logins (directly, not
through pam_env)
--
sudo does not read /etc/environment on interactive logins (directly, not
through pam_env)
** Description changed:
Binary package hint: sudo
The pam configuration seems to be incomplete in the sense that no
common-session modules are included in sudo's pam config.
- (i.e. things like pam_umask and pam_env don't work with sudo)
+ (i.e. things like pam_umask, limits and
explicit workaround commands using esmtp:
#apt-get install esmtp procmail
#echo mda=\'/usr/bin/formail -a \"Date: \`date -R\`\" \| /usr/bin/procmail -d
%T\'
--
mdadm monitor feature broken, not depending on local MTA/MDA or using
wall/notify-send
https://bugs.launchpad.net/bugs/535417
Can you identify this with Bug #527401, maybe?
Why should this be invalid in lucid (grub2 also)?
--
karmic server 64 bit installer fails at GRUB when installing with RAID1
https://bugs.launchpad.net/bugs/485604
is this 10.04?
https://wiki.ubuntu.com/ReliableRaid
--
GRUB2 boots from raid1 device only after grub rescue insmod linux +
rescue:grub normal
https://bugs.launchpad.net/bugs/493268
--
Intended to mark this as fixed for the newer release (lucid) in Launchpad but
could not do it.
--
RAID1 data-checks cause CPU soft lockups
https://bugs.launchpad.net/bugs/212684
--
** Changed in: mdadm (Ubuntu)
Status: New => Confirmed
--
mdadm monitor feature broken, not depending on local MTA/MDA or using
wall/notify-send
https://bugs.launchpad.net/bugs/535417
Public bug reported:
Binary package hint: debian-installer
Install fails with the debian installer complaining about not finding a
common cdrom drive.
A module missing on CD?
The machine has a regular ide (pata) combo drive that works fine with the 9.10
alternate installer.
(For hardware specs see
** Attachment added: BootDmesg.txt
http://launchpadlibrarian.net/41884605/BootDmesg.txt
** Attachment added: CurrentDmesg.txt
http://launchpadlibrarian.net/41884606/CurrentDmesg.txt
** Attachment added: DiskUsage.txt
http://launchpadlibrarian.net/41884607/DiskUsage.txt
** Attachment
Here /boot is on /dev/md0 (raid1), the installer installed grub2 into sda and
sdb, and I am seeing the huge delay.
Does grub2 always/never find /boot to be on the same drive?
Does the proposed patch fix the delay in this case?
** Changed in: grub2 (Ubuntu)
Status: Fix Released => In Progress
For me (on 9.10) the issue with group dirs turned out to be an issue with
the list view not showing all emblems.
I have no idea why you are not seeing the correct emblems for
non-writable dirs (even though a lock may suggest you cannot even enter
the dir).
But I can confirm that dirs owned by
** Description changed:
Binary package hint: nautilus
9.10: Nautilus 2.28.1
In the list view the difference in permissions does not result in
appropriate icon differences. (only one emblem is shown)
- A lock is shown (suggesting heavy access restriction but meaning read-only,
** Description changed:
Binary package hint: nautilus
9.10: Nautilus 2.28.1
In the list view the difference in permissions does not result in
appropriate icon differences. (only one emblem is shown)
i.e. in the case below a lock emblem is shown but the X emblem for
Found that in #371434, maybe it can help some of you:
Would the subscribers of this bug please try booting with the kernel
command line option pciehp.pciehp_force=0 and then sudo modprobe acpiphp?
--
JMicron internal card reader recognizes SD only when inserted at startup
https://bugs.launchpad.net/bugs/258446
The eSATA hotplugging seems to work regularly after booting with the
express card inserted, but as described the express card hotplugging
itself is messed up.
--
PCI ExpressCard hotplug requires pciehp.pciehp_force=1
https://bugs.launchpad.net/bugs/371434
9.10
I got an eSATA disk hotplugged only once with acpiphp now, and only when
it was connected after inserting the express card eSATA controller.
Most of the time it did not work. I noticed an eSATA disk gets
disconnected when connecting a USB disk; I added comments about what
worked and what did not.
Thanks for the hint, mlx. I could see that the express card is present after
boot and disappears when removed. I mistakenly confused the internal SATA
controller with the express card because it seems to use the same
chipset.
lspci
00:00.0 Host bridge: ATI Technologies Inc RS480 Host
** Also affects: policykit (Ubuntu)
Importance: Undecided
Status: New
--
policykit introduction broke unix user/group privileges
https://bugs.launchpad.net/bugs/326135
--
** Summary changed:
- policykit introduction broke unix user/group privileges
+ policykit introduction broke unix user groups
--
policykit introduction broke unix user groups
https://bugs.launchpad.net/bugs/326135
** Summary changed:
- firehol not started on boot (with START_FIREHOL=yes)
+ not started on boot (DNS resolv fails)
--
not started on boot (DNS resolv fails)
https://bugs.launchpad.net/bugs/490317
** Description changed:
Binary package hint: firehol
ubuntu 9.10
- * /etc/init.d/firehol script is there
- * /etc/firehol/firehol.conf is in place
+ The failure to load with domain names used in the firehol.conf may have
+ arisen with the network now set up by upstart's native
** Changed in: ubuntu-docs (Ubuntu)
Assignee: viclyn (oobiauk) => Connor Imes (rocket2dmn)
--
server guide: warn about using ubuntu with raid or provide reliable raid config
https://bugs.launchpad.net/bugs/496478
** Also affects: rdesktop (Ubuntu)
Importance: Undecided
Status: New
--
unable to copy and paste files in an rdesktop session with glipper active
https://bugs.launchpad.net/bugs/283385
(using 9.10)
Neither forcing pciehp nor loading acpiphp made the HD connected to the
following eSATA express card reappear when it was removed and plugged in
again after booting.
02:00.1 IDE interface: JMicron Technology Corp. JMB362/JMB363 AHCI
Controller (rev 03)
Note that lspci still lists
Public bug reported:
Binary package hint: debian-installer
i.e. it's not possible to span a mirror over the laptop's internal disk
and the docking station's disk at the office, while the data on the
laptop resides inside a luks partition.
After creating a luks partition, the installer does not
Public bug reported:
Binary package hint: mdadm
Raid is designed to provide redundancy and failure tolerance. If a raid member
fails and redundancy allows it, system operation is not compromised. Upon
failure of components the raid array is properly degraded on the fly, and the
system *will
** Description changed:
Binary package hint: mdadm
-
- Raid is designed to provide redundancy and failure tolerance. If a raid
member fails and redundancy allows it, system operation is not compromised.
Upon failure of components the raid array is properly degraded on the fly, and
the
Maybe https://wiki.ubuntu.com/ReliableRaid (blocked hotplugging
mechanisms)?
--
RAID1 partitions created in Jaunty not recognised by Karmic
https://bugs.launchpad.net/bugs/538597
--
The thread http://ubuntuforums.org/showthread.php?p=8407182 mentioned on
https://wiki.ubuntu.com/ReliableRaid talks about /dev/md_d* type auto-created
devices as a symptom of undefined / un-whitelisted arrays with ubuntu's
hotplug setup.
--
/dev/md_* devices are not created in /dev
Defaulting to boot_degraded=false is a bad replacement for the buggy handling
of raid degradation (not a valid fix).
https://wiki.ubuntu.com/ReliableRaid
--
bogus debconf question mdadm/boot_degraded
https://bugs.launchpad.net/bugs/539597
manual workaround: install esmtp and procmail (and configure procmail as
local delivery agent, see /usr/share/doc/esmtp)
--
mdadm monitor feature broken, not depending on local MTA/MDA or using
wall/notify-send
https://bugs.launchpad.net/bugs/535417
Are you using brasero? The CD images it burns produce kernel I/O errors
with some optical drives.
https://bugzilla.redhat.com/show_bug.cgi?id=571074
** Bug watch added: Red Hat Bugzilla #571074
https://bugzilla.redhat.com/show_bug.cgi?id=571074
--
IO errors when inserting a disc
are you using brasero?
https://bugzilla.redhat.com/show_bug.cgi?id=571074
--
IO errors when inserting a disc
https://bugs.launchpad.net/bugs/304954
--
are you using brasero?
https://bugzilla.redhat.com/show_bug.cgi?id=571074
** Bug watch added: Red Hat Bugzilla #571074
https://bugzilla.redhat.com/show_bug.cgi?id=571074
--
ISO burn on Intrepid with Brasero will not mount
https://bugs.launchpad.net/bugs/295795
are you using brasero?
https://bugzilla.redhat.com/show_bug.cgi?id=571074
** Bug watch added: Red Hat Bugzilla #571074
https://bugzilla.redhat.com/show_bug.cgi?id=571074
--
Buffer I/O error on device sr0 Logical Block XX in Intrepid Ibex Alpha 5
https://bugs.launchpad.net/bugs/266951
Concerning the cryptsetup wishlist:
Precautions like this show the level of safety and quality in
implementing basic OS operations. If cryptsetup checked the given
device before opening it, this data loss would not occur now, or at any
later time when blkid, an admin or another script makes an error.
Its
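The kind of check meant here can be sketched with cryptsetup's own header test (the device path is a placeholder):

```
dev=/dev/sdXN    # placeholder for the device about to be opened
# refuse to luksOpen a device that does not actually carry a LUKS header
if cryptsetup isLuks "$dev"; then
    cryptsetup luksOpen "$dev" cryptvol
else
    echo "refusing: $dev has no LUKS header" >&2
fi
```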
** Description changed:
Binary package hint: util-linux
After the member is opened as a luks device, it is booted from instead of
the md device, while the raid remains inactive.
I first noticed /proc/mdstat reported the root raid as inactive
(although the system seemed to run fine!).
** Summary changed:
- non-root raids fail to run degraded on boot
+ degraded non-root raids are not run on boot
--
degraded non-root raids are not run on boot
https://bugs.launchpad.net/bugs/259145