[Bug 221162] [NEW] FGLRX with compiz fusion and KVM XP crashes X11

2008-04-23 Thread netslayer
Public bug reported:

Updated my 8.04 base to the latest today; I have been running it for a
while and enabled Compiz Fusion. Once I launch KVM with my Windows XP VM
and then start using heavy I/O, it will crash X11. I don't even have to
be moving the mouse or any windows, or using effects; it will just crash
almost predictably (3 times in 10 minutes).

Kernel: 2.6.24-16-generic #1 SMP Thu Apr 10 13:23:42 UTC 2008 i686
GNU/Linux

Laptop: Lenovo T60, 2GB RAM

Hardware:
00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03)
00:01.0 PCI bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express PCI Express Root Port (rev 03)
00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 02)
00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 02)
00:1c.1 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 2 (rev 02)
00:1c.2 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 3 (rev 02)
00:1c.3 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 4 (rev 02)
00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 02)
00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 02)
00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 02)
00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 02)
00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 02)
00:1f.2 SATA controller: Intel Corporation 82801GBM/GHM (ICH7 Family) SATA AHCI Controller (rev 02)
00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 02)
01:00.0 VGA compatible controller: ATI Technologies Inc M52 [Mobility Radeon X1300]
02:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller
03:00.0 Network controller: Intel Corporation PRO/Wireless 3945ABG Network Connection (rev 02)
15:00.0 CardBus bridge: Texas Instruments PCI1510 PC card Cardbus Controller

X11 Backtrace:
snip
SetClientVersion: 0 9
Warning: LookupDrawable()/SecurityLookupDrawable() are deprecated.  Please convert your driver/module to use dixLookupDrawable().
Receive 3D performance mode message with status: 0001
(II) XAA: Evicting pixmaps

Backtrace:
0: /usr/bin/X(xf86SigHandler+0x7e) [0x80c780e]
1: [0xb7fb3420]
2: /usr/lib/dri/fglrx_dri.so [0xb4e2d903]
3: /usr/lib/dri/fglrx_dri.so [0xb4e2fa62]
4: /usr/lib/dri/fglrx_dri.so [0xb4e3ee6f]
5: /usr/lib/dri/fglrx_dri.so [0xb4e3f03d]
6: /usr/lib/dri/fglrx_dri.so [0xb497df88]
7: /usr/lib/dri/fglrx_dri.so [0xb4bf9145]
8: /usr/lib/xorg/modules/extensions//libglx.so [0xb7c176e5]
9: /usr/lib/xorg/modules/extensions//libglx.so [0xb7bdc99c]
10: /usr/lib/xorg/modules/extensions//libglx.so [0xb7bdc077]
11: /usr/lib/xorg/modules/extensions//libglx.so [0xb7be0996]
12: /usr/bin/X [0x81506de]
13: /usr/bin/X(Dispatch+0x2cf) [0x808d8df]
14: /usr/bin/X(main+0x48b) [0x807471b]
15: /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0) [0xb7d45450]
16: /usr/bin/X(FontFileCompleteXLFD+0x201) [0x8073a91]

Fatal server error:
Caught signal 8.  Server aborting

(II) AIGLX: Suspending AIGLX clients for VT switch

** Affects: ubuntu
 Importance: Undecided
 Status: New

[Bug 221162] Re: FGLRX with compiz fusion and KVM XP crashes X11

2008-04-23 Thread netslayer
Disabling Compiz Fusion entirely (setting Appearance to None in Ubuntu)
fixes the problem, obviously, but it would be nice to use an accelerated
desktop.

KVM run options:
kvm -localtime -net nic,macaddr=DE:AD:BE:EF:10:27 -net tap -smp 2 -m 1024 -usb /var/kvm/windows-xp.img

[Bug 221162] Re: FGLRX with compiz fusion and KVM XP crashes X11

2008-04-23 Thread netslayer
*** This bug is a duplicate of bug 172715 ***
https://bugs.launchpad.net/bugs/172715

Possible dupe of 172715

** This bug has been marked a duplicate of bug 172715
   [hardy] Xorg crash

[Bug 172715] Re: [hardy] Xorg crash

2008-04-23 Thread netslayer
ogc, we both have "(II) AIGLX: Suspending AIGLX clients for VT switch" at
the end of our Xorg log output.. interesting? I've attached mine.

** Attachment added: Xorg.0.log.old
   http://launchpadlibrarian.net/13841131/Xorg.0.log.old

[Bug 172715] Re: [hardy] Xorg crash

2008-04-23 Thread netslayer
Please see my possible duplicate report; it may be helpful for reproducing this:
https://bugs.launchpad.net/ubuntu/+bug/221162

[Bug 151327] Re: [Gutsy] binary graphics drivers don't load with linux-image-2.6.22-14-xen

2008-03-06 Thread netslayer
Hardy Alpha 5 (fresh install CD + online update) + xen + fglrx fails with the
same errors as above, on the 2.6.24 xen kernel sources.
I reinstalled with Gutsy (fresh install CD) + xen + fglrx and I get the same..

** shrugs..

[Bug 140854] Re: Boot with Software Raid most times causes mdadm to not complete (possible race)

2007-09-21 Thread netslayer
I should experiment with my 4-disk RAID 5 array disconnected. It's the
one that has half SATA and half PATA drives as well. We have pretty much
the same setup, and I just formatted my drive and installed Gutsy, so
it's probably not just us.

[Bug 140854] Re: Boot with Software Raid most times causes mdadm to not complete (possible race)

2007-09-21 Thread netslayer
*** This bug is a duplicate of bug 139802 ***
https://bugs.launchpad.net/bugs/139802

I just tried the solution in the dup bug
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/139802

It looks like /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf is
detecting an array I don't have from somewhere. You can see that the
output in my mdadm.conf is totally messed up, and it makes sense now why
it's trying to add a drive to an array that doesn't exist. It could be
an older one I had.. ??

ARRAY /dev/md0 level=raid5 num-devices=4 UUID=2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=db874dd1:759d986d:cc05af5c:cfa1abed
ARRAY /dev/md1 level=raid5 num-devices=5 UUID=9c2534ac:1de3420b:e368bf24:bd0fce41

mdadm -E /dev/sdf
UUID : db874dd1:759d986d:cc05af5c:cfa1abed

mdadm -E /dev/sdh
UUID : db874dd1:759d986d:cc05af5c:cfa1abed

Interesting, so what happened is that I bought 3 new drives when I
created this array, and the remaining two I formatted and put in here
are the ones that still have the old UUIDs. Since the raid array was
created on partitions for these drives instead of on the physical
devices, two of my drives carry two UUIDs each. Then I guess udev finds
them at different times and fails when it tries to bring up the old
array and add it to an existing one.. a total race condition.

So how do I get rid of the other UUIDs safely..
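
For anyone hitting the same thing, a quick loop like this lists every md
superblock UUID so the stale ones stand out (the device globs match my
drive names; adjust them to yours):

for d in /dev/sd? /dev/sd?1 /dev/hd?; do
    echo "== $d"; sudo mdadm -E $d 2>/dev/null | grep UUID
done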

[Bug 139802] Re: long bootup, dmesg full of md: array md1 already has disks!

2007-09-21 Thread netslayer
I think I figured out why we both have this bug; please read
https://bugs.launchpad.net/ubuntu/+bug/140854

[Bug 140854] Re: Boot with Software Raid most times causes mdadm to not complete (possible race)

2007-09-21 Thread netslayer
*** This bug is a duplicate of bug 139802 ***
https://bugs.launchpad.net/bugs/139802

I can confirm this fixed it, but I think it's still a bug in the mdadm
tool or in udev, which handles this poorly:

1. Use mdadm -E /dev/sdX1 and mdadm -E /dev/sdX to examine the UUIDs of
the raid arrays. One of them will be the one you no longer use; check
the other drives and determine which UUID is no longer valid.
2. Unmount the raid array (umount /dev/md1), then mdadm --stop /dev/md1
3. Then mdadm --zero-superblock /dev/sdf
4. Then mdadm --zero-superblock /dev/sdh, as I had two bad ones
5. Remount the array, reboot (the full sequence is consolidated below)

Mine works perfectly now :-)
Please make sure you're zeroing out the right drives before you do
this..
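
Run together, the fix looked roughly like this on my box (sdf and sdh
are MY stale devices; verify yours with mdadm -E before zeroing
anything):

sudo umount /dev/md1
sudo mdadm --stop /dev/md1
sudo mdadm --zero-superblock /dev/sdf
sudo mdadm --zero-superblock /dev/sdh
sudo reboot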

[Bug 140854] Boot with Software Raid most times causes mdadm to not complete (possible race)

2007-09-18 Thread netslayer
Public bug reported:

Since Feisty, when I implemented this raid configuration, mdadm has not
always booted correctly even though everything seems set up perfectly. I
just did a format and fresh install of Gutsy and it is still doing it.
During early boot the status bar will start to load, get about 1/5th of
the way, and lock (disk activity light solid / high I/O). I can go to a
console, and I know at this point to press Ctrl+Alt+Del to reboot, or
else it will eventually fail out. Eventually my system will boot.

Once I was able to get a terminal: ps aux | grep mdadm
root      4508  0.0  0.0  10360   608 ?        S    00:49   0:00 /lib/udev/watershed /sbin/mdadm --assemble --scan --no-degraded
root      4509  0.0  0.0  10364   580 ?        S    00:49   0:00 /lib/udev/watershed /sbin/mdadm --assemble --scan --no-degraded
root      8436  0.0  0.0  12392   528 ?        Ss   00:54   0:00 /sbin/mdadm --monitor --pid-file /var/run/mdadm/monitor.pid --daemonise --scan --syslog
root      8892 34.1 26.6 562048 550600 ?       D    00:56   0:34 /sbin/mdadm --assemble --scan --no-degraded
chris     8951  0.0  0.0   5124   836 pts/2    S+   00:57   0:00 grep mdadm

Notice that the last mdadm process kicked off during boot is using 27%
of my memory and creeping up fast; in a matter of a minute all my RAM is
gone (2GB) and it hits swap until it consumes that too and the system
becomes unusable. This occurs half the time I start my computer, and the
drives are always detected during POST. I can even killall the mdadm
processes and they reappear with the same behavior. I feel it is some
kind of race condition between udev/mount/mdadm that places my drives in
a state that will not work. I even tried adding the udevsettle timeout
in the init but it didn't help.
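
A simple way to watch that growth, using only standard procps tools (the
exact field list is just my choice):

watch -n1 'ps -o pid,pmem,rss,args -C mdadm'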

Whether I ran mdadm --assemble --scan manually or just checked dmesg, it
would flood with this:
md: array md0 already has disks!

Until recently the --no-degraded option was not on by default, and I'd
always seem to lose a drive during boot if I restarted during the mount
attempt with Ctrl+Alt+Del. I would have to re-add it.

Setup:
Latest Gutsy Ubuntu
AMD Opteron 170 (X2)

/dev/md0:
4x300GB drives

/dev/md1:
5x500GB drives

md0 : active raid5 sde[0] hdd[3] hdb[2] sdf[1]
      879171840 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid5 sdh1[0] sdd1[4] sdc1[3] sdb1[2] sda1[1]
      1953535744 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

[EMAIL PROTECTED]:~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
#ARRAY /dev/md0 level=raid0 num-devices=2 UUID=db874dd1:759d986d:cc05af5c:cfa1abed
ARRAY /dev/md1 level=raid5 num-devices=5 UUID=9c2534ac:1de3420b:e368bf24:bd0fce41
ARRAY /dev/md2 level=raid0 num-devices=2 UUID=0cc8706d:e9eedd66:33a70373:7f0eea01
ARRAY /dev/md3 level=raid0 num-devices=2 UUID=af45c1d8:338c6b67:e4ad6b92:34a89b78

# This file was auto-generated on Wed, 06 Jun 2007 19:58:28 -0700
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $

** Affects: ubuntu
 Importance: Undecided
 Status: New

[Bug 140854] Re: Boot with Software Raid most times causes mdadm to not complete (possible race)

2007-09-18 Thread netslayer
I just noticed that the mdadm.conf I posted doesn't match my drive
configuration, and I don't think it's being used, since all the UUIDs
are on the partition tables. In Feisty I had this file set up correctly
and it made no difference. (This is what Gutsy set up for me.)
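
For what it's worth, the conf is auto-generated by /usr/share/mdadm/mkconf
(see the comment at the bottom of the file), so you can preview what it
would produce today without overwriting anything; the diff pipeline is
just my habit, not something from this bug:

sudo /usr/share/mdadm/mkconf | diff /etc/mdadm/mdadm.conf -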

My drive IDs as mapped to the above cat /proc/mdstat:
/dev/md0
/dev/sde   UUID : 2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d (local to host delorean)
/dev/hdd   UUID : 2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d (local to host delorean)
/dev/hdb   UUID : 2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d (local to host delorean)
/dev/sdf   UUID : 2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d (local to host delorean)

/dev/md1
/dev/sdh1 UUID : 2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d (local to host delorean)
/dev/sdd1 UUID : 2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d (local to host delorean)
/dev/sdc1 UUID : 2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d (local to host delorean)
/dev/sdb1 UUID : 2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d (local to host delorean)
/dev/sda1 UUID : 2fc65d39:8f7cbdb5:7072d7cc:0e4fe29d (local to host delorean)

[Bug 140854] Re: Boot with Software Raid most times causes mdadm to not complete (possible race)

2007-09-18 Thread netslayer
** sighs (one more correction)
The second set of UUIDs are actually all the same as below (poor copy
and paste in my last comment):
[EMAIL PROTECTED]:~$ sudo mdadm -E /dev/sdh1 | grep UUID
   UUID : 9c2534ac:1de3420b:e368bf24:bd0fce41
[EMAIL PROTECTED]:~$ sudo mdadm -E /dev/sdd1 | grep UUID
   UUID : 9c2534ac:1de3420b:e368bf24:bd0fce41
[EMAIL PROTECTED]:~$ sudo mdadm -E /dev/sdc1 | grep UUID
   UUID : 9c2534ac:1de3420b:e368bf24:bd0fce41
[EMAIL PROTECTED]:~$ sudo mdadm -E /dev/sdb1 | grep UUID
   UUID : 9c2534ac:1de3420b:e368bf24:bd0fce41
[EMAIL PROTECTED]:~$ sudo mdadm -E /dev/sda1 | grep UUID
   UUID : 9c2534ac:1de3420b:e368bf24:bd0fce41

[Bug 110023] Upgraded edgy to feisty, device-mapper is dieing on raid0 arrays

2007-04-25 Thread netslayer
Public bug reported:

On boot the kernel's device mapper continuously floods output:

[35923.922369] device-mapper: table: 254:5: striped: Couldn't parse stripe destination
[35923.922420] device-mapper: ioctl: error adding target to table
[35924.034654] device-mapper: table: 254:5: striped: Couldn't parse stripe destination
[35924.034704] device-mapper: ioctl: error adding target to table

The root drive is correctly mapped:
/dev/mapper/hda3  30715280   7114904  23600376  24% /

but both of my raid 0 arrays (2 disks each) are not working anymore:
brw-rw 1 root disk 9,   0 2007-04-25 00:59 /dev/md0
brw-rw 1 root disk 9, 255 2007-04-25 00:59 /dev/md255

The mdstat output shows one is online:
chris@:~$ cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 hdc[0] sdc[1]
  586114560 blocks 64k chunks

unused devices: <none>

but it won't mount with my original fstab entry of /dev/md0 and
complains of a wrong fs type.

This output seems wrong: it states there are two arrays on the same device?
chris@:~$ sudo vim /etc/mdadm/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=db874dd1:759d986d:6a6c7bce:5be0cc86
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=af45c1d8:338c6b67:e4ad6b92:34a89b78
MAILADDR root
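
A quick way to check which of those two UUIDs the assembled array
actually carries (md0 being the only one mdstat shows active):

sudo mdadm --detail /dev/md0 | grep UUID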

chris@:~$ ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 17 2007-04-25 01:00 352ecc55-8262-41b6-a689-2500824e5ecc -> ../../mapper/hda3
lrwxrwxrwx 1 root root 17 2007-04-25 01:00 5746f359-1050-46bd-b4da-306be01b9377 -> ../../mapper/hda2
lrwxrwxrwx 1 root root  9 2007-04-25 00:59 60b92513-d244-4148-9d7c-0fc7702ff438 -> ../../md0
lrwxrwxrwx 1 root root 19 2007-04-25 00:59 60fcedab-d7ef-494c-ad1b-c1659550366f -> ../../mapper/md|md0
lrwxrwxrwx 1 root root 17 2007-04-25 01:00 9159561a-d3f3-4fd8-ba75-075f9607b0e8 -> ../../mapper/hda4
lrwxrwxrwx 1 root root 17 2007-04-25 01:00 e831455b-f381-4c13-817d-d7ebaa3f1d5d -> ../../mapper/hda1

chris@:~$ sudo fdisk -l (snippet)
Disk /dev/md0: 600.1 GB, 600181309440 bytes
2 heads, 4 sectors/track, 146528640 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table
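
Since everything here sits under /dev/mapper, it may also be worth
dumping what device-mapper actually set up; these two stock dmsetup
commands list every mapped device and its table, including the striped
targets it is failing to build:

sudo dmsetup ls
sudo dmsetup table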

This is the result of an upgrade from a working Edgy installation, which
was itself upgraded from an original Dapper install. The previous kernel
was 2.6.17-11, and now I'm using 2.6.20-15. I tried booting the 2.6.17
kernel and it soft-panicked in the device mapper. A patch was added to
this code (I googled the above error message) that traps this error in
the recent kernel, so something happened to my arrays during the
upgrade. Any ideas?

** Affects: Ubuntu
 Importance: Undecided
 Status: Unconfirmed
