Bug#385951: marked as done (Fail to scan array in some case when partition is at the end of the disk)

2006-10-19 Thread Debian Bug Tracking System
Your message dated Thu, 19 Oct 2006 09:33:09 +0200
with message-id [EMAIL PROTECTED]
and subject line (no subject)
has caused the attached Bug report to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what I am
talking about this indicates a serious mail system misconfiguration
somewhere.  Please contact me immediately.)

Debian bug tracking system administrator
(administrator, Debian Bugs database)

---BeginMessage---

Package: mdadm
Version: 2.5.2-7
Severity: critical

The current mdadm in testing makes the whole system unbootable if a
RAID slice is at the end of a disk (in some cases).

My config:
---mdadm.conf---
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=88cf7fb7:6fab12d7:b713c983:af6eaca5
MAILADDR root

---hdc---
Disk /dev/hdc: 60.0 GB, 60060155904 bytes
16 heads, 63 sectors/track, 116374 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

   Device Boot      Start        End      Blocks   Id  System
/dev/hdc1   *           1        496      249983+  83  Linux
/dev/hdc2             497       1488      499968   82  Linux swap / Solaris
/dev/hdc3            1489     116374    57902544   fd  Linux raid autodetect
-
---hdd---
Disk /dev/hdd: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start        End      Blocks   Id  System
/dev/hdd1               1       9728    78140159+  8e  Linux LVM
/dev/hdd2            9729      19456    78140160   8e  Linux LVM
/dev/hdd3           31705      38913    57906292+  fd  Linux raid autodetect
-

---/proc/mdstat---
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdd3[0]
      57902464 blocks [2/1] [U_]

unused devices: <none>
--

Please note that I had to run the array in degraded mode, as /dev/hdc3
cannot be part of it (see below).

When I run mdadm --assemble --scan --auto=yes (as done in
/etc/init.d/mdadm-raid), I get the following output:

---
mdadm: no recogniseable superblock on /dev/hda2
mdadm: /dev/hda2 has wrong uuid.
mdadm: no recogniseable superblock on /dev/hda1
mdadm: /dev/hda1 has wrong uuid.
mdadm: no recogniseable superblock on /dev/hda
mdadm: /dev/hda has wrong uuid.
mdadm: no RAID superblock on /dev/hdd2
mdadm: /dev/hdd2 has wrong uuid.
mdadm: no RAID superblock on /dev/hdd1
mdadm: /dev/hdd1 has wrong uuid.
mdadm: no RAID superblock on /dev/hdd
mdadm: /dev/hdd has wrong uuid.
mdadm: no RAID superblock on /dev/hdc2
mdadm: /dev/hdc2 has wrong uuid.
mdadm: no RAID superblock on /dev/hdc1
mdadm: /dev/hdc1 has wrong uuid.
mdadm: no RAID superblock on /dev/vg1/lv_hathi
mdadm: /dev/vg1/lv_hathi has wrong uuid.
mdadm: no RAID superblock on /dev/vg1/lv_misc
mdadm: /dev/vg1/lv_misc has wrong uuid.
mdadm: no RAID superblock on /dev/vg1/lv_mirror
mdadm: /dev/vg1/lv_mirror has wrong uuid.
mdadm: WARNING /dev/hdc3 and /dev/hdc appear to have very similar superblocks.
  If they are really different, please --zero the superblock on one
  If they are the same, please remove one from the list.
---
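
For context, a minimal sketch of why /dev/hdc and /dev/hdc3 look identical
here (my reading, assuming the array uses the 0.90 superblock format, which
mdadm of this era creates by default): a 0.90 superblock is stored in the
last 64 KiB-aligned 64 KiB block of whatever device it is read through.
Since /dev/hdc3 ends exactly at the physical end of /dev/hdc, that window
falls on the same sectors for both device nodes.

# Sketch only: compute the 0.90 superblock window for a device.
# blockdev --getsz reports the device size in 512-byte sectors.
dev=/dev/hdc
size_kib=$(( $(blockdev --getsz "$dev") / 2 ))   # device size in KiB
sb_kib=$(( (size_kib & ~63) - 64 ))              # last 64 KiB-aligned 64 KiB block
echo "$dev: superblock window starts ${sb_kib} KiB into the device"
# Repeating this for /dev/hdc3 gives a window that, offset by the
# partition start, maps to the same physical sectors on the disk.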

Cleaning the superblock of /dev/hdc also cleans the one on /dev/hdc3,
leaving the RAID in a degraded state. But at least I can now boot the
system after starting md0 by hand.
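
A possible recovery sketch (untested, and only after double-checking which
copy is the valid member superblock): since both names reach the same
sectors, zero the superblock once through the partition, then re-add the
partition so the array resyncs onto it.

# Untested sketch: clear the shared superblock, then re-add the
# partition to the degraded array so md resyncs and rewrites it.
mdadm --zero-superblock /dev/hdc3   # /dev/hdc would hit the same sectors
mdadm /dev/md0 --add /dev/hdc3

Note that the rewritten 0.90 superblock would land in the same end-of-disk
window again, so the ambiguity would likely reappear at the next scan
unless the partition no longer ends flush with the disk.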

Note that the partition /dev/hdd3 is also at the end of its disk but
does not cause problems.
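
The geometry above suggests why (my arithmetic, not from the report):
hdc's cylinders exactly fill its disk, while hdd's last cylinder ends
roughly 2.5 MiB short of the physical end of the disk, so the superblock
window of /dev/hdd3 does not coincide with that of /dev/hdd.

# Cross-check against the fdisk output above:
echo $(( 116374 * 516096 ))   # 60060155904  = exactly the size of /dev/hdc
echo $(( 38913 * 8225280 ))   # 320070320640 < 320072933376 (size of /dev/hdd)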

This is a very critical bug and should be fixed in etch (I think this
is release-critical!).
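
One mitigation sketch until a fixed package is installed (my suggestion,
untested): enumerate partition device nodes in mdadm.conf instead of
using DEVICE partitions, so that whole-disk nodes such as /dev/hdc are
never scanned for superblocks.

# Hypothetical mdadm.conf, written via a shell heredoc for illustration:
cat > /etc/mdadm/mdadm.conf <<'EOF'
DEVICE /dev/hd[a-d][0-9]
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=88cf7fb7:6fab12d7:b713c983:af6eaca5
MAILADDR root
EOF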

-- Package-specific info:
--- mount output
/dev/hda1 on / type ext3 (rw)
proc on /proc type proc (rw)
tmpfs on /dev/shm type tmpfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,mode=1777)
/dev/sysvg/lv_usr on /usr type ext3 (rw,noatime)
/dev/sysvg/lv_var on /var type reiserfs (rw)
/dev/sysvg/lv_local on /usr/local type ext3 (rw,noatime)
/dev/sysvg/lv_home on /home type reiserfs (rw,nosuid,nodev)
/dev/vg1/lv_misc on /misc type reiserfs (rw,nosuid,nodev)
/dev/vg1/lv_mirror on /mirror type reiserfs (rw,nosuid,nodev)
/dev/vg1/lv_hathi on /hathi type ext2 (ro)
capifs on /dev/capi type capifs (rw,mode=0666)
/proc/bus/usb on /proc/bus/usb type usbdevfs (rw)
AFS on /afs type afs (rw)
localhost:/var/lib/cfs/.cfsfs on /var/cfs type nfs (rw,port=3049,intr,nfsvers=2,addr=127.0.0.1)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

--- mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=88cf7fb7:6fab12d7:b713c983:af6eaca5
MAILADDR root

--- /proc/mdstat:
Personalities : [raid1]
read_ahead 1024 sectors
md0 : active raid1 hdd3[0]
      57902464 blocks [2/1] [U_]

unused devices: <none>

--- /proc/partitions:
major minor  

Bug#385951: marked as done (Fail to scan array in some case when partition is at the end of the disk)

2006-10-13 Thread Debian Bug Tracking System
Your message dated Fri, 13 Oct 2006 00:17:31 -0700
with message-id [EMAIL PROTECTED]
and subject line Bug#385951: fixed in mdadm 2.5.4-1
has caused the attached Bug report to be marked as done.

Bug#385951: marked as done (Fail to scan array in some case when partition is at the end of the disk)

2006-09-17 Thread Debian Bug Tracking System
Your message dated Sun, 17 Sep 2006 11:18:52 -0700
with message-id [EMAIL PROTECTED]
and subject line Bug#385951: fixed in mdadm 2.5.3.git200608202239-5
has caused the attached Bug report to be marked as done.

Bug#385951: marked as done (Fail to scan array in some case when partition is at the end of the disk)

2006-09-13 Thread Debian Bug Tracking System
Your message dated Wed, 13 Sep 2006 05:47:06 -0700
with message-id [EMAIL PROTECTED]
and subject line Bug#385951: fixed in mdadm 2.5.3.git200608202239-3
has caused the attached Bug report to be marked as done.
