tags 396582 - unreproducible
thanks

thus spake Dan Pascu <[EMAIL PROTECTED]> [2006.11.02.0946 +0100]:
> Yes. In my case, if I fail a drive, it is still there in a failed
> state, but if I then stop the RAID array, the failed drive is gone
> when the array is restarted, as if it had been removed in the
> meantime, even though I never issued a remove command. And when the
> array starts, it shows that it started degraded with only 1 out of
> 2 drives.

I managed to reproduce it; you just have to write to the array after
failing the drive and before stopping the array. My guess is that the
write bumps the event count on the remaining device, so at reassembly
the failed drive's superblock looks stale and mdadm drops it:

piper:~# mdadm -Cl1 -n2 /dev/md99 /dev/sd[ef]1
piper:~# mdadm --fail /dev/md99 /dev/sde1
mdadm: set /dev/sde1 faulty in /dev/md99
piper:~# dd if=/dev/zero of=/dev/md99
[...]
65667072 bytes (66 MB) copied, 2.57956 seconds, 25.5 MB/s
piper:~# mdadm -Ss
mdadm: stopped /dev/md99
piper:~# mdadm -As
mdadm: /dev/md/99 has been started with 1 drive (out of 2).
mdadm: /dev/md/99 already active, cannot restart it!
mdadm: /dev/md/99 already active, cannot restart it!
[...]
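
If my event-count theory is right, comparing the superblocks before
reassembling should show the surviving device pulling ahead of the
failed one. A hypothetical continuation of the session above (not
output from the report):

piper:~# mdadm -E /dev/sdf1 | grep Events
piper:~# mdadm -E /dev/sde1 | grep Events

If so, sdf1 should report a higher event count than the failed sde1.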

Neil, can we apply the patch contributed to fix this:

  http://bugs.debian.org/cgi-bin/bugreport.cgi/mdadm-fix-infinite-loop.diff?bug=396582;msg=5;att=1

or do I recall correctly that you previously replaced devlist with
NULL to fix another bug?
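
In case it helps with testing, this is roughly how I would check a
candidate fix against the reproduction above (the source directory
and patch level are hypothetical; adjust them to the actual tree):

piper:~# cd mdadm-2.5.6
piper:~/mdadm-2.5.6# patch -p1 < mdadm-fix-infinite-loop.diff
piper:~/mdadm-2.5.6# make
piper:~/mdadm-2.5.6# ./mdadm -Ss
piper:~/mdadm-2.5.6# ./mdadm -As

With the fix applied, the last command should start the array
degraded once and exit, instead of looping on the "already active"
message.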

Full report: http://bugs.debian.org/396582

-- 
 .''`.   martin f. krafft <[EMAIL PROTECTED]>
: :'  :  proud Debian developer, author, administrator, and user
`. `'`   http://people.debian.org/~madduck - http://debiansystem.info
  `-  Debian - when you have better things to do than fixing systems
