also sprach Dan Pascu <[EMAIL PROTECTED]> [2006.11.01.2323 +0100]:
> Also I've noticed something weird in the test you did. After failing sde1 
> from md99 and stopping the array, when it was started with the startup 
> script it said it assembled md99 with 2 drives. The same was said by 
> mdadm --assemble later, as if between stopping and starting it the failed 
> drive was magically re-added. The message should have been something 
> like "starting degraded array with 1 drive (out of 2)" if I'm not 
> mistaken.

No, because the drive was only marked as failed, not yet removed
from the array. On reassembly, it simply gets added again.
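
To illustrate the distinction, here is a rough sketch of the two
operations (/dev/sdd1 below is just a placeholder for the second
member; only md99 and sde1 come from your mail):

  # mark the member as faulty; it stays listed in the array
  mdadm /dev/md99 --fail /dev/sde1
  # only this step actually detaches it from the array
  mdadm /dev/md99 --remove /dev/sde1
  # without the remove, a stop/assemble cycle picks it up again
  mdadm --stop /dev/md99
  mdadm --assemble /dev/md99 /dev/sdd1 /dev/sde1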

Are you seeing different behaviour?

> Personalities : [raid1] 
> md1 : active raid1 sdb2[1] sda2[0]
>       231609024 blocks [2/2] [UU]
>       bitmap: 5/221 pages [20KB], 512KB chunk
> 
> md0 : active raid1 sdb1[1] sda1[0]
>       12586816 blocks [2/2] [UU]
>       bitmap: 12/193 pages [48KB], 32KB chunk

You are using write-intent bitmaps; I am not. Maybe that's the cause?
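
For reference, you can check whether an array has a bitmap with
something along these lines (md0 and sda1 taken from your mdstat
output above):

  # the detail output should show whether a bitmap is active
  mdadm --detail /dev/md0
  # or inspect the bitmap on a member device directly
  mdadm --examine-bitmap /dev/sda1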

Could you please recreate the problem from scratch and show me *all*
steps?
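
By "all steps" I mean a transcript roughly like the following sketch
(/dev/sdd1 is again a placeholder; only md99 and sde1 are from your
mail, and the bitmap option mirrors your setup):

  # create a two-disk RAID1 with an internal bitmap
  mdadm --create /dev/md99 --level=1 --raid-devices=2 \
        --bitmap=internal /dev/sdd1 /dev/sde1
  cat /proc/mdstat
  # fail one member, then stop and reassemble
  mdadm /dev/md99 --fail /dev/sde1
  mdadm --stop /dev/md99
  mdadm --assemble /dev/md99 /dev/sdd1 /dev/sde1
  cat /proc/mdstat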

-- 
 .''`.   martin f. krafft <[EMAIL PROTECTED]>
: :'  :  proud Debian developer, author, administrator, and user
`. `'`   http://people.debian.org/~madduck - http://debiansystem.info
  `-  Debian - when you have better things to do than fixing systems
