On Friday, October 6, [EMAIL PROTECTED] wrote:
>
> This patch has resolved the immediate issue I was having on 2.6.18 with
> RAID10. Prior to this change, after removing a device from the array
> (with mdadm --remove), physically pulling the device, and
> changing/re-inserting it, the "Number" of the new device would be set
> to one above the highest device number present in the array. Now it
> resumes its previous slot.
>
> Does this look like 'correct' output for a 14-drive array from which
> dev 8 was failed/removed and then re-added (with mdadm --add)? I'm
> trying to determine why the device doesn't get pulled back into the
> active configuration and re-synced. Any comments?
Does this patch help?
Fix count of degraded drives in raid10.
Signed-off-by: Neil Brown <[EMAIL PROTECTED]>
### Diffstat output
./drivers/md/raid10.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff .prev/drivers/md/raid10.c ./drivers/md/raid10.c
--- .prev/drivers/md/raid10.c 2006-10-09 14:18:00.000000000 +1000
+++ ./drivers/md/raid10.c 2006-10-05 20:10:07.000000000 +1000
@@ -2079,7 +2079,7 @@ static int run(mddev_t *mddev)
 		disk = conf->mirrors + i;
 
 		if (!disk->rdev ||
-		    !test_bit(In_sync, &rdev->flags)) {
+		    !test_bit(In_sync, &disk->rdev->flags)) {
 			disk->head_position = 0;
 			mddev->degraded++;
 		}
NeilBrown