Thanks. That's really helpful.
When doing this, I used "tail -f /var/log/messages" to monitor my
program, which does I/O to md0.
After sdd was physically removed, the I/O operations to md0 seemed to
continue:
"tail -f /var/log/messages" occasionally shows:
SCSI error: return code = 0x00040000
kernel: end_request: I/O error, dev sdd, sector 15357599
but showed all the other messages just as sdd was still there.
Is this normal?
I am really not sure whether the results of my program are still
trustworthy.
Also, if sdd1 is the only device that fails, is there any way to detect
it, similar to what "mdadm --monitor" does?
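In the meantime, I am trying to catch these errors myself with something
like the following (just a rough sketch based on the kernel messages
above; the sd[a-d] device names are specific to my setup):

  # watch the kernel log for I/O errors on any of the md0 member disks
  tail -f /var/log/messages | grep --line-buffered 'I/O error, dev sd[a-d]'

I am not sure how reliable this is compared to a real monitor, though.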
Thank you.
On Tue, 2008-01-08 at 14:53 +1100, Neil Brown wrote:
> On Monday January 7, [EMAIL PROTECTED] wrote:
> >
> > The /dev/md0 is set as RAID0
> > "cat /proc/mdstat" shows
> > md0 : active raid0 sda1[0] sdd1[3] sdc1[2] sdb1[1]
> > 157307904 blocks 64k chunks
> >
> > Then sdd is removed.
> >
> > But "cat /proc/mdsta" still shows the same information as above, while two
> > RAID5 devices show their sdd parts as (F)
> > md0 : active raid0 sda1[0] sdd1[3] sdc1[2] sdb1[1]
> > 157307904 blocks 64k chunks
> >
> > Is this normal?
>
> Yes.
>
> raid0 is not real raid. It is not able to cope with disk failures, so
> it doesn't even try. Devices in a raid0 are never marked failed as
> doing so would be of no benefit.
>
> NeilBrown