Many mid-range and high-end RAID controllers put a short timeout on 
individual disk I/O operations. If a disk doesn't respond quickly, they'll 
issue an I/O to the redundant disk(s) to get the data back to the host in a 
reasonable time. Often they'll also change parameters on the disk to limit 
how long it retries before returning an error for a bad sector (this is 
standardized for SCSI; I don't recall offhand whether any of it is 
standardized for ATA).
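
To make the idea concrete, here's a rough Python sketch of that
fast-path/fallback read. The FakeDisk class, read_with_fallback, and the
50 ms budget are all made up for illustration; this isn't anybody's
firmware, just the shape of the logic:

import concurrent.futures, time

DISK_TIMEOUT = 0.05          # 50 ms budget per disk I/O; illustrative only

class FakeDisk:
    def __init__(self, name, latency):
        self.name, self.latency = name, latency
    def read(self, lba):
        time.sleep(self.latency)              # stand-in for media access
        return "%s:block%d" % (self.name, lba)

def read_with_fallback(primary, redundant, lba, pool):
    fut = pool.submit(primary.read, lba)
    try:
        return fut.result(timeout=DISK_TIMEOUT)
    except concurrent.futures.TimeoutError:
        # The slow disk hasn't answered; satisfy the host from the
        # redundant copy instead of waiting out the drive's retries.
        return redundant.read(lba)

with concurrent.futures.ThreadPoolExecutor() as pool:
    slow = FakeDisk("sick-disk", 0.5)         # pathologically slow member
    mirror = FakeDisk("mirror", 0.001)
    print(read_with_fallback(slow, mirror, 42, pool))
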

RAID 3 units, e.g. DataDirect, issue I/O to all disks simultaneously, and 
as soon as enough disks (N-1 or N-2) have returned data they send the 
result to the host. At least they do that for full stripes. This strategy 
works well for sequential I/O but not so well for random I/O, since every 
request uses up extra disk bandwidth.
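
A similar toy sketch of that approach (again with made-up disk objects,
not how DataDirect actually implements it): fire the read at every member,
return once N-1 have answered, and let parity cover the straggler:

import concurrent.futures, random, time

class FakeDisk:
    def __init__(self, name):
        self.name = name
    def read(self, stripe):
        time.sleep(random.uniform(0.001, 0.2))   # uneven member latencies
        return "%s:stripe%d" % (self.name, stripe)

def read_full_stripe(disks, stripe, pool, needed):
    # Fire the read at every member of the stripe at once.
    futs = {pool.submit(d.read, stripe): d.name for d in disks}
    answers = {}
    for fut in concurrent.futures.as_completed(futs):
        answers[futs[fut]] = fut.result()
        if len(answers) >= needed:
            # Enough members are back; the straggler's chunk could be
            # rebuilt from parity, so return without waiting for it.
            return answers

disks = [FakeDisk("d%d" % i) for i in range(9)]   # say, 8 data + 1 parity
with concurrent.futures.ThreadPoolExecutor(max_workers=9) as pool:
    print(sorted(read_full_stripe(disks, 7, pool, needed=len(disks) - 1)))
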

Host-based RAID/mirroring almost never takes this approach, for two 
reasons. First, the bottleneck is almost always the channel from disk to 
host, and you don't want to clog it with duplicate data. [Yes, I know 
there's more bandwidth there than the sum of the disks, but consider 
latency.] Second, to read the same block from both disks of a mirror, 
you'd need two memory buffers.
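
For contrast, the usual host-side mirror read policy looks more like this
toy round-robin selector (the Mirror class and device names are invented
for the example; real drivers also weigh queue depth, locality, etc.):
one side, one transfer, one buffer per read:

import itertools

class Disk:
    def __init__(self, name):
        self.name = name
    def read(self, lba):
        return "%s:block%d" % (self.name, lba)

class Mirror:
    # Pick ONE side per read.  Reading both sides would push two copies
    # of the data over the host channel and tie up two buffers.
    def __init__(self, sides):
        self._next = itertools.cycle(sides)
    def read(self, lba):
        return next(self._next).read(lba)

m = Mirror([Disk("c0t0d0"), Disk("c0t1d0")])
print([m.read(lba) for lba in range(4)])          # alternates sides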