:> Hm. But I'd think that even with modern drives a smaller number of bigger
:> I/Os is preferable over lots of very small I/Os.
:
:Not necessarily. It depends upon overhead costs per-i/o. With larger I/Os, you
:do pay in interference costs (you can't transfer data for request N because
:the 256Kbytes of request M is still in the pipe).

    This problem has diminished as bus speeds have scaled over the last
    few years.  With 5 MB/sec SCSI busses it was a problem.  With 40, 80,
    and 160 MB/sec it isn't as big an issue any more.

        256K @ 40 MBytes/sec = 6.25 ms.
        256K @ 80 MBytes/sec = 3.125 ms.
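
    A quick sanity check of those figures (illustrative arithmetic only,
    using binary Kbytes/MBytes as above, not drawn from any driver code):

    ```c
    /* Time for one 256 Kbyte request to cross a SCSI bus at several
     * burst rates.  256 Kbytes / (rate * 1024 Kbytes/sec) -> ms. */
    #include <stdio.h>

    int main(void)
    {
        const double kbytes = 256.0;
        const double rates_mb[] = { 5.0, 40.0, 80.0, 160.0 }; /* MBytes/sec */
        size_t i;

        for (i = 0; i < sizeof(rates_mb) / sizeof(rates_mb[0]); i++) {
            double ms = kbytes / (rates_mb[i] * 1024.0) * 1000.0;
            printf("256K @ %5.0f MBytes/sec = %.4f ms\n", rates_mb[i], ms);
        }
        return 0;
    }
    ```

    which reproduces the 6.25 ms and 3.125 ms numbers above (and 50 ms
    for the old 5 MB/sec busses).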

    When you add in write-decoupling (take softupdates, for example), the
    issue becomes even less of a problem.

    The biggest single item that does not scale well is command/response 
    overhead.  I think it has been successfully argued (but I forgot who
    made the point) that 64K is not quite into the sweet spot - that 256K
    is closer to the mark.  But one has to be careful to only issue large
    requests for things that are actually going to be used.  If you read
    256K but only use 8K of it, you just wasted a whole lot of cpu and bus
    bandwidth.
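
    A toy model of why per-command overhead favors larger requests: take
    effective throughput as size / (fixed overhead + size / bus rate).
    The 0.5 ms per-command figure below is purely my own assumption for
    illustration; real values depend on the controller and drive.

    ```c
    /* Effective bus utilization for different request sizes, assuming
     * an 80 MBytes/sec bus and a (hypothetical) 0.5 ms fixed
     * command/response overhead per request. */
    #include <stdio.h>

    int main(void)
    {
        const double overhead_ms = 0.5;                      /* assumed */
        const double rate_k_per_ms = 80.0 * 1024.0 / 1000.0; /* Kbytes/ms */
        const double sizes_k[] = { 8.0, 64.0, 256.0 };
        size_t i;

        for (i = 0; i < sizeof(sizes_k) / sizeof(sizes_k[0]); i++) {
            double xfer_ms = sizes_k[i] / rate_k_per_ms;
            double eff = xfer_ms / (overhead_ms + xfer_ms);
            printf("%4.0fK requests: %.0f%% of peak bus bandwidth\n",
                   sizes_k[i], eff * 100.0);
        }
        return 0;
    }
    ```

    Under those assumptions 8K requests spend most of their time in
    command overhead, 64K is better, and 256K gets within striking
    distance of peak - but only if the extra data is actually used.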

                                        -Matt
                                        Matthew Dillon 
                                        <[EMAIL PROTECTED]>


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-current" in the body of the message