:>
:>I agree that it is obvious for NFS, but I don't see it as being
:>obvious at all for (modern) disks, so for that case I would like
:>to see numbers.
:>
:>If running without clustering is just as fast for modern disks,
:>I think the clustering needs to be rethought.
:
: Depends on the type of disk drive and how it is configured. Some drives
:perform badly (skip a revolution) with back-to-back writes. In all cases,
:without aggregation of blocks, you pay the extra cost of additional interrupts
:and I/O rundowns, which can be a significant factor. Also, unless the blocks
:were originally written by the application in a chunk, they will likely be
:mixed with blocks to varying locations, in which case for drives without
:write caching enabled, you'll have additional seeks to write the blocks out.
:Things like this don't show up when doing simplistic sequential write tests.
:
:-DG
:
:David Greenman
:Co-founder/Principal Architect, The FreeBSD Project - http://www.freebsd.org
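To make DG's aggregation point concrete, here is a minimal sketch of what
block coalescing buys you: one command and one completion interrupt per
contiguous run of blocks instead of one per block. The types and the
issue_io() stub are hypothetical stand-ins for illustration, not actual
kernel code, and the dirty blocks are assumed to be sorted by block number.

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct dirty_block {
	uint64_t lbn;		/* logical block number on the device */
	void	*data;		/* the dirty data itself */
};

/* Stand-in for the driver strategy routine: each call costs one
 * command setup, one completion interrupt, and one I/O rundown. */
static void
issue_io(uint64_t start_lbn, size_t nblocks)
{
	printf("write %zu blocks starting at lbn %llu\n",
	    nblocks, (unsigned long long)start_lbn);
}

/* Coalesce physically contiguous dirty blocks into the fewest
 * possible transfers. */
static void
flush_clustered(const struct dirty_block *blk, size_t n)
{
	size_t i = 0;

	while (i < n) {
		size_t run = 1;

		/* Extend the run while the next block is adjacent. */
		while (i + run < n &&
		    blk[i + run].lbn == blk[i].lbn + run)
			run++;

		issue_io(blk[i].lbn, run);	/* one I/O for the whole run */
		i += run;
	}
}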
I have an excellent example of this related to NFS. It's still applicable
even though the NFS point has already been conceded.
As part of the performance enhancements package I extended the sequential
detection heuristic to the NFS server-side code and turned on clustering.
On the server, mind you, not the client.
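The shape of the heuristic is simple enough to sketch in a few lines.
This is only an illustration with hypothetical names (rd_state,
read_heuristic, SEQ_MAX), not the actual patch: remember where a
sequential reader would read next, widen the read-ahead window while that
prediction keeps coming true, and collapse it the moment it doesn't.

#include <stddef.h>
#include <sys/types.h>

#define SEQ_MAX	127	/* clamp on the read-ahead window */

struct rd_state {
	off_t	next_offset;	/* where a sequential reader reads next */
	int	seqcount;	/* current read-ahead window, in blocks */
};

static int
read_heuristic(struct rd_state *rs, off_t offset, size_t len)
{
	if (offset == rs->next_offset) {
		/* The client is walking the file in order: widen the
		 * window so the cluster code can issue larger reads. */
		if (rs->seqcount < SEQ_MAX)
			rs->seqcount++;
	} else {
		/* Random access: drop back to single-block reads. */
		rs->seqcount = 1;
	}
	rs->next_offset = offset + len;
	return (rs->seqcount);
}

The return value is what the cluster code would use to size its physical
reads, which is how 8K requests become 64K requests in the sequential case.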
Read performance went up drastically: my 100BaseTX network instantly
maxed out and, more importantly, server-side CPU use dropped just as
sharply. Here is the relevant email from my archives describing the
performance gains:
:From: dillon
:To: Alfred Perlstein <[EMAIL PROTECTED]>
:Cc: Alan Cox <[EMAIL PROTECTED]>, Julian Elischer <[EMAIL PROTECTED]>
:Date: Sun Dec 12 10:11:06 1999
:
:...
: This proposed patch allows us to maintain a sequential read heuristic
: on the server side. I noticed that the NFS server side reads only 8K
: blocks from the physical media even when the NFS client is reading a
: file sequentially.
:
: With this heuristic in place I can now get 9.5 to 10 MBytes/sec reading
: over NFS on a 100BaseTX network, and the server winds up being 80%
: idle. Under -stable the same test runs 72% idle and 8.4 MBytes/sec.
This happened in spite of the fact that, in this sequential test, the hard
drives were already caching the read data ahead. The reduction in
command/response/interrupt overhead on the server from going from 8K read
I/Os to 64K read I/Os in the sequential case had an obvious beneficial
impact on the CPU: I almost halved the CPU overhead on the server!
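The arithmetic behind that drop is simple. At roughly 10 MBytes/sec, 8K
transfers work out to about 1280 commands and completion interrupts per
second, while 64K transfers need only about 160: an 8x reduction in
per-I/O overhead for the same data rate.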
So while on-disk caching makes a lot of sense, it is in no way able
to replace software clustering. Having both working together is a
killer combination.
-Matt
Matthew Dillon
<[EMAIL PROTECTED]>