:>      -vfs.nfs.realign_test: 22141777
:>      +vfs.nfs.realign_test: 498351
:> 
:>      -vfs.nfsrv.realign_test: 5005908
:>      +vfs.nfsrv.realign_test: 0
:> 
:>      +vfs.nfsrv.commit_miss: 0
:>      +vfs.nfsrv.commit_blks: 0
:> 
:> changing them did nothing - or at least with respect to nfs throughput :-)
:
:I'm not sure what any of these do, as NFS is a bit out of my league.
::-)  I'll be following this thread though!
:
:-- 
:| Jeremy Chadwick                                jdc at parodius.com |

    A non-zero nfs_realign_count is bad; it means NFS had to copy the
    mbuf chain to fix the alignment.  nfs_realign_test is just the
    number of times it checked, so nfs_realign_test is irrelevant -
    it's nfs_realign_count that matters.
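
    If you want to watch those counters from userland, something like the
    little sysctlbyname() reader below does it.  Note the copy counter name
    is an assumption - I'm guessing it's exported as vfs.nfs.realign_count
    next to the realign_test knob quoted above (vfs.nfsrv.* on the server
    side) - and the counter width has varied between releases, so the
    sketch copes with either:

    /*
     * Minimal sketch: read the NFS realign counters via sysctl.  The
     * sysctl names are assumptions based on the output quoted earlier.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdint.h>
    #include <stdio.h>

    static long long
    read_counter(const char *name)
    {
            union { int32_t i32; int64_t i64; } v;
            size_t len = sizeof(v);

            if (sysctlbyname(name, &v, &len, NULL, 0) == -1)
                    return (-1);
            /* The counter width differs between releases; handle both. */
            return (len == sizeof(v.i32) ? v.i32 : v.i64);
    }

    int
    main(void)
    {
            long long test = read_counter("vfs.nfs.realign_test");
            long long count = read_counter("vfs.nfs.realign_count");

            printf("realign checks: %lld, realign copies: %lld\n",
                test, count);
            if (count > 0)
                    printf("non-zero realign_count: mbuf chains are being "
                        "copied to fix alignment\n");
            return (0);
    }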

    Several things can cause NFS payloads to be improperly aligned:
    anything from older network drivers that can't start DMA on a
    2-byte boundary, so the 14-byte Ethernet encapsulation header
    leaves the IP header and payload improperly aligned, to RPCs
    embedded in NFS TCP streams winding up misaligned.
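
    The arithmetic is easy to see: with the frame starting on a 4-byte
    boundary, the 14-byte header puts the IP header at offset 14, which is
    only 2-byte aligned, and everything behind it (including the RPC
    payload) inherits that.  A throwaway demo just to make the numbers
    concrete:

    #include <stdio.h>

    #define ETHER_HDR_LEN   14      /* dst MAC + src MAC + ethertype */

    int
    main(void)
    {
            unsigned frame_start, ip_off;

            /* Compare a frame at offset 0 with one shifted in by 2 bytes. */
            for (frame_start = 0; frame_start <= 2; frame_start += 2) {
                    ip_off = frame_start + ETHER_HDR_LEN;
                    printf("frame at %u -> IP header at %u (%s)\n",
                        frame_start, ip_off, (ip_off % 4) == 0 ?
                        "4-byte aligned" : "NOT 4-byte aligned");
            }
            return (0);
    }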

    Modern network hardware either supports 2-byte-aligned DMA, allowing
    the encapsulation to be 2-byte aligned so the payload winds up being
    4-byte aligned, or supports DMA chaining, allowing the payload to be
    placed in its own mbuf, or it can pad, etc.
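
    The usual driver-side fix looks something like the rx buffer routine
    below.  It's purely illustrative - the function name is made up and not
    from any particular driver - but m_getcl()/m_adj() and the 2-byte
    ETHER_ALIGN offset are the standard pieces:

    /* Kernel-side sketch of the 2-byte offset trick (illustrative only). */
    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <net/ethernet.h>       /* ETHER_ALIGN (2) */

    static struct mbuf *
    rx_newbuf(void)
    {
            struct mbuf *m;

            m = m_getcl(M_NOWAIT, MT_DATA, M_PKTHDR);
            if (m == NULL)
                    return (NULL);
            m->m_len = m->m_pkthdr.len = MCLBYTES;
            /*
             * Advance the data pointer 2 bytes so the 14-byte Ethernet
             * header ends on a 4-byte boundary; the IP header and the
             * NFS/RPC payload behind it then come out 4-byte aligned
             * without any copying.
             */
            m_adj(m, ETHER_ALIGN);
            return (m);
    }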

    --

    One thing I would check is to be sure a couple of nfsiod's are running
    on the client when doing your tests.  If none are running, the RPCs wind
    up being more synchronous and less pipelined.  Another thing I would
    check is the IP fragment reassembly statistics (for UDP) - there should
    be none for TCP connections no matter what NFS I/O size is selected.
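
    netstat -s will show the fragment counters, or you can pull struct
    ipstat out of net.inet.ip.stats directly; a rough sketch (the field
    widths in struct ipstat have changed across releases, hence the
    conservative casts):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/sysctl.h>
    #include <netinet/in.h>
    #include <netinet/in_systm.h>
    #include <netinet/ip.h>
    #include <netinet/ip_var.h>
    #include <stdio.h>

    int
    main(void)
    {
            struct ipstat ips;
            size_t len = sizeof(ips);

            /* Dump the IP fragment/reassembly counters. */
            if (sysctlbyname("net.inet.ip.stats", &ips, &len, NULL, 0) == -1) {
                    perror("sysctlbyname");
                    return (1);
            }
            printf("fragments received:  %llu\n",
                (unsigned long long)ips.ips_fragments);
            printf("fragments dropped:   %llu\n",
                (unsigned long long)ips.ips_fragdropped);
            printf("fragments timed out: %llu\n",
                (unsigned long long)ips.ips_fragtimeout);
            printf("packets reassembled: %llu\n",
                (unsigned long long)ips.ips_reassembled);
            return (0);
    }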

    (It does seem more likely to be scheduler-related, though).

                                                -Matt
