On Sun, Feb 25, 2024 at 4:57 PM Mark Saad <nones...@longcount.org> wrote:
>
> Hi
>
> On Sun, Feb 25, 2024 at 6:51 PM Rick Macklem <rick.mack...@gmail.com> wrote:
>>
>> On Sun, Feb 25, 2024 at 1:21 AM <tue...@freebsd.org> wrote:
>> >
>> >
>> >
>> > > On Feb 25, 2024, at 01:18, Hannes Hauswedell <h2+lists2...@fsfe.org> 
>> > > wrote:
>> > >
>> > > Hi everyone,
>> > >
>> > > I am coming here from
>> > > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=2771971160
>> > I guess this should read:
>> > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=277197
>> Btw, what Hannes reported in the PR was that performance was
>> about the same for the Linux and FreeBSD NFS clients when the link
>> was using 1500 byte ethernet frames.
>> However, Linux performs much better with 9K jumbo frames,
>> whereas FreeBSD performance does not improve with 9K jumbo
>> frames. (Some mount options I suggested did help somewhat
>> for FreeBSD. Basically, increasing rsize/wsize did help, but he
>> still sees performance below what Linux gets when 9K jumbo frames
>> are used.) I did note the potential problem of mbuf cluster pool
>> fragmentation when 9K jumbo frames are used, although I did not
>> intend to imply that this issue is involved, just that it is a known
>> deficiency that "might" be a factor.
>>
>> rick
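
For reference, requesting larger rsize/wsize on a FreeBSD client looks
roughly like this (the server name and paths are placeholders, and the
server may clamp the values it actually grants):

```sh
# FreeBSD client: ask for 1 MiB NFS read/write sizes (values from the thread)
mount -t nfs -o nfsv4,rsize=1048576,wsize=1048576 server:/export /mnt

# confirm what was actually negotiated with the server
nfsstat -m
```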
>> >
>> > Best regards
>> > Michael
>> > >
>> > > TL;DR:
>> > >
>> > > * I have a FreeBSD 14 server and client, with an Intel X540 (ix) adaptor
>> > > each.
>> > > * I am trying to improve the NFS throughput.
>> > > * I get 1160 MiB/s via nc, but only ~200 MiB/s via NFS.
>> > > * Increasing rsize and wsize to 1 MiB increases throughput to 395 MiB/s.
>> > > * But a Linux client achieves 560-600 MiB/s with any rsize.
>> > > * The MTU is set to 9000, but this doesn't make a difference for the
>> > > FreeBSD client (it does make a difference for Linux).
>> > >
>> > > I assume < 400 MiB/s is not the expected performance? Do you have any 
>> > > advice on debugging this?
>> > >
>> > > Thank you for your help,
>> > > Hannes
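
A raw TCP baseline like the nc number above can be reproduced roughly
like this (hostname and port are placeholders):

```sh
# on the server: sink all incoming data
nc -l 12345 > /dev/null

# on the client: push 10 GiB of zeroes and let dd report the rate
dd if=/dev/zero bs=1m count=10240 | nc -N server 12345
```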
>> > >
>> >
>> >
>> >
>>
>  Hannes
>    In the dmesg you posted I see that you have an epair loaded. Are you
> trying to do NFS inside of a jail?
>
> Rick, didn't someone from Isilon or Dell/EMC post about the 9K frames a long
> time ago? I know in Isilon land
> their FreeBSD can do this, but I can't say I have any idea how it's being
> done. They do have some kernel auto-tune magic as well,
> to find optimal send and receive buffers. Maybe what we are seeing is Linux
> having better ergonomics on buffers out of the box?
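
FreeBSD does have TCP send/receive buffer autotuning; the knobs Mark may
be thinking of can be inspected like this (read-only here, no values
changed):

```sh
# autotuning on/off, and the ceilings buffers may grow to
sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.recvbuf_auto
sysctl net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max

# a socket buffer can never exceed this hard limit
sysctl kern.ipc.maxsockbuf
```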
>
Oops, my bad. Yes, I just took a quick look and it appears that jumbo
mbuf clusters are now allocated from separate uma_zones, so the
fragmentation problem should not exist. (I hadn't heard about this for
quite a while, but hadn't noticed a commit fixing it. I'll wade through
the commit log tomorrow to see when it got changed.)

So, I think the above comment should be ignored, unless others need to correct
it further.
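
For anyone who wants to check on their own system, the per-size cluster
zones can be inspected with vmstat(8); the zone names here are from a
recent FreeBSD and may differ across versions:

```sh
# one uma zone per cluster size, so 9k allocations no longer
# fragment the 2k/4k pools
vmstat -z | egrep 'mbuf_cluster|mbuf_jumbo'
```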

Sorry about this, rick
ps: I have no idea how Linux does network buffers.

> Hannes
>   Can you post your boot.conf and sysctl.conf settings?
> --
> mark saad | nones...@longcount.org
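
For anyone else answering: the settings being asked about usually live in
/boot/loader.conf and /etc/sysctl.conf. A purely illustrative sysctl.conf
fragment (example values, not a recommendation):

```sh
# /etc/sysctl.conf -- example values only
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
```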
