I've been trying to enable jumbo frames on my Solaris server for some time,
to no avail.

Oh, the driver itself works fine - it's the nge driver, so I edit the config
file and reboot, and the interface comes up at the new MTU without complaint.
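
For reference, the change is just the MTU property in the driver config
(something along these lines - default_mtu is the property nge(7D) documents
for jumbo frames, if memory serves):

    # /kernel/drv/nge.conf - ask the driver for 9000-byte frames
    default_mtu=9000;

    # after the reboot, ifconfig nge0 should report "mtu 9000"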

But when I try to enable jumbo frames on any client mounting any of the
~260 ZFS filesystems shared over NFS, that client immediately becomes unable
to talk to the NFS server, and it stays that way indefinitely (or until I
reset the frame size to 1500, at which point it eventually recovers).
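
(A quick sanity check for whether 9000-byte frames pass end-to-end from a
Linux client at all is a don't-fragment ping at jumbo payload size - the
server name here is just a placeholder:)

    # 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
    ping -M do -s 8972 server
    # if this fails while "ping -M do -s 1472 server" works, something
    # in the path isn't passing 9000-byte frames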

I initially saw this on a b87 server, but I'm currently running a
cleanly-installed snv_107 server on the same physical machine (different
hard drive), and the problem is still there.

The fascinating part is that when I initially tested this, it worked great
- but at that point the nge NIC was not the primary interface over which all
six of these clients were mounting the ~260 filesystems; it was only carrying
one or two mounts for testing purposes. In that configuration there was no
significant problem with either the primary, non-jumbo NIC or the secondary,
jumbo-enabled one.

I imagine this has something to do with the number of mounts being shared,
but it occurs even on clients that mount only two or three filesystems out
of this pool (each filesystem is individually exported via an inherited
sharenfs setting).
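
(Roughly like this, with 'tank' standing in for the real pool name:)

    # sharenfs set once at the pool root; all ~260 children inherit it
    zfs set sharenfs=on tank
    # confirm: children show "inherited from tank"
    zfs get -r sharenfs tank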

This is between Linux clients (various kernels tested, 2.6.16 through 2.6.24
inclusive) and the aforementioned snv_107 server. I don't have any Solaris
clients up right now to test against, but Linux<->Linux does not show this
problem as far as I can tell, and I haven't seen any reports of this
happening to Linux clients online, so I think this is a problem on the
server side. I'll have time to set up a Solaris client in a few days to
test that.

This behavior, incidentally, persists regardless of whether the clients
mount with NFSv3 or NFSv4, over TCP or UDP, and with rsize/wsize explicitly
specified or left at the defaults.
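
(That is, variations along these lines - server and paths are placeholders:)

    mount -t nfs  -o vers=3,proto=tcp server:/tank/fs /mnt/fs
    mount -t nfs  -o vers=3,proto=udp server:/tank/fs /mnt/fs
    mount -t nfs4 server:/tank/fs /mnt/fs
    mount -t nfs  -o vers=3,proto=tcp,rsize=32768,wsize=32768 server:/tank/fs /mnt/fs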

Anyone seen this?

- Rich

-- 

I will not forget you.
