max_linkhdr is basically how much space to reserve in front of an IP
packet for link-layer headers, e.g., Ethernet.
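
To show where that reservation is used, here is a rough sketch of the
idiom protocols follow when building an outgoing packet; it mirrors the
usual mbuf pattern (MGETHDR plus advancing m_data) rather than quoting
any particular file:

/* sketch only, not lifted from any one file; needs <sys/mbuf.h> */
struct mbuf *m;

MGETHDR(m, M_DONTWAIT, MT_HEADER);
if (m == NULL)
	return (ENOBUFS);

/*
 * step past max_linkhdr so the interface output path can later
 * prepend its link-layer header(s) without having to allocate
 * and link in another mbuf.
 */
m->m_data += max_linkhdr;
m->m_len = 0;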

16 bytes was a good choice when everything was just IP inside
Ethernet, but these days we deal with a bunch of encapsulations that
blow that out of the water. 16 bytes isn't even enough if we have to
inject a VLAN tag ourselves.
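
Concretely, using the ordinary on-the-wire sizes (my numbers, not
something from this diff):

/* rough arithmetic, standard header sizes */
#define ETHER_HDR	14	/* dst mac + src mac + ethertype */
#define DOT1Q_TAG	4	/* 802.1Q vlan tag */
/* ETHER_HDR + DOT1Q_TAG = 18 bytes to prepend, already past 16 */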

I'm suggesting 64 because it comfortably allows encapsulating an
Ethernet header inside an IP protocol. I think it is even enough to
accommodate VXLAN overhead.
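
Here's the back-of-the-envelope arithmetic behind that, counting
everything that ends up prepended in front of the original IP packet.
The header sizes are the common on-the-wire ones, so treat it as a
rough check rather than gospel:

#include <stdio.h>

/* common header sizes in bytes */
#define ETHER	14	/* ethernet header */
#define IP4	20	/* ipv4 header, no options */
#define UDP	8	/* udp header */
#define VXLAN	8	/* vxlan header */
#define ETHERIP	2	/* rfc 3378 etherip header */

int
main(void)
{
	/* ethernet-in-ip: inner ether + etherip + outer ip + outer ether */
	printf("etherip: %d\n", ETHER + ETHERIP + IP4 + ETHER);		/* 50 */

	/* vxlan: inner ether + vxlan + udp + outer ip + outer ether */
	printf("vxlan:   %d\n", ETHER + VXLAN + UDP + IP4 + ETHER);	/* 64 */

	return (0);
}

Both totals fit in (or exactly hit) a 64-byte reservation, which lines
up with the value suggested above.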

The caveat is that this moves the threshold for the largest packet
that still fits in a single mbuf. Currently the space in an mbuf
with headers is about 184 bytes. With a max_linkhdr of 16 that
leaves room for a 168-byte IP packet; after this change it drops to
120 bytes.
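
Spelled out, taking the ~184-byte figure above as given (the exact
amount depends on the mbuf layout):

/* rough arithmetic using the ~184 byte figure quoted above */
#define MBUF_ROOM	184	/* approx. space in a packet header mbuf */
/* old: MBUF_ROOM - 16 = 168 byte ip packet fits in a single mbuf */
/* new: MBUF_ROOM - 64 = 120 byte ip packet fits in a single mbuf */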

However, most IP packets are either small (think keystrokes over ssh,
ACKs, or DNS lookups) or full sized (always greater than MHLEN
anyway). This change therefore has minimal impact on the majority of
traffic, except to make prepending encapsulation headers a lot cheaper.

ok?

Index: uipc_domain.c
===================================================================
RCS file: /cvs/src/sys/kern/uipc_domain.c,v
retrieving revision 1.43
diff -u -p -r1.43 uipc_domain.c
--- uipc_domain.c       4 Sep 2015 08:43:39 -0000       1.43
+++ uipc_domain.c       2 Mar 2016 03:49:33 -0000
@@ -89,8 +89,8 @@ domaininit(void)
                                (*pr->pr_init)();
        }
 
-       if (max_linkhdr < 16)           /* XXX */
-               max_linkhdr = 16;
+       if (max_linkhdr < 64)           /* XXX */
+               max_linkhdr = 64;
        max_hdr = max_linkhdr + max_protohdr;
        timeout_set(&pffast_timeout, pffasttimo, &pffast_timeout);
        timeout_set(&pfslow_timeout, pfslowtimo, &pfslow_timeout);
