On 01/22/2017 10:58 AM, David Scott wrote:



On Thu, Jan 19, 2017 at 2:40 PM, Mindy <[email protected]> wrote:

    On 01/19/2017 04:14 AM, Anil Madhavapeddy wrote:

        On 19 Jan 2017, at 10:00, David Scott <[email protected]> wrote:

            Hi,

            I'm trying to increase the performance of a program which
            uses the mirage-tcpip stack (specifically vpnkit[1]
            running on Windows). I noticed the total CPU overhead in
            `top` was higher than I expected, so I attempted to reduce
            the overhead per byte by enabling jumbo frames. I bumped
            the MTU of the ethernet link, but this was not enough --
            mirage-tcpip was still sending frames of ~1500 bytes. I
            tracked the problem down to the hardcoded
            [max_mss](https://github.com/mirage/mirage-tcpip/blob/756db428db2346a7b7461805cf233631b8f61a1e/lib/tcp/window.ml#L62)
            value -- when I manually bumped this and recompiled, I got
            larger frames and my TCP throughput increased from
            500Mbit/sec to 600Mbit/sec (there are other overheads that
            also need addressing).

            So my question is: how should this be done properly?
            Should the TCP layer query the maximum IP datagram size
            (derived from the underlying ethernet MTU)? Or is
            something more complicated needed?

        That sounds right -- one missing feature is that we don't have
        Path MTU discovery in the stack, and so can only select on the
        basis of the immediate MTU (which may be larger than the MTU of
        some intermediate hop, causing fragmentation on the wire).


    I've thought about this a bit recently (since
    https://github.com/mirage/mirage/issues/622#issuecomment-254513280)
    but have lacked the time and focus to improve the situation.  It's
    a bit worse than the comment above implies, because we currently
    have no concept of an MTU at all in the Ethernet implementation
    used by mirage-tcpip's `direct` stack.

    An important first step would be adding a facility for setting the
    MTU in the Ethernet layer (on `connect`, presumably), and adding a
    function for querying that information to the ETHIF module type so
    higher layers can rely on it. Right now there's no mechanism for
    discovering that the packet to be sent is larger than our own MTU,
    let alone one further along the path.
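
    Concretely, I'm imagining something along these lines (an
    illustrative sketch only; the names aren't a firm proposal):

        module type ETHIF = sig
          type t
          (* ...the existing ETHIF operations... *)

          (* The MTU fixed at [connect] time, so the IP and TCP layers
             above can size their datagrams and segments to fit. *)
          val mtu : t -> int
        end

        (* The implementation's [connect] would accept the MTU as an
           optional argument, defaulting to the classic 1500 bytes. *)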


Ah, I hadn't spotted that an MTU accessor function is missing! I think this probably explains why the MSS value is hardcoded :)

I had a go at adding a simple `mtu: t -> int` accessor to both ethernet and IPv* and then patched TCP to compute the MSS from the MTU of the layer beneath. As you suggested, I added a `connect` parameter to the ethernet layer:

https://github.com/mirage/mirage-protocols/pull/4
https://github.com/mirage/mirage-tcpip/pull/288
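
Roughly, the computation is the following (a sketch of the idea rather than the exact code in those PRs; header sizes assume IPv4 without options):

    (* Fixed header sizes: 20 bytes for a TCP header without options
       and 20 bytes for an IPv4 header without options (40 for IPv6). *)
    let tcp_header_size = 20
    let ip_header_size = 20

    (* Derive the MSS from the MTU of the layer below instead of the
       hardcoded ~1460: each segment plus its TCP and IP headers must
       fit in a single link-layer payload. *)
    let mss_of_mtu mtu = mtu - ip_header_size - tcp_header_size

    (* e.g. mss_of_mtu 1500 = 1460, and with jumbo frames
       mss_of_mtu 9000 = 8960. *)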

Let me know what you think!

This is great!

Another context in which I was recently thinking about MTUs: they're one of the few situations I could think of where Ethif.write or Ethif.writev would sensibly return an `Error` that wasn't passed up the stack from the underlying NETIF module's `write{v}`.
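
For instance (a purely hypothetical, self-contained sketch -- the names below are not the real ETHIF or NETIF API), the ethernet layer could reject an oversized payload itself before handing it to NETIF:

    (* The one write error that originates in the ethernet layer
       rather than below it: the frame doesn't fit our own MTU. *)
    type error = [ `Packet_too_big | `Netif_failure of string ]

    let ethernet_header_size = 14

    (* [write ~mtu ~netif_write frame] checks the frame against our
       MTU before delegating; any other error still comes from the
       underlying NETIF write. *)
    let write ~mtu ~netif_write (frame : Cstruct.t) :
        (unit, error) result =
      if Cstruct.len frame > ethernet_header_size + mtu
      then Error `Packet_too_big
      else netif_write frame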


(There's no rush on this from my point of view -- I imagine things are really busy with the release. If the shape of the interface is ok then I might proceed and base further speculative work on it in branches)

I think that's pretty safe!  These patches look like the right start to me.

-Mindy
_______________________________________________
MirageOS-devel mailing list
[email protected]
https://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel
