I have a check that I run before performance tests. It sends a ping along
all of the paths that I run perf testing on, using
`ping -q -c 1 -s 1472 -M do <dest>` from every source to every destination
running an iperf server. It passes. I've set the MTU on all of our network
devices to
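
For reference, the pre-check amounts to something like the sketch below.
The host names and addresses are placeholders rather than our real
inventory, and it assumes passwordless ssh from the box running the check:

    #!/bin/bash
    # Sketch of the pre-test path check: from every source, send a single
    # non-fragmentable ping sized so the packet is 1500 bytes on the wire
    # (1472-byte payload + 8-byte ICMP header + 20-byte IP header).
    # SOURCES and DESTS are placeholders, not the actual lab inventory.
    SOURCES="lab1r1u05 lab1r2u05"
    DESTS="10.224.12.20 10.224.12.30"
    rc=0
    for src in $SOURCES; do
        for dst in $DESTS; do
            if ! ssh "$src" ping -q -c 1 -s 1472 -M do "$dst" >/dev/null; then
                echo "path check failed: $src -> $dst"
                rc=1
            fi
        done
    done
    exit $rc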

Just for kicks, I lowered the MTU on one of my sources and reran iperf as
you suggested. The result was still very low.

    root@lab1r1u05:~# iperf -c 10.224.12.20
    ------------------------------------------------------------
    Client connecting to 10.224.12.20, TCP port 5001
    TCP window size: 85.0 KByte (default)
    ------------------------------------------------------------
    [   3] local 10.224.12.10 port 34524 connected with 10.224.12.20 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-11.0 sec  2.75 MBytes  2.10 Mbits/sec
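
For what it's worth, lowering the MTU on a source amounts to the usual
ip link change; the interface name here is illustrative:

    # Illustrative only: drop the MTU on the test source's interface
    # (substitute the actual uplink/bond used for the test) and confirm it.
    ip link set dev eth0 mtu 1450
    ip link show dev eth0    # should now report "mtu 1450"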

Carl

On Tue, Jul 31, 2018 at 1:07 PM Guru Shetty <g...@ovn.org> wrote:

> MTU? Big drops in performance numbers are usually because of packet
> fragmentation. Keep the MTU of your packet origin at, say, 1450 and retry?
>
> On 31 July 2018 at 12:01, Carl Baldwin <c...@ecbaldwin.net> wrote:
>
>> My apologies. I failed to include the ovs version number that I'm using.
>> It is 2.7.3. Is there anything else I could check that maybe I'm not
>> thinking of?
>>
>> Carl
>>
>> On Tue, Jul 31, 2018 at 11:01 AM Ben Pfaff <b...@ovn.org> wrote:
>>
>> > On Mon, Jul 30, 2018 at 02:02:16PM -0600, Carl Baldwin wrote:
>> > > I recently tried pushing MPLS labels using OVS in a lab.
>> > >
>> > > Before adding MPLS push to the mix, I had four hosts: two pairs,
>> > > each connected to a different set of TOR switches running VRRP. OVS
>> > > (using kernel datapath) had a flow to write the VRRP mac address and
>> > > output to a bond port. The bond is a Linux LACP bond, not an OVS
>> > > bond. In this scenario, the TORs would route the packets through a
>> > > default route to our gateway routers to egress from the DC. This
>> > > worked fine and I was able to push something around 42 Gbps using 16
>> > > iperf2 TCP streams.
>> > >
>> > >     ovs-ofctl add-flow -OOpenFlow13 br0 "table=25, ip, actions=output=${bond_port}"
>> > >
>> > > My next step was to push an MPLS label onto the packet. The above flow
>> > > became this:
>> > >
>> > >     ovs-ofctl add-flow -OOpenFlow13 br0 "table=25, ip, actions=push_mpls:0x8847,set_field:1048001->mpls_label,output=${bond_port}"
>> > >
>> > > 1048001 is a static label that I configure on the TORs which sends
>> > > the packet up to the same gateway using MPLS instead of IP routing.
>> > > So, the packets would take the same path out of the network but using
>> > > an MPLS path. With this change, things worked well from a functional
>> > > perspective but the performance fell drastically to around 30-40 Mbps.
>> > >
>> > > I'm pretty confident in the network fabric because I tried the same
>> > > scenario using Linux MPLS and it performed well. From the network
>> > > fabric's point of view, it was exactly the same (static label through
>> > > LACP bond to VRRP mac).
>> > >
>> > > I found in the FAQ [1] under "Does Open vSwitch support MPLS?" that
>> > > "Open vSwitch version 2.4 can match, push, or pop up to 3 MPLS labels
>> > > and look past the MPLS label into the encapsulated packet. It will
>> > > have kernel support for MPLS, yielding improved performance." I looked
>> > > through the git history and I don't see much evidence of this actually
>> > > getting done for the 2.4 release. Is this FAQ accurate?
>> >
>> > It looks like MPLS datapath support was fairly solid by OVS 2.6, at any
>> > rate.  If your kernel module is older than that, I'd recommend
>> > upgrading.
>> >
>
>
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
