Hi Dom, 

From the logs it looks like TSO is not on. I wonder if the vhost NIC actually 
honors the “tso on” flag. Have you also tried the native vhost driver instead 
of the DPDK one? I’ve never tried it with the TCP stack, so I don’t know 
whether it properly advertises TSO support. 

Below you can see how it looks on my side, between two Broadwell boxes with 
XL710s. The TSO flag on the TCP connection needs to be on (note the “cfg: TSO” 
in the session output), otherwise TCP will do the segmentation by itself. 
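One quick way to double-check that the offload actually made it down the stack is to look at it from the Linux side as well, e.g. with ethtool on the backend interface (on the VPP side, `show hardware-interfaces` should list the interface capabilities). The sketch below is just that, a sketch: `eth0` is a placeholder name, and the helper is demonstrated against canned ethtool-style output so it can run anywhere:

```shell
# On a real box you would run, against your actual interface:
#   ethtool -k eth0 | grep tcp-segmentation-offload
#
# Tiny helper: reads ethtool-style "-k" output on stdin, succeeds if
# TSO is reported as on.
tso_enabled() {
  grep -q '^tcp-segmentation-offload: on'
}

# Demo against canned output (placeholder for real ethtool output):
sample_output='tcp-segmentation-offload: on
generic-segmentation-offload: on'

if printf '%s\n' "$sample_output" | tso_enabled; then
  echo "TSO: on"
else
  echo "TSO: off"
fi
```

If the tap/vhost backend reports TSO off here, the “tso on” flag in startup.conf is not being honored end to end.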

Regards, 
Florin

$ ~/vpp/vcl_iperf_client 6.0.1.2 -t 10
[snip]
[ ID] Interval           Transfer     Bandwidth       Retr
[ 33]   0.00-10.00  sec  42.2 GBytes  36.2 Gbits/sec    0             sender
[ 33]   0.00-10.00  sec  42.2 GBytes  36.2 Gbits/sec                  receiver
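For reference, the client above is a local wrapper of mine; a stock iperf3 can usually be pointed at VCL via LD_PRELOAD along these lines. The library and config paths below are assumptions that depend on your build/install layout, so adjust them:

```shell
# Sketch: running stock iperf3 through VPP's VCL via LD_PRELOAD.
# Both paths are assumptions -- adjust to your build/install layout.
VCL_CFG=/etc/vpp/vcl.conf                              # VCL config file (assumed path)
VCL_LDP=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so  # from the VPP build (assumed path)

# The actual run (needs VPP and iperf3 present):
#   VCL_CONFIG=$VCL_CFG LD_PRELOAD=$VCL_LDP iperf3 -c 6.0.1.2 -t 10

# Echo the composed command so the sketch can be sanity-checked without VPP:
echo "VCL_CONFIG=$VCL_CFG LD_PRELOAD=$VCL_LDP iperf3 -c 6.0.1.2 -t 10"
```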

vpp# show session verbose 2
[snip]
[1:1][T] 6.0.1.1:27240->6.0.1.2:5201              ESTABLISHED
 index: 1 cfg: TSO flags: PSH pending timers: RETRANSMIT
 snd_una 2731494347 snd_nxt 2731992143 snd_una_max 2731992143 rcv_nxt 1 rcv_las 1
 snd_wnd 1999872 rcv_wnd 3999744 rcv_wscale 10 snd_wl1 1 snd_wl2 2731494347
 flight size 497796 out space 716 rcv_wnd_av 3999744 tsval_recent 1787061797
 tsecr 3347210414 tsecr_last_ack 3347210414 tsval_recent_age 4294966829 snd_mss 1448
 rto 200 rto_boff 0 srtt 1 us .101 rttvar 1 rtt_ts 8.6696 rtt_seq 2731733367
 next_node 0 opaque 0x0
 cong:   none algo cubic cwnd 498512 ssthresh 407288 bytes_acked 17376
         cc space 716 prev_cwnd 581841 prev_ssthresh 403737
         snd_cong 2702482407 dupack 0 limited_tx 1608697445
         rxt_bytes 0 rxt_delivered 0 rxt_head 13367060 rxt_ts 3347210414
         prr_start 2701996195 prr_delivered 0 prr space 0
 sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
         last_delivered 0 high_sacked 2702540327 is_reneging 0
         cur_rxt_hole 4294967295 high_rxt 2702048323 rescue_rxt 2701996194
 stats: in segs 293052 dsegs 0 bytes 0 dupacks 5568
        out segs 381811 dsegs 381810 bytes 15628627726 dupacks 0
        fr 229 tr 0 rxt segs 8207 bytes 11733696 duration 3.468
        err wnd data below 0 above 0 ack below 0 above 0
 pacer: rate 4941713080 bucket 2328382 t/p 4941.713 last_update 0 us idle 100
 Rx fifo: cursize 0 nitems 3999999 has_event 0
          head 0 tail 0 segment manager 1
          vpp session 1 thread 1 app session 1 thread 0
          ooo pool 0 active elts newest 0
 Tx fifo: cursize 1999999 nitems 1999999 has_event 1
          head 396234 tail 396233 segment manager 1
          vpp session 1 thread 1 app session 1 thread 0
          ooo pool 0 active elts newest 4294967295
 session: state: ready opaque: 0x0 flags:

vpp# sh run 
[snip]
Thread 1 vpp_wk_0 (lcore 24)
Time 774.3, 10 sec internal node vector rate 0.00
  vector rates in 2.5159e3, out 1.4186e3, drop 1.2915e-3, punt 0.0000e0
             Name                 State         Calls          Vectors        Suspends         Clocks       Vectors/Call
FortyGigabitEthernet84/0/0-out   active             977678         1099456               0          2.47e2            1.12
FortyGigabitEthernet84/0/0-tx    active             977678         1098446               0          2.17e3            1.12
ethernet-input                   active             442524          848618               0          2.69e2            1.92
ip4-input-no-checksum            active             442523          848617               0          2.86e2            1.92
ip4-local                        active             442523          848617               0          3.24e2            1.92
ip4-lookup                       active            1291425         1948073               0          2.09e2            1.51
ip4-rewrite                      active             977678         1099456               0          2.23e2            1.12
session-queue                    polling        7614793106         1099452               0          7.45e5            0.00
tcp4-established                 active             442520          848614               0          1.26e3            1.92
tcp4-input                       active             442523          848617               0          3.04e2            1.92
tcp4-output                      active             977678         1099456               0          3.77e2            1.12
tcp4-rcv-process                 active                  1               1               0          5.82e3            1.00
tcp4-syn-sent                    active                  2               2               0          6.84e4            1.00


> On Dec 13, 2019, at 12:58 PM, dch...@akouto.com wrote:
> 
> Hi,
> I rebuilt VPP on master and updated startup.conf to enable tso as follows:
> dpdk {
>   dev 0000:00:03.0{
>           num-rx-desc 2048
>           num-tx-desc 2048
>           tso on
>   }
>   uio-driver vfio-pci
>   enable-tcp-udp-checksum
> }
> 
> I'm not sure whether it is working or not; there is nothing in show session 
> verbose 2 to indicate whether it is on or off (output at the end of this 
> update). Unfortunately there was no improvement from a performance 
> perspective. 
> 
> Then I figured I would try using a tap interface on the VPP side so I could 
> run iperf3 "natively" on the VPP client side as well, but got the same result 
> again. I find this so perplexing: two test runs back to back, with reboots in 
> between to rule out any configuration issues:
> 
> Test 1 using native linux networking on both sides:
> [iperf3 client --> linux networking eth0] --> [Openstack/Linuxbridge] --> 
> [linux networking eth0 --> iperf3 server]
> Result: 10+ Gbps
> 
> Reboot both instances and assign the NIC on the client side to VPP:
> vpp# set int l2 bridge GigabitEthernet0/3/0 1
> vpp# set int state GigabitEthernet0/3/0 up
> vpp# create tap
> tap0
> vpp# set int l2 bridge tap0 1
> vpp# set int state tap0 up
> [root]# ip addr add 10.0.0.152/24 dev tap0
>  
> 
> [iperf3 client --> tap0 --> VPP GigabitEthernet0/3/0] --> 
> [Openstack/Linuxbridge] --> [ linux networking eth0 --> iperf3 server]
> Result: 1 Gbps
> 
> I had started to suspect the host OS or OpenStack Neutron, Linuxbridge, etc., 
> but based on this it just *has* to be something in the guest running VPP. Any 
> and all ideas or suggestions are welcome!
> 
> Regards,
> Dom
> 
> Note: this output is from a run using iperf3+VCL with the TSO settings in 
> startup.conf, not the tap interface test described above:
> 
> vpp# set interface ip address GigabitEthernet0/3/0 10.0.0.152/24
> vpp# set interface state GigabitEthernet0/3/0 up
> vpp# session enable
> vpp# sh session verbose 2
> Thread 0: no sessions
> [1:0][T] 10.0.0.152:6445->10.0.0.156:5201         ESTABLISHED
>  index: 0 cfg:  flags:  timers:
>  snd_una 124 snd_nxt 124 snd_una_max 124 rcv_nxt 5 rcv_las 5
>  snd_wnd 29056 rcv_wnd 7999488 rcv_wscale 10 snd_wl1 4 snd_wl2 124
>  flight size 0 out space 4473 rcv_wnd_av 7999488 tsval_recent 3428491
>  tsecr 193532193 tsecr_last_ack 193532193 tsval_recent_age 13996 snd_mss 1448
>  rto 259 rto_boff 0 srtt 67 us 3.891 rttvar 48 rtt_ts 0.0000 rtt_seq 124
>  next_node 0 opaque 0x0
>  cong:   none algo cubic cwnd 4473 ssthresh 2147483647 bytes_acked 0
>          cc space 4473 prev_cwnd 0 prev_ssthresh 0
>          snd_cong 1281277517 dupack 0 limited_tx 1281277517
>          rxt_bytes 0 rxt_delivered 0 rxt_head 1281277517 rxt_ts 193546719
>          prr_start 1281277517 prr_delivered 0 prr space 0
>  sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
>          last_delivered 0 high_sacked 1281277517 is_reneging 0
>          cur_rxt_hole 4294967295 high_rxt 1281277517 rescue_rxt 1281277517
>  stats: in segs 6 dsegs 4 bytes 4 dupacks 0
>         out segs 7 dsegs 2 bytes 123 dupacks 0
>         fr 0 tr 0 rxt segs 0 bytes 0 duration 14.539
>         err wnd data below 0 above 0 ack below 0 above 0
>  pacer: rate 1149550 bucket 0 t/p 1.149 last_update 14.526 s idle 194
>  Rx fifo: cursize 0 nitems 7999999 has_event 0
>           head 4 tail 4 segment manager 2
>           vpp session 0 thread 1 app session 0 thread 0
>           ooo pool 0 active elts newest 4294967295
>  Tx fifo: cursize 0 nitems 7999999 has_event 0
>           head 123 tail 123 segment manager 2
>           vpp session 0 thread 1 app session 0 thread 0
>           ooo pool 0 active elts newest 4294967295
>  session: state: ready opaque: 0x0 flags:
> [1:1][T] 10.0.0.152:10408->10.0.0.156:5201        ESTABLISHED
>  index: 1 cfg:  flags:  timers: RETRANSMIT
>  snd_una 2195902174 snd_nxt 2196262726 snd_una_max 2196262726 rcv_nxt 1 rcv_las 1
>  snd_wnd 1574016 rcv_wnd 7999488 rcv_wscale 10 snd_wl1 1 snd_wl2 2195902174
>  flight size 360552 out space 832 rcv_wnd_av 7999488 tsval_recent 3443014
>  tsecr 193546715 tsecr_last_ack 193546715 tsval_recent_age 4294966768 snd_mss 1448
>  rto 200 rto_boff 0 srtt 1 us 2.606 rttvar 1 rtt_ts 45.0534 rtt_seq 2195903622
>  next_node 0 opaque 0x0
>  cong:   none algo cubic cwnd 361384 ssthresh 329528 bytes_acked 2896
>          cc space 832 prev_cwnd 470755 prev_ssthresh 340435
>          snd_cong 2188350854 dupack 0 limited_tx 2709798285
>          rxt_bytes 0 rxt_delivered 0 rxt_head 2143051622 rxt_ts 193546719
>          prr_start 2187975822 prr_delivered 0 prr space 0
>  sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
>          last_delivered 0 high_sacked 2188350854 is_reneging 0
>          cur_rxt_hole 4294967295 high_rxt 2187977270 rescue_rxt 2187975821
>  stats: in segs 720132 dsegs 0 bytes 0 dupacks 127869
>         out segs 1549120 dsegs 1549119 bytes 2243122901 dupacks 0
>         fr 43 tr 0 rxt segs 32362 bytes 46860176 duration 14.529
>         err wnd data below 0 above 0 ack below 0 above 0
>  pacer: rate 361384000 bucket 1996 t/p 361.384 last_update 619 us idle 100
>  Rx fifo: cursize 0 nitems 7999999 has_event 0
>           head 0 tail 0 segment manager 2
>           vpp session 1 thread 1 app session 1 thread 0
>           ooo pool 0 active elts newest 0
>  Tx fifo: cursize 7999999 nitems 7999999 has_event 1
>           head 3902173 tail 3902172 segment manager 2
>           vpp session 1 thread 1 app session 1 thread 0
>           ooo pool 0 active elts newest 4294967295
>  session: state: ready opaque: 0x0 flags:
> Thread 1: active sessions 2
> Thread 2: no sessions
> Thread 3: no sessions
> -=-=-=-=-=-=-=-=-=-=-=-
> Links: You receive all messages sent to this group.
> 
> View/Reply Online (#14891): https://lists.fd.io/g/vpp-dev/message/14891
> Mute This Topic: https://lists.fd.io/mt/65863639/675152
> Group Owner: vpp-dev+ow...@lists.fd.io
> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [fcoras.li...@gmail.com]
> -=-=-=-=-=-=-=-=-=-=-=-
