[vpp-dev] LDP in comparison with OpenOnload

2022-04-06 Thread Kunal Parikh
Hi,

I want to gauge whether the plan for LDP is for it to work like OpenOnload
( https://github.com/Xilinx-CNS/onload ).

We use OpenOnload with SolarFlare cards with great success.

It doesn't require us to change our code while getting the benefits of kernel 
bypass (and hardware acceleration from SolarFlare cards).

Thanks,

Kunal
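For context, the way we consume Onload today is just a preloaded shim around an unmodified binary, and LDP appears to be consumed the same way on top of VCL. A sketch of what we have in mind (library and config paths are guesses based on a typical VPP install; `./our_app` is a placeholder):

```shell
# Paths are assumptions; adjust to your VPP install.
export VCL_CONFIG=/etc/vpp/vcl.conf        # VCL config read by the LDP shim
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so \
  ./our_app                                # placeholder for an unmodified binary;
                                           # its sockets go via the VPP host stack
```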

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21215): https://lists.fd.io/g/vpp-dev/message/21215
Mute This Topic: https://lists.fd.io/mt/90298662/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] VCL & netperf #hoststack

2022-04-06 Thread Kunal Parikh
I am using LD_PRELOAD.

Is there a particular example of netperf flags you can recommend for measuring
per-packet latency?
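For reference, I have been trying TCP_RR runs along these lines (the -O output selectors are from netperf's omni selector list, as in the run I posted earlier in this thread):

```shell
# TCP_RR measures round-trip request/response; -O picks the latency columns
netperf -H 10.21.120.48 -t TCP_RR -l 30 -- \
  -O min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
```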




[vpp-dev] VCL & netperf #hoststack

2022-04-06 Thread Kunal Parikh
Hi Folks,

I want to visualize the latency profile of the VCL host stack.

I am using netperf and am receiving this error on the server:

Issue receiving request on control connection. Errno 19 (No such device)

Detailed logs attached.
root@ip-10-21-120-191:~# netperf -d -H 10.21.120.48 -l -1000 -t TCP_RR -w 10ms 
-b 1 -v 2 -- -O 
min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
Packet rate control is not compiled in.
Packet burst size is not compiled in.
resolve_host called with host '10.21.120.48' port '(null)' family AF_UNSPEC
getaddrinfo returned the following for host '10.21.120.48' port '(null)'  
family AF_UNSPEC
cannonical name: '10.21.120.48'
flags: 22 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP 
addrlen 16
sa_family: AF_INET sadata: 0 0 10 21 120 48 0 0 0 0 0 0 0 0 0 0
scan_omni_args called with the following argument vector
netperf -d -H 10.21.120.48 -l -1000 -t TCP_RR -w 10ms -b 1 -v 2 -- -O 
min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
print_omni_init entered
print_omni_init_list called
parse_output_selection is parsing the output selection 
'min_latency,mean_latency,max_latency,stddev_latency,transaction_rate'
Program name: netperf
Local send alignment: 8
Local recv alignment: 8
Remote send alignment: 8
Remote recv alignment: 8
Local socket priority: -1
Remote socket priority: -1
Local socket TOS: cs0
Remote socket TOS: cs0
Report local CPU 0
Report remote CPU 0
Verbosity: 2
Debug: 1
Port: 12865
Test name: TCP_RR
Test bytes: 1000 Test time: 0 Test trans: 1000
Host name: 10.21.120.48

installing catcher for all signals
Could not install signal catcher for sig 32, errno 22
Could not install signal catcher for sig 33, errno 22
Could not install signal catcher for sig 65, errno 22
remotehost is 10.21.120.48 and port 12865
resolve_host called with host '10.21.120.48' port '12865' family AF_INET
getaddrinfo returned the following for host '10.21.120.48' port '12865'  family 
AF_INET
cannonical name: '10.21.120.48'
flags: 22 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP 
addrlen 16
sa_family: AF_INET sadata: 50 65 10 21 120 48 0 0 0 0 0 0 0 0 144 250
resolve_host called with host '0.0.0.0' port '0' family AF_UNSPEC
getaddrinfo returned the following for host '0.0.0.0' port '0'  family AF_UNSPEC
cannonical name: '0.0.0.0'
flags: 22 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP 
addrlen 16
sa_family: AF_INET sadata: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 114 97
establish_control called with host '10.21.120.48' port '12865' remfam AF_INET
local '0.0.0.0' port '0' locfam AF_UNSPEC
bound control socket to 0.0.0.0 and 0
successful connection to remote netserver at 10.21.120.48 and 12865
recv_response: received a 0 byte response
recv_response: Connection reset by peer
root@ip-10-21-120-92:~# netserver -f -D -4 -v 9 -L 10.21.120.48 -d
check_if_inetd: enter
setup_listens: enter
create_listens: called with host '10.21.120.48' port '12865' family AF_INET(2)
getaddrinfo returned the following for host '10.21.120.48' port '12865'  family 
AF_INET
cannonical name: '(nil)'
flags: 1 family: AF_INET: socktype: SOCK_STREAM protocol IPPROTO_TCP 
addrlen 16
sa_family: AF_INET sadata: 50 65 10 21 120 48 0 0 0 0 0 0 0 0 48 118
Starting netserver with host '10.21.120.48' port '12865' and family AF_INET
accept_connections: enter
set_fdset: enter list 0x559276ed74d0 fd_set 0x7ffc7020f9e0
setting 32 in fdset
accept_connection: enter
process_requests: enter
Issue receiving request on control connection. Errno 19 (No such device)
set_fdset: enter list 0x559276ed74d0 fd_set 0x7ffc7020f9e0
setting 32 in fdset



Re: [vpp-dev] VPP Iperf3 test

2022-03-30 Thread Kunal Parikh
Hi Florin

Following is the output from:

> vppctl show hardware-interfaces
>
> Name                Idx   Link  Hardware
> local0                             0    down  local0
> Link speed: unknown
> local
> vpp0                               1     up   vpp0
> Link speed: unknown
> RX Queues:
> queue thread         mode
> 0     vpp_wk_0 (1)   polling
> Ethernet address 0e:ca:6b:19:5b:95
> AWS ENA VF
> carrier up full duplex max-frame-size 9026
> flags: admin-up maybe-multiseg rx-ip4-cksum
> Devargs:
> rx: queues 1 (max 8), desc 256 (min 128 max 2048 align 1)
> tx: queues 2 (max 8), desc 256 (min 128 max 1024 align 1)
> pci: device 1d0f:ec20 subsystem : address :00:06.00 numa 0
> max rx packet len: 9234
> promiscuous: unicast off all-multicast off
> vlan offload: strip off filter off qinq off
> rx offload avail:  ipv4-cksum udp-cksum tcp-cksum scatter
> rx offload active: ipv4-cksum scatter
> tx offload avail:  ipv4-cksum udp-cksum tcp-cksum multi-segs
> tx offload active: multi-segs
> rss avail:         ipv4-tcp ipv4-udp ipv6-tcp ipv6-udp
> rss active:        none
> tx burst function: (not available)
> rx burst function: (not available)
> 

Should my goal be to move items in the "avail" list to the "active" list?

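If so, I assume the knob lives in the dpdk section of startup.conf, something like the following (the option name is my best reading of the startup.conf docs, so please correct me):

```
dpdk {
  # assumed option: asks the driver to enable TCP/UDP checksum offload
  enable-tcp-udp-checksum
}
```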




Re: [vpp-dev] VPP installation on Amazon Linux 2

2022-03-30 Thread Kunal Parikh
Thanks Ray, I tried building from source too, but ran into some dependency 
issues.

I'll post back if I have some success.




[vpp-dev] VPP installation on Amazon Linux 2

2022-03-30 Thread Kunal Parikh
Hello!

Has anyone attempted to (rpm) install VPP on Amazon Linux 2?

I've added the repo, but the latest RPMs for VPP are not available at
https://packagecloud.io/fdio/release
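For reference, I added the repo with packagecloud's standard RPM script (this is the usual packagecloud URL pattern; Amazon Linux 2 is RPM-based):

```shell
# Standard packagecloud repo setup for an RPM-based distro (URL pattern assumed)
curl -s https://packagecloud.io/install/repositories/fdio/release/script.rpm.sh | sudo bash
yum list available 'vpp*'   # then check which VPP packages are actually published
```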

TIA,

Kunal




Re: [vpp-dev] VPP Iperf3 test

2022-03-29 Thread Kunal Parikh
Diagnostics produced using -b 5g:

> taskset --cpu-list 10-15 iperf3 -4 -c 10.21.120.133 -b 5g -t 30
root@ip-10-21-120-238:~# vppctl clear errors; vppctl clear run
root@ip-10-21-120-238:~# vppctl show run
Thread 0 vpp_main (lcore 1)
Time 6.1, 10 sec internal node vector rate 0.00 loops/sec 171189.88
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
Name                            State        Calls  Vectors  Suspends  Clocks  Vectors/Call
dpdk-process                    any wait         0        0         2  4.98e3          0.00
fib-walk                        any wait         0        0         3  3.88e3          0.00
ip4-full-reassembly-expire-wal  any wait         0        0         1  1.54e3          0.00
ip4-sv-reassembly-expire-walk   any wait         0        0         1  1.50e3          0.00
ip6-full-reassembly-expire-wal  any wait         0        0         1  1.37e3          0.00
ip6-mld-process                 any wait         0        0         6  1.83e3          0.00
ip6-ra-process                  any wait         0        0         6  9.64e2          0.00
ip6-sv-reassembly-expire-walk   any wait         0        0         1  2.33e3          0.00
session-queue-process           any wait         0        0         6  1.51e5          0.00
statseg-collector-process       time wait        0        0         1  1.52e6          0.00
unix-cli-local:5                active           1        0         2  5.22e8          0.00
unix-cli-new-session            any wait         0        0         2  1.92e3          0.00
unix-epoll-input                polling      77290        0         0  2.35e5          0.00
---
Thread 1 vpp_wk_0 (lcore 2)
Time 6.1, 10 sec internal node vector rate 0.00 loops/sec 6922329.85
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
Name                            State         Calls  Vectors  Suspends  Clocks  Vectors/Call
dpdk-input                      polling    42859353        0         0  1.45e2          0.00
session-queue                   polling    42859353        0         0  1.76e2          0.00
unix-epoll-input                polling       41814        0         0  1.48e3          0.00
root@ip-10-21-120-238:~# vppctl show session verbose 2
Thread 0: no sessions
Thread 1: no sessions
root@ip-10-21-120-238:~# vppctl show session verbose 2
Thread 0: no sessions
[1:0][T] 10.21.120.187:3067->10.21.120.133:5201 ESTABLISHED
 index: 0 cfg: No csum offload, TSO off flags: PSH pending timers:
 snd_una 165 snd_nxt 165 rcv_nxt 5 rcv_las 5
 snd_wnd 3488 rcv_wnd 3488 rcv_wscale 10 snd_wl1 4 snd_wl2 165
 flight size 0 out space 4508 rcv_wnd_av 3488 tsval_recent 51916
 tsecr 51907 tsecr_last_ack 51907 tsval_recent_age 4630 snd_mss 1448
 rto 717 rto_boff 0 srtt 156.2 us 156.199 rttvar 140.2 rtt_ts 0. rtt_seq 165
 next_node 0 opaque 0x0 fib_index 0 sw_if_index 1
 cong:   none algo cubic cwnd 4508 ssthresh 2147483647 bytes_acked 0
 cc space 4508 prev_cwnd 0 prev_ssthresh 0
 snd_cong 0 dupack 0 limited_tx 729424615
 rxt_bytes 0 rxt_delivered 0 rxt_head 729424615 rxt_ts 56542
 prr_start 729424615 prr_delivered 0 prr space 0
 sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
 last_delivered 0 high_sacked 729424615 is_reneging 0 reorder 3
 cur_rxt_hole 4294967295 high_rxt 729424615 rescue_rxt 729424615
 stats: in segs 7 dsegs 4 bytes 4 dupacks 0
out segs 9 dsegs 2 bytes 164 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 4.636
err wnd data below 0 above 0 ack below 0 above 0
 pacer: rate 28860 bucket 0 t/p .029 last_update 4.629 s burst 1460
 transport: flags 0x5
 Rx fifo: cursize 0 nitems 4000 has_event 0 min_alloc 65536
  head 4 tail 4 segment manager 2
  vpp session 0 thread 1 app session 0 thread 0
  ooo pool 0 active elts newest 4294967295
 Tx fifo: cursize 0 nitems 4000 has_event 0 min_alloc 65536
  head 164 tail 164 segment manager 2
  vpp session 0 thread 1 app session 0 thread 0
  ooo pool 0 active elts newest 0
 session: state: ready opaque: 0x0 flags:
[1:1][T] 10.21.120.187:37150->10.21.120.133:5201 ESTABLISHED
 index: 1 cfg: No csum offload, TSO off flags: PSH pending timers: RETRANSMIT
 snd_una 2172306318 snd_nxt 2172430846 rcv_nxt 1 rcv_las 1
 snd_wnd 3488 rcv_wnd 3488 rcv_wscale 

Re: [vpp-dev] VPP Iperf3 test

2022-03-29 Thread Kunal Parikh
Attaching diagnostics.
root@ip-10-21-120-238:~# vppctl show session verbose 2
Thread 0: no sessions
Thread 1: no sessions
root@ip-10-21-120-238:~# vppctl show session verbose 2
Thread 0: no sessions
[1:0][T] 10.21.120.187:46669->10.21.120.133:5201 ESTABLISHED
 index: 0 cfg: No csum offload, TSO off flags: PSH pending timers:
 snd_una 142 snd_nxt 142 rcv_nxt 5 rcv_las 5
 snd_wnd 3488 rcv_wnd 3488 rcv_wscale 10 snd_wl1 4 snd_wl2 142
 flight size 0 out space 4485 rcv_wnd_av 3488 tsval_recent 241503
 tsecr 241484 tsecr_last_ack 241484 tsval_recent_age 1709 snd_mss 1448
 rto 200 rto_boff 0 srtt .2 us .212 rttvar .2 rtt_ts 0. rtt_seq 142
 next_node 0 opaque 0x0 fib_index 0 sw_if_index 1
 cong:   none algo cubic cwnd 4485 ssthresh 2147483647 bytes_acked 0
 cc space 4485 prev_cwnd 0 prev_ssthresh 0
 snd_cong 0 dupack 0 limited_tx 330743771
 rxt_bytes 0 rxt_delivered 0 rxt_head 330743771 rxt_ts 243194
 prr_start 330743771 prr_delivered 0 prr space 0
 sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
 last_delivered 0 high_sacked 330743771 is_reneging 0 reorder 3
 cur_rxt_hole 4294967295 high_rxt 330743771 rescue_rxt 330743771
 stats: in segs 7 dsegs 4 bytes 4 dupacks 0
out segs 8 dsegs 2 bytes 141 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 1.712
err wnd data below 0 above 0 ack below 0 above 0
 pacer: rate 21255924 bucket 0 t/p 21.256 last_update 1.709 s burst 1460
 transport: flags 0x5
 Rx fifo: cursize 0 nitems 4000 has_event 0 min_alloc 65536
  head 4 tail 4 segment manager 1
  vpp session 0 thread 1 app session 0 thread 0
  ooo pool 0 active elts newest 4294967295
 Tx fifo: cursize 0 nitems 4000 has_event 0 min_alloc 65536
  head 141 tail 141 segment manager 1
  vpp session 0 thread 1 app session 0 thread 0
  ooo pool 0 active elts newest 0
 session: state: ready opaque: 0x0 flags:
[1:1][T] 10.21.120.187:58184->10.21.120.133:5201 ESTABLISHED
 index: 1 cfg: No csum offload, TSO off flags: PSH pending timers: RETRANSMIT
 snd_una 797988494 snd_nxt 798113022 rcv_nxt 1 rcv_las 1
 snd_wnd 39996416 rcv_wnd 3488 rcv_wscale 10 snd_wl1 1 snd_wl2 797988494
 flight size 124528 out space 39871888 rcv_wnd_av 3488 tsval_recent 243212
 tsecr 243194 tsecr_last_ack 243194 tsval_recent_age 0 snd_mss 1448
 rto 200 rto_boff 0 srtt .9 us .283 rttvar 0.0 rtt_ts 243.1948 rtt_seq 798113022
 next_node 0 opaque 0x0 fib_index 0 sw_if_index 1
 cong:   none algo cubic cwnd 40001037 ssthresh 2147483647 bytes_acked 13032
 cc space 39871888 prev_cwnd 0 prev_ssthresh 0
 snd_cong 0 dupack 0 limited_tx 2865278347
 rxt_bytes 0 rxt_delivered 0 rxt_head 2865278347 rxt_ts 243194
 prr_start 2865278347 prr_delivered 0 prr space 0
 sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
 last_delivered 0 high_sacked 2865278347 is_reneging 0 reorder 3
 cur_rxt_hole 4294967295 high_rxt 2865278347 rescue_rxt 2865278347
 stats: in segs 128916 dsegs 0 bytes 0 dupacks 0
out segs 551186 dsegs 551184 bytes 798113021 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 1.711
err wnd data below 0 above 0 ack below 0 above 0
 pacer: rate 141564850229 bucket 516 t/p 141564.844 last_update 19 us burst 62780
 transport: flags 0x1
 Rx fifo: cursize 0 nitems 4000 has_event 0 min_alloc 65536
  head 0 tail 0 segment manager 1
  vpp session 1 thread 1 app session 1 thread 0
  ooo pool 0 active elts newest 0
 Tx fifo: cursize 4000 nitems 4000 has_event 1 min_alloc 65536
  head 797988493 tail 837988493 segment manager 1
  vpp session 1 thread 1 app session 1 thread 0
  ooo pool 0 active elts newest 0
 session: state: ready opaque: 0x0 flags:
Thread 1: active sessions 2
root@ip-10-21-120-238:~# vppctl show session verbose 2
Thread 0: no sessions
[1:0][T] 10.21.120.187:46669->10.21.120.133:5201 ESTABLISHED
 index: 0 cfg: No csum offload, TSO off flags: PSH pending timers:
 snd_una 142 snd_nxt 142 rcv_nxt 5 rcv_las 5
 snd_wnd 3488 rcv_wnd 3488 rcv_wscale 10 snd_wl1 4 snd_wl2 142
 flight size 0 out space 4485 rcv_wnd_av 3488 tsval_recent 241503
 tsecr 241484 tsecr_last_ack 241484 tsval_recent_age 5086 snd_mss 1448
 rto 200 rto_boff 0 srtt .2 us .212 rttvar .2 rtt_ts 0. rtt_seq 142
 next_node 0 opaque 0x0 fib_index 0 sw_if_index 1
 cong:   none algo cubic cwnd 4485 ssthresh 2147483647 bytes_acked 0
 cc space 4485 prev_cwnd 0 prev_ssthresh 0
 snd_cong 0 dupack 0 limited_tx 330743771
 rxt_bytes 0 rxt_delivered 0 rxt_head 330743771 rxt_ts 246571
 prr_start 330743771 prr_delivered 0 prr space 0
 sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
 last_delivered 0 high_sacked 330743771 is_reneging 0 reorder 3
 cur_rxt_hole 4294967295 high_rxt 330743771 rescue_rxt 330743771
 

Re: [vpp-dev] VPP Iperf3 test

2022-03-29 Thread Kunal Parikh
:(
Same outcome.

Added tcp { no-csum-offload } to /etc/vpp/startup.conf
Tested with and without tx-checksum-offload.
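For completeness, the fragments in startup.conf looked roughly like this (a sketch; the dpdk option spelling is my best guess for the tx-checksum-offload toggle):

```
tcp {
  no-csum-offload          # disable host-stack checksum offload
}
dpdk {
  no-tx-checksum-offload   # toggled on/off between the two runs (assumed spelling)
}
```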




Re: [vpp-dev] VPP Iperf3 test

2022-03-29 Thread Kunal Parikh
Hi Florin,

Confirming that rx/tx descriptors are set to 256.

However, bitrate is still at 3.78 Gbits/sec with VPP vs 11.9 Gbits/sec
without VPP.

> 
> Beyond that, the only thing I’m noticing is that the client is very
> bursty, i.e., sends up to 42 packets / dispatch but the receiver only gets
> 4. There are no drops so it looks like the network is struggling to buffer
> and deliver the packets instead of dropping, which might actually help in
> this case.

I'm unsure on how to resolve this.

Also, re-posting startup.conf to see if I am missing something.

root@ip-10-21-120-238:~# vppctl show session verbose 2
Thread 0: no sessions
[1:0][T] 10.21.120.187:38421->10.21.120.133:5201 ESTABLISHED
 index: 0 cfg: TSO off flags: PSH pending timers:
 snd_una 142 snd_nxt 142 rcv_nxt 5 rcv_las 5
 snd_wnd 3488 rcv_wnd 3488 rcv_wscale 10 snd_wl1 4 snd_wl2 142
 flight size 0 out space 4485 rcv_wnd_av 3488 tsval_recent 81914
 tsecr 81905 tsecr_last_ack 81905 tsval_recent_age 18822 snd_mss 1448
 rto 724 rto_boff 0 srtt 157.7 us 157.731 rttvar 141.6 rtt_ts 0. rtt_seq 142
 next_node 0 opaque 0x0 fib_index 0 sw_if_index 1
 cong:   none algo cubic cwnd 4485 ssthresh 2147483647 bytes_acked 0
 cc space 4485 prev_cwnd 0 prev_ssthresh 0
 snd_cong 0 dupack 0 limited_tx 2905031342
 rxt_bytes 0 rxt_delivered 0 rxt_head 2905031342 rxt_ts 100732
 prr_start 2905031342 prr_delivered 0 prr space 0
 sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
 last_delivered 0 high_sacked 2905031342 is_reneging 0 reorder 3
 cur_rxt_hole 4294967295 high_rxt 2905031342 rescue_rxt 2905031342
 stats: in segs 7 dsegs 4 bytes 4 dupacks 0
out segs 9 dsegs 2 bytes 141 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 18.829
err wnd data below 0 above 0 ack below 0 above 0
 pacer: rate 28434 bucket 0 t/p .028 last_update 18.823 s burst 1460
 transport: flags 0x5
 Rx fifo: cursize 0 nitems 4000 has_event 0 min_alloc 65536
  head 4 tail 4 segment manager 2
  vpp session 0 thread 1 app session 0 thread 0
  ooo pool 0 active elts newest 4294967295
 Tx fifo: cursize 0 nitems 4000 has_event 0 min_alloc 65536
  head 141 tail 141 segment manager 2
  vpp session 0 thread 1 app session 0 thread 0
  ooo pool 0 active elts newest 0
 session: state: ready opaque: 0x0 flags:
[1:1][T] 10.21.120.187:61552->10.21.120.133:5201 ESTABLISHED
 index: 1 cfg: TSO off flags: PSH pending timers: RETRANSMIT
 snd_una 272449534 snd_nxt 272574062 rcv_nxt 1 rcv_las 1
 snd_wnd 39995392 rcv_wnd 3488 rcv_wscale 10 snd_wl1 1 snd_wl2 272449534
 flight size 124528 out space 39870864 rcv_wnd_av 3488 tsval_recent 100737
 tsecr 100732 tsecr_last_ack 100732 tsval_recent_age 0 snd_mss 1448
 rto 200 rto_boff 0 srtt .9 us .299 rttvar 0.0 rtt_ts 100.7328 rtt_seq 272574062
 next_node 0 opaque 0x0 fib_index 0 sw_if_index 1
 cong:   none algo cubic cwnd 40006829 ssthresh 2147483647 bytes_acked 7240
 cc space 39870864 prev_cwnd 0 prev_ssthresh 0
 snd_cong 0 dupack 0 limited_tx 2449973586
 rxt_bytes 0 rxt_delivered 0 rxt_head 2449973586 rxt_ts 100732
 prr_start 2449973586 prr_delivered 0 prr space 0
 sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
 last_delivered 0 high_sacked 2449973586 is_reneging 0 reorder 3
 cur_rxt_hole 4294967295 high_rxt 2449973586 rescue_rxt 2449973586
 stats: in segs 1319281 dsegs 0 bytes 0 dupacks 0
out segs 6120520 dsegs 6120518 bytes 8862508653 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 18.824
err wnd data below 0 above 0 ack below 0 above 0
 pacer: rate 133386363422 bucket 516 t/p 133386.359 last_update 19 us burst 62780
 transport: flags 0x1
 Rx fifo: cursize 0 nitems 4000 has_event 0 min_alloc 65536
  head 0 tail 0 segment manager 2
  vpp session 1 thread 1 app session 1 thread 0
  ooo pool 0 active elts newest 0
 Tx fifo: cursize 4000 nitems 4000 has_event 1 min_alloc 65536
  head 272449533 tail 312449533 segment manager 2
  vpp session 1 thread 1 app session 1 thread 0
  ooo pool 0 active elts newest 0
 session: state: ready opaque: 0x0 flags:
Thread 1: active sessions 2
root@ip-10-21-120-238:~# vppctl show session verbose 2
Thread 0: no sessions
[1:0][T] 10.21.120.187:38421->10.21.120.133:5201 ESTABLISHED
 index: 0 cfg: TSO off flags: PSH pending timers:
 snd_una 142 snd_nxt 142 rcv_nxt 5 rcv_las 5
 snd_wnd 3488 rcv_wnd 3488 rcv_wscale 10 snd_wl1 4 snd_wl2 142
 flight size 0 out space 4485 rcv_wnd_av 3488 tsval_recent 81914
 tsecr 81905 tsecr_last_ack 81905 tsval_recent_age 20746 snd_mss 1448
 rto 724 rto_boff 0 srtt 157.7 us 157.731 rttvar 141.6 rtt_ts 0. rtt_seq 142
 next_node 0 opaque 0x0 fib_index 0 sw_if_index 1
 cong:   none algo cubic cwnd 4485 ssthresh 2147483647 bytes_acked 0
 cc space 4485 

Re: [vpp-dev] VPP Iperf3 test

2022-03-29 Thread Kunal Parikh
Thanks Florin.

I've attached output from the console of the iperf3 server and client.

I don't know what I should be looking for.

Can you please provide some pointers?

Many thanks,

Kunal
root@ip-10-21-120-175:~# vppctl show session verbose 2
[0:0][CT:T] 0.0.0.0:5201->0.0.0.0:0 LISTEN
[0:1][T] 0.0.0.0:5201->0.0.0.0:0 LISTEN
Thread 0: active sessions 2
[1:0][T] 10.21.120.133:5201->10.21.120.187:8836 ESTABLISHED
 index: 0 cfg: TSO off flags: PSH pending timers:
 snd_una 5 snd_nxt 5 rcv_nxt 142 rcv_las 142
 snd_wnd 3999744 rcv_wnd 3999744 rcv_wscale 10 snd_wl1 142 snd_wl2 5
 flight size 0 out space 4348 rcv_wnd_av 3999744 tsval_recent 58931737
 tsecr 58931739 tsecr_last_ack 58931739 tsval_recent_age 3969 snd_mss 1448
 rto 200 rto_boff 0 srtt .3 us .237 rttvar .4 rtt_ts 0. rtt_seq 4
 next_node 0 opaque 0x0 fib_index 0 sw_if_index 1
 cong:   none algo cubic cwnd 4348 ssthresh 2147483647 bytes_acked 1
 cc space 4348 prev_cwnd 0 prev_ssthresh 0
 snd_cong 0 dupack 0 limited_tx 4124685482
 rxt_bytes 0 rxt_delivered 0 rxt_head 4124685482 rxt_ts 58935708
 prr_start 4124685482 prr_delivered 0 prr space 0
 sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
 last_delivered 0 high_sacked 4124685482 is_reneging 0 reorder 3
 cur_rxt_hole 4294967295 high_rxt 4124685482 rescue_rxt 4124685482
 stats: in segs 7 dsegs 2 bytes 141 dupacks 0
out segs 7 dsegs 4 bytes 4 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 3.972
err wnd data below 0 above 0 ack below 0 above 0
 pacer: rate 18333978 bucket 0 t/p 18.334 last_update 3.969 s burst 1460
 transport: flags 0x5
 Rx fifo: cursize 0 nitems 400 has_event 0 min_alloc 65536
  head 141 tail 141 segment manager 3
  vpp session 0 thread 1 app session 1 thread 0
  ooo pool 0 active elts newest 4294967295
 Tx fifo: cursize 0 nitems 400 has_event 0 min_alloc 65536
  head 4 tail 4 segment manager 3
  vpp session 0 thread 1 app session 1 thread 0
  ooo pool 0 active elts newest 0
 session: state: ready opaque: 0x0 flags:
[1:1][T] 10.21.120.133:5201->10.21.120.187:19475 ESTABLISHED
 index: 1 cfg: TSO off flags:  timers:
 snd_una 1 snd_nxt 1 rcv_nxt 1894088294 rcv_las 1894088294
 snd_wnd 3999744 rcv_wnd 3999744 rcv_wscale 10 snd_wl1 1894086846 snd_wl2 1
 flight size 0 out space 4344 rcv_wnd_av 3999744 tsval_recent 58935706
 tsecr 58935708 tsecr_last_ack 58935708 tsval_recent_age 0 snd_mss 1448
 rto 200 rto_boff 0 srtt .1 us .100 rttvar 0.0 rtt_ts 0. rtt_seq 2780822854
 next_node 0 opaque 0x0 fib_index 0 sw_if_index 1
 cong:   none algo cubic cwnd 4344 ssthresh 2147483647 bytes_acked 0
 cc space 4344 prev_cwnd 0 prev_ssthresh 0
 snd_cong 0 dupack 0 limited_tx 2780822854
 rxt_bytes 0 rxt_delivered 0 rxt_head 2780822854 rxt_ts 58935708
 prr_start 2780822854 prr_delivered 0 prr space 0
 sboard: sacked 0 last_sacked 0 lost 0 last_lost 0 rxt_sacked 0
 last_delivered 0 high_sacked 2780822854 is_reneging 0 reorder 3
 cur_rxt_hole 4294967295 high_rxt 2780822854 rescue_rxt 2780822854
 stats: in segs 1308074 dsegs 1308073 bytes 1894088293 dupacks 0
out segs 306626 dsegs 0 bytes 0 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 3.970
err wnd data below 0 above 0 ack below 0 above 0
 pacer: rate 4344 bucket 0 t/p 43.439 last_update 48 us burst 1460
 transport: flags 0x5
 Rx fifo: cursize 0 nitems 400 has_event 0 min_alloc 65536
  head 1894088293 tail 1894088293 segment manager 3
  vpp session 1 thread 1 app session 2 thread 0
  ooo pool 0 active elts newest 4294967295
 Tx fifo: cursize 0 nitems 400 has_event 0 min_alloc 65536
  head 0 tail 0 segment manager 3
  vpp session 1 thread 1 app session 2 thread 0
  ooo pool 0 active elts newest 0
 session: state: ready opaque: 0x0 flags:
Thread 1: active sessions 2
Thread 2: no sessions
Thread 3: no sessions
Thread 4: no sessions
Thread 5: no sessions
Thread 6: no sessions
Thread 7: no sessions
root@ip-10-21-120-175:~# vppctl show session verbose 2
[0:0][CT:T] 0.0.0.0:5201->0.0.0.0:0 LISTEN
[0:1][T] 0.0.0.0:5201->0.0.0.0:0 LISTEN
Thread 0: active sessions 2
[1:0][T] 10.21.120.133:5201->10.21.120.187:8836 ESTABLISHED
 index: 0 cfg: TSO off flags: PSH pending timers:
 snd_una 5 snd_nxt 5 rcv_nxt 142 rcv_las 142
 snd_wnd 3999744 rcv_wnd 3999744 rcv_wscale 10 snd_wl1 142 snd_wl2 5
 flight size 0 out space 4348 rcv_wnd_av 3999744 tsval_recent 58931737
 tsecr 58931739 tsecr_last_ack 58931739 tsval_recent_age 15785 snd_mss 1448
 rto 200 rto_boff 0 srtt .3 us .237 rttvar .4 rtt_ts 0. rtt_seq 4
 next_node 0 opaque 0x0 fib_index 0 sw_if_index 1
 cong:   none algo cubic cwnd 4348 ssthresh 2147483647 bytes_acked 1
 cc space 4348 prev_cwnd 0 

Re: [vpp-dev] VPP Iperf3 test

2022-03-28 Thread Kunal Parikh
Also, I do believe that write combining is enabled based on:

$ lspci -v -s 00:06.0
00:06.0 Ethernet controller: Amazon.com, Inc. Elastic Network Adapter (ENA)
Physical Slot: 6
Flags: bus master, fast devsel, latency 0
Memory at febf8000 (32-bit, non-prefetchable) [size=16K]
Memory at fe90 (32-bit, prefetchable) [size=1M]
Memory at febe (32-bit, non-prefetchable) [size=64K]
Capabilities: [70] Express Endpoint, MSI 00
Capabilities: [b0] MSI-X: Enable+ Count=9 Masked-
Kernel driver in use: vfio-pci
Kernel modules: ena

root@ip-10-21-120-175:~# cat /sys/kernel/debug/x86/pat_memtype_list | grep fe90
PAT: [mem 0xfe80-0xfe90] write-combining
PAT: [mem 0xfe90-0xfea0] uncached-minus
PAT: [mem 0xfe90-0xfea0] uncached-minus




Re: [vpp-dev] VPP Iperf3 test

2022-03-28 Thread Kunal Parikh
Thank you for your prompt responses Florin.

I'm taking over from Shankar here.

I re-built the environment with v22.02

Here is the output from show error:

It seems okay to me.

I'm running VPP and iperf3 on the same NUMA node (but on separate CPUs).
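For reference, the placement can be sanity-checked with something like this (the PCI address is taken from the lspci output earlier in this thread; the core list is just our setup):

```shell
# NUMA node of the ENA NIC (-1 on single-node instances)
cat /sys/bus/pci/devices/0000:00:06.0/numa_node
lscpu | grep -i numa                  # CPU-to-node mapping
taskset --cpu-list 10-15 iperf3 -s    # pin iperf3 to cores on the same node
```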
