Hi Florin,

Some progress, at least with the built-in echo app; thank you for all the
suggestions so far! By adjusting the fifo-size and testing in half-duplex, I was
able to get close to 5 Gbps between the two OpenStack instances using the
built-in test echo app:

vpp# test echo clients gbytes 1 no-return fifo-size 1000000 uri tcp://10.0.0.156/5555
1 three-way handshakes in .26 seconds 3.86/s
Test started at 745.163085
Test finished at 746.937343
1073741824 bytes (1024 mbytes, 1 gbytes) in 1.77 seconds
605177784.33 bytes/second half-duplex
4.8414 gbit/second half-duplex
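
For completeness, the other instance runs the corresponding built-in echo
server, started with something along these lines (fifo-size matching the
client side):

vpp# test echo server uri tcp://10.0.0.156/5555 fifo-size 1000000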

I need to get closer to 10 Gbps, but at least this is good evidence that the
issue is related to configuration / tuning. So I switched back to iperf testing
with VCL, and I'm back to 600 Mbps, even though I can confirm that the fifo
sizes match what is configured in vcl.conf (note that for this run I changed
them to 8 MB each for rx and tx from the previous 16 MB, but the results are
the same with 16 MB). I'm obviously missing something in the configuration,
but I can't imagine what that might be. Below are my exact startup.conf,
vcl.conf and the output of "show session verbose 2" from this iperf run to
give the full picture; hopefully something jumps out as missing in my
configuration. Thank you for your patience and support with this, much
appreciated!
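
For reference, the iperf3 runs go through the VCL LD_PRELOAD shim, roughly
like this (the library and vcl.conf paths are illustrative and will differ
per install):

server: VCL_CONFIG=/path/to/vcl.conf LD_PRELOAD=/usr/lib64/libvcl_ldpreload.so iperf3 -s
client: VCL_CONFIG=/path/to/vcl.conf LD_PRELOAD=/usr/lib64/libvcl_ldpreload.so iperf3 -c 10.0.0.156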

[root@vpp-test-1 centos]# cat vcl.conf
vcl {
rx-fifo-size 8000000
tx-fifo-size 8000000
app-scope-local
app-scope-global
api-socket-name /tmp/vpp-api.sock
}

[root@vpp-test-1 centos]# cat /etc/vpp/startup.conf
unix {
nodaemon
log /var/log/vpp/vpp.log
full-coredump
cli-listen /run/vpp/cli.sock
gid vpp
interactive
}
dpdk {
dev 0000:00:03.0 {
num-rx-desc 65535
num-tx-desc 65535
}
}
session { evt_qs_memfd_seg }
socksvr { socket-name /tmp/vpp-api.sock }
api-trace {
on
}
api-segment {
gid vpp
}
cpu {
main-core 7
corelist-workers 4-6
workers 3
}
buffers {
## Increase number of buffers allocated, needed only in scenarios with
## large number of interfaces and worker threads. Value is per numa node.
## Default is 16384 (8192 if running unprivileged)
buffers-per-numa 128000

## Size of buffer data area
## Default is 2048
default data-size 8192
}

vpp# sh session verbose 2
Thread 0: no sessions
[1:0][T] 10.0.0.152:41737->10.0.0.156:5201        ESTABLISHED
index: 0 flags:  timers:
snd_una 124 snd_nxt 124 snd_una_max 124 rcv_nxt 5 rcv_las 5
snd_wnd 7999488 rcv_wnd 7999488 rcv_wscale 10 snd_wl1 4 snd_wl2 124
flight size 0 out space 4413 rcv_wnd_av 7999488 tsval_recent 12893009
tsecr 10757431 tsecr_last_ack 10757431 tsval_recent_age 1995 snd_mss 1428
rto 200 rto_boff 0 srtt 3 us 3.887 rttvar 2 rtt_ts 0.0000 rtt_seq 124
cong:   none algo newreno cwnd 4413 ssthresh 4194304 bytes_acked 0
cc space 4413 prev_cwnd 0 prev_ssthresh 0 rtx_bytes 0
snd_congestion 1736877166 dupack 0 limited_transmit 1736877166
sboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
last_bytes_delivered 0 high_sacked 1736877166 snd_una_adv 0
cur_rxt_hole 4294967295 high_rxt 1736877166 rescue_rxt 1736877166
stats: in segs 7 dsegs 4 bytes 4 dupacks 0
out segs 7 dsegs 2 bytes 123 dupacks 0
fr 0 tr 0 rxt segs 0 bytes 0 duration 2.484
err wnd data below 0 above 0 ack below 0 above 0
pacer: bucket 42459 tokens/period .685 last_update 61908201
Rx fifo: cursize 0 nitems 7999999 has_event 0
head 4 tail 4 segment manager 3
vpp session 0 thread 1 app session 0 thread 0
ooo pool 0 active elts newest 4294967295
Tx fifo: cursize 0 nitems 7999999 has_event 0
head 123 tail 123 segment manager 3
vpp session 0 thread 1 app session 0 thread 0
ooo pool 0 active elts newest 4294967295
[1:1][T] 10.0.0.152:53460->10.0.0.156:5201        ESTABLISHED
index: 1 flags: PSH pending timers: RETRANSMIT
snd_una 160482962 snd_nxt 160735718 snd_una_max 160735718 rcv_nxt 1 rcv_las 1
snd_wnd 7999488 rcv_wnd 7999488 rcv_wscale 10 snd_wl1 1 snd_wl2 160482962
flight size 252756 out space 714 rcv_wnd_av 7999488 tsval_recent 12895476
tsecr 10759907 tsecr_last_ack 10759907 tsval_recent_age 4294966825 snd_mss 1428
rto 200 rto_boff 0 srtt 1 us 3.418 rttvar 2 rtt_ts 42.0588 rtt_seq 160485818
cong:   none algo newreno cwnd 253470 ssthresh 187782 bytes_acked 2856
cc space 714 prev_cwnd 382704 prev_ssthresh 187068 rtx_bytes 0
snd_congestion 150237062 dupack 0 limited_transmit 817908495
sboard: sacked_bytes 0 last_sacked_bytes 0 lost_bytes 0
last_bytes_delivered 0 high_sacked 150242774 snd_una_adv 0
cur_rxt_hole 4294967295 high_rxt 150235634 rescue_rxt 149855785
stats: in segs 84958 dsegs 0 bytes 0 dupacks 1237
out segs 112747 dsegs 112746 bytes 160999897 dupacks 0
fr 5 tr 0 rxt segs 185 bytes 264180 duration 2.473
err wnd data below 0 above 0 ack below 0 above 0
pacer: bucket 22180207 tokens/period 117.979 last_update 61e173e5
Rx fifo: cursize 0 nitems 7999999 has_event 0
head 0 tail 0 segment manager 3
vpp session 1 thread 1 app session 1 thread 0
ooo pool 0 active elts newest 0
Tx fifo: cursize 7999999 nitems 7999999 has_event 1
head 482961 tail 482960 segment manager 3
vpp session 1 thread 1 app session 1 thread 0
ooo pool 0 active elts newest 4294967295
Thread 1: active sessions 2
Thread 2: no sessions
Thread 3: no sessions
vpp#
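
I can also capture "show errors" and "show runtime" output from the same iperf
run if that would help narrow it down.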
Regards,
Dom