Hi Dom,
In addition to Florin’s questions, can you clarify what you mean by
“…interfaces are assigned to DPDK/VPP”? What driver are you using?
Regards,
Jerome


From: <vpp-dev@lists.fd.io> on behalf of Florin Coras <fcoras.li...@gmail.com>
Date: Wednesday, December 4, 2019 at 02:31
To: "dch...@akouto.com" <dch...@akouto.com>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] VPP / tcp_echo performance

Hi Dom,

I’ve never tried to run the stack in a VM, so I’m not sure about the expected
performance, but here are a couple of comments:
- What fifo sizes are you using? Are they at least 4 MB? (See [1] for VCL
configuration; a minimal vcl.conf sketch follows this list.)
- I don’t think you need to configure more than 16k buffers per NUMA node.
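
For reference, a minimal vcl.conf along the lines of the wiki in [1]; the fifo
sizes follow the 4 MB suggestion above, and the api-socket-name path is a
placeholder that depends on your setup:

  vcl {
    rx-fifo-size 4000000
    tx-fifo-size 4000000
    app-scope-local
    app-scope-global
    api-socket-name /run/vpp/api.sock
  }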

Additionally, to get more information on the issue:
- What does “show session verbose 2” report? Check the stats section for
retransmit counts (tr - timer retransmit, fr - fast retransmit), which, if
non-zero, indicate that packets are being lost.
- Check interface rx/tx error counts with “show int”.
- Typically, for improved performance, you should write more than 1.4 kB per
call. But the fact that your average is less than 1.4 kB suggests that you often
find the fifo full or close to full, so the issue is probably not your sender
app.

Regards,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf


On Dec 3, 2019, at 11:40 AM, dch...@akouto.com wrote:

Hi all,
I've been running some performance tests and am not quite getting the results I
was hoping for, and I have a couple of related questions that I hope someone can
help with. For context, here's a summary of the results of the TCP tests I've
run on two VMs (CentOS 7 OpenStack instances; host-1 is the client and host-2 is
the server):
·         Running iperf3 natively before the interfaces are assigned to 
DPDK/VPP: 10 Gbps TCP throughput
·         Running iperf3 with VCL/HostStack: 3.5 Gbps TCP throughput (see the
invocation sketch after this list)
·         Running a modified version of the tcp_echo application (similar 
results with socket and svm api): 610 Mbps throughput
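
For reference, the VCL/HostStack iperf3 runs were presumably launched roughly as
on the LDP iperf wiki Florin links above as [1]; the preload library path below
is a placeholder that depends on where VPP is installed:

  # server (host-2)
  sudo VCL_CONFIG=/etc/vpp/vcl.conf \
       LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so iperf3 -s
  # client (host-1)
  sudo VCL_CONFIG=/etc/vpp/vcl.conf \
       LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libvcl_ldpreload.so iperf3 -c <host-2-ip>
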
Things I've tried to improve performance:
·         Anything I could apply from 
https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning)
·         Added tcp { cc-algo cubic } to VPP startup config
·         Using isolcpus and the VPP startup config cpu settings, allocated first 2, then
4 and finally 6 of the 8 available cores to the VPP main & worker threads
·         In the VPP startup config, set "buffers-per-numa 65536" and "default
data-size 4096" (a consolidated startup.conf sketch follows this list)
·         Updated grub boot options to include hugepagesz=1GB hugepages=64 
default_hugepagesz=1GB
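
For reference, a startup.conf sketch combining the settings above; the core
numbers are placeholders (not taken from the post), and the buffer count
reflects Florin's note that ~16k buffers per NUMA node is typically enough:

  cpu {
    main-core 1              # placeholder core numbers
    corelist-workers 2-5
  }
  buffers {
    buffers-per-numa 16384   # the post used 65536; ~16k per numa is usually enough
    default data-size 4096
  }
  tcp {
    cc-algo cubic
  }
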
My goal is to achieve at least the same throughput using VPP as I get when I 
run iperf3 natively on the same network interfaces (in this case 10 Gbps).

A couple of related questions:
·         Given the items above, do any VPP or kernel configuration items jump
out that I may have missed that could explain the difference between native and
VPP performance, or help bring the two a bit closer?
·         In the modified tcp_echo application, n_sent = app_send_stream(...)
is called in a loop, always with the same length (1400 bytes) in my test
version. The return value n_sent indicates that the average number of bytes sent
is only around 130 per call after some run time. Are there any parameters or
options that might improve this? (A send-loop sketch follows this list.)
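
For illustration, a sketch of how the send loop could handle partial writes.
This assumes app_send_stream() returns the number of bytes actually enqueued
into the tx fifo (or <= 0 when the fifo is full), as in VPP's session
application interface; send_chunk and the noblock flag value are placeholders,
not code from the actual modified tcp_echo:

  /* Offer a large chunk (e.g. 64 kB rather than 1400 B) and advance by the
   * number of bytes the stack actually accepted. */
  #include <vnet/session/application_interface.h>

  static u32
  send_chunk (app_session_t * s, u8 * buf, u32 len)
  {
    u32 offset = 0;
    while (offset < len)
      {
        int n_sent = app_send_stream (s, buf + offset, len - offset,
                                      0 /* noblock flag, placeholder */);
        if (n_sent <= 0)
          break; /* tx fifo full: retry on the next tx event, don't spin */
        offset += n_sent;
      }
    return offset; /* bytes actually handed to the stack */
  }
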
Any tips or pointers to documentation that might shed some light would be 
hugely appreciated!

Regards,
Dom
