I might have just missed it, but looking through the ongoing regression tests I 
can't see anything that explicitly tests for packet loss during CLI/API 
commands, so I'm wondering whether minimization of packet loss during 
configuration is viewed as a goal for vpp?

Many/most of the real-world applications I've been exploring require the 
ability to reconfigure live systems (route updates, tunnel add/remove, VM 
addition/removal) without impacting the existing flows through stable 
elements, and it would be great to understand how this fits with vpp use 
cases.

Thanks again,

Colin.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Colin Tregenza Dancer via vpp-dev
Sent: 19 August 2017 12:17
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Packet loss on use of API & cmdline

Hi,

I've been doing some prototyping and load testing of the vpp dataplane, and 
have observed packet loss when I issue API requests or use the debug command 
line.  Is this to be expected given the use of the worker_thread_barrier, or 
might there be some way I could improve matters?
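
For reference, my (possibly mistaken) understanding is that the API and 
debug CLI handlers bracket changes to shared forwarding state with the 
worker barrier, roughly as in the sketch below (my_config_change and its 
body are hypothetical placeholders, not real vpp code); while the workers 
are parked at the barrier they aren't polling their NIC RX rings, so the 
rings can overflow and drop packets:

#include <vlib/vlib.h>
#include <vlib/threads.h>

/* Hypothetical control-plane change, sketched only to show the barrier
 * pattern as I understand it. */
static void
my_config_change (vlib_main_t * vm)
{
  /* Park all worker threads at the barrier (called from the main thread).
   * From here until the release, no worker is polling its RX queues. */
  vlib_worker_thread_barrier_sync (vm);

  /* ... mutate shared forwarding state here (routes, tunnels, etc.) ... */

  /* Let the workers resume packet processing. */
  vlib_worker_thread_barrier_release (vm);
}

If that's right, then presumably the loss scales with how long the 
control-plane work holds the barrier, which is part of what I'm hoping to 
understand.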

Currently I'm running a fairly modest 2 Mpps of throughput between a pair of 
10G ports on an Intel X520 NIC, on bare-metal Ubuntu 16 with vpp 17.01.

Thanks in advance,

Colin.
_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
