> On Oct 11, 2017, at 12:18 PM, Francois Ozog wrote:
>
> Hi Jim,
>
> This is good news, and as far as I have analyzed the setup, this is possible
> thanks to faster memory and uncore subsystem. But that is still maxing at 57%
> of line rate.
Remember, this was on an untuned setup using a p
Hi Jim,
This is good news, and as far as I have analyzed the setup, this is
possible thanks to faster memory and uncore subsystem. But that is still
maxing at 57% of line rate.
You may say there will be better CPUs and memory in the future to match the
performance. But in the future, network conn
Francois,
We did a bit of throughput testing with VPP back in April. The machines used
are detailed in a somewhat infamous 1 April blog post:
https://www.netgate.com/blog/building-a-behemoth-router.html
Basically a pair of boxes with i7-6950X, some water cooling, and Intel XL710
cards (we als
Hi Damjan,
When it comes to performance, this contiguity prevents the DMA transaction
coalescing that is required to reach line rate for 64-byte packets at
25Gbps and above.
On a PCIe gen 3 x8 slot, you have 50Gbps and roughly 35M DMA
transactions per second. This allows reaching 47% of 64 byte
Francois,
Almost every VPP feature assumes that packet data is adjacent to the
vlib_buffer_t. It would be a huge rework to change this, and it would slow
down performance, as it would introduce a dependent read in many places in
the code...
So to answer your question, we don’t have such plans.
Thanks,
Damjan
Hi,
Hardware that is capable of 50Gbps and above (at 64-byte line rate)
places packets next to each other in large memory zones rather than in
individual memory buffers.
Handling packets without a copy would require vlib_buffer_t to allow
packet data to be NOT consecutive to it.
Are there plans or