Thank you Dave for your email!
“The test replaces many, many keys on purpose.”
I did not pay attention to the above; rather, I was counting the number of
times inserted and what was displayed.
Once I made the keys unique all the time, it's working as expected!
Thanks
Vijay
From: "Dave Barach (dbarach)"
Hi,
I'm trying to bring up VPP on OSX using Vagrant and I keep hitting the
following crash:
default: Building vom in /vpp/build-root/build-vpp-native/vom
default: make[3]: Entering directory '/vpp/build-root/build-vpp-native/vom'
default: Making all in vom
default:
Quick update on this: it only needs a fiber cable plugged into the SFP for it
to react normally. It doesn't matter whether the other end of the cable is
connected to anything at all, just that something is present in the SFP. This
isn't a failure scenario I'm terribly concerned about and
I've been testing failure scenarios with VPP on a VM passing through a 2-port
10GbE NIC. I'm seeing some major issues when one of the physical ports is
disconnected, which manifest as follows:
- Ping tests are dropping at least 1/3 of the packets, and half of them
that do go through
What is the maximum value of j at the start of the inner loop? Note that kv.key
= i. The test replaces many, many keys on purpose.
Set TESTS += test_bihash_template in vppinfra.am
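For context, the change Dave describes would look roughly like this in vppinfra.am (a hedged sketch; the exact file layout may differ between VPP versions):

```make
# Register the bihash template test with "make check" so it runs
# as part of the vppinfra test suite.
TESTS += test_bihash_template
```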
From: Vijay Katamreddy (vkatamre)
Sent: Wednesday, August 8, 2018 3:47 PM
To: Dave Barach (dbarach)
Cc:
Hi Dave,
I took code from the below routine in test_bihash_template.c
test_bihash_vec64 (test_main_t * tm)
{
…
}
for (j = 0; j < 3; j++)
{
for (i = 1; i <= j * 1000 + 1; i++)
{
kv.key = i;
kv.value = 1;
BV (clib_bihash_add_del) (h, &kv, 1 /* is_add */ );
No known issues at that level. Since this sounds like a test code, can you
share it?
Thanks... Dave
From: vpp-dev@lists.fd.io On Behalf Of Vijayabhaskar
Katamreddy via Lists.Fd.Io
Sent: Wednesday, August 8, 2018 3:14 PM
To: vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev]
Hi,
I am experimenting with add/search on a bihash, and when I use format_bihash to
print the keys/values, the active_elements count prints incorrectly and not all
the values are printed.
I am inserting 3003 elements into a 16_8 bihash, but only 2001 elements are
printed. Any known issues?
I
FYI,
Some help on this fix is necessary, as the build now fails with DPDK 18.02.2
because of this patch:
https://gerrit.fd.io/r/#/c/13954/
So the temporary workaround of using the old DPDK may not be possible now,
unless you want to use an old master HEAD of VPP.
Thank you
Sirshak Das
From: Lijian
Hi,
> I found a dpdk-based traffic generator here but I couldn't run it.
I am working with dpdk-pktgen, without issues.
What do you mean by "couldn't run it"? Can you please post the full command
line you tried and the full log on the console afterwards?
Also - which version of dpdk-pktgen did you
Hello,
We are creating branches on a weekly basis, and CSIT is being verified in a
weekly job. So the question is whether there is an option to set "date" or
"number" when limiting the repo.
Counting on a cadence of up to 10 merges per day (artifacts posted on Nexus), a
safe value is around 100-120 (roughly 10-12 days of merges).
Peter,
How many artifacts do you need us to retain for your testing?
Thank you,
Vanessa
On Mon Aug 06 04:53:29 2018, pmi...@cisco.com wrote:
> Hello Vanessa,
>
> For CSIT it is not about release or not. We would need to increase
> cadence on our weekly jobs to daily. Currently CSIT jobs are
Hi Yalei,
I guess there are two solutions to that: set min_chunk to something like
clib_min (16 << 10, fifo_size/2), or remove min_chunk entirely.
I’m fine with either of those. Feel free to push a patch!
Thanks,
Florin
> On Aug 8, 2018, at 12:17 AM, 汪亚雷 wrote:
>
> Hi Florin,
>
> These days
Dear Gulakh,
As you move forward, please be careful. An arbitrary 10g NIC may or may not
have adequate PCI bus bandwidth to handle 10gb line-rate, full-duplex @ 64 byte
pkts. Depopulated memory channels, incorrect NUMA placement, and a host of
other configuration errors may yield awful
Have a look at: https://wiki.fd.io/view/Trex :)
Ed
On August 8, 2018 at 8:07:45 AM, Gulakh (holoogul...@gmail.com) wrote:
Hi,
I have set up a VPLS scenario in VPP and now I want to test its performance
to see whether it is able to handle customer traffic of 10 Gb/s, i.e. the
customer is generating
Hi,
I have set up a VPLS scenario in VPP and now I want to test its performance
to see whether it is able to handle customer traffic of 10 Gb/s, i.e. the
customer is generating 10 Gb/s of traffic.
I found a dpdk-based traffic generator here
Add VPP mailing list.
On AArch64, VPP is not working with Mellanox DPDK driver 18.05, while Mellanox
DPDK driver 18.02.2 works well with VPP.
Can anyone guide us to find the correct expert from Mellanox to fix this issue?
Thanks.
From: Lijian Zhang
Sent: Friday, August 3, 2018 4:11 PM
To:
Hi Florin,
These days I tested the tcp_echo of the master branch and found an issue: when
I set the fifo-size in the client to a value lower than 16, the test will hang
because nothing is sent to the server and of course nothing is received either.
After some investigation, it is related to these lines in send_test_chunk.