Hi,
I have implemented a shaper as a poll node in a VPP worker.
The implementation is such that the shaper needs to send out packets
that are sitting/scheduled in a timer wheel with microsecond-granularity
slots.
The shaper must be invoked at a precise regular interval, say every 250
microseconds, where
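To make the scheduling idea above concrete, here is a minimal, language-agnostic sketch of a single-level timer wheel with fixed-size slots. This is purely illustrative (the class and method names are hypothetical, and VPP's actual timer-wheel implementation in `vppinfra` differs); it only shows the slot-rounding and expiry mechanics being described.

```python
# Illustrative sketch of a timer wheel with fixed microsecond slots.
# Names are hypothetical; this is NOT VPP's tw_timer implementation.

class TimerWheel:
    def __init__(self, slot_us=250, n_slots=4096):
        self.slot_us = slot_us          # granularity of one slot, in microseconds
        self.n_slots = n_slots
        self.slots = [[] for _ in range(n_slots)]
        self.now_us = 0                 # wheel's notion of current time

    def schedule(self, delay_us, pkt):
        # Round the deadline up to the next slot boundary (at least one tick).
        ticks = max(1, -(-delay_us // self.slot_us))
        idx = (self.now_us // self.slot_us + ticks) % self.n_slots
        self.slots[idx].append(pkt)

    def advance(self, elapsed_us):
        # Expire every slot the clock has passed; return the due packets.
        due = []
        for _ in range(elapsed_us // self.slot_us):
            self.now_us += self.slot_us
            idx = (self.now_us // self.slot_us) % self.n_slots
            due.extend(self.slots[idx])
            self.slots[idx].clear()
        return due
```

A poll node invoked every 250 µs would call something like `advance(250)` each time and transmit whatever it returns.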
I observe a crash in vpp-2005 when an IPv6 link-local packet is received, with
the backtrace below.
#0 0x2b51396a8387 in raise () from /lib64/libc.so.6
#1 0x2b51396a9a78 in abort () from /lib64/libc.so.6
#2 0x56126df9017e in os_exit (code=code@entry=1)
at vpp_2005/vpp_2005/src/vpp/vnet/main.c:3
Hi Ivan,
Thanks for the test. After modifying it a bit to run straight from binaries, I
managed to repro the issue. As expected, the proxy is not cleaning up the
sessions correctly (example apps do run out of sync ..). Here’s a quick patch
that solves some of the obvious issues [1] (note that
Concerning the CI: I'd be glad to add that test to "make test", but I'm not
sure how to approach it. The test is not about containers but more about
using network namespaces and some tools like wrk to create a lot of TCP
connections to do some "stress testing" of the VPP host stack (and as it was
noted, it
By no means, happens all the time! Glad it was solved!
Regards,
Florin
> On Jul 22, 2020, at 11:09 AM, Sebastiano Miano
> wrote:
>
> Hi Florin,
> what a fool I am, you are right ;)
>
> Just for reference, with the release image, the throughput increases to
> 11.4Gbps.
>
> Thanks again for y
Hi Florin,
what a fool I am, you are right ;)
Just for reference, with the release image, the throughput increases to
11.4Gbps.
Thanks again for your support.
Regards,
Sebastiano
On Wed, Jul 22, 2020 at 6:27 PM Florin Coras <fcoras.li...@gmail.com> wrote:
> Hi Sebastiano,
>
> Yo
I missed the point about the CI in my other reply. If we can somehow integrate
some container based tests into the “make test” infra, I wouldn’t mind at all!
:-)
Regards,
Florin
> On Jul 22, 2020, at 4:17 AM, Ivan Shvedunov wrote:
>
> Hi,
> sadly the patch apparently didn't work. It should ha
Hi Christian,
Yes, it looks like your bottleneck is crypto, not IO...
I have no idea about AES performance on latest AMD compared to Intel, but looks
like you have your answer 😊
Best
ben
> -----Original Message-----
> From: Christian Hopps
> Sent: mercredi 22 juillet 2020 18:47
> To: Benoit Ganne
Thanks,
Chris

On Jul 22, 2020, at 11:32 AM, Benoit Ganne (bganne) via lists.fd.io wrote:
> I tried setting that but didn't notice an issue, perhaps it's not an IO
> bottleneck.
Could you share the output of:
- clear run/show run (it is important to clear 1st to capture "live"
Hi all,
I'm using VPP to develop my program.
Here is my scenario.
I want to use VPP to build a NAT gateway with only one public IPv4 address,
and all traffic needs to use this public IP to reach the internet (for
example: 1.1.1.1).
I can only allow one port through the external firewall,
so I can only use 1.1.1.1:443.
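With a single public IP and a single external port, return traffic can only be disambiguated by the remote endpoint, i.e. an endpoint-dependent (5-tuple) mapping. The sketch below illustrates that constraint conceptually; the class and method names are hypothetical and this is not how VPP's NAT44 plugin is implemented.

```python
# Conceptual sketch of endpoint-dependent NAT with one external tuple.
# Hypothetical names; NOT VPP's NAT44 code.

PUBLIC = ("1.1.1.1", 443)

class FiveTupleNat:
    def __init__(self):
        self.back = {}  # (remote_ip, remote_port) -> (inside_ip, inside_port)

    def translate_out(self, in_ip, in_port, dst_ip, dst_port):
        key = (dst_ip, dst_port)
        # Two inside hosts can share 1.1.1.1:443 only if they talk to
        # *different* remote endpoints; otherwise replies are ambiguous.
        if key in self.back and self.back[key] != (in_ip, in_port):
            raise ValueError("external tuple already in use for this remote endpoint")
        self.back[key] = (in_ip, in_port)
        return PUBLIC  # every outbound flow is rewritten to 1.1.1.1:443

    def translate_in(self, src_ip, src_port):
        # Inbound packets are matched on the remote endpoint they come from.
        return self.back[(src_ip, src_port)]
```

The `ValueError` branch is the practical limit of this setup: two inside hosts cannot reach the same remote ip:port at the same time through one external tuple.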
Hi Ivan,
Will try to reproduce but given the types of crashes, it could be that the
proxy app is not cleanly releasing the connections.
Regards,
Florin
> On Jul 22, 2020, at 8:29 AM, Ivan Shvedunov wrote:
>
> Some preliminary observations concerning the crashes in the proxy example:
> * !rb
Hi Sebastiano,
You’re running a debug image, so that is expected. Try to run a release image.
Regarding the proxy issue, it looks like the proxy did not close/reuse the
fifos accordingly. Will try to look into it.
Regards,
Florin
> On Jul 22, 2020, at 2:23 AM, Sebastiano Miano
> wrote:
>
> I tried setting that but didn't notice an issue, perhaps it's not an IO
> bottleneck.
Could you share the output of:
- clear run/show run (it is important to clear 1st to capture "live" stats)
- show err
- show threads
- show buffers
- show pci
while traffic is flowing?
To give some ideas,
Some preliminary observations concerning the crashes in the proxy example:
* !rb_tree_is_init(...) assertion failures are likely caused by
multiple active_open_connected_callback() invocations for the same
connection
* the f_update_ooo_deq() SIGSEGV crash is possibly caused by late callbacks
for conne
I tried that, but
> On Jul 22, 2020, at 1:07 AM, Benoit Ganne (bganne) via lists.fd.io
> wrote:
>
> Hi Christian,
>
> Everything else being correct (VPP NUMA core placement vs NIC etc) and if you
> see a bottleneck on IO, you might need to set the NIC as 'Preferred IO' in
> the BIOS.
I tri
This particular error (!rb_tree_is_init) most likely stems from the
fact that the proxy's active_open_connected_callback() is invoked multiple
times for the same connection. I'm not sure it's supposed to happen this
way. There also seem to be other SVM FIFO issues
On Wed, Jul 22, 2020 at 4:2
Hi,
this SVM FIFO error looks like a crash that is mentioned in the ticket
related to a TCP timer bug [1].
I do sometimes get this exact error, too, it just happens less frequently
than the other kinds of the crash.
It can probably be reproduced using my test repo [2] that I have mentioned
in anoth
+1 ddio makes a first-order perf difference...
From: vpp-dev@lists.fd.io On Behalf Of Damjan Marion via
lists.fd.io
Sent: Wednesday, July 22, 2020 4:04 AM
To: Christian Hopps
Cc: vpp-dev
Subject: Re: [vpp-dev] AMD Epyc and vpp.
> On 22 Jul 2020, at 02:33, Christian Hopps
> mailto:cho...@ch
Hi,
sadly the patch apparently didn't work. It should have worked but for some
reason it didn't ...
On the bright side, I've made a test case [1] using fresh upstream VPP code
with no UPF that reproduces the issues I mentioned, including both timer
and TCP retransmit one along with some other poss
Hi Chuan,
From what I have seen, the DPDK driver should disable HW LLDP by default,
unless your XL710 firmware is out of date and has a bug where disabling HW
LLDP can lock up the rx path (fixed from NVM 6.01 (for X710/XL710/XXV710) /
3.33 (for X722) or later, according to the source).
If you use th
Hi Florin,
thanks for your reply.
Unfortunately, changing the "fifo size" to "4m" has not changed the
performance that much: I only got 2Gbps instead of 1.5Gbps.
Moreover, I have checked the "show errors" output and it looks like no
errors are shown [1].
The "show run" output looks fine, wh
> On 22 Jul 2020, at 02:33, Christian Hopps wrote:
>
> Hi vpp-dev,
>
> Has anyone done performance analysis with the new AMD epyc processors and VPP?
>
> Just naively running my normal build shows a 3GHz Epyc machine
> under-performing a 2.1GHz Intel Xeon.
AFAIK AMD doesn’t have DDIO so I
And any well-behaved UDP application would have to implement
draft-ietf-tsvwg-datagram-plpmtud.
Cheers,
Ole
> On 21 Jul 2020, at 18:57, Florin Coras wrote:
>
> Hi,
>
> By default udp computes its mss starting from a 1500 mtu. You can avoid this
> by either changing the default, i.e., in sta
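As a worked example of the default mentioned above (UDP deriving its maximum segment size from a 1500-byte MTU): subtract the IPv4 header (20 bytes) and the UDP header (8 bytes) to get the largest datagram payload that fits in one unfragmented packet. The constant names below are just for illustration.

```python
# Largest UDP payload that fits in one packet at the default 1500-byte MTU.
MTU = 1500
IPV4_HDR = 20   # IPv4 header without options
UDP_HDR = 8     # fixed UDP header size

udp_payload_max = MTU - IPV4_HDR - UDP_HDR
print(udp_payload_max)  # 1472
```

Changing the default MTU (or the per-connection setting, as suggested above) shifts this ceiling accordingly.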