Hi VPP maintainers,
Recently VPP upgraded its DPDK version to DPDK 21.08, which includes two
optimization patches [1][2] from the Arm DPDK team. With the mbuf-fast-free flag,
the two patches add a code path that accelerates mbuf freeing in the PMD TX path
for the i40e driver, which shows quite obvious
Hi,
There were two core files with the same signature, and I found that the crash is
happening in the VPP source code while getting the dump of the interface in the
failure case.
Please help me with the crash traces below.
Any help would be appreciated.
Thread 1 (Thread
Steven,
Thank you for the clarification.
Regards,
Srikanth
On Thu, Sep 16, 2021 at 10:31 AM Steven Luong (sluong)
wrote:
> Srikanth ,
>
>
>
> You are correct that dpdk bonding has been deprecated for a while. I don’t
> remember since when. The performance of VPP native bonding when compared
Srikanth,
You are correct that dpdk bonding has been deprecated for a while; I don't
remember since when. The performance of VPP native bonding compared to
dpdk bonding is about the same. With VPP native bonding, you have the additional
option of configuring LACP, which was not supported with dpdk bonding.
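For anyone evaluating this, a minimal sketch of what VPP native bonding with LACP looks like in the CLI (the interface names below are placeholders, not taken from this thread):
create bond mode lacp load-balance l34
bond add BondEthernet0 GigabitEthernet3/0/0
bond add BondEthernet0 GigabitEthernet3/0/1
set interface state GigabitEthernet3/0/0 up
set interface state GigabitEthernet3/0/1 up
set interface state BondEthernet0 up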
Hi Steven,
We are trying to evaluate the bonding driver functionality in VPP, and it seems
the DPDK bonding driver has been disabled by default from 19.08 onwards. Could
you share your experience with this?
Also, could you share the performance comparison between these two drivers, if
it's available?
Any
Hi
I do not have access to the system for the next few days, but as soon as I can
access it, I will send the requested configurations. For the startup.conf
file, I have configured something like this:
cpu {
main-core 0
corelist-workers 1-37
}
dpdk {
socket-mem 1024,1024
Hi Artem,
I’d suggest you replace:
set int unnumbered memif0/0 use loop0
with
enable ip6 interface memif0/0
or add something like this:
https://gerrit.fd.io/r/c/vpp/+/32770/4/src/vnet/interface.c#1701
/neale
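For illustration, a minimal sketch of the first suggestion applied on the second VPP instance; the memif name and role are assumed here, since the original vpp2 configuration was truncated:
create interface memif id 0 slave
set int state memif0/0 up
enable ip6 interface memif0/0
show int addr memif0/0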
From: vpp-dev@lists.fd.io on behalf of Artem Glazychev
via lists.fd.io
Date:
Hi all,
just a gentle reminder: As per release plan [0], the 21.10 RC1
milestone is less than a week away - 22 September (Wednesday) at 12:00
UTC.
The newly pulled branch will only be accepting bugfixes in
preparation for the release.
So, if you have any feature patches that need to go into
Yes, I am trying to ping from vpp1 to vpp2 via memif.
By the way, IPv4 works fine.
Hi Artem
What is your use case? Do you want to ping from vpp1 to vpp2 via memif, or something else?
*Regards*,
Mrityunjay Kumar.
Mobile: +91 - 9731528504
On Thu, Sep 16, 2021 at 4:55 PM Artem Glazychev
wrote:
> Hello,
>
> I have a problem - unnumbered IPv6 interface is not working.
>
> *Configuration*
>
>
Hello,
I have a problem: an unnumbered IPv6 interface is not working.
*Configuration*
For example, let's create memifs; one of them will be IPv6 unnumbered.
*vpp1:*
create interface memif id 0 master
set int ip address memif0/0 fc00::1/120
set int state memif0/0 up
*vpp2:*
Hi Mohsen,
We recently tested this, and we are getting good throughput for the GTP-U (non-IPsec) case.
We had a link limitation of 10 Gbps... We were doing line rate with a single CPU
and a single GTP-U tunnel.
Can you share your configuration details? Also:
show hardware-interfaces
and startup.conf
Thanks
Regards
Vemu
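For reference, a minimal startup.conf sketch of the kind of worker/queue tuning that is usually the first thing to check for throughput (the core numbers and PCI address below are placeholders, not values from this thread):
cpu {
  main-core 1
  corelist-workers 2-5
}
dpdk {
  dev 0000:3b:00.0 {
    num-rx-queues 4
  }
}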
On Thu,
*Regards*,
Mrityunjay Kumar.
Mobile: +91 - 9731528504
On Thu, Sep 16, 2021 at 1:43 PM Mohsen Meamarian
wrote:
> Hello to all dear friends,
>
> I have trouble getting high throughput from VPP. Where and how can I apply
> configurations to make the throughput as high as possible? I have
>
Hi,
Please help me with the crash traces below.
Thread 1 (Thread 0x7f48c0e388c0 (LWP 261705)):
#0 0x7f48b92ff1f7 in raise () from /usr/lib64/libc.so.6
#1 0x7f48b93008e8 in abort () from /usr/lib64/libc.so.6
#2 0x7f48b9c05a95 in __gnu_cxx::__verbose_terminate_handler() () from