Hi All,
We are facing an issue with X710 + SR-IOV.
Somehow packets are coming to queue 0 only. The same NIC's SR-IOV VFs are
shared between a VM using DPDK and another VM using Linux interfaces. Can
anyone provide any direction?
We are using VPP 21.06 with DPDK 21.02. I know it's old, but any hint
might help.
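In case it helps others looking at this: packets landing only on queue 0 on an i40e VF is often a sign that RSS was never enabled on the port. A minimal sketch of the relevant port configuration, using the DPDK 21.02 macro names (untested here, and VPP normally drives this through its dpdk plugin settings rather than directly):

```c
/* Sketch: request RSS across the VF's RX queues instead of the default
 * single-queue mode. The PF/host side must also allow multiple queues
 * on the VF for this to take effect. */
struct rte_eth_conf port_conf = {
    .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,   /* use the PMD's default RSS key */
            .rss_hf  = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
        },
    },
};
```

If mq_mode is left at its default, the i40e VF PMD will steer everything to queue 0 regardless of how many RX queues were set up.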
Thanks,
Hi,
We are using DPDK version *21.02* and facing an issue while doing a port
start of an MLX (ConnectX-4 Lx Virtual Function):
:0b:00.0 'MT27710 Family [ConnectX-4 Lx Virtual Function] 1016'
if=eth2 drv=mlx5_core unused=
*Photon kernel version* - 4.19.277-3.ph3
*Firmware version* -
Hi All,
Do we have a compatibility sheet for the E810 NIC?
Environment - Vmware
Driver - native vmware
DPDK versions
Thanks,
Chetan
Hello DPDK mentors,
We are facing an issue where the KNI core gets stuck after 4-5 days of
traffic. Can anybody guide us on this?
We have set isolcpus and rcu_nocbs in GRUB for the given cores.
Aug 22 01:38:29 kernel: INFO: rcu_sched self-detected stall on CPU { 2}
(t=60001 jiffies g=426362199 c=426362198 q=150932)
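For reference, the relevant kernel command line might look like the fragment below (the core list 2-5 is hypothetical; substitute the cores actually dedicated to the DPDK/KNI threads). nohz_full additionally reduces timer ticks on the isolated cores, which can matter when a busy-polling thread never yields and RCU callbacks pile up:

```shell
# /etc/default/grub fragment (core list 2-5 is a placeholder)
GRUB_CMDLINE_LINUX="... isolcpus=2-5 rcu_nocbs=2-5 nohz_full=2-5"
```

After editing, regenerate the grub config and reboot for the parameters to apply.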
Hello Everyone,
Can anybody please share the compatibility matrix for i40en (the VMware
native driver) with DPDK versions?
I am trying an XL710 10G dual-port NIC but facing issues.
2024/03/15 10:06:12:842 notice dpdk iavf_configure_queues():
RXDID[22] is not supported, request defa
Hello Everyone,
We can run a DPDK app with KNI in a container with the below options -
* --network host --privileged* -v /sys/bus/pci/drivers:/sys/bus/pci/drivers
-v /sys/kernel/mm/hugepages:/sys/kernel/mm/hugepages -v
/sys/devices/system/node:/sys/devices/system/node -v /dev:/dev
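Putting those options together, a hypothetical full invocation might look like this (the image name and application path are placeholders, not from the original mail):

```shell
# Placeholder image/app names; the mounts mirror the options listed above.
docker run --network host --privileged \
  -v /sys/bus/pci/drivers:/sys/bus/pci/drivers \
  -v /sys/kernel/mm/hugepages:/sys/kernel/mm/hugepages \
  -v /sys/devices/system/node:/sys/devices/system/node \
  -v /dev:/dev \
  my-dpdk-image /app/dpdk-kni-app
```

--network host is needed so the KNI netdevs created by the kernel module are visible in the host namespace; --privileged grants the device and module access KNI requires.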
Is there any way we cou
--config="(0,13,13,13)"
Thanks,
Chetan
On Fri, May 28, 2021 at 11:59 AM chetan bhasin
wrote:
> Hello Everyone,
>
> We can run dpdk app with KNI on container with below options -
> * --network host --privileged* -v
> /sys/bus/pci/drivers:/sys/bus/pci/drivers -v
> /
Hi,
I am using DPDK version 17.11.4. Does anybody have an idea whether DPDK
takes advantage of transparent hugepages when THP is in madvise mode?
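For what it's worth, DPDK's data-path memory is normally reserved from explicit hugepages (hugetlbfs) rather than THP, so the THP policy mainly affects ordinary heap allocations in the process. A quick way to see the active policy (the bracketed value is the current mode):

```shell
# Show the kernel's THP policy; the bracketed entry is active,
# e.g. "always [madvise] never".
cat /sys/kernel/mm/transparent_hugepage/enabled
```

In "madvise" mode the kernel only backs regions with huge pages when the application has called madvise(MADV_HUGEPAGE) on them.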
Thanks,
Chetan Bhasin
Thanks Anatoly
On Tue, Jan 8, 2019, 16:01 Burakov, Anatoly wrote:
> On 02-Jan-19 3:31 PM, chetan bhasin wrote:
> > Hi,
> >
> > I am using DPDK 17.11.4 version . Do anybody have idea that DPDK is using
> > benefit of Transparent huge-pages in case of Madvise.
> >
> >
Hi Dpdk Champs,
I am facing an issue while bringing up bonding on a VM with vmxnet3
interfaces. The below error appears in the log -
PMD: bond_ethdev_start(1959) - bonded port (2) failed to reconfigure slave
device (0).
Please suggest.
Thanks,
Chetan Bhasin
Can anybody suggest? Stuck right now!
On Sat, Oct 6, 2018, 17:02 chetan bhasin wrote:
> Hi Dpdk Champs,
>
> I am facing an issue while bringing bonding on VM having vmxnet3
> interfaces.
>
> Below error is coming in the log -
>
> PMD: bond_ethdev_start(1959) - b
PMD_INIT_LOG(ERR, "SCTP checksum offload not supported");
return -EINVAL;
}
Regards,
Chetan Bhasin
On Tue, Oct 9, 2018 at 7:11 AM Chas Williams <3ch...@gmail.com> wrote:
> Any other error messages? There's really only one path out of
> slave_configu
Thanks a lot
Let me check in vpp's DPDK plugin what value is passed.
Thanks,
Chetan Bhasin
On Tue, Oct 9, 2018 at 1:35 PM Ferruh Yigit wrote:
> On 10/9/2018 5:35 AM, chetan bhasin wrote:
> > Thanks for your reply .
> >
> > The issue coming in function &
for bonded device
eth_bond0
Oct 31 13:13:07 - INFO - vdev_probe(): failed to initialize eth_bond0
device
Thanks,
Chetan Bhasin
Hi Matan,
Thanks for your reply.
I have not removed the Mellanox interfaces; those were still up in the
Linux domain.
Thanks,
Chetan Bhasin
On Thu, Nov 1, 2018 at 3:20 PM Matan Azrad wrote:
> Hi Chetan
>
> From: chetan bhasin
> > Hi,
> >
> > We are using Dpdk 17.11.4 a
. Is bonding just
not possible using the two interfaces on this device?
Thanks,
Chetan Bhasin
Hi,
Could anybody please provide a link/reference that mentions X722 NIC
driver version and firmware version compatibility with DPDK 17.11.5?
This would be a great help.
Thanks,
Chetan
Hi,
We are using DPDK underneath VPP. We are facing an issue when we increase
buffers from 100k to 300k after upgrading the VPP version (18.01 -> 19.08).
Per the log, the following error is seen:
net_mlx5: port %u unable to find virtually contiguous chunk for address
(%p). rte_memseg_contig_walk() failed.\n%.0
Hi,
We are using VPP, which internally uses DPDK.
*With VPP 18.01 (DPDK 17.11.4)*: whenever we set the interface MTU to 1500,
we can receive 1500-byte untagged and tagged packets.
*With VPP 19.0.5 (DPDK 19.05)*: whenever we set the MTU to 1500, we have
seen an issue with VLAN packets; those are not e
ailover.
Appreciate your response .
Thanks,
Chetan Bhasin
ody please provide a direction towards which patch I should look
into, as there are lots of changes between dpdk-17.11.4 and dpdk-18.11.2.
I have tried some patches related to i40e driver but no success yet.
Thanks & Regards,
Chetan Bhasin
On Thu, Jul 4, 2019 at 7:51 PM chetan bhasin
w
Hello folks,
I am using DPDK version 17.11.4 on a bare-metal server with an Intel X722
NIC with firmware version 4.0.
The problem is that switch-over is not working; it looks like DPDK is not
able to get the link-down trigger.
Can anybody please guide us?
Thanks,
Chetan Bhasin
Hi Souvik,
I am also facing this issue on DPDK 17.11.4 but am not sure whether it is
because of fragmented traffic or not.
Were you able to get any direction regarding this issue? Can you please
guide me?
Thanks,
Chetan Bhasin
On Sat, Jun 15, 2019 at 12:00 AM Dey, Souvik wrote:
> Hi
Hello Everyone,
I have a use case where one of the buffers in a chain has a packet length
of 0. Can anybody suggest whether it will cause a problem while doing TX?
Thanks,
Chetan Bhasin
Hello DevTeam members,
We have a DPDK application where we are using multiple KNIs as slaves
under a Linux virtual interface which is acting as master. When the process
got SIGSEGV, we see that the process remains in a defunct state and the
Linux kernel is trying to release those slave interfaces in ba