Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-21 Thread Damjan Marion via Lists.Fd.Io


> On 21 Feb 2020, at 11:48, chetan bhasin  wrote:
> 
> Thanks a lot Damjan for quick response !
> 
> We will try latest stable/1908 that has the given patch.
> 
> With Mellanox Technologies MT27710 Family [ConnectX-4 Lx] :
> 1) stable/vpp1908 : If we configure buffers (250k) and have 2048  huge pages 
> of 2MB (total 4GB), we see issue with traffic. "l3 mac mismatch"
> 2) stable/vpp1908 :If we configure 4 huge pages of 1GB via grub parameters , 
> vpp works even with 400K buffers.
> 
> Could you please guide us what's the best approach here ?
> 
> For point 1) we see following logs in one of the vpp thread -
> 
> #5  0x7f3375afbae2 in rte_vlog (level=, logtype=77,
> format=0x7f3376768df8 "net_mlx5: port %u unable to find virtually 
> contiguous chunk for address (%p). rte_memseg_contig_walk() failed.\n%.0s", 
> ap=ap@entry=0x7f3379c4fac8)
> at 
> /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:427
> #6  0x7f3375ab2c12 in rte_log (level=level@entry=5, logtype= out>,
> format=format@entry=0x7f3376768df8 "net_mlx5: port %u unable to find 
> virtually contiguous chunk for address (%p). rte_memseg_contig_walk() 
> failed.\n%.0s")
> at 
> /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:443
> #7  0x7f3375dc47fa in mlx5_mr_create_primary 
> (dev=dev@entry=0x7f3376e9d940 ,
> entry=entry@entry=0x7ef5c00d02ca, addr=addr@entry=69384463936)
> at 
> /nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_mr.c:627

No idea about the mlx5 PMD; it is a bit special, and we encourage people to use the
rdma-core plugin instead. Currently its performance is lower, but
we will have the DirectVerbs code merged soon...

— 
Damjan
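
For anyone who wants to try the rdma-core plugin on this NIC, a minimal sketch
(assuming the rdma plugin is built into this VPP image; the host interface name and
the PCI id are placeholders - verify the ConnectX-4 Lx id with lspci):

    # startup.conf: keep the port away from the dpdk plugin
    dpdk {
      blacklist 15b3:1015
    }

    # VPP CLI: create the native rdma interface on top of the kernel netdev
    vpp# create interface rdma host-if enp94s0f0 name rdma-0
    vpp# set interface state rdma-0 up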

> 
> 
> Thanks,
> Chetan
> 
> 
> On Fri, Feb 21, 2020 at 3:13 PM Damjan Marion  <mailto:dmar...@me.com>> wrote:
> 
> 
>> On 21 Feb 2020, at 10:31, chetan bhasin > <mailto:chetan.bhasin...@gmail.com>> wrote:
>> 
>> Hi Nitin,Damjan,
>> 
>> For 40G XL710 buffers : 537600  (500K+)
>> 1) vpp 19.08 (sept 2019) : it worked with vpp 19.08 (sept release) after 
>> removing intel_iommu=on from Grub params.
>> 2) stable/vpp2001(latest) :  It worked even we have "intel_iommu=on" in Grub 
>> params
>> 
>> 
>> On stable/vpp2001 , I found a check-in before which it did not work with " 
>> intel_iommu=on " as grub params, but after the below change-list it work 
>> even with grub params.
>> commit 45495480c8165090722389b08075df06ccfcd7ef
>> Author: Yulong Pei mailto:yulong@intel.com>>
>> Date:   Thu Oct 17 18:41:52 2019 +0800
>> vlib: linux: fix wrong iommu_group value issue when using dpdk-plugin
>> 
>> Before above change in vpp 20.01 , when we bring up vpp with vfio-pci, vpp 
>> change  /sys/module/vfio/parameters/enable_unsafe_noiommu_mode to "Y" , and 
>> we face issue with traffic  but after the change  sys file value remain as  
>> "N"  in /sys/module/vfio/parameters/enable_unsafe_noiommu_mode and traffic 
>> works fine.
>> 
>> As it is bare metal so we can remove intel_iommu=on from grub to make it 
>> work without any patches . Any suggestions?
> 
> IOMMU gives you following:
>  - protection and security - it prevents misbehaving NIC to read/write 
> intentionally or unintentionally memory it is not supposed to access
>  - VA -> PA translation
> 
> If you are running bare-metal, single tenant security is probably not 
> concern, but still it can protect NIC from doing something bad eventually 
> because of driver issues.
> VA -> PA translation helps with performance, as driver doesn’t need to lookup 
> for PA when submitting descriptors but this is not critical perf issue.
> 
> So it is up to you to decide, work without IOMMU or patch your old VPP 
> version….
> 
>> 
>> Regards,
>> Chetan
>> 
>> On Tue, Feb 18, 2020 at 1:07 PM Nitin Saxena > <mailto:nsax...@marvell.com>> wrote:
>> HI Chethan,
>> 
>>
>> 
>> Your packet trace shows that packet data is all 0 and that’s why you are 
>> running into l3 mac mismatch.
>> 
>> I am guessing something messed with IOMMU due to which translation is not 
>> happening. Although packet length is correct.
>> 
>> You can try out AVF plugin to iron out where 

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-21 Thread chetan bhasin
Thanks a lot Damjan for quick response !

We will try latest stable/1908 that has the given patch.

*With Mellanox Technologies MT27710 Family [ConnectX-4 Lx]:*
1) stable/vpp1908: if we configure 250k buffers and have 2048 huge pages
of 2MB (4GB total), we see an issue with traffic: "l3 mac mismatch"
2) stable/vpp1908: if we configure 4 huge pages of 1GB via grub parameters,
vpp works even with 400K buffers.
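
As a reference point, a rough sketch of the two configurations being compared,
using the standard startup.conf buffers section and kernel boot parameters
(values taken from the observations above):

    # 1) 250k buffers backed by 2 MB hugepages reserved at boot
    #    grub: default_hugepagesz=2M hugepagesz=2M hugepages=2048
    buffers { buffers-per-numa 250000 }

    # 2) 1 GB hugepages, larger pool
    #    grub: default_hugepagesz=1G hugepagesz=1G hugepages=4
    buffers { buffers-per-numa 400000 }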

Could you please guide us on what the best approach is here?

For point 1), we see the following logs in one of the vpp threads -

#5  0x7f3375afbae2 in rte_vlog (level=, logtype=77,
format=0x7f3376768df8 *"net_mlx5: port %u unable to find virtually
contiguous chunk for address (%p). rte_memseg_contig_walk() failed.\n%.0s",*
ap=ap@entry=0x7f3379c4fac8)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:427
#6  0x7f3375ab2c12 in rte_log (level=level@entry=5, logtype=,
format=format@entry=0x7f3376768df8 "net_mlx5: port %u unable to find
virtually contiguous chunk for address (%p). rte_memseg_contig_walk()
failed.\n%.0s")
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/lib/librte_eal/common/eal_common_log.c:443
#7  0x7f3375dc47fa in mlx5_mr_create_primary (dev=dev@entry=0x7f3376e9d940
,
entry=entry@entry=0x7ef5c00d02ca, addr=addr@entry=69384463936)
at
/nfs-bfs/workspace/gkeown/integra/mainline/ngp/mainline/third-party/vpp/vanilla_1908/build-root/build-vpp-native/external/dpdk-19.05/drivers/net/mlx5/mlx5_mr.c:627


Thanks,
Chetan


On Fri, Feb 21, 2020 at 3:13 PM Damjan Marion  wrote:

>
>
> On 21 Feb 2020, at 10:31, chetan bhasin 
> wrote:
>
> Hi Nitin,Damjan,
>
> For 40G *XL710* buffers : 537600  (500K+)
> 1) vpp 19.08 (sept 2019) : it worked with vpp 19.08 (sept release) after
> removing intel_iommu=on from Grub params.
> 2) stable/vpp2001(latest) :  It worked even we have "intel_iommu=on" in
> Grub params
>
>
> On stable/vpp2001 , I found a check-in before which it did not work with "
> intel_iommu=on " as grub params, but after the below change-list it work
> even with grub params.
> commit 45495480c8165090722389b08075df06ccfcd7ef
> Author: Yulong Pei 
> Date:   Thu Oct 17 18:41:52 2019 +0800
> vlib: linux: fix wrong iommu_group value issue when using dpdk-plugin
>
> Before above change in vpp 20.01 , when we bring up vpp with vfio-pci, vpp
> change  /sys/module/vfio/parameters/enable_unsafe_noiommu_mode to "Y" ,
> and we face issue with traffic  but after the change  sys file value remain
> as  "N"  in /sys/module/vfio/parameters/enable_unsafe_noiommu_mode and
> traffic works fine.
>
> As it is bare metal so we can remove intel_iommu=on from grub to make it
> work without any patches . Any suggestions?
>
>
> IOMMU gives you following:
>  - protection and security - it prevents misbehaving NIC to read/write
> intentionally or unintentionally memory it is not supposed to access
>  - VA -> PA translation
>
> If you are running bare-metal, single tenant security is probably not
> concern, but still it can protect NIC from doing something bad eventually
> because of driver issues.
> VA -> PA translation helps with performance, as driver doesn’t need to
> lookup for PA when submitting descriptors but this is not critical perf
> issue.
>
> So it is up to you to decide, work without IOMMU or patch your old VPP
> version….
>
>
> Regards,
> Chetan
>
> On Tue, Feb 18, 2020 at 1:07 PM Nitin Saxena  wrote:
>
>> HI Chethan,
>>
>>
>>
>> Your packet trace shows that packet data is all 0 and that’s why you are
>> running into l3 mac mismatch.
>>
>> I am guessing something messed with IOMMU due to which translation is not
>> happening. Although packet length is correct.
>>
>> You can try out AVF plugin to iron out where problem exists, in dpdk
>> plugin or vlib
>>
>>
>>
>> Thanks,
>>
>> Nitin
>>
>>
>>
>> *From:* chetan bhasin 
>> *Sent:* Tuesday, February 18, 2020 12:50 PM
>> *To:* me 
>> *Cc:* Nitin Saxena ; vpp-dev 
>> *Subject:* Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>>
>>
>>
>> Hi,
>>
>> One more finding related to intel nic and number of buffers (537600)
>>
>>
>>
>> vpp branch   driver   card   buffers   Traffic   Err
>> stable/1908  uio_pci_genric
>&

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-21 Thread Damjan Marion via Lists.Fd.Io


> On 21 Feb 2020, at 10:31, chetan bhasin  wrote:
> 
> Hi Nitin,Damjan,
> 
> For 40G XL710 buffers : 537600  (500K+)
> 1) vpp 19.08 (sept 2019) : it worked with vpp 19.08 (sept release) after 
> removing intel_iommu=on from Grub params.
> 2) stable/vpp2001(latest) :  It worked even we have "intel_iommu=on" in Grub 
> params
> 
> 
> On stable/vpp2001 , I found a check-in before which it did not work with " 
> intel_iommu=on " as grub params, but after the below change-list it work even 
> with grub params.
> commit 45495480c8165090722389b08075df06ccfcd7ef
> Author: Yulong Pei mailto:yulong@intel.com>>
> Date:   Thu Oct 17 18:41:52 2019 +0800
> vlib: linux: fix wrong iommu_group value issue when using dpdk-plugin
> 
> Before above change in vpp 20.01 , when we bring up vpp with vfio-pci, vpp 
> change  /sys/module/vfio/parameters/enable_unsafe_noiommu_mode to "Y" , and 
> we face issue with traffic  but after the change  sys file value remain as  
> "N"  in /sys/module/vfio/parameters/enable_unsafe_noiommu_mode and traffic 
> works fine.
> 
> As it is bare metal so we can remove intel_iommu=on from grub to make it work 
> without any patches . Any suggestions?

The IOMMU gives you the following:
 - protection and security - it prevents a misbehaving NIC from reading/writing,
intentionally or unintentionally, memory it is not supposed to access
 - VA -> PA translation

If you are running bare-metal, single-tenant, security is probably not a concern,
but it can still protect against the NIC eventually doing something bad because of
driver issues.
VA -> PA translation helps with performance, as the driver doesn’t need to look up
the PA when submitting descriptors, but this is not a critical perf issue.

So it is up to you to decide: work without the IOMMU, or patch your old VPP version….
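
A quick way to see what a given box is actually running with (standard Linux
proc/sysfs paths; the PCI address is the XL710 one from later in this thread and is
only an example):

    # was the IOMMU enabled on the kernel command line?
    cat /proc/cmdline | tr ' ' '\n' | grep -i iommu

    # did the kernel actually initialise it?
    dmesg | grep -iE 'DMAR|IOMMU' | head

    # which IOMMU group does the NIC sit in?
    readlink /sys/bus/pci/devices/0000:12:00.0/iommu_group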

> 
> Regards,
> Chetan
> 
> On Tue, Feb 18, 2020 at 1:07 PM Nitin Saxena  <mailto:nsax...@marvell.com>> wrote:
> HI Chethan,
> 
>
> 
> Your packet trace shows that packet data is all 0 and that’s why you are 
> running into l3 mac mismatch.
> 
> I am guessing something messed with IOMMU due to which translation is not 
> happening. Although packet length is correct.
> 
> You can try out AVF plugin to iron out where problem exists, in dpdk plugin 
> or vlib
> 
>
> 
> Thanks,
> 
> Nitin
> 
>
> 
> From: chetan bhasin  <mailto:chetan.bhasin...@gmail.com>> 
> Sent: Tuesday, February 18, 2020 12:50 PM
> To: me mailto:chetan.bhasin...@gmail.com>>
> Cc: Nitin Saxena mailto:nsax...@marvell.com>>; vpp-dev 
> mailto:vpp-dev@lists.fd.io>>
> Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> 
>
> 
> Hi,
> 
> One more finding related to intel nic and number of buffers (537600)
> 
>
> 
> vpp branch   driver          card        buffers  Traffic      Err
> stable/1908  uio_pci_genric  X722(10G)   537600   Working
> stable/1908  vfio-pci        XL710(40G)  537600   Not Working  l3 mac mismatch
> stable/2001  uio_pci_genric  X722(10G)   537600   Working
> stable/2001  vfio-pci        XL710(40G)  537600   Working
>
> 
> Thanks,
> 
> Chetan
> 
>
> 
> On Mon, Feb 17, 2020 at 7:17 PM chetan bhasin via Lists.Fd.Io
>  mailto:gmail@lists.fd.io>> wrote:
> 
> Hi Nitin,
> 
>
> 
> https://github.com/FDio/vpp/commits/stable/2001/src/vlib
> As per stable/2001 branch , the given change is checked-in around Oct 28 2019.
> 
>
> 
> df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of 
> b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
> Yes (branch vpp 20.01)
> 
>
> 
> Thanks,
> 
> Chetan Bhasin
> 
>
> 
> On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena  <mailto:nsax...@marvell.com>> wrote:
> 
> Hi Damjan,
> 
> >> if you read Chetan’s email bellow, you will see that this one is already 
> >> excluded…
> Sorry I missed that part. Afte

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-21 Thread chetan bhasin
Hi Nitin,Damjan,

For 40G *XL710*, buffers: 537600 (500K+)
1) vpp 19.08 (Sept 2019 release): it worked after removing intel_iommu=on from
the grub params.
2) stable/vpp2001 (latest): it worked even with "intel_iommu=on" in the grub
params.


On stable/vpp2001, I found a check-in before which it did not work with
"intel_iommu=on" in the grub params, but after the change-list below it works
even with that grub param.
commit 45495480c8165090722389b08075df06ccfcd7ef
Author: Yulong Pei 
Date:   Thu Oct 17 18:41:52 2019 +0800
vlib: linux: fix wrong iommu_group value issue when using dpdk-plugin

Before the above change in vpp 20.01, when we bring up vpp with vfio-pci, vpp
changes /sys/module/vfio/parameters/enable_unsafe_noiommu_mode to "Y" and we
see an issue with traffic, but after the change the value remains "N" in
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode and traffic works fine.

As it is bare metal, we can remove intel_iommu=on from grub to make it work
without any patches. Any suggestions?
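
A simple check that distinguishes the two behaviours described above (sketch; run
before and after starting VPP with the device bound to vfio-pci):

    cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    # pre-fix 20.01 code flips this to "Y" at startup; with the fix it stays "N"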

Regards,
Chetan

On Tue, Feb 18, 2020 at 1:07 PM Nitin Saxena  wrote:

> HI Chethan,
>
>
>
> Your packet trace shows that packet data is all 0 and that’s why you are
> running into l3 mac mismatch.
>
> I am guessing something messed with IOMMU due to which translation is not
> happening. Although packet length is correct.
>
> You can try out AVF plugin to iron out where problem exists, in dpdk
> plugin or vlib
>
>
>
> Thanks,
>
> Nitin
>
>
>
> *From:* chetan bhasin 
> *Sent:* Tuesday, February 18, 2020 12:50 PM
> *To:* me 
> *Cc:* Nitin Saxena ; vpp-dev 
> *Subject:* Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>
>
>
> Hi,
>
> One more finding related to intel nic and number of buffers (537600)
>
>
>
> vpp branch     driver          card          buffers   Traffic        Err
> stable/1908    uio_pci_genric  X722(10G)     537600    Working
> *stable/1908*  *vfio-pci*      *XL710(40G)*  *537600*  *Not Working*  *l3 mac mismatch*
> stable/2001    uio_pci_genric  X722(10G)     537600    Working
> stable/2001    vfio-pci        XL710(40G)    537600    Working
>
> Thanks,
>
> Chetan
>
>
>
> On Mon, Feb 17, 2020 at 7:17 PM chetan bhasin via Lists.Fd.Io
>  wrote:
>
> Hi Nitin,
>
>
>
> https://github.com/FDio/vpp/commits/stable/2001/src/vlib
>
> As per stable/2001 branch , the given change is checked-in around Oct 28
> 2019.
>
>
>
> df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of
> b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
> Yes (branch vpp 20.01)
>
>
>
> Thanks,
>
> Chetan Bhasin
>
>
>
> On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena  wrote:
>
> Hi Damjan,
>
> >> if you read Chetan’s email bellow, you will see that this one is
> already excluded…
> Sorry I missed that part. After seeing diffs between stable/1908 and
> stable/2001, commit: b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
> visible git commit in dpdk plugin which is playing with mempool buffers. If
> it does not solve the problem then I suspect problem lies outside dpdk
> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08
>
> Hi Chetan,
> > > 3) I took previous commit of  "vlib: don't use vector for keeping
> buffer
> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
> > Everything looks fine with Buffers 537600.
> In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is
> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
>
> Thanks,
> Nitin
> > -Original Message-
> > From: Damjan Marion 
> > Sent: Monday, February 17, 2020 3:47 PM
> > To: Nitin Saxena 
> > Cc: chetan bhasin ; vpp-dev@lists.fd.io
> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> >
> >
> > Dear Nitin,
> >
> > if you read Chetan’s email bellow, you will see that this one is already
> > exclu

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Nitin Saxena
HI Chethan,

Your packet trace shows that the packet data is all 0, and that is why you are
running into "l3 mac mismatch".
I am guessing something is messed up with the IOMMU, due to which translation is
not happening, although the packet length is correct.
You could try out the AVF plugin to iron out where the problem exists: in the
dpdk plugin or in vlib.

Thanks,
Nitin
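
For reference, a rough sketch of setting up such an AVF test (the avf plugin drives
an i40e virtual function, so a VF has to exist first; PCI addresses and the VF
number are placeholders):

    # create one VF on the XL710 PF and bind it to vfio-pci
    echo 1 > /sys/bus/pci/devices/0000:12:00.0/sriov_numvfs
    dpdk-devbind.py -b vfio-pci 0000:12:02.0

    # in VPP
    vpp# create interface avf 0000:12:02.0
    vpp# show interface        # pick up the new avf-... name and set it up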

From: chetan bhasin 
Sent: Tuesday, February 18, 2020 12:50 PM
To: me 
Cc: Nitin Saxena ; vpp-dev 
Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

Hi,
One more finding related to intel nic and number of buffers (537600)

vpp branch   driver          card        buffers  Traffic      Err
stable/1908  uio_pci_genric  X722(10G)   537600   Working
stable/1908  vfio-pci        XL710(40G)  537600   Not Working  l3 mac mismatch
stable/2001  uio_pci_genric  X722(10G)   537600   Working
stable/2001  vfio-pci        XL710(40G)  537600   Working



Thanks,
Chetan

On Mon, Feb 17, 2020 at 7:17 PM chetan bhasin via Lists.Fd.Io
 mailto:gmail@lists.fd.io>> wrote:
Hi Nitin,

https://github.com/FDio/vpp/commits/stable/2001/src/vlib
As per stable/2001 branch , the given change is checked-in around Oct 28 2019.

df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of 
b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
Yes (branch vpp 20.01)

Thanks,
Chetan Bhasin

On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena 
mailto:nsax...@marvell.com>> wrote:
Hi Damjan,

>> if you read Chetan’s email bellow, you will see that this one is already 
>> excluded…
Sorry, I missed that part. Looking at the diffs between stable/1908 and
stable/2001, commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only visible
git commit in the dpdk plugin that touches the mempool buffers. If it does not
solve the problem, then I suspect the problem lies outside the dpdk plugin. I
am guessing DPDK-19.08 is being used here with VPP-19.08
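
To confirm which DPDK the running image was actually built against, something like
this should be enough (assuming the dpdk plugin is loaded):

    vpp# show dpdk version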

Hi Chetan,
> > 3) I took previous commit of  "vlib: don't use vector for keeping buffer
> indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
> Everything looks fine with Buffers 537600.
In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is previous 
commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?

Thanks,
Nitin
> -Original Message-
> From: Damjan Marion mailto:dmar...@me.com>>
> Sent: Monday, February 17, 2020 3:47 PM
> To: Nitin Saxena mailto:nsax...@marvell.com>>
> Cc: chetan bhasin 
> mailto:chetan.bhasin...@gmail.com>>; 
> vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
> Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>
>
> Dear Nitin,
>
> if you read Chetan’s email bellow, you will see that this one is already
> excluded…
>
> Also, it will not be easy to explain how this patch blows tx function in dpdk
> mlx5 pmd…
>
> —
> Damjan
>
> > On 17 Feb 2020, at 11:12, Nitin Saxena 
> > mailto:nsax...@marvell.com>> wrote:
> >
> > Hi Prashant/Chetan,
> >
> > I would try following change first to solve the problem in 1908
> >
> > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > Author: Damjan Marion mailto:damar...@cisco.com>>
> > Date:   Tue Mar 12 18:14:15 2019 +0100
> >
> > vlib: don't use vector for keeping buffer indices in
> >
> > Type: refactor
> >
> > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > Signed-off-by: Damjan Marion 
> > damar...@cisco.com<mailto:damar...@cisco.com>
> >
> > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> branch to stable/1908
> >
> > Thanks,
> > Nitin
> >
> > From: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io> 
> > mailto:vpp-dev@lists.fd.io>> On Behalf Of Damjan
> Marion via Lists.Fd.Io
> > Sent: Monday, February 17, 2020 1:52 PM
> > To: chetan bhasin 
> > mailto:chetan.bhasin...@gmail.com>>
> > Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
> > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> >
> > External Email
> >
> > On 17 Feb 2020

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread chetan bhasin
Hi,
One more finding related to intel nic and number of buffers (537600)

vpp branch     driver          card          buffers   Traffic        Err
stable/1908    uio_pci_genric  X722(10G)     537600    Working
*stable/1908*  *vfio-pci*      *XL710(40G)*  *537600*  *Not Working*  *l3 mac mismatch*
stable/2001    uio_pci_genric  X722(10G)     537600    Working
stable/2001    vfio-pci        XL710(40G)    537600    Working

Thanks,
Chetan

On Mon, Feb 17, 2020 at 7:17 PM chetan bhasin via Lists.Fd.Io
 wrote:

> Hi Nitin,
>
> https://github.com/FDio/vpp/commits/stable/2001/src/vlib
> As per stable/2001 branch , the given change is checked-in around Oct 28
> 2019.
>
> df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of
> b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
> Yes (branch vpp 20.01)
>
> Thanks,
> Chetan Bhasin
>
> On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena  wrote:
>
>> Hi Damjan,
>>
>> >> if you read Chetan’s email bellow, you will see that this one is
>> already excluded…
>> Sorry I missed that part. After seeing diffs between stable/1908 and
>> stable/2001, commit: b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
>> visible git commit in dpdk plugin which is playing with mempool buffers. If
>> it does not solve the problem then I suspect problem lies outside dpdk
>> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08
>>
>> Hi Chetan,
>> > > 3) I took previous commit of  "vlib: don't use vector for keeping
>> buffer
>> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
>> > Everything looks fine with Buffers 537600.
>> In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is
>> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
>>
>> Thanks,
>> Nitin
>> > -Original Message-----
>> > From: Damjan Marion 
>> > Sent: Monday, February 17, 2020 3:47 PM
>> > To: Nitin Saxena 
>> > Cc: chetan bhasin ; vpp-dev@lists.fd.io
>> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>> >
>> >
>> > Dear Nitin,
>> >
>> > if you read Chetan’s email bellow, you will see that this one is already
>> > excluded…
>> >
>> > Also, it will not be easy to explain how this patch blows tx function
>> in dpdk
>> > mlx5 pmd…
>> >
>> > —
>> > Damjan
>> >
>> > > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
>> > >
>> > > Hi Prashant/Chetan,
>> > >
>> > > I would try following change first to solve the problem in 1908
>> > >
>> > > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
>> > > Author: Damjan Marion 
>> > > Date:   Tue Mar 12 18:14:15 2019 +0100
>> > >
>> > > vlib: don't use vector for keeping buffer indices in
>> > >
>> > > Type: refactor
>> > >
>> > > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
>> > > Signed-off-by: Damjan Marion damar...@cisco.com
>> > >
>> > > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
>> > branch to stable/1908
>> > >
>> > > Thanks,
>> > > Nitin
>> > >
>> > > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
>> > Marion via Lists.Fd.Io
>> > > Sent: Monday, February 17, 2020 1:52 PM
>> > > To: chetan bhasin 
>> > > Cc: vpp-dev@lists.fd.io
>> > > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
>> > >
>> > > External Email
>> > >
>> > > On 17 Feb 2020, at 07:37, chetan bhasin 
>> > wrote:
>> > >
>> > > Bottom line is stable/vpp 908 does not work with higher number of
>> buffers
>> > but stable/vpp2001 does. Could you please advise which area we can look
>> at
>> > ,as it would be difficult for us to move to vpp2001 at this time.
>> > >
>> > > I really don’t have idea what caused this problem to disappear.
>> > > You may try to use “git bisect” to find out which commit fixed it….
>> > >
>> > > —
>> > > Damjan
>> > >
>> > >
>> > >
>> > > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
>> >  wrote:
>> > > Thanks Damjan for the reply!
>> > >
>> > > Following are my observations on Intel X710/XL710 pci-
>> > > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
>> > ethernet-input l

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread chetan bhasin
Hi Nitin,

https://github.com/FDio/vpp/commits/stable/2001/src/vlib
As per stable/2001 branch , the given change is checked-in around Oct 28
2019.

df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of
b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
Yes (branch vpp 20.01)

Thanks,
Chetan Bhasin

On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena  wrote:

> Hi Damjan,
>
> >> if you read Chetan’s email bellow, you will see that this one is
> already excluded…
> Sorry I missed that part. After seeing diffs between stable/1908 and
> stable/2001, commit: b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
> visible git commit in dpdk plugin which is playing with mempool buffers. If
> it does not solve the problem then I suspect problem lies outside dpdk
> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08
>
> Hi Chetan,
> > > 3) I took previous commit of  "vlib: don't use vector for keeping
> buffer
> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
> > Everything looks fine with Buffers 537600.
> In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is
> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
>
> Thanks,
> Nitin
> > -Original Message-
> > From: Damjan Marion 
> > Sent: Monday, February 17, 2020 3:47 PM
> > To: Nitin Saxena 
> > Cc: chetan bhasin ; vpp-dev@lists.fd.io
> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> >
> >
> > Dear Nitin,
> >
> > if you read Chetan’s email bellow, you will see that this one is already
> > excluded…
> >
> > Also, it will not be easy to explain how this patch blows tx function in
> dpdk
> > mlx5 pmd…
> >
> > —
> > Damjan
> >
> > > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> > >
> > > Hi Prashant/Chetan,
> > >
> > > I would try following change first to solve the problem in 1908
> > >
> > > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > > Author: Damjan Marion 
> > > Date:   Tue Mar 12 18:14:15 2019 +0100
> > >
> > > vlib: don't use vector for keeping buffer indices in
> > >
> > > Type: refactor
> > >
> > > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > > Signed-off-by: Damjan Marion damar...@cisco.com
> > >
> > > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> > branch to stable/1908
> > >
> > > Thanks,
> > > Nitin
> > >
> > > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
> > Marion via Lists.Fd.Io
> > > Sent: Monday, February 17, 2020 1:52 PM
> > > To: chetan bhasin 
> > > Cc: vpp-dev@lists.fd.io
> > > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> > >
> > > External Email
> > >
> > > On 17 Feb 2020, at 07:37, chetan bhasin 
> > wrote:
> > >
> > > Bottom line is stable/vpp 908 does not work with higher number of
> buffers
> > but stable/vpp2001 does. Could you please advise which area we can look
> at
> > ,as it would be difficult for us to move to vpp2001 at this time.
> > >
> > > I really don’t have idea what caused this problem to disappear.
> > > You may try to use “git bisect” to find out which commit fixed it….
> > >
> > > —
> > > Damjan
> > >
> > >
> > >
> > > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
> >  wrote:
> > > Thanks Damjan for the reply!
> > >
> > > Following are my observations on Intel X710/XL710 pci-
> > > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
> > ethernet-input l3 mac mismatch"
> > > With Buffers 537600
> > > vpp# show buffers
> > > Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> > > default-numa-0    0     0   2496    2048    537600  510464   1319   25817
> > > default-numa-1    1     1   2496    2048    537600  528896    390    8314
> > >
> > > vpp# show hardware-interfaces
> > >   NameIdx   Link  Hardware
> > > BondEthernet0  3 up   BondEthernet0
> > >   Link speed: unknown
> > >   Ethernet address 3c:fd:fe:b5:5e:40
> > > FortyGigabitEthernet12/0/0 1 up
>  FortyGigabitEthernet12/0/0
> > >   Link speed: 40 Gbps
> > >   Ethernet address 3c:f

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Nitin Saxena
>> I am guessing DPDK-19.08 is being used here with VPP-19.08
Typo, dpdk-19.05 and not dpdk-19.08

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Nitin Saxena
> Sent: Monday, February 17, 2020 5:34 PM
> To: Damjan Marion 
> Cc: chetan bhasin ; vpp-dev@lists.fd.io
> Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> 
> Hi Damjan,
> 
> >> if you read Chetan’s email bellow, you will see that this one is
> >> already excluded…
> Sorry I missed that part. After seeing diffs between stable/1908 and
> stable/2001, commit: b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the
> only visible git commit in dpdk plugin which is playing with mempool buffers.
> If it does not solve the problem then I suspect problem lies outside dpdk
> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08
> 
> Hi Chetan,
> > > 3) I took previous commit of  "vlib: don't use vector for keeping
> > > buffer
> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
> > Everything looks fine with Buffers 537600.
> In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is
> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
> 
> Thanks,
> Nitin
> > -Original Message-
> > From: Damjan Marion 
> > Sent: Monday, February 17, 2020 3:47 PM
> > To: Nitin Saxena 
> > Cc: chetan bhasin ; vpp-dev@lists.fd.io
> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> >
> >
> > Dear Nitin,
> >
> > if you read Chetan’s email bellow, you will see that this one is
> > already excluded…
> >
> > Also, it will not be easy to explain how this patch blows tx function
> > in dpdk
> > mlx5 pmd…
> >
> > —
> > Damjan
> >
> > > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> > >
> > > Hi Prashant/Chetan,
> > >
> > > I would try following change first to solve the problem in 1908
> > >
> > > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > > Author: Damjan Marion 
> > > Date:   Tue Mar 12 18:14:15 2019 +0100
> > >
> > > vlib: don't use vector for keeping buffer indices in
> > >
> > > Type: refactor
> > >
> > > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > >     Signed-off-by: Damjan Marion damar...@cisco.com
> > >
> > > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> > branch to stable/1908
> > >
> > > Thanks,
> > > Nitin
> > >
> > > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
> > Marion via Lists.Fd.Io
> > > Sent: Monday, February 17, 2020 1:52 PM
> > > To: chetan bhasin 
> > > Cc: vpp-dev@lists.fd.io
> > > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> > >
> > > External Email
> > >
> > > On 17 Feb 2020, at 07:37, chetan bhasin 
> > wrote:
> > >
> > > Bottom line is stable/vpp 908 does not work with higher number of
> > > buffers
> > but stable/vpp2001 does. Could you please advise which area we can
> > look at ,as it would be difficult for us to move to vpp2001 at this time.
> > >
> > > I really don’t have idea what caused this problem to disappear.
> > > You may try to use “git bisect” to find out which commit fixed it….
> > >
> > > —
> > > Damjan
> > >
> > >
> > >
> > > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
> >  wrote:
> > > Thanks Damjan for the reply!
> > >
> > > Following are my observations on Intel X710/XL710 pci-
> > > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
> > ethernet-input l3 mac mismatch"
> > > With Buffers 537600
> > > vpp# show buffers
> > > Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> > > default-numa-0    0     0   2496    2048    537600  510464   1319   25817
> > > default-numa-1    1     1   2496    2048    537600  528896    390    8314
> > >
> > > vpp# show hardware-interfaces
> > >   NameIdx   Link  Hardware
> > > BondEthernet0  3 up   BondEthernet0
> > >   Link speed: unknown
> > >   Ethernet address 3c:fd:fe:b5:5e:40
> > > FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
> >

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Nitin Saxena
Hi Damjan,

>> if you read Chetan’s email bellow, you will see that this one is already 
>> excluded…
Sorry, I missed that part. Looking at the diffs between stable/1908 and
stable/2001, commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only visible
git commit in the dpdk plugin that touches the mempool buffers. If it does not
solve the problem, then I suspect the problem lies outside the dpdk plugin. I
am guessing DPDK-19.08 is being used here with VPP-19.08

Hi Chetan,
> > 3) I took previous commit of  "vlib: don't use vector for keeping buffer
> indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
> Everything looks fine with Buffers 537600.
In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is previous 
commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?

Thanks,
Nitin
> -Original Message-
> From: Damjan Marion 
> Sent: Monday, February 17, 2020 3:47 PM
> To: Nitin Saxena 
> Cc: chetan bhasin ; vpp-dev@lists.fd.io
> Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> 
> 
> Dear Nitin,
> 
> if you read Chetan’s email bellow, you will see that this one is already
> excluded…
> 
> Also, it will not be easy to explain how this patch blows tx function in dpdk
> mlx5 pmd…
> 
> —
> Damjan
> 
> > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> >
> > Hi Prashant/Chetan,
> >
> > I would try following change first to solve the problem in 1908
> >
> > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > Author: Damjan Marion 
> > Date:   Tue Mar 12 18:14:15 2019 +0100
> >
> > vlib: don't use vector for keeping buffer indices in
> >
> > Type: refactor
> >
> > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > Signed-off-by: Damjan Marion damar...@cisco.com
> >
> > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> branch to stable/1908
> >
> > Thanks,
> > Nitin
> >
> > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
> Marion via Lists.Fd.Io
> > Sent: Monday, February 17, 2020 1:52 PM
> > To: chetan bhasin 
> > Cc: vpp-dev@lists.fd.io
> > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> >
> > External Email
> >
> > On 17 Feb 2020, at 07:37, chetan bhasin 
> wrote:
> >
> > Bottom line is stable/vpp 908 does not work with higher number of buffers
> but stable/vpp2001 does. Could you please advise which area we can look at
> ,as it would be difficult for us to move to vpp2001 at this time.
> >
> > I really don’t have idea what caused this problem to disappear.
> > You may try to use “git bisect” to find out which commit fixed it….
> >
> > —
> > Damjan
> >
> >
> >
> > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
>  wrote:
> > Thanks Damjan for the reply!
> >
> > Following are my observations on Intel X710/XL710 pci-
> > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
> ethernet-input l3 mac mismatch"
> > With Buffers 537600
> > vpp# show buffers
> > Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> > default-numa-0    0     0   2496    2048    537600  510464   1319   25817
> > default-numa-1    1     1   2496    2048    537600  528896    390    8314
> >
> > vpp# show hardware-interfaces
> >   NameIdx   Link  Hardware
> > BondEthernet0  3 up   BondEthernet0
> >   Link speed: unknown
> >   Ethernet address 3c:fd:fe:b5:5e:40
> > FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
> >   Link speed: 40 Gbps
> >   Ethernet address 3c:fd:fe:b5:5e:40
> >   Intel X710/XL710 Family
> > carrier up full duplex mtu 9206
> > flags: admin-up pmd rx-ip4-cksum
> > rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> > tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> > pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa
> 0
> > max rx packet len: 9728
> > promiscuous: unicast off all-multicast on
> > vlan offload: strip off filter off qinq off
> > rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
> >outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
> >scatter keep-crc
> > rx offload active: ipv4-cksum
> > tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-ck

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread chetan bhasin
Thanks Damjan and Nikhil for your time.

I also found the below logs via dmesg (Intel X710/XL710):

[root@bfs-dl360g10-25 vpp]# uname -a
Linux bfs-dl360g10-25 3.10.0-957.5.1.el7.x86_64 #1 SMP Wed Dec 19 10:46:58
EST 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@bfs-dl360g10-25 vpp]# uname -r
3.10.0-957.5.1.el7.x86_64


Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 400
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 402
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec7f31000 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 502
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec804 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec53be000 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 700
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 702
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec6f24000 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Write] Request device
[12:00.0] fault addr 5ec60eb000 [fault reason 05] PTE Write access is not
set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Write] Request device
[12:00.0] fault addr 5ec6684000 [fault reason 05] PTE Write access is not
set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Write] Request device
[12:00.0] fault addr 5ec607d000 [fault reason 05] PTE Write access is not
set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 300
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 302

Thanks,
Chetan
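
These DMAR faults are consistent with the earlier guess that IOMMU translation is
not in place for the device at 12:00.0. A simple way to watch for them while
reproducing the traffic failure (standard tooling, nothing VPP-specific):

    dmesg -w | grep -E 'DMAR|12:00'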

On Mon, Feb 17, 2020 at 3:47 PM Damjan Marion  wrote:

>
> Dear Nitin,
>
> if you read Chetan’s email bellow, you will see that this one is already
> excluded…
>
> Also, it will not be easy to explain how this patch blows tx function in
> dpdk mlx5 pmd…
>
> —
> Damjan
>
> > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> >
> > Hi Prashant/Chetan,
> >
> > I would try following change first to solve the problem in 1908
> >
> > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > Author: Damjan Marion 
> > Date:   Tue Mar 12 18:14:15 2019 +0100
> >
> > vlib: don't use vector for keeping buffer indices in
> >
> > Type: refactor
> >
> > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > Signed-off-by: Damjan Marion damar...@cisco.com
> >
> > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> branch to stable/1908
> >
> > Thanks,
> > Nitin
> >
> > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
> Marion via Lists.Fd.Io
> > Sent: Monday, February 17, 2020 1:52 PM
> > To: chetan bhasin 
> > Cc: vpp-dev@lists.fd.io
> > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> >
> > External Email
> >
> > On 17 Feb 2020, at 07:37, chetan bhasin 
> wrote:
> >
> > Bottom line is stable/vpp 908 does not work with higher number of
> buffers but stable/vpp2001 does. Could you please advise which area we can
> look at ,as it would be difficult for us to move to vpp2001 at this time.
> >
> > I really don’t have idea what caused this problem to disappear.
> > You may try to use “git bisect” to find out which commit fixed it….
> >
> > —
> > Damjan
> >
> >
> >
> > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
>  wrote:
> > Thanks Damjan for the reply!
> >
> > Following are my observations on Intel X710/XL710 pci-
> > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
> ethernet-input l3 mac mismatch"
> > With Buffers 537600
> > vpp# show buffers
> > Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> > default-numa-0    0     0   2496    2048    537600  510464   1319   25817
> > default-numa-1    1     1   2496    2048    537600  528896    390    8314
> >
> > vpp# show hardware-interfaces
> >   NameIdx   Link  Hardware
> > BondEthernet0  3 up   BondEthernet0
> >   Link speed: unknown
> >   Ethernet address 3c:fd:fe:b5:5e:40
> > FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
> >   Link 

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Damjan Marion via Lists.Fd.Io

Dear Nitin,

if you read Chetan’s email below, you will see that this one is already
excluded…

Also, it will not be easy to explain how this patch would blow up the tx function
in the dpdk mlx5 pmd…

— 
Damjan

> On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> 
> Hi Prashant/Chetan,
>
> I would try following change first to solve the problem in 1908
>
> commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> Author: Damjan Marion 
> Date:   Tue Mar 12 18:14:15 2019 +0100
>
> vlib: don't use vector for keeping buffer indices in
>
> Type: refactor
>
> Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> Signed-off-by: Damjan Marion damar...@cisco.com
>
> You can also try copying src/plugins/dpdk/buffer.c from stable/2001 branch to 
> stable/1908
>
> Thanks,
> Nitin
>
> From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion 
> via Lists.Fd.Io
> Sent: Monday, February 17, 2020 1:52 PM
> To: chetan bhasin 
> Cc: vpp-dev@lists.fd.io
> Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
>
> External Email
>
> On 17 Feb 2020, at 07:37, chetan bhasin  wrote:
>
> Bottom line is stable/vpp 908 does not work with higher number of buffers but 
> stable/vpp2001 does. Could you please advise which area we can look at ,as it 
> would be difficult for us to move to vpp2001 at this time. 
>
> I really don’t have idea what caused this problem to disappear.
> You may try to use “git bisect” to find out which commit fixed it….
>
> — 
> Damjan
> 
> 
>
> On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io 
>  wrote:
> Thanks Damjan for the reply!
>
> Following are my observations on Intel X710/XL710 pci-
> 1) I took latest code base from stable/vpp19.08  : Seeing error as " 
> ethernet-input l3 mac mismatch"
> With Buffers 537600
> vpp# show buffers
> Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> default-numa-0    0     0   2496    2048    537600  510464   1319   25817
> default-numa-1    1     1   2496    2048    537600  528896    390    8314
>
> vpp# show hardware-interfaces
>   NameIdx   Link  Hardware
> BondEthernet0  3 up   BondEthernet0
>   Link speed: unknown
>   Ethernet address 3c:fd:fe:b5:5e:40
> FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
> ipv6-frag
>ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag 
> ipv6-tcp
>ipv6-udp ipv6-other
> tx burst function: i40e_xmit_pkts_vec_avx2
> rx burst function: i40e_recv_pkts_vec_avx2
> tx errors 17
> rx frames ok4585
> rx bytes ok   391078
> extended stats:
>   rx good packets   4585
>   rx good bytes   391078
>   tx errors   17
>   rx multicast packets  4345
>   rx broadcast packets   243
>   rx unknown protocol packets   4588
>   rx size 65 to 127 packets 4529
>   rx size 128 to 255 packets  32
>   rx size 256 to 511 packets   

Re: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Nitin Saxena
Hi Prashant/Chetan,

I would try the following change first to solve the problem in 1908:

commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
Author: Damjan Marion 
Date:   Tue Mar 12 18:14:15 2019 +0100

vlib: don't use vector for keeping buffer indices in

Type: refactor

Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
Signed-off-by: Damjan Marion damar...@cisco.com<mailto:damar...@cisco.com>

You can also try copying src/plugins/dpdk/buffer.c from stable/2001 branch to 
stable/1908

Thanks,
Nitin
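
A rough sketch of the two options above, applied to a stable/1908 checkout
(assuming a clean tree; conflicts, if any, have to be resolved by hand):

    # option 1: cherry-pick the refactor onto stable/1908
    git checkout stable/1908
    git cherry-pick b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b

    # option 2: take the whole file from stable/2001
    git checkout origin/stable/2001 -- src/plugins/dpdk/buffer.c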

From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion via 
Lists.Fd.Io
Sent: Monday, February 17, 2020 1:52 PM
To: chetan bhasin 
Cc: vpp-dev@lists.fd.io
Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter

External Email


On 17 Feb 2020, at 07:37, chetan bhasin 
mailto:chetan.bhasin...@gmail.com>> wrote:

Bottom line is stable/vpp 908 does not work with higher number of buffers but 
stable/vpp2001 does. Could you please advise which area we can look at ,as it 
would be difficult for us to move to vpp2001 at this time.

I really don’t have idea what caused this problem to disappear.
You may try to use “git bisect” to find out which commit fixed it….

—
Damjan



On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
 mailto:gmail@lists.fd.io>> wrote:
Thanks Damjan for the reply!

Following are my observations on Intel X710/XL710 pci-
1) I took latest code base from stable/vpp19.08  : Seeing error as " 
ethernet-input l3 mac mismatch"
With Buffers 537600
vpp# show buffers
Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
default-numa-0    0     0   2496    2048    537600  510464   1319   25817
default-numa-1    1     1   2496    2048    537600  528896    390    8314

vpp# show hardware-interfaces
  NameIdx   Link  Hardware
BondEthernet0  3 up   BondEthernet0
  Link speed: unknown
  Ethernet address 3c:fd:fe:b5:5e:40
FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
  Link speed: 40 Gbps
  Ethernet address 3c:fd:fe:b5:5e:40
  Intel X710/XL710 Family
carrier up full duplex mtu 9206
flags: admin-up pmd rx-ip4-cksum
rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
   scatter keep-crc
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
   mbuf-fast-free
tx offload active: none
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
ipv6-frag
   ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag ipv6-tcp
   ipv6-udp ipv6-other
tx burst function: i40e_xmit_pkts_vec_avx2
rx burst function: i40e_recv_pkts_vec_avx2
tx errors 17
rx frames ok4585
rx bytes ok   391078
extended stats:
  rx good packets   4585
  rx good bytes   391078
  tx errors   17
  rx multicast packets  4345
  rx broadcast packets   243
  rx unknown protocol packets   4588
  rx size 65 to 127 packets 4529
  rx size 128 to 255 packets  32
  rx size 256 to 511 packets  26
  rx size 1024 to 1522 packets 1
  tx size 65 to 127 packets   33
FortyGigabitEthernet12/0/1 2 up   FortyGigabitEthernet12/0/1
  Link speed: 40 Gbps
  Ethernet address 3c:fd:fe:b5:5e:40
  Intel X710/XL710 Family
carrier up full duplex mtu 9206
flags: admin-up pmd rx-ip4-cksum
rx: queues 16 (ma

Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Damjan Marion via Lists.Fd.Io

> On 17 Feb 2020, at 07:37, chetan bhasin  wrote:
> 
> Bottom line is stable/vpp 908 does not work with higher number of buffers but 
> stable/vpp2001 does. Could you please advise which area we can look at ,as it 
> would be difficult for us to move to vpp2001 at this time. 

I really have no idea what caused this problem to disappear.
You may try to use “git bisect” to find out which commit fixed it….

— 
Damjan
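
One way to run the suggested bisect, given that the older branch is the broken one
and the newer one works (sketch; needs git >= 2.7 for custom terms, and the test
step is whatever reproduces the "l3 mac mismatch"):

    git bisect start --term-new=fixed --term-old=broken
    git bisect fixed  origin/stable/2001    # works with 537600 buffers
    git bisect broken origin/stable/1908    # shows "l3 mac mismatch"
    # at each step: build, run traffic, then mark the commit
    git bisect fixed      # or: git bisect broken
    git bisect reset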

> 
> On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io 
>   > wrote:
> Thanks Damjan for the reply!
> 
> Following are my observations on Intel X710/XL710 pci-
> 1) I took latest code base from stable/vpp19.08  : Seeing error as " 
> ethernet-input l3 mac mismatch"
> With Buffers 537600
> vpp# show buffers
> Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> default-numa-0    0     0   2496    2048    537600  510464   1319   25817
> default-numa-1    1     1   2496    2048    537600  528896    390    8314
> 
> vpp# show hardware-interfaces
>   NameIdx   Link  Hardware
> BondEthernet0  3 up   BondEthernet0
>   Link speed: unknown
>   Ethernet address 3c:fd:fe:b5:5e:40
> FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
> ipv6-frag
>ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag 
> ipv6-tcp
>ipv6-udp ipv6-other
> tx burst function: i40e_xmit_pkts_vec_avx2
> rx burst function: i40e_recv_pkts_vec_avx2
> tx errors 17
> rx frames ok4585
> rx bytes ok   391078
> extended stats:
>   rx good packets   4585
>   rx good bytes   391078
>   tx errors   17
>   rx multicast packets  4345
>   rx broadcast packets   243
>   rx unknown protocol packets   4588
>   rx size 65 to 127 packets 4529
>   rx size 128 to 255 packets  32
>   rx size 256 to 511 packets  26
>   rx size 1024 to 1522 packets 1
>   tx size 65 to 127 packets   33
> FortyGigabitEthernet12/0/1 2 up   FortyGigabitEthernet12/0/1
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086: address :12:00.01 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp 

Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-16 Thread chetan bhasin
Bottom line: stable/vpp1908 does not work with a higher number of buffers,
but stable/vpp2001 does. Could you please advise which area we can look at,
as it would be difficult for us to move to vpp2001 at this time.

On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
 wrote:

> Thanks Damjan for the reply!
>
> Following are my observations on Intel X710/XL710 pci-
> 1) I took latest code base from stable/vpp19.08  : Seeing error as " 
> *ethernet-input
> l3 mac mismatch*"
> *With Buffers 537600*
>
>
>
> *vpp# show buffers*
> Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> default-numa-0    0     0   2496    2048    537600  510464   1319   25817
> default-numa-1    1     1   2496    2048    537600  528896    390    8314
>
>
> *vpp# show hardware-interfaces*
>   Name                       Idx   Link  Hardware
> BondEthernet0  3 up   BondEthernet0
>   Link speed: unknown
>   Ethernet address 3c:fd:fe:b5:5e:40
> FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum
> sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
> ipv6-frag
>ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag
> ipv6-tcp
>ipv6-udp ipv6-other
> tx burst function: i40e_xmit_pkts_vec_avx2
> rx burst function: i40e_recv_pkts_vec_avx2
> tx errors 17
> rx frames ok                                      4585
> rx bytes ok   391078
> extended stats:
>   rx good packets   4585
>   rx good bytes   391078
>   tx errors   17
>   rx multicast packets  4345
>   rx broadcast packets   243
>   rx unknown protocol packets   4588
>   rx size 65 to 127 packets 4529
>   rx size 128 to 255 packets  32
>   rx size 256 to 511 packets  26
>   rx size 1024 to 1522 packets 1
>   tx size 65 to 127 packets   33
> FortyGigabitEthernet12/0/1 2 up   FortyGigabitEthernet12/0/1
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086: address :12:00.01 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum
> sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
> ipv6-frag
>ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag
> ipv6-tcp
>ipv6-udp ipv6-other
> tx burst function: i40e_xmit_pkts_vec_avx2
> rx burst function: i40e_recv_pkts_vec_avx2
> rx frames ok

Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-16 Thread chetan bhasin
Thanks Damjan for the reply!

Following are my observations on the Intel X710/XL710 PCI NICs:
1) I took the latest code base from stable/vpp19.08: seeing the error
"ethernet-input: l3 mac mismatch"
With buffers 537600



vpp# show buffers
Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
default-numa-0    0     0   2496    2048    537600  510464    1319  25817
default-numa-1    1     1   2496    2048    537600  528896     390   8314
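(Reading the table: the columns appear to satisfy Total = Avail + Cached + Used,
e.g. on numa 0, 510464 + 1319 + 25817 = 537600, so only about 27k of the 537600
buffers were actually in circulation at the time of this snapshot.)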


vpp# show hardware-interfaces
              Name                Idx   Link  Hardware
BondEthernet0  3 up   BondEthernet0
  Link speed: unknown
  Ethernet address 3c:fd:fe:b5:5e:40
FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
  Link speed: 40 Gbps
  Ethernet address 3c:fd:fe:b5:5e:40
  Intel X710/XL710 Family
carrier up full duplex mtu 9206
flags: admin-up pmd rx-ip4-cksum
rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
   scatter keep-crc
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
   mbuf-fast-free
tx offload active: none
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
ipv6-frag
   ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag
ipv6-tcp
   ipv6-udp ipv6-other
tx burst function: i40e_xmit_pkts_vec_avx2
rx burst function: i40e_recv_pkts_vec_avx2
tx errors 17
rx frames ok                                      4585
rx bytes ok   391078
extended stats:
  rx good packets   4585
  rx good bytes   391078
  tx errors   17
  rx multicast packets  4345
  rx broadcast packets   243
  rx unknown protocol packets   4588
  rx size 65 to 127 packets 4529
  rx size 128 to 255 packets  32
  rx size 256 to 511 packets  26
  rx size 1024 to 1522 packets 1
  tx size 65 to 127 packets   33
FortyGigabitEthernet12/0/1 2 up   FortyGigabitEthernet12/0/1
  Link speed: 40 Gbps
  Ethernet address 3c:fd:fe:b5:5e:40
  Intel X710/XL710 Family
carrier up full duplex mtu 9206
flags: admin-up pmd rx-ip4-cksum
rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
pci: device 8086:1583 subsystem 8086: address :12:00.01 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
   scatter keep-crc
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
   mbuf-fast-free
tx offload active: none
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
ipv6-frag
   ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag
ipv6-tcp
   ipv6-udp ipv6-other
tx burst function: i40e_xmit_pkts_vec_avx2
rx burst function: i40e_recv_pkts_vec_avx2
rx frames ok                                      4585
rx bytes ok   391078
extended stats:
  rx good packets   4585
  rx good bytes   391078
  rx multicast packets  4344
  rx broadcast packets   243
  rx unknown protocol packets   4587
  rx size 65 to 127 packets

Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-12 Thread Damjan Marion via Lists.Fd.Io

Shouldn’t be too hard to check out the commit prior to that one and test
whether the problem is still there…
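For example (illustrative commands only; the hash is the commit from the link
in the quoted mail below):

  git clone https://gerrit.fd.io/r/vpp && cd vpp
  git checkout b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b^   # parent of the suspected change
  make build-release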

— 
Damjan


> On 12 Feb 2020, at 14:50, chetan bhasin  wrote:
> 
> Hi,
> 
> Looking into the changes in vpp 20.01, the change below looks important,
> as it relates to buffer indices.
> 
> vlib: don't use vector for keeping buffer indices in the pool
> Type: refactor
> 
>  
> Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> 
> Signed-off-by: Damjan Marion > 
> 
>  
> https://github.com/FDio/vpp/commit/b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b#diff-2260a8080303fbcc30ef32f782b4d6df
>  
> 
> 
> Can anybody suggest  ?
> 



Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-12 Thread chetan bhasin
Hi,

Looking into the changes in vpp 20.01, the change below looks important,
as it relates to buffer indices.

*vlib: don't use vector for keeping buffer indices in the pool *

Type: refactor



Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4

Signed-off-by: Damjan Marion 



https://github.com/FDio/vpp/commit/b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b#diff-2260a8080303fbcc30ef32f782b4d6df


Can anybody suggest  ?

Thanks,
Chetan

On Tue, Feb 11, 2020 at 1:17 PM chetan bhasin via Lists.Fd.Io
 wrote:

> Hi,
>
> Any direction regarding the crash when we increase the vlib_buffer count? We
> are using vpp:
> vpp# show version verbose
> Version:  v19.08.1-155~g09ca6fa-dirty
> Compiled by:  cbhasin
> Compile host: bfs-dl360g10-14-vm25
> Compile date: Tue 11 Feb 05:52:33 GMT 2020
> Compile location:
> /nfs-bfs/workspace/odc/cbhasin/ngp/mainline/third-party/vpp/vpp_1908
> Compiler: GCC 7.3.1 20180303 (Red Hat 7.3.1-5)
>
> Back-trace are provided by Prashant https://pastebin.com/1YS3ZWeb
>
> Thanks,
> Chetan Bhasin
>
> On Tue, Feb 4, 2020 at 3:07 PM Prashant Upadhyaya 
> wrote:
>
>> Thanks Benoit.
>> I don't have the core files at the moment (still taming the huge cores
>> that are generated, so they were disabled on the setup)
>> Backtraces are present at (with indicated config of the parameter) --
>> https://pastebin.com/1YS3ZWeb
>> It is a dual numa setup.
>>
>> Regards
>> -Prashant
>>
>>
>> On Tue, Feb 4, 2020 at 1:55 PM Benoit Ganne (bganne) 
>> wrote:
>> >
>> > Hi Prashant,
>> >
>> > Can you share your configuration and at least a backtrace of the crash?
>> Or even better a corefile:
>> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html
>> >
>> > Best
>> > ben
>> >
>> > > -Original Message-
>> > > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
>> > > Upadhyaya
>> > > Sent: mardi 4 février 2020 09:15
>> > > To: vpp-dev@lists.fd.io
>> > > Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
>> > >
>> > >  Woops, my mistake. I think I multiplied by 1024 extra.
>> > > Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
>> > >
>> > > But the fact remains that my usecase is unstable at higher configured
>> > > buffers but is stable at lower values like 10 (this can by all
>> > > means be my usecase/code specific issue)
>> > >
>> > > If anybody else facing issues with higher configured buffers, please
>> do
>> > > share.
>> > >
>> > > Regards
>> > > -Prashant
>> > >
>> > >
>> > > On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
>> > >  wrote:
>> > > >
>> > > > Hi,
>> > > >
>> > > > I am using DPDK Plugin with VPP19.08.
>> > > > When I set the buffers-per-numa parameter to a high value, say,
>> > > > 25, I am seeing crashes in the system.
>> > > >
>> > > > (The corresponding parameter controlling number of mbufs in VPP18.01
>> > > > used to work well. This was in dpdk config section as num-mbufs)
>> > > >
>> > > > I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
>> > > > which are uwords :-
>> > > >  uword buffer_mem_start;
>> > > >   uword buffer_mem_size;
>> > > >
>> > > > Is it a mem size overflow in case the buffers-per-numa parameter is
>> > > > set to a high value ?
>> > > > I do need a high number of DPDK mbuf's in my usecase.
>> > > >
>> > > > Regards
>> > > > -Prashant
>>
>> 
>


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-10 Thread chetan bhasin
Hi,

Any direction regarding the crash when we increase the vlib_buffer count? We
are using vpp:
vpp# show version verbose
Version:  v19.08.1-155~g09ca6fa-dirty
Compiled by:  cbhasin
Compile host: bfs-dl360g10-14-vm25
Compile date: Tue 11 Feb 05:52:33 GMT 2020
Compile location:
/nfs-bfs/workspace/odc/cbhasin/ngp/mainline/third-party/vpp/vpp_1908
Compiler: GCC 7.3.1 20180303 (Red Hat 7.3.1-5)

Back-trace are provided by Prashant https://pastebin.com/1YS3ZWeb

Thanks,
Chetan Bhasin

On Tue, Feb 4, 2020 at 3:07 PM Prashant Upadhyaya 
wrote:

> Thanks Benoit.
> I don't have the core files at the moment (still taming the huge cores
> that are generated, so they were disabled on the setup)
> Backtraces are present at (with indicated config of the parameter) --
> https://pastebin.com/1YS3ZWeb
> It is a dual numa setup.
>
> Regards
> -Prashant
>
>
> On Tue, Feb 4, 2020 at 1:55 PM Benoit Ganne (bganne) 
> wrote:
> >
> > Hi Prashant,
> >
> > Can you share your configuration and at least a backtrace of the crash?
> Or even better a corefile:
> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html
> >
> > Best
> > ben
> >
> > > -Original Message-
> > > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> > > Upadhyaya
> > > Sent: mardi 4 février 2020 09:15
> > > To: vpp-dev@lists.fd.io
> > > Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
> > >
> > >  Woops, my mistake. I think I multiplied by 1024 extra.
> > > Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
> > >
> > > But the fact remains that my usecase is unstable at higher configured
> > > buffers but is stable at lower values like 10 (this can by all
> > > means be my usecase/code specific issue)
> > >
> > > If anybody else facing issues with higher configured buffers, please do
> > > share.
> > >
> > > Regards
> > > -Prashant
> > >
> > >
> > > On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
> > >  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I am using DPDK Plugin with VPP19.08.
> > > > When I set the buffers-per-numa parameter to a high value, say,
> > > > 25, I am seeing crashes in the system.
> > > >
> > > > (The corresponding parameter controlling number of mbufs in VPP18.01
> > > > used to work well. This was in dpdk config section as num-mbufs)
> > > >
> > > > I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
> > > > which are uwords :-
> > > >  uword buffer_mem_start;
> > > >   uword buffer_mem_size;
> > > >
> > > > Is it a mem size overflow in case the buffers-per-numa parameter is
> > > > set to a high value ?
> > > > I do need a high number of DPDK mbuf's in my usecase.
> > > >
> > > > Regards
> > > > -Prashant
> 
>


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-04 Thread Prashant Upadhyaya
Thanks Dave for the tip on core compression.

I was able to solve the issue of huge VSZ resulting in huge core files
after all -- the culprit is DPDK.
There is a parameter in DPDK called CONFIG_RTE_MAX_MEM_MB which can be
set to a lower value than the default.
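For example (illustrative; with the make-based DPDK builds of that era this is
a build-time option in DPDK's config/common_base):

  CONFIG_RTE_MAX_MEM_MB=65536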

Regards
-Prashant

On Tue, Feb 4, 2020 at 5:22 PM Dave Barach (dbarach)  wrote:
>
> As Ben wrote, please check out: 
> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html
>
> Note the section(s) on core file handling; in particular, how to set up 
> on-the-fly core file compression...:
>
> Depending on operational requirements, it’s possible to compress corefiles as 
> they are generated. Please note that it takes several seconds’ worth of 
> wall-clock time to compress a vpp core file on the fly, during which all 
> packet processing activities are suspended.
>
> To create compressed core files on the fly, create the following script, e.g. 
> in /usr/local/bin/compressed_corefiles, owned by root, executable:
>
> #!/bin/sh
> exec /bin/gzip -f - >"/tmp/dumps/core-$1.$2.gz"
>
> Adjust the kernel core file pattern as shown:
>
> sysctl -w kernel.core_pattern="|/usr/local/bin/compressed_corefiles %e %t"
>
> HTH... Dave
>
> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant 
> Upadhyaya
> Sent: Tuesday, February 4, 2020 4:38 AM
> To: Benoit Ganne (bganne) 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
>
> Thanks Benoit.
> I don't have the core files at the moment (still taming the huge cores that 
> are generated, so they were disabled on the setup) Backtraces are present at 
> (with indicated config of the parameter) -- https://pastebin.com/1YS3ZWeb It 
> is a dual numa setup.
>
> Regards
> -Prashant
>
>
> On Tue, Feb 4, 2020 at 1:55 PM Benoit Ganne (bganne)  wrote:
> >
> > Hi Prashant,
> >
> > Can you share your configuration and at least a backtrace of the
> > crash? Or even better a corefile:
> > https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportin
> > gissues.html
> >
> > Best
> > ben
> >
> > > -Original Message-
> > > From: vpp-dev@lists.fd.io  On Behalf Of
> > > Prashant Upadhyaya
> > > Sent: mardi 4 février 2020 09:15
> > > To: vpp-dev@lists.fd.io
> > > Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
> > >
> > >  Woops, my mistake. I think I multiplied by 1024 extra.
> > > Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
> > >
> > > But the fact remains that my usecase is unstable at higher
> > > configured buffers but is stable at lower values like 10 (this
> > > can by all means be my usecase/code specific issue)
> > >
> > > If anybody else facing issues with higher configured buffers, please
> > > do share.
> > >
> > > Regards
> > > -Prashant
> > >
> > >
> > > On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
> > >  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I am using DPDK Plugin with VPP19.08.
> > > > When I set the buffers-per-numa parameter to a high value, say,
> > > > 25, I am seeing crashes in the system.
> > > >
> > > > (The corresponding parameter controlling number of mbufs in
> > > > VPP18.01 used to work well. This was in dpdk config section as
> > > > num-mbufs)
> > > >
> > > > I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
> > > > which are uwords :-  uword buffer_mem_start;
> > > >   uword buffer_mem_size;
> > > >
> > > > Is it a mem size overflow in case the buffers-per-numa parameter
> > > > is set to a high value ?
> > > > I do need a high number of DPDK mbuf's in my usecase.
> > > >
> > > > Regards
> > > > -Prashant


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-04 Thread Dave Barach via Lists.Fd.Io
As Ben wrote, please check out: 
https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html

Note the section(s) on core file handling; in particular, how to set up 
on-the-fly core file compression...:

Depending on operational requirements, it’s possible to compress corefiles as 
they are generated. Please note that it takes several seconds’ worth of 
wall-clock time to compress a vpp core file on the fly, during which all packet 
processing activities are suspended.

To create compressed core files on the fly, create the following script, e.g. 
in /usr/local/bin/compressed_corefiles, owned by root, executable:

#!/bin/sh
exec /bin/gzip -f - >"/tmp/dumps/core-$1.$2.gz"

Adjust the kernel core file pattern as shown:

sysctl -w kernel.core_pattern="|/usr/local/bin/compressed_corefiles %e %t"
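A related prerequisite, not spelled out above: the dump directory has to exist
and the core size limit has to be lifted in the environment that starts vpp,
e.g.:

  mkdir -p /tmp/dumps
  ulimit -c unlimited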

HTH... Dave

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Prashant Upadhyaya
Sent: Tuesday, February 4, 2020 4:38 AM
To: Benoit Ganne (bganne) 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter

Thanks Benoit.
I don't have the core files at the moment (still taming the huge cores that are 
generated, so they were disabled on the setup) Backtraces are present at (with 
indicated config of the parameter) -- https://pastebin.com/1YS3ZWeb It is a 
dual numa setup.

Regards
-Prashant


On Tue, Feb 4, 2020 at 1:55 PM Benoit Ganne (bganne)  wrote:
>
> Hi Prashant,
>
> Can you share your configuration and at least a backtrace of the 
> crash? Or even better a corefile: 
> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportin
> gissues.html
>
> Best
> ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of 
> > Prashant Upadhyaya
> > Sent: mardi 4 février 2020 09:15
> > To: vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
> >
> >  Woops, my mistake. I think I multiplied by 1024 extra.
> > Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
> >
> > But the fact remains that my usecase is unstable at higher 
> > configured buffers but is stable at lower values like 10 (this 
> > can by all means be my usecase/code specific issue)
> >
> > If anybody else facing issues with higher configured buffers, please 
> > do share.
> >
> > Regards
> > -Prashant
> >
> >
> > On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya 
> >  wrote:
> > >
> > > Hi,
> > >
> > > I am using DPDK Plugin with VPP19.08.
> > > When I set the buffers-per-numa parameter to a high value, say, 
> > > 25, I am seeing crashes in the system.
> > >
> > > (The corresponding parameter controlling number of mbufs in 
> > > VPP18.01 used to work well. This was in dpdk config section as 
> > > num-mbufs)
> > >
> > > I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields 
> > > which are uwords :-  uword buffer_mem_start;
> > >   uword buffer_mem_size;
> > >
> > > Is it a mem size overflow in case the buffers-per-numa parameter 
> > > is set to a high value ?
> > > I do need a high number of DPDK mbuf's in my usecase.
> > >
> > > Regards
> > > -Prashant


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-04 Thread Prashant Upadhyaya
Thanks Benoit.
I don't have the core files at the moment (still taming the huge cores
that are generated, so they were disabled on the setup)
Backtraces are present at (with indicated config of the parameter) --
https://pastebin.com/1YS3ZWeb
It is a dual numa setup.

Regards
-Prashant


On Tue, Feb 4, 2020 at 1:55 PM Benoit Ganne (bganne)  wrote:
>
> Hi Prashant,
>
> Can you share your configuration and at least a backtrace of the crash? Or 
> even better a corefile: 
> https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html
>
> Best
> ben
>
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> > Upadhyaya
> > Sent: mardi 4 février 2020 09:15
> > To: vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
> >
> >  Woops, my mistake. I think I multiplied by 1024 extra.
> > Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
> >
> > But the fact remains that my usecase is unstable at higher configured
> > buffers but is stable at lower values like 10 (this can by all
> > means be my usecase/code specific issue)
> >
> > If anybody else facing issues with higher configured buffers, please do
> > share.
> >
> > Regards
> > -Prashant
> >
> >
> > On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
> >  wrote:
> > >
> > > Hi,
> > >
> > > I am using DPDK Plugin with VPP19.08.
> > > When I set the buffers-per-numa parameter to a high value, say,
> > > 25, I am seeing crashes in the system.
> > >
> > > (The corresponding parameter controlling number of mbufs in VPP18.01
> > > used to work well. This was in dpdk config section as num-mbufs)
> > >
> > > I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
> > > which are uwords :-
> > >  uword buffer_mem_start;
> > >   uword buffer_mem_size;
> > >
> > > Is it a mem size overflow in case the buffers-per-numa parameter is
> > > set to a high value ?
> > > I do need a high number of DPDK mbuf's in my usecase.
> > >
> > > Regards
> > > -Prashant


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-04 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi Prashant,

Can you share your configuration and at least a backtrace of the crash? Or even 
better a corefile: 
https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html

Best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Prashant
> Upadhyaya
> Sent: mardi 4 février 2020 09:15
> To: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Regarding buffers-per-numa parameter
> 
>  Woops, my mistake. I think I multiplied by 1024 extra.
> Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
> 
> But the fact remains that my usecase is unstable at higher configured
> buffers but is stable at lower values like 10 (this can by all
> means be my usecase/code specific issue)
> 
> If anybody else facing issues with higher configured buffers, please do
> share.
> 
> Regards
> -Prashant
> 
> 
> On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
>  wrote:
> >
> > Hi,
> >
> > I am using DPDK Plugin with VPP19.08.
> > When I set the buffers-per-numa parameter to a high value, say,
> > 25, I am seeing crashes in the system.
> >
> > (The corresponding parameter controlling number of mbufs in VPP18.01
> > used to work well. This was in dpdk config section as num-mbufs)
> >
> > I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
> > which are uwords :-
> >  uword buffer_mem_start;
> >   uword buffer_mem_size;
> >
> > Is it a mem size overflow in case the buffers-per-numa parameter is
> > set to a high value ?
> > I do need a high number of DPDK mbuf's in my usecase.
> >
> > Regards
> > -Prashant


Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-04 Thread Prashant Upadhyaya
 Woops, my mistake. I think I multiplied by 1024 extra.
Mbuf's are 2KB's, not 2 MB's (that's the huge page size)
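(A rough back-of-envelope, assuming for illustration 250,000 buffers per numa
node: 250,000 x 2 KB is roughly 500 MB of mbuf memory per node, i.e. about 250
huge pages of 2 MB; with the mistaken 2 MB per mbuf the same count would imply
around 500 GB.)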

But the fact remains that my usecase is unstable at higher configured
buffers but is stable at lower values like 10 (this can by all
means be my usecase/code specific issue)

If anybody else facing issues with higher configured buffers, please do share.

Regards
-Prashant


On Tue, Feb 4, 2020 at 1:31 PM Prashant Upadhyaya
 wrote:
>
> Hi,
>
> I am using DPDK Plugin with VPP19.08.
> When I set the buffers-per-numa parameter to a high value, say,
> 25, I am seeing crashes in the system.
>
> (The corresponding parameter controlling number of mbufs in VPP18.01
> used to work well. This was in dpdk config section as num-mbufs)
>
> I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
> which are uwords :-
>  uword buffer_mem_start;
>   uword buffer_mem_size;
>
> Is it a mem size overflow in case the buffers-per-numa parameter is
> set to a high value ?
> I do need a high number of DPDK mbuf's in my usecase.
>
> Regards
> -Prashant


[vpp-dev] Regarding buffers-per-numa parameter

2020-02-04 Thread Prashant Upadhyaya
Hi,

I am using DPDK Plugin with VPP19.08.
When I set the buffers-per-numa parameter to a high value, say,
25, I am seeing crashes in the system.

(The corresponding parameter controlling number of mbufs in VPP18.01
used to work well. This was in dpdk config section as num-mbufs)
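
For reference, a minimal startup.conf sketch of both styles (values are
illustrative only):

  # VPP 19.08 and later
  buffers {
    buffers-per-numa 250000
  }

  # VPP 18.01 style
  dpdk {
    num-mbufs 250000
  }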

I quickly checked in VPP19.08 that vlib_buffer_main_t uses fields
which are uwords :-
 uword buffer_mem_start;
  uword buffer_mem_size;

Is it a mem size overflow in case the buffers-per-numa parameter is
set to a high value ?
I do need a high number of DPDK mbuf's in my usecase.

Regards
-Prashant