[dpdk-dev] PDCP Ciphering / deciphering

2021-08-25 Thread Venumadhav Josyula
Hi,

i) Is PDCP ciphering / deciphering via the rte_security API possible using
an Intel software crypto PMD, or is hardware offload to a cryptodev that
supports PDCP ciphering/deciphering required? I understand that the API
sequence given here assumes hardware offload.
ii) In other words, I want to check whether there is software support for
this in some form via a PMD.

https://static.sched.com/hosted_files/dpdkuserspace2018/a8/am%2002%20dpdk-Dublin-2018-PDCP.pdf
( slide no 22 )
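In case it helps the discussion, below is a minimal, untested sketch of how
we are thinking of probing a cryptodev for PDCP support through
rte_security before choosing a design. The capability walk and the device
id handling are assumptions on our side, not taken from the slides.

#include <rte_cryptodev.h>
#include <rte_security.h>

/*
 * Return 1 if the cryptodev exposes an rte_security context that
 * advertises the PDCP protocol, 0 otherwise. This only probes
 * capabilities; it does not prove a working PDCP data path.
 */
static int
cryptodev_supports_pdcp(uint8_t dev_id)
{
    struct rte_security_ctx *sec_ctx = rte_cryptodev_get_sec_ctx(dev_id);
    const struct rte_security_capability *cap;

    if (sec_ctx == NULL)
        return 0; /* no rte_security support at all */

    cap = rte_security_capabilities_get(sec_ctx);
    if (cap == NULL)
        return 0;

    /* the capability array is terminated by ACTION_TYPE_NONE */
    for (; cap->action != RTE_SECURITY_ACTION_TYPE_NONE; cap++)
        if (cap->protocol == RTE_SECURITY_PROTOCOL_PDCP)
            return 1;

    return 0;
}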

Thanks,
Regards,
Venu


Re: [dpdk-dev] Adding to mailing list

2020-10-28 Thread Venumadhav Josyula
Hi,

Please use this link for subscription

https://mails.dpdk.org/listinfo/dev#:~:text=To%20post%20a%20message%20to,subscription%2C%20in%20the%20sections%20below.&text=Subscribe%20to%20dev%20by%20filling,others%20from%20gratuitously%20subscribing%20you
.

or google for dpdk dev subscription.

Cheers,
Venu

On Wed, 28 Oct 2020 at 14:52, Nandini Rangaswamy 
wrote:

> Hi,
> I would like to subscribe myself to mailing and contributor’s list of
> DPDK. Kindly add my email id nrangasw...@juniper.net.
> Regards,
> Nandini
>
>
>


Re: [dpdk-dev] Adding to mailing list

2020-10-29 Thread Venumadhav Josyula
Hi Nandini,

You can use the following link (it has worked for me; I am subscribed to a
couple of the mailing lists):
https://www.dpdk.org/contribute/#mailing-lists
Several mailing lists are listed there; register for whichever ones you
want. As part of the process you may need to reply to the confirmation
email you receive for each mailing list.

Cheers,
Venu


On Thu, 29 Oct 2020 at 23:48, Nandini Rangaswamy 
wrote:

> Hi Venu,
>
> I had tried using the link below to subscribe multiple times.
>
> I did not receive any response yet confirming my subscription request.
>
> Can you let me know if my request can be confirmed ?
>
> Regards,
>
> Nandini
>
>
>
> *From: *Venumadhav Josyula 
> *Date: *Wednesday, October 28, 2020 at 9:34 PM
> *To: *Nandini Rangaswamy 
> *Cc: *dev@dpdk.org , Sachchidanand Vaidya <
> vaidy...@juniper.net>
> *Subject: *Re: [dpdk-dev] Adding to mailing list
>
>
> Hi,
>
>
>
> Please use this link for subscription
>
>
>
>
> https://mails.dpdk.org/listinfo/dev#:~:text=To%20post%20a%20message%20to,subscription%2C%20in%20the%20sections%20below.&text=Subscribe%20to%20dev%20by%20filling,others%20from%20gratuitously%20subscribing%20you
> .
>
>
>
> or google for dpdk dev subscription.
>
>
>
> Cheers,
>
> Venu
>
>
>
> On Wed, 28 Oct 2020 at 14:52, Nandini Rangaswamy 
> wrote:
>
> Hi,
> I would like to subscribe myself to mailing and contributor’s list of
> DPDK. Kindly add my email id nrangasw...@juniper.net.
> Regards,
> Nandini
>
>


Re: [dpdk-dev] Api in dpdk to get total free physical memory

2018-03-09 Thread Venumadhav Josyula
Hi Anatoly,

Like the existing API rte_eal_get_physmem_size, which returns the total 
physical RAM (implemented in eal_common_memory.c), we need an equivalent that 
returns the amount of free memory. That is what I was referring to: an API to 
get the free physical RAM, 'rte_eal_get_physmem_free', for which we wanted 
to submit a patch.

Thanks,
Regards
Venumadhav

From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Burakov, Anatoly
Sent: Friday, March 09, 2018 2:36 PM
To: Venumadhav Josyula ; dev@dpdk.org
Subject: Re: [dpdk-dev] Api in dpdk to get total free physical memory

On 08-Mar-18 9:36 PM, Venumadhav Josyula wrote:
> Hi All,
>
>
>
> Like ‘rte_eal_get_physmem_size’ api to the total size of the physical
> memory. Is there an API to get to get total free memory physical memory
> available ?
>
>
>
> We want such API we are planning to implement such API for the same
>
>
>
> /* get the total amount of free memory */
>
> uint64_t
> rte_eal_get_physmem_free(int socket_id)
> {
>     const struct rte_mem_config *mcfg;
>     unsigned i = 0;
>     uint64_t total_len = 0;
>
>     /* get pointer to global configuration */
>     mcfg = rte_eal_get_configuration()->mem_config;
>
>     for (i = 0; i < RTE_MAX_MEMSEG; i++) {
>         if (mcfg->free_memseg[i].addr == NULL)
>             break;
>
>         if (mcfg->free_memseg[i].len == 0)
>             continue;
>
>         /* bad socket ID */
>         if (socket_id != SOCKET_ID_ANY &&
>             mcfg->free_memseg[i].socket_id != SOCKET_ID_ANY &&
>             socket_id != mcfg->free_memseg[i].socket_id)
>             continue;
>
>         total_len += mcfg->free_memseg[i].len;
>     }
>
>     return total_len;
> }
>
>
>
> Thanks,
>
> Regards
>
> Venu
>

All memory is registered on the heap, so you might want to look at heap
stats to get the same information :) It would also arguably be more
useful, because the total size of free memory alone will not tell you how
much you can allocate (memory may be heavily fragmented), and heap stats
will also tell you the biggest free memory block size.
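
A minimal sketch of such a heap-stats query, assuming the rte_malloc
statistics API (illustrative only):

#include <stdio.h>
#include <rte_malloc.h>

/* Print the amount of free heap memory and the largest free block
 * on a given socket, using the existing malloc heap statistics. */
static void
print_heap_free(int socket_id)
{
    struct rte_malloc_socket_stats stats;

    if (rte_malloc_get_socket_stats(socket_id, &stats) < 0)
        return;

    printf("socket %d: free %zu bytes, largest free block %zu bytes\n",
           socket_id, stats.heap_freesz_bytes,
           stats.greatest_free_size);
}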

Bear in mind, however, that there is work in progress [1] to enable
mapping/unmapping hugepages at runtime, which would make such an API
more or less useless - just because you don't have much free space *now*
doesn't mean you can't allocate more :)

[1] 
http://dpdk.org/ml/archives/dev/2018-March/092070.html

--
Thanks,
Anatoly


[dpdk-dev] API in dpdk to get total free physical memory

2017-10-04 Thread Venumadhav Josyula
Hi All,

There is the 'rte_eal_get_physmem_size' API to get the total size of the
physical memory. Is there an API to get the total free physical memory
available?

We want such an API, and we are planning to implement one along the
following lines:

/* get the total amount of free memory */
uint64_t
rte_eal_get_physmem_free(int socket_id)
{
    const struct rte_mem_config *mcfg;
    unsigned i = 0;
    uint64_t total_len = 0;

    /* get pointer to global configuration */
    mcfg = rte_eal_get_configuration()->mem_config;

    for (i = 0; i < RTE_MAX_MEMSEG; i++) {
        if (mcfg->free_memseg[i].addr == NULL)
            break;

        if (mcfg->free_memseg[i].len == 0)
            continue;

        /* bad socket ID */
        if (socket_id != SOCKET_ID_ANY &&
            mcfg->free_memseg[i].socket_id != SOCKET_ID_ANY &&
            socket_id != mcfg->free_memseg[i].socket_id)
            continue;

        total_len += mcfg->free_memseg[i].len;
    }

    return total_len;
}

Thanks,
Regards
Venu


[dpdk-dev] Api in dpdk to get total free physical memory

2018-03-08 Thread Venumadhav Josyula
Hi All,



There is the ‘rte_eal_get_physmem_size’ API to get the total size of the
physical memory. Is there an API to get the total free physical memory
available?

We want such an API, and we are planning to implement one along the
following lines:



/* get the total amount of free memory */
uint64_t
rte_eal_get_physmem_free(int socket_id)
{
    const struct rte_mem_config *mcfg;
    unsigned i = 0;
    uint64_t total_len = 0;

    /* get pointer to global configuration */
    mcfg = rte_eal_get_configuration()->mem_config;

    for (i = 0; i < RTE_MAX_MEMSEG; i++) {
        if (mcfg->free_memseg[i].addr == NULL)
            break;

        if (mcfg->free_memseg[i].len == 0)
            continue;

        /* bad socket ID */
        if (socket_id != SOCKET_ID_ANY &&
            mcfg->free_memseg[i].socket_id != SOCKET_ID_ANY &&
            socket_id != mcfg->free_memseg[i].socket_id)
            continue;

        total_len += mcfg->free_memseg[i].len;
    }

    return total_len;
}



Thanks,

Regards

Venu


[dpdk-dev] Using rte_lpm as standalone library w/o mempools or dpdk infra

2020-04-02 Thread Venumadhav Josyula
Hi All,

The idea is the following:
   - Create the LPM table in one process where only rte_lpm and the bare
minimum are accessible. Additions to this table will also happen in this
process's context.
   - In the packet-processing context, lookups based on the packet's IP
will then be done against the LPM table created above.

Is the above possible? Please note that the task / process in which the
LPM is created, and in which LPM entries are added, does not do anything
with respect to rte_eal. (A rough sketch of the usage we have in mind is
below.)
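
A minimal, untested sketch of the usage we have in mind (names and table
sizes are illustrative assumptions only; as far as I understand,
rte_lpm_create() still allocates through DPDK's allocator, so the creating
process would normally need rte_eal_init() to have run):

#include <stdint.h>
#include <rte_lcore.h>
#include <rte_lpm.h>

/* Create the LPM table in the "control" process context. */
static struct rte_lpm *
create_route_table(void)
{
    struct rte_lpm_config cfg = {
        .max_rules = 1024,
        .number_tbl8s = 256,
        .flags = 0,
    };

    return rte_lpm_create("route_table", rte_socket_id(), &cfg);
}

/* Look up the next hop for a destination IP in the packet-processing
 * context; returns UINT32_MAX as an illustrative "no route" marker. */
static uint32_t
lookup_next_hop(struct rte_lpm *lpm, uint32_t dst_ip)
{
    uint32_t next_hop = 0;

    if (rte_lpm_lookup(lpm, dst_ip, &next_hop) == 0)
        return next_hop;

    return UINT32_MAX;
}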

Any pointers, will really be appreciated.

Cheers,
Venu


[dpdk-dev] time taken for allocation of mempool.

2019-11-12 Thread Venumadhav Josyula
Hi ,
We are using 'rte_mempool_create' for allocation of flow memory. This has
been in place for a while. We just migrated to dpdk-18.11 from dpdk-17.05,
and here is the problem statement.

Problem statement:
With the new dpdk (18.11), 'rte_mempool_create' takes approximately ~4.4 sec
per allocation, compared to the older dpdk (17.05). We have some 8-9 mempools
for our entire product. We do upfront allocation for all of them (i.e. when
the dpdk application is coming up). Our application is a run-to-completion
model.

Questions:
i)  Is that acceptable / has anybody seen such a thing?
ii) What has changed between the two dpdk versions (18.11 vs. 17.05) from a
memory perspective?

Any pointers are welcome.

Thanks & regards
Venu


Re: [dpdk-dev] time taken for allocation of mempool.

2019-11-12 Thread Venumadhav Josyula
Hi,

A few more points:

Operating system  : CentOS 7.6
Logging mechanism : syslog

We log via syslog just before the call and again just after the call; that
is how the time was measured.

Thanks & Regards
Venu

On Wed, 13 Nov 2019 at 10:37, Venumadhav Josyula  wrote:

> Hi ,
> We are using 'rte_mempool_create' for allocation of flow memory. This has
> been there for a while. We just migrated to dpdk-18.11 from dpdk-17.05. Now
> here is problem statement
>
> Problem statement :
> In new dpdk ( 18.11 ), the 'rte_mempool_create' take approximately ~4.4
> sec for allocation compared to older dpdk (17.05). We have som 8-9 mempools
> for our entire product. We do upfront allocation for all of them ( i.e.
> when dpdk application is coming up). Our application is run to completion
> model.
>
> Questions:-
> i)  is that acceptable / has anybody seen such a thing ?
> ii) What has changed between two dpdk versions ( 18.11 v/s 17.05 ) from
> memory perspective ?
>
> Any pointer are welcome.
>
> Thanks & regards
> Venu
>


Re: [dpdk-dev] time taken for allocation of mempool.

2019-11-13 Thread Venumadhav Josyula
Hi Olivier,



*> Could you give some more details about you use case? (hugepage size,
number of objects, object size, additional mempool flags, ...)*

Ours is a telecom product; we support multiple RATs. Let us take the
example of the 4G case, where we act as a GTP-U proxy.

   - Hugepage size: 2 MB
   - rte_mempool_create() input parameters:
     { name = "gtpu-mem",
       n = 150,
       elt_size = 224,
       cache_size = 0,
       private_data_size = 0,
       mp_init = NULL,
       mp_init_arg = NULL,
       obj_init = NULL,
       obj_init_arg = NULL,
       socket_id = rte_socket_id(),
       flags = MEMPOOL_F_SP_PUT }



*> Did you manage to reproduce it in a small test example? We could do some
profiling to investigate.*

No, but I would love to try that. Are there examples? (A rough sketch of
what I have in mind is below.)
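
A minimal, untested sketch of the standalone reproduction I have in mind,
using the parameters listed above and DPDK's TSC helpers for timing (to be
called from a bare test application after rte_eal_init()):

#include <stdio.h>
#include <rte_cycles.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Create one mempool with the parameters quoted above and report how
 * long rte_mempool_create() took. */
static void
repro_gtpu_pool(void)
{
    uint64_t start = rte_get_timer_cycles();
    struct rte_mempool *mp = rte_mempool_create("gtpu-mem",
            150,            /* n */
            224,            /* elt_size */
            0,              /* cache_size */
            0,              /* private_data_size */
            NULL, NULL,     /* mp_init, mp_init_arg */
            NULL, NULL,     /* obj_init, obj_init_arg */
            rte_socket_id(),
            MEMPOOL_F_SP_PUT);
    uint64_t cycles = rte_get_timer_cycles() - start;

    printf("gtpu-mem: %p, create time %.3f s\n", (void *)mp,
           (double)cycles / rte_get_timer_hz());
}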



Thanks,

Regards,

Venu

On Wed, 13 Nov 2019 at 14:02, Olivier Matz  wrote:

> Hi Venu,
>
> On Wed, Nov 13, 2019 at 10:42:07AM +0530, Venumadhav Josyula wrote:
> > Hi,
> >
> > Few more points
> >
> > Operating system  : Centos 7.6
> > Logging mechanism : syslog
> >
> > We have logged using syslog before the call and syslog after the call.
> >
> > Thanks & Regards
> > Venu
> >
> > On Wed, 13 Nov 2019 at 10:37, Venumadhav Josyula 
> wrote:
> >
> > > Hi ,
> > > We are using 'rte_mempool_create' for allocation of flow memory. This
> has
> > > been there for a while. We just migrated to dpdk-18.11 from
> dpdk-17.05. Now
> > > here is problem statement
> > >
> > > Problem statement :
> > > In new dpdk ( 18.11 ), the 'rte_mempool_create' take approximately ~4.4
> > > sec for allocation compared to older dpdk (17.05). We have som 8-9
> mempools
> > > for our entire product. We do upfront allocation for all of them ( i.e.
> > > when dpdk application is coming up). Our application is run to
> completion
> > > model.
> > >
> > > Questions:-
> > > i)  is that acceptable / has anybody seen such a thing ?
> > > ii) What has changed between two dpdk versions ( 18.11 v/s 17.05 ) from
> > > memory perspective ?
>
> Could you give some more details about you use case? (hugepage size, number
> of objects, object size, additional mempool flags, ...)
>
> Did you manage to reproduce it in a small test example? We could do some
> profiling to investigate.
>
> Thanks for the feedback.
> Olivier
>


Re: [dpdk-dev] time taken for allocation of mempool.

2019-11-13 Thread Venumadhav Josyula
Hi Anatoly,

Without specifying the --iova-mode option, is iova-mode=pa the default?
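
(In case it helps anyone checking the same thing: a small sketch, assuming
the rte_eal_iova_mode() helper, that prints which IOVA mode EAL actually
selected at runtime.)

#include <stdio.h>
#include <rte_eal.h>

/* Print which IOVA mode EAL selected; call after rte_eal_init(). */
static void
print_iova_mode(void)
{
    enum rte_iova_mode mode = rte_eal_iova_mode();

    printf("IOVA mode: %s\n",
           mode == RTE_IOVA_VA ? "VA" :
           mode == RTE_IOVA_PA ? "PA" : "DC");
}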

Thanks
Venu

On Wed, 13 Nov, 2019, 10:56 pm Burakov, Anatoly, 
wrote:

> On 13-Nov-19 9:19 AM, Bruce Richardson wrote:
> > On Wed, Nov 13, 2019 at 10:37:57AM +0530, Venumadhav Josyula wrote:
> >> Hi ,
> >> We are using 'rte_mempool_create' for allocation of flow memory. This
> has
> >> been there for a while. We just migrated to dpdk-18.11 from dpdk-17.05.
> Now
> >> here is problem statement
> >>
> >> Problem statement :
> >> In new dpdk ( 18.11 ), the 'rte_mempool_create' take approximately ~4.4
> sec
> >> for allocation compared to older dpdk (17.05). We have som 8-9 mempools
> for
> >> our entire product. We do upfront allocation for all of them ( i.e. when
> >> dpdk application is coming up). Our application is run to completion
> model.
> >>
> >> Questions:-
> >> i)  is that acceptable / has anybody seen such a thing ?
> >> ii) What has changed between two dpdk versions ( 18.11 v/s 17.05 ) from
> >> memory perspective ?
> >>
> >> Any pointer are welcome.
> >>
> > Hi,
> >
> > from 17.05 to 18.11 there was a change in default memory model for DPDK.
> In
> > 17.05 all DPDK memory was allocated statically upfront and that used for
> > the memory pools. With 18.11, no large blocks of memory are allocated at
> > init time, instead the memory is requested from the kernel as it is
> needed
> > by the app. This will make the initial startup of an app faster, but the
> > allocation of new objects like mempools slower, and it could be this you
> > are seeing.
> >
> > Some things to try:
> > 1. Use "--socket-mem" EAL flag to do an upfront allocation of memory for
> use
> > by your memory pools and see if it improves things.
> > 2. Try using "--legacy-mem" flag to revert to the old memory model.
> >
> > Regards,
> > /Bruce
> >
>
> I would also add to this the fact that the mempool will, by default,
> attempt to allocate IOVA-contiguous memory, with a fallback to non-IOVA
> contiguous memory whenever getting IOVA-contiguous memory isn't possible.
>
> If you are running in IOVA as PA mode (such as would be the case if you
> are using igb_uio kernel driver), then, since it is now impossible to
> preallocate large PA-contiguous chunks in advance, what will likely
> happen in this case is, mempool will try to allocate IOVA-contiguous
> memory, fail and retry with non-IOVA contiguous memory (essentially
> allocating memory twice). For large mempools (or large number of
> mempools) that can take a bit of time.
>
> The obvious workaround is using VFIO and IOVA as VA mode. This will
> cause the allocator to be able to get IOVA-contiguous memory at the
> outset, and allocation will complete faster.
>
> The other two alternatives, already suggested in this thread by Bruce
> and Olivier, are:
>
> 1) use bigger page sizes (such as 1G)
> 2) use legacy mode (and lose out on all of the benefits provided by the
> new memory model)
>
> The recommended solution is to use VFIO/IOMMU, and IOVA as VA mode.
>
> --
> Thanks,
> Anatoly
>


Re: [dpdk-dev] time taken for allocation of mempool.

2019-11-14 Thread Venumadhav Josyula
Hi Olivier, Bruce,


   - We were using the --socket-mem EAL flag.
   - We wanted to avoid going back to legacy mode.
   - We also wanted to avoid 1G huge pages.

Thanks for your inputs.

Hi Anatoly,

We were using vfio with the IOMMU, but by default it is iova-mode=pa; after
changing to iova-mode=va via the EAL option, the allocation time(s) for
mempools came down drastically, from ~4.4 sec to 0.165254 sec.

Thanks and regards
Venu


On Wed, 13 Nov 2019 at 22:56, Burakov, Anatoly 
wrote:

> On 13-Nov-19 9:19 AM, Bruce Richardson wrote:
> > On Wed, Nov 13, 2019 at 10:37:57AM +0530, Venumadhav Josyula wrote:
> >> Hi ,
> >> We are using 'rte_mempool_create' for allocation of flow memory. This
> has
> >> been there for a while. We just migrated to dpdk-18.11 from dpdk-17.05.
> Now
> >> here is problem statement
> >>
> >> Problem statement :
> >> In new dpdk ( 18.11 ), the 'rte_mempool_create' take approximately ~4.4
> sec
> >> for allocation compared to older dpdk (17.05). We have som 8-9 mempools
> for
> >> our entire product. We do upfront allocation for all of them ( i.e. when
> >> dpdk application is coming up). Our application is run to completion
> model.
> >>
> >> Questions:-
> >> i)  is that acceptable / has anybody seen such a thing ?
> >> ii) What has changed between two dpdk versions ( 18.11 v/s 17.05 ) from
> >> memory perspective ?
> >>
> >> Any pointer are welcome.
> >>
> > Hi,
> >
> > from 17.05 to 18.11 there was a change in default memory model for DPDK.
> In
> > 17.05 all DPDK memory was allocated statically upfront and that used for
> > the memory pools. With 18.11, no large blocks of memory are allocated at
> > init time, instead the memory is requested from the kernel as it is
> needed
> > by the app. This will make the initial startup of an app faster, but the
> > allocation of new objects like mempools slower, and it could be this you
> > are seeing.
> >
> > Some things to try:
> > 1. Use "--socket-mem" EAL flag to do an upfront allocation of memory for
> use
> > by your memory pools and see if it improves things.
> > 2. Try using "--legacy-mem" flag to revert to the old memory model.
> >
> > Regards,
> > /Bruce
> >
>
> I would also add to this the fact that the mempool will, by default,
> attempt to allocate IOVA-contiguous memory, with a fallback to non-IOVA
> contiguous memory whenever getting IOVA-contiguous memory isn't possible.
>
> If you are running in IOVA as PA mode (such as would be the case if you
> are using igb_uio kernel driver), then, since it is now impossible to
> preallocate large PA-contiguous chunks in advance, what will likely
> happen in this case is, mempool will try to allocate IOVA-contiguous
> memory, fail and retry with non-IOVA contiguous memory (essentially
> allocating memory twice). For large mempools (or large number of
> mempools) that can take a bit of time.
>
> The obvious workaround is using VFIO and IOVA as VA mode. This will
> cause the allocator to be able to get IOVA-contiguous memory at the
> outset, and allocation will complete faster.
>
> The other two alternatives, already suggested in this thread by Bruce
> and Olivier, are:
>
> 1) use bigger page sizes (such as 1G)
> 2) use legacy mode (and lose out on all of the benefits provided by the
> new memory model)
>
> The recommended solution is to use VFIO/IOMMU, and IOVA as VA mode.
>
> --
> Thanks,
> Anatoly
>


Re: [dpdk-dev] time taken for allocation of mempool.

2019-11-14 Thread Venumadhav Josyula
Hi Anatoly,

Thanks for the quick response. We want to understand whether there will be
performance implications from iova-mode being va, specifically in terms of
the following:

   - cache misses
   - branch misses, etc.
   - translation of VA address -> physical address when a packet is received

 Thanks and regards
Venu

On Thu, 14 Nov 2019 at 15:14, Burakov, Anatoly 
wrote:

> On 13-Nov-19 9:01 PM, Venumadhav Josyula wrote:
> > Hi Anatoly,
> >
> > By default w/o specifying --iova-mode option is iova-mode=pa by default ?
> >
> > Thanks
> > Venu
> >
>
> In 18.11, there is a very specific set of circumstances that will
> default to IOVA as VA mode. Future releases have become more aggressive,
> to the point of IOVA as VA mode being the default unless asked
> otherwise. So yes, it is highly likely that in your case, IOVA as PA is
> picked as the default.
>
> --
> Thanks,
> Anatoly
>


Re: [dpdk-dev] time taken for allocation of mempool.

2019-11-14 Thread Venumadhav Josyula
Hi Anatoly,

> I would also suggest using --limit-mem if you desire to limit the
> maximum amount of memory DPDK will be able to allocate.
We are already using that.

Thanks and regards,
Venu

On Thu, 14 Nov 2019 at 15:19, Burakov, Anatoly 
wrote:

> On 14-Nov-19 8:12 AM, Venumadhav Josyula wrote:
> > Hi Oliver,Bruce,
> >
> >   * we were using --SOCKET-MEM Eal flag.
> >   * We did not wanted to avoid going back to legacy mode.
> >   * we also wanted to avoid 1G huge-pages.
> >
> > Thanks for your inputs.
> >
> > Hi Anatoly,
> >
> > We were using vfio with iommu, but by default it s iova-mode=pa, after
> > changing to iova-mode=va via EAL it kind of helped us to bring down
> > allocation time(s) for mempools drastically. The time taken was brought
> > from ~4.4 sec to 0.165254 sec.
> >
> > Thanks and regards
> > Venu
>
> That's great to hear.
>
> As a final note, --socket-mem is no longer necessary, because 18.11 will
> allocate memory as needed. It is however still advisable to use it if
> you see yourself end up in a situation where the runtime allocation
> could conceivably fail (such as if you have other applications running
> on your system, and DPDK has to compete for hugepage memory).
>
> I would also suggest using --limit-mem if you desire to limit the
> maximum amount of memory DPDK will be able to allocate. This will make
> DPDK behave similarly to older releases in that it will not attempt to
> allocate more memory than you allow it.
>
> >
> >
> > On Wed, 13 Nov 2019 at 22:56, Burakov, Anatoly
> > mailto:anatoly.bura...@intel.com>> wrote:
> >
> > On 13-Nov-19 9:19 AM, Bruce Richardson wrote:
> >  > On Wed, Nov 13, 2019 at 10:37:57AM +0530, Venumadhav Josyula
> wrote:
> >  >> Hi ,
> >  >> We are using 'rte_mempool_create' for allocation of flow memory.
> > This has
> >  >> been there for a while. We just migrated to dpdk-18.11 from
> > dpdk-17.05. Now
> >  >> here is problem statement
> >  >>
> >  >> Problem statement :
> >  >> In new dpdk ( 18.11 ), the 'rte_mempool_create' take
> > approximately ~4.4 sec
> >  >> for allocation compared to older dpdk (17.05). We have som 8-9
> > mempools for
> >  >> our entire product. We do upfront allocation for all of them (
> > i.e. when
> >  >> dpdk application is coming up). Our application is run to
> > completion model.
> >  >>
> >  >> Questions:-
> >  >> i)  is that acceptable / has anybody seen such a thing ?
> >  >> ii) What has changed between two dpdk versions ( 18.11 v/s 17.05
> > ) from
> >  >> memory perspective ?
> >  >>
> >  >> Any pointer are welcome.
> >  >>
> >  > Hi,
> >  >
> >  > from 17.05 to 18.11 there was a change in default memory model
> > for DPDK. In
> >  > 17.05 all DPDK memory was allocated statically upfront and that
> > used for
> >  > the memory pools. With 18.11, no large blocks of memory are
> > allocated at
> >  > init time, instead the memory is requested from the kernel as it
> > is needed
> >  > by the app. This will make the initial startup of an app faster,
> > but the
> >  > allocation of new objects like mempools slower, and it could be
> > this you
> >  > are seeing.
> >  >
> >  > Some things to try:
> >  > 1. Use "--socket-mem" EAL flag to do an upfront allocation of
> > memory for use
> >  > by your memory pools and see if it improves things.
> >  > 2. Try using "--legacy-mem" flag to revert to the old memory
> model.
> >  >
> >  > Regards,
> >  > /Bruce
> >  >
> >
> > I would also add to this the fact that the mempool will, by default,
> > attempt to allocate IOVA-contiguous memory, with a fallback to
> non-IOVA
> > contiguous memory whenever getting IOVA-contiguous memory isn't
> > possible.
> >
> > If you are running in IOVA as PA mode (such as would be the case if
> you
> > are using igb_uio kernel driver), then, since it is now impossible to
> > preallocate large PA-contiguous chunks in advance, what will likely
> > happen in this case is, mempool will try to allocate IOVA-contiguous
> >

Re: [dpdk-dev] time taken for allocation of mempool.

2019-11-18 Thread Venumadhav Josyula
Hi Anatoly,

After using iova-mode=va, I see that my ports are not getting detected. I
thought it was working, but I now see the following problem.

What could be the problem?
i) I see that allocation is faster.
ii) But my ports are not getting detected.
I take back my statement that it is entirely working.

Thanks,
Regards,
Venu

On Thu, 14 Nov 2019 at 15:27, Burakov, Anatoly 
wrote:

> On 14-Nov-19 9:50 AM, Venumadhav Josyula wrote:
> > Hi Anatoly,
> >
> > Thanks for quick response. We want to understand, if there will be
> > performance implications because of iova-mode being va. We want to
> > understand,  specifically in terms following
> >
> >   * cache misses
> >   * Branch misses etc
> >   * translation of va addr -> phy addr when packet is receieved
> >
>
> There will be no impact whatsoever. You mentioned that you were already
> using VFIO, so you were already making use of IOMMU*. Cache/branch
> misses are independent of IOVA layout, and translations are done by the
> hardware (in either IOVA as PA or IOVA as VA case - IOMMU doesn't care
> what you program it with, it still does the translation, even if it's a
> 1:1 IOVA-to-PA mapping), so there is nothing that can cause degradation.
>
> In fact, under some circumstances, using IOVA as VA mode can be used to
> get performance /gains/, because the code can take advantage of the fact
> that there are large IOVA-contiguous segments and no page-by-page
> allocations. Some drivers (IIRC octeontx mempool?) even refuse to work
> in IOVA as PA mode due to huge overheads of page-by-page buffer offset
> tracking.
>
> TL;DR you'll be fine :)
>
> * Using an IOMMU can /theoretically/ affect performance due to hardware
> IOVA->PA translation and IOTLB cache misses. In practice, i have never
> been able to observe /any/ effect whatsoever on performance when using
> IOMMU vs. without using IOMMU, so this appears to not be a concern /in
> practice/.
>
> >   Thanks and regards
> > Venu
> >
> > On Thu, 14 Nov 2019 at 15:14, Burakov, Anatoly
> > mailto:anatoly.bura...@intel.com>> wrote:
> >
> > On 13-Nov-19 9:01 PM, Venumadhav Josyula wrote:
> >  > Hi Anatoly,
> >  >
> >  > By default w/o specifying --iova-mode option is iova-mode=pa by
> > default ?
> >  >
> >  > Thanks
> >  > Venu
> >  >
> >
> > In 18.11, there is a very specific set of circumstances that will
> > default to IOVA as VA mode. Future releases have become more
> > aggressive,
> > to the point of IOVA as VA mode being the default unless asked
> > otherwise. So yes, it is highly likely that in your case, IOVA as PA
> is
> > picked as the default.
> >
> > --
> > Thanks,
> > Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>


Re: [dpdk-dev] time taken for allocation of mempool.

2019-11-18 Thread Venumadhav Josyula
Please note I am using dpdk-18.11.

On Wed, 13 Nov, 2019, 10:37 am Venumadhav Josyula, 
wrote:

> Hi ,
> We are using 'rte_mempool_create' for allocation of flow memory. This has
> been there for a while. We just migrated to dpdk-18.11 from dpdk-17.05. Now
> here is problem statement
>
> Problem statement :
> In new dpdk ( 18.11 ), the 'rte_mempool_create' take approximately ~4.4
> sec for allocation compared to older dpdk (17.05). We have som 8-9 mempools
> for our entire product. We do upfront allocation for all of them ( i.e.
> when dpdk application is coming up). Our application is run to completion
> model.
>
> Questions:-
> i)  is that acceptable / has anybody seen such a thing ?
> ii) What has changed between two dpdk versions ( 18.11 v/s 17.05 ) from
> memory perspective ?
>
> Any pointer are welcome.
>
> Thanks & regards
> Venu
>


[dpdk-dev] seeing a problem dpdk startup

2019-11-27 Thread Venumadhav Josyula
We are seeing the following error; no device is detected.
==
EAL: Detected 8 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1528 net_ixgbe
EAL:   0000:04:00.0 failed to select IOMMU type
EAL: Requested device 0000:04:00.0 cannot be used
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1528 net_ixgbe
EAL:   0000:04:00.1 failed to select IOMMU type
EAL: Requested device 0000:04:00.1 cannot be used

==
dmesg

[55455.259493] vfio-pci 0000:04:00.0: Device is ineligible for IOMMU domain
attach due to platform RMRR requirement.  Contact your platform vendor.
[55455.260947] vfio-pci 0000:04:00.1: Device is ineligible for IOMMU domain
attach due to platform RMRR requirement.  Contact your platform vendor.

==
BOOT_IMAGE=/boot/vmlinuz-3.10.0-957.el7.pw.1.x86_64 root=/dev/sda3
intel_iommu=on


Can anybody suggest how to work around the issue ?

Thanks,
Regards,
Venu


Re: [dpdk-dev] seeing a problem dpdk startup

2019-11-27 Thread Venumadhav Josyula
Hi Gavin,

It is
[root@localhost scripts]# ./dpdk-devbind.py --status-dev net

Network devices using DPDK-compatible driver

0000:04:00.0 'Ethernet Controller 10-Gigabit X540-AT2 1528' drv=vfio-pci
unused=ixgbe
0000:04:00.1 'Ethernet Controller 10-Gigabit X540-AT2 1528' drv=vfio-pci
unused=ixgbe

Network devices using kernel driver
===
0000:02:00.0 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno1 drv=tg3
unused=vfio-pci *Active*
0000:02:00.1 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno2 drv=tg3
unused=vfio-pci
0000:02:00.2 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno3 drv=tg3
unused=vfio-pci
0000:02:00.3 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno4 drv=tg3
unused=vfio-pci
[root@localhost scripts]#

I understand from the log that IOMMU type selection is failing. Is it
possible to somehow fall back to igb_uio by default?

Thanks,
Regards,
Venu

On Thu, 28 Nov 2019 at 13:09, Gavin Hu (Arm Technology China) <
gavin...@arm.com> wrote:

> Hi Venumadhav,
>
> > -Original Message-
> > From: dev  On Behalf Of Venumadhav Josyula
> > Sent: Thursday, November 28, 2019 3:01 PM
> > To: dev@dpdk.org; us...@dpdk.org
> > Cc: Venumadhav Josyula 
> > Subject: [dpdk-dev] seeing a problem dpdk startup
> >
> > We are seeing following error, no device is detected
> > ==
> > EAL: Detected 8 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > EAL: VFIO support initialized
> > EAL: PCI device :04:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:1528 net_ixgbe
> > EAL:   :04:00.0 failed to select IOMMU type
> > EAL: Requested device :04:00.0 cannot be used
> > EAL: PCI device :04:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:1528 net_ixgbe
> > EAL:   :04:00.1 failed to select IOMMU type
> > EAL: Requested device :04:00.1 cannot be used
> >
> > ==
> > dmesg
> >
> > [55455.259493] vfio-pci :04:00.0: Device is ineligible for IOMMU
> domain
> > attach due to platform RMRR requirement.  Contact your platform vendor.
> > [55455.260947] vfio-pci :04:00.1: Device is ineligible for IOMMU
> domain
> > attach due to platform RMRR requirement.  Contact your platform vendor.
> >
> > ==
> > BOOT_IMAGE=/boot/vmlinuz-3.10.0-957.el7.pw.1.x86_64 root=/dev/sda3
> > intel_iommu=on
> >
> >
> > Can anybody suggest how to work around the issue ?
> Did you check if the device is bound to DPDK?
> Here is an example on my host:
> gavin@net-arm-thunderx2-01:~/dpdk$ ./usertools/dpdk-devbind.py -s
>
> Network devices using DPDK-compatible driver
> 
> :0f:00.0 'Ethernet Controller XL710 for 40GbE QSFP+ 1583' drv=vfio-pci
> unused=i40e
> :8a:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b'
> drv=vfio-pci unused=i40e
> :8a:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b'
> drv=vfio-pci unused=i40e
> >
> > Thanks,
> > Regards,
> > Venu
>


Re: [dpdk-dev] seeing a problem dpdk startup

2019-11-27 Thread Venumadhav Josyula
Hi Hui,

I understand that, and we are exploring it.

But is there no workaround whereby it falls back to the igb_uio driver, or
something like that? The basic idea is that we should not end up in a
situation where no port is detected at all.

Thanks,
Regards,
Venu

On Thu, 28 Nov 2019 at 13:15, Hui Wei  wrote:

>
> >We are seeing following error, no device is detected
> >==
> >EAL: Detected 8 lcore(s)
> >EAL: Detected 2 NUMA nodes
> >EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> >EAL: No free hugepages reported in hugepages-1048576kB
> >EAL: Probing VFIO support...
> >EAL: VFIO support initialized
> >EAL: PCI device :04:00.0 on NUMA socket 0
> >EAL:   probe driver: 8086:1528 net_ixgbe
> >EAL:   :04:00.0 failed to select IOMMU type
> >EAL: Requested device :04:00.0 cannot be used
> >EAL: PCI device :04:00.1 on NUMA socket 0
> >EAL:   probe driver: 8086:1528 net_ixgbe
> >EAL:   :04:00.1 failed to select IOMMU type
> >EAL: Requested device :04:00.1 cannot be used
> >
> >==
> >dmesg
> >
> >[55455.259493] vfio-pci :04:00.0: Device is ineligible for IOMMU
> domain
> >attach due to platform RMRR requirement.  Contact your platform vendor.
> >[55455.260947] vfio-pci :04:00.1: Device is ineligible for IOMMU
> domain
> >attach due to platform RMRR requirement.  Contact your platform vendor.
>
> Intel Vt-d document addresses RMRR, but not that clearly. Check BIOS
> setup,
> no link share, for example, a nic detect by linux kernel, it was shared by
> BIOS for
> IPMI. If change BIOS setup don't solve your problem, contact platform
> vendor update
> nic firmware, a server made by HP, check HP's website for firmware.
>
> >
> >==
> >BOOT_IMAGE=/boot/vmlinuz-3.10.0-957.el7.pw.1.x86_64 root=/dev/sda3
> >intel_iommu=on
> >
> >
> >Can anybody suggest how to work around the issue ?
> >
> >Thanks,
> >Regards,
> >Venu


Re: [dpdk-dev] seeing a problem dpdk startup

2019-11-28 Thread Venumadhav Josyula
Hi Roberts,

Thanks for your reply. Basically, our BIOS is old and does not have that
feature, so we need to upgrade it.

But after enabling it, will it work? Can you confirm that, based on your own
experience or on a success story someone has reported to you?

Thanks,
Regards,
Venu

On Thu, 28 Nov 2019 at 19:45, Roberts, Lee A.  wrote:

> Please see the HPE customer advisory at
> https://support.hpe.com/hpsc/doc/public/display?sp4ts.oid=7271241&docId=emr_na-c04781229&docLocale=en_US
> .
> You will need to disable the shared memory features for the NIC card.
>
>  - Lee
>
> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Hui Wei
> Sent: Thursday, November 28, 2019 1:01 AM
> To: Venumadhav Josyula 
> Cc: dev ; users ; Venumadhav Josyula <
> vjosy...@parallelwireless.com>
> Subject: Re: [dpdk-dev] seeing a problem dpdk startup
>
> I think  BIOS turn off vt-d, use igb_uio can work around.
>
> >Hi Hui,
> >
> >I understand that and we are exploring that.
> >
> >But is there no work around there where in it defaults to igb_uio driver
> >something like that. Basic idea is we should not be situation of no
> >port detected at all under this situation ?
> >
> >Thanks,
> >Regards,
> >Venu
> >
> >On Thu, 28 Nov 2019 at 13:15, Hui Wei  wrote:
> >
> >>
> >> >We are seeing following error, no device is detected
> >> >==
> >> >EAL: Detected 8 lcore(s)
> >> >EAL: Detected 2 NUMA nodes
> >> >EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> >> >EAL: No free hugepages reported in hugepages-1048576kB
> >> >EAL: Probing VFIO support...
> >> >EAL: VFIO support initialized
> >> >EAL: PCI device :04:00.0 on NUMA socket 0
> >> >EAL:   probe driver: 8086:1528 net_ixgbe
> >> >EAL:   :04:00.0 failed to select IOMMU type
> >> >EAL: Requested device :04:00.0 cannot be used
> >> >EAL: PCI device :04:00.1 on NUMA socket 0
> >> >EAL:   probe driver: 8086:1528 net_ixgbe
> >> >EAL:   :04:00.1 failed to select IOMMU type
> >> >EAL: Requested device :04:00.1 cannot be used
> >> >
> >> >==
> >> >dmesg
> >> >
> >> >[55455.259493] vfio-pci :04:00.0: Device is ineligible for IOMMU
> >> domain
> >> >attach due to platform RMRR requirement.  Contact your platform vendor.
> >> >[55455.260947] vfio-pci :04:00.1: Device is ineligible for IOMMU
> >> domain
> >> >attach due to platform RMRR requirement.  Contact your platform vendor.
> >>
> >> Intel Vt-d document addresses RMRR, but not that clearly. Check BIOS
> >> setup,
> >> no link share, for example, a nic detect by linux kernel, it was shared
> by
> >> BIOS for
> >> IPMI. If change BIOS setup don't solve your problem, contact platform
> >> vendor update
> >> nic firmware, a server made by HP, check HP's website for firmware.
> >>
> >> >
> >> >==
> >> >BOOT_IMAGE=/boot/vmlinuz-3.10.0-957.el7.pw.1.x86_64 root=/dev/sda3
> >> >intel_iommu=on
> >> >
> >> >
> >> >Can anybody suggest how to work around the issue ?
> >> >
> >> >Thanks,
> >> >Regards,
> >> >Venu
>


Re: [dpdk-dev] time taken for allocation of mempool.

2019-12-06 Thread Venumadhav Josyula
Hi Anatoly,

I was able to resolve the problem, which turned out to be a problem in our script.

Thanks and regards
Venu

On Fri, 6 Dec 2019 at 16:17, Burakov, Anatoly 
wrote:

> On 18-Nov-19 4:43 PM, Venumadhav Josyula wrote:
> > Hi Anatoly,
> >
> > After using iova-mode=va, i see my ports are not getting detected ? I
> > thought it's working but I see following problem
> >
> > what could be the problem?
> > i) I see allocation is faster
> > ii) But my ports are not getting detected
> > I take my word back that it entirely working..
> >
> > Thanks,
> > Regards,
> > Venu
> >
>
> "Ports are not getting detected" is a pretty vague description of the
> problem. Could you please post the EAL initialization log (preferably
> with --log-level=eal,8 added, so that there's more output)?
>
> --
> Thanks,
> Anatoly
>


[dpdk-dev] Drops seen with virtio pmd driver

2019-08-28 Thread Venumadhav Josyula
Hi All,

We are observing packet drops at ~90 Mbps with the virtio PMD driver. These
packets are not being queued in the tx descriptors; the function
'rte_eth_tx_burst' is returning less than n.
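
For context, a simplified, illustrative sketch of the kind of send path we
have (names are illustrative, not our exact code); whatever
rte_eth_tx_burst() does not accept is freed and counted as a drop:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint64_t tx_drops;

/* Transmit a burst; sent < n means the tx ring had no free descriptors,
 * and the remaining packets are dropped. */
static void
send_burst(uint16_t port_id, uint16_t queue_id,
           struct rte_mbuf **pkts, uint16_t n)
{
    uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, n);

    while (sent < n) {
        rte_pktmbuf_free(pkts[sent]);
        tx_drops++;
        sent++;
    }
}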

So the questions are the following:
i)  Are there any known issues like this?

Observations in our case:
i) Packets are dropped and the tx packet counters do not get incremented.
ii) This happens for about 30 secs, after which it recovers.
iii) After some time we see the issue in i) and ii) again.

Any clues / pointers will help.

Thanks,
Regards,
Venu