[dpdk-dev] issues with packets bigger than 1500 bytes

2015-03-31 Thread Newman Poborsky
Hi again,

I tried setting the MTU to a larger value and I get really weird behaviour: 
statistics now show that there are no more errors, but after just a few minutes 
I get a segfault while calling rte_pktmbuf_free():
#0  0x004202fc in rte_atomic16_add_return (v=0xff94, 
inc=-1) at /opt/dpdk-1.8.0/build/include/generic/rte_atomic.h:247
247 return __sync_add_and_fetch(&v->cnt, inc);
(gdb) bt
#0  0x004202fc in rte_atomic16_add_return (v=0xff94, 
inc=-1) at /opt/dpdk-1.8.0/build/include/generic/rte_atomic.h:247
#1  0x00420414 in rte_mbuf_refcnt_update (m=0xff80, 
value=-1) at /opt/dpdk-1.8.0/build/include/rte_mbuf.h:371
#2  0x004205e4 in __rte_pktmbuf_prefree_seg (m=0x7f773ff1ce80) at 
/opt/dpdk-1.8.0/build/include/rte_mbuf.h:770
#3  rte_pktmbuf_free_seg (m=0x7f773ff1ce80) at 
/opt/dpdk-1.8.0/build/include/rte_mbuf.h:793
#4  rte_pktmbuf_free (m=0x7f773ff1ce80) at 
/opt/dpdk-1.8.0/build/include/rte_mbuf.h:816

Is there any reason why changing the MTU would cause this?

With packets smaller than 1500 bytes and the standard MTU, the application is 
stable and there are no problems with calls to rte_pktmbuf_free(). The 
application is also stable with packets larger than 1500 bytes and the standard 
MTU, but, as I said, there are a lot of receive errors. 

BR,
Newman

On Mon, Mar 30, 2015 at 12:52:22PM +0200, Newman Poborsky wrote:
> Hi,
> 
> I'm having some problems with DPDK on links that carry packets bigger
> than 1500 bytes. Some packets that are received are around 4K, and
> some are 9K jumbo frames.
> 
> When using the testpmd app, I can see a lot of RX-errors (both RX-missed and 
> RX-badlen). When I set the max packet length to 9000, the RX-badlen counter 
> stops increasing, but RX-missed still keeps growing.
> 
> What is the proper way to deal with jumbo frames?
> 
> I tried setting the MTU to 9k, but this fails. From what I can see, you have 
> to pass an additional parameter to the mp_init callback in 
> rte_mempool_create(). Is this right?
> 
> I'm not sure whether I should just set the max packet length (like in the 
> testpmd example app), or whether I should (also) set the MTU. Is this actually 
> related to the original problem of having a lot of RX-missed packets?
> 
> If I missed some documentation related to jumbo frames and MTU, I apologize; 
> please point me to it.
> 
> Any help is appreciated.
> 
> Thank you,
> 
> Newman P.
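
A plausible explanation for the crash above, for reference: if max_rx_pkt_len
is raised beyond the mbuf data room without enabling scattered RX, the NIC can
DMA a long frame past the end of an mbuf buffer and corrupt the header (refcnt
included) of a neighbouring mbuf, which only blows up later, inside
rte_pktmbuf_free(). A minimal sanity check, assuming the DPDK 1.8 API (the
function name is illustrative):

#include <rte_debug.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical check: refuse to start if a full-sized frame cannot
 * fit into a single mbuf's data room. */
static void
check_mbuf_room(struct rte_mempool *mp, uint32_t max_rx_pkt_len)
{
        struct rte_pktmbuf_pool_private *priv = rte_mempool_get_priv(mp);
        uint32_t room = priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;

        if (max_rx_pkt_len > room)
                rte_panic("max frame (%u) exceeds mbuf data room (%u)\n",
                          max_rx_pkt_len, room);
}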


[dpdk-dev] issues with packets bigger than 1500 bytes

2015-03-30 Thread Newman Poborsky
Hi,

I'm having some problems with DPDK on links that carry packets bigger
than 1500 bytes. Some packets that are received are around 4K, and
some are 9K jumbo frames.

When using the testpmd app, I can see a lot of RX-errors (both RX-missed and 
RX-badlen). When I set the max packet length to 9000, the RX-badlen counter 
stops increasing, but RX-missed still keeps growing.

What is the proper way to deal with jumbo frames?

I tried setting the MTU to 9k, but this fails. From what I can see, you have to 
pass an additional parameter to the mp_init callback in rte_mempool_create(). 
Is this right?

I'm not sure whether I should just set the max packet length (like in the 
testpmd example app), or whether I should (also) set the MTU. Is this actually 
related to the original problem of having a lot of RX-missed packets?

If I missed some documentation related to jumbo frames and MTU, I apologize; 
please point me to it.

Any help is appreciated.

Thank you,

Newman P.
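
For reference, a sketch of one way to wire this up on DPDK 1.8 (pool size,
cache size and lengths are illustrative): raise max_rx_pkt_len and set the
jumbo_frame flag on the port, and size the mbuf data room for a full jumbo
frame. The opaque argument passed to rte_pktmbuf_pool_init() is the
"additional parameter" mentioned above; it sets the mbuf data room size.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define JUMBO_FRAME_LEN 9018   /* 9000-byte MTU + Ethernet header + CRC */
#define MBUF_DATA_SZ    (JUMBO_FRAME_LEN + RTE_PKTMBUF_HEADROOM)
#define MBUF_SZ         (MBUF_DATA_SZ + sizeof(struct rte_mbuf))

static const struct rte_eth_conf port_conf = {
        .rxmode = {
                .max_rx_pkt_len = JUMBO_FRAME_LEN,
                .jumbo_frame    = 1,    /* accept frames above 1518 bytes */
        },
};

static struct rte_mempool *
jumbo_pool_create(int socket_id)
{
        /* The opaque arg of rte_pktmbuf_pool_init() sets the data room. */
        return rte_mempool_create("jumbo_pool", 8192, MBUF_SZ, 256,
                        sizeof(struct rte_pktmbuf_pool_private),
                        rte_pktmbuf_pool_init,
                        (void *)(uintptr_t)MBUF_DATA_SZ,
                        rte_pktmbuf_init, NULL, socket_id, 0);
}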


[dpdk-dev] rte_mempool_create fails with ENOMEM

2015-01-08 Thread Newman Poborsky
I finally found the time to try this and I noticed that on a server
with 1 NUMA node this works, but if the server has 2 NUMA nodes then, by
the default memory policy, the reserved hugepages are divided between the
nodes and again the DPDK test app fails for the reason already mentioned.
I found out that the 'solution' for this is to deallocate the hugepages
on node1 (after boot) and leave them only on node0:
echo 0 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

Could someone please explain what changes when there are hugepages on
both nodes? Does this cause some memory fragmentation so that there
aren't enough contiguous segments? If so, how?

Thanks!

Newman
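
For what it's worth, a small diagnostic along these lines (DPDK 1.x EAL API)
can make the fragmentation visible: dump the memory segments EAL managed to
assemble. If every segment is a lone 2MB page, an 8MB physically contiguous
reservation has to fail.

#include <stdio.h>
#include <rte_memory.h>

/* Print the physically contiguous segments EAL assembled from the
 * reserved hugepages, per NUMA socket. */
static void
dump_memsegs(void)
{
        const struct rte_memseg *ms = rte_eal_get_physmem_layout();
        int i;

        for (i = 0; i < RTE_MAX_MEMSEG && ms[i].addr != NULL; i++)
                printf("segment %d: socket %d, len %zu kB\n",
                       i, ms[i].socket_id, ms[i].len / 1024);
}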

On Mon, Dec 22, 2014 at 11:48 AM, Newman Poborsky  
wrote:
> On Sat, Dec 20, 2014 at 2:34 AM, Stephen Hemminger
>  wrote:
>> You can reserve hugepages on the kernel cmdline (GRUB).
>
> Great, thanks, I'll try that!
>
> Newman
>
>>
>> On Fri, Dec 19, 2014 at 12:13 PM, Newman Poborsky 
>> wrote:
>>>
>>> On Thu, Dec 18, 2014 at 9:03 PM, Ananyev, Konstantin <
>>> konstantin.ananyev at intel.com> wrote:
>>>
>>> >
>>> >
>>> > > -Original Message-
>>> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Ananyev,
>>> > > Konstantin
>>> > > Sent: Thursday, December 18, 2014 5:43 PM
>>> > > To: Newman Poborsky; dev at dpdk.org
>>> > > Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM
>>> > >
>>> > > Hi
>>> > >
>>> > > > -Original Message-
>>> > > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Newman 
>>> > > > Poborsky
>>> > > > Sent: Thursday, December 18, 2014 1:26 PM
>>> > > > To: dev at dpdk.org
>>> > > > Subject: [dpdk-dev] rte_mempool_create fails with ENOMEM
>>> > > >
>>> > > > Hi,
>>> > > >
>>> > > > could someone please provide any explanation why sometimes mempool
>>> > > > creation fails with ENOMEM?
>>> > > >
>>> > > > I run my test app several times without any problems and then I
>>> > > > start getting an ENOMEM error when creating the mempools that are
>>> > > > used for packets. I try to delete everything from /mnt/huge, I
>>> > > > increase the number of hugepages, remount /mnt/huge, but nothing
>>> > > > helps.
>>> > > >
>>> > > > There is more than enough memory on the server. I tried to debug
>>> > > > the rte_mempool_create() call and it seems that after the server is
>>> > > > restarted the free mem segments are bigger than 2MB, but after
>>> > > > running the test app several times, it seems that all free mem
>>> > > > segments have a size of 2MB, and since I am requesting 8MB for my
>>> > > > packet mempool, this fails.  I'm not really sure that this
>>> > > > conclusion is correct.
>>> > >
>>> > > Yes, rte_mempool_create() uses rte_memzone_reserve() to allocate a
>>> > > single physically contiguous chunk of memory.
>>> > > If no such chunk exists, then it will fail.
>>> > > Why physically contiguous?
>>> > > The main reason: to make things easier for us, as in that case we
>>> > > don't have to worry about the situation when an mbuf crosses a page
>>> > > boundary.
>>> > > So you can overcome that problem like that:
>>> > > Allocate the max amount of memory you would need to hold all mbufs
>>> > > in the worst case (all pages physically disjoint) using rte_malloc().
>>> >
>>> > Actually my mistake: rte_malloc() wouldn't help you here.
>>> > You probably need to allocate some external (not managed by EAL)
>>> > memory in that case, maybe mmap() with MAP_HUGETLB, or something
>>> > similar.
>>> >
>>> > > Figure out its physical mappings.
>>> > > Call rte_mempool_xmem_create().
>>> > > You can look at app/test-pmd/mempool_anon.c as a reference.
>>> > > It uses the same approach to create a mempool over 4K pages.
>>> > >
>>> > > We will probably add a similar function into the mempool API
>>>
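
For reference, a rough sketch of Konstantin's suggestion, modeled on
app/test-pmd/mempool_anon.c (DPDK 1.8 API; sizes, names and the missing
error handling are illustrative): mmap() hugepages outside of EAL, resolve
their physical addresses, and hand both to rte_mempool_xmem_create().

#include <string.h>
#include <sys/mman.h>
#include <rte_mbuf.h>
#include <rte_memory.h>
#include <rte_mempool.h>

#define PG_SHIFT 21                     /* 2MB hugepages */
#define PG_SZ    (1UL << PG_SHIFT)

static struct rte_mempool *
xmem_pool_create(const char *name, unsigned n, unsigned elt_size,
                 int socket_id)
{
        size_t sz = rte_mempool_xmem_size(n, elt_size, PG_SHIFT);
        uint32_t pg_num = (sz + PG_SZ - 1) >> PG_SHIFT;
        phys_addr_t paddr[pg_num];
        uint32_t i;
        void *vaddr;

        vaddr = mmap(NULL, (size_t)pg_num << PG_SHIFT,
                     PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (vaddr == MAP_FAILED)
                return NULL;

        /* Fault the pages in so their physical addresses can be read. */
        memset(vaddr, 0, (size_t)pg_num << PG_SHIFT);
        for (i = 0; i < pg_num; i++)
                paddr[i] = rte_mem_virt2phy((char *)vaddr +
                                            ((size_t)i << PG_SHIFT));

        return rte_mempool_xmem_create(name, n, elt_size, 256,
                        sizeof(struct rte_pktmbuf_pool_private),
                        rte_pktmbuf_pool_init, NULL,
                        rte_pktmbuf_init, NULL, socket_id, 0,
                        vaddr, paddr, pg_num, PG_SHIFT);
}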

[dpdk-dev] rte_mempool_create fails with ENOMEM

2014-12-22 Thread Newman Poborsky
On Sat, Dec 20, 2014 at 2:34 AM, Stephen Hemminger
 wrote:
> You can reserve hugepages on the kernel cmdline (GRUB).

Great, thanks, I'll try that!

Newman

>
> On Fri, Dec 19, 2014 at 12:13 PM, Newman Poborsky 
> wrote:
>>
>> On Thu, Dec 18, 2014 at 9:03 PM, Ananyev, Konstantin <
>> konstantin.ananyev at intel.com> wrote:
>>
>> >
>> >
>> > > -Original Message-
>> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Ananyev,
>> > > Konstantin
>> > > Sent: Thursday, December 18, 2014 5:43 PM
>> > > To: Newman Poborsky; dev at dpdk.org
>> > > Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM
>> > >
>> > > Hi
>> > >
>> > > > -Original Message-
>> > > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Newman Poborsky
>> > > > Sent: Thursday, December 18, 2014 1:26 PM
>> > > > To: dev at dpdk.org
>> > > > Subject: [dpdk-dev] rte_mempool_create fails with ENOMEM
>> > > >
>> > > > Hi,
>> > > >
>> > > > could someone please provide any explanation why sometimes mempool
>> > > > creation fails with ENOMEM?
>> > > >
>> > > > I run my test app several times without any problems and then I
>> > > > start getting an ENOMEM error when creating the mempools that are
>> > > > used for packets. I try to delete everything from /mnt/huge, I
>> > > > increase the number of hugepages, remount /mnt/huge, but nothing
>> > > > helps.
>> > > >
>> > > > There is more than enough memory on the server. I tried to debug
>> > > > the rte_mempool_create() call and it seems that after the server is
>> > > > restarted the free mem segments are bigger than 2MB, but after
>> > > > running the test app several times, it seems that all free mem
>> > > > segments have a size of 2MB, and since I am requesting 8MB for my
>> > > > packet mempool, this fails.  I'm not really sure that this
>> > > > conclusion is correct.
>> > >
>> > > Yes, rte_mempool_create() uses rte_memzone_reserve() to allocate a
>> > > single physically contiguous chunk of memory.
>> > > If no such chunk exists, then it will fail.
>> > > Why physically contiguous?
>> > > The main reason: to make things easier for us, as in that case we
>> > > don't have to worry about the situation when an mbuf crosses a page
>> > > boundary.
>> > > So you can overcome that problem like that:
>> > > Allocate the max amount of memory you would need to hold all mbufs
>> > > in the worst case (all pages physically disjoint) using rte_malloc().
>> >
>> > Actually my mistake: rte_malloc() wouldn't help you here.
>> > You probably need to allocate some external (not managed by EAL)
>> > memory in that case, maybe mmap() with MAP_HUGETLB, or something
>> > similar.
>> >
>> > > Figure out its physical mappings.
>> > > Call rte_mempool_xmem_create().
>> > > You can look at app/test-pmd/mempool_anon.c as a reference.
>> > > It uses the same approach to create a mempool over 4K pages.
>> > >
>> > > We will probably add a similar function into the mempool API
>> > > (create_scatter_mempool or something) or just add a new flag
>> > > (USE_SCATTER_MEM) into rte_mempool_create().
>> > > Though right now it is not there.
>> > >
>> > > Another quick alternative: use 1G pages.
>> > >
>> > > Konstantin
>> >
>>
>>
>> Ok, thanks for the explanation. I understand that this is probably an OS
>> question more than a DPDK one, but is there a way to again allocate
>> contiguous memory for the n-th run of my test app?  It seems that the
>> hugepages get divided/separated into individual 2MB hugepages. Shouldn't
>> the OS's memory management system try to group those hugepages back into
>> one contiguous chunk once my app/process is done?  Again, I know very
>> little about Linux memory management and hugepages, so forgive me if
>> this is a stupid question.
>> Is rebooting the OS the only way to deal with this problem?  Or should I
>> just try to use 1GB hugepages?
>>
>> p.s. Konstantin, sorry for the double reply, I accidentally forgot to
>> include dev list in my first reply  :)
>>
>> Newman
>>
>> >
>> > > >
>> > > > Does anybody have any idea what to check and how running my test app
>> > > > several times affects hugepages?
>> > > >
>> > > > For me, this doesn't make any sense because after the test app
>> > > > exits, resources should be freed, right?
>> > > >
>> > > > This has been driving me crazy for days now. I tried reading a bit
>> > > > more theory about hugepages, but didn't find anything that could
>> > > > help me.
>> > > > Maybe it's something else and completely trivial, but I can't figure
>> > > > it out, so any help is appreciated.
>> > > >
>> > > > Thank you!
>> > > >
>> > > > BR,
>> > > > Newman P.
>> >
>
>


[dpdk-dev] rte_mempool_create fails with ENOMEM

2014-12-19 Thread Newman Poborsky
On Thu, Dec 18, 2014 at 9:03 PM, Ananyev, Konstantin <
konstantin.ananyev at intel.com> wrote:

>
>
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Ananyev, Konstantin
> > Sent: Thursday, December 18, 2014 5:43 PM
> > To: Newman Poborsky; dev at dpdk.org
> > Subject: Re: [dpdk-dev] rte_mempool_create fails with ENOMEM
> >
> > Hi
> >
> > > -Original Message-
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Newman Poborsky
> > > Sent: Thursday, December 18, 2014 1:26 PM
> > > To: dev at dpdk.org
> > > Subject: [dpdk-dev] rte_mempool_create fails with ENOMEM
> > >
> > > Hi,
> > >
> > > could someone please provide any explanation why sometimes mempool
> > > creation fails with ENOMEM?
> > >
> > > I run my test app several times without any problems and then I start
> > > getting an ENOMEM error when creating the mempools that are used for
> > > packets. I try to delete everything from /mnt/huge, I increase the
> > > number of hugepages, remount /mnt/huge, but nothing helps.
> > >
> > > There is more than enough memory on the server. I tried to debug the
> > > rte_mempool_create() call and it seems that after the server is
> > > restarted the free mem segments are bigger than 2MB, but after running
> > > the test app several times, it seems that all free mem segments have a
> > > size of 2MB, and since I am requesting 8MB for my packet mempool, this
> > > fails.  I'm not really sure that this conclusion is correct.
> >
> > Yes, rte_mempool_create() uses rte_memzone_reserve() to allocate a
> > single physically contiguous chunk of memory.
> > If no such chunk exists, then it will fail.
> > Why physically contiguous?
> > The main reason: to make things easier for us, as in that case we don't
> > have to worry about the situation when an mbuf crosses a page boundary.
> > So you can overcome that problem like that:
> > Allocate the max amount of memory you would need to hold all mbufs in
> > the worst case (all pages physically disjoint) using rte_malloc().
>
> Actually my mistake: rte_malloc() wouldn't help you here.
> You probably need to allocate some external (not managed by EAL) memory
> in that case, maybe mmap() with MAP_HUGETLB, or something similar.
>
> > Figure out its physical mappings.
> > Call rte_mempool_xmem_create().
> > You can look at app/test-pmd/mempool_anon.c as a reference.
> > It uses the same approach to create a mempool over 4K pages.
> >
> > We will probably add a similar function into the mempool API
> > (create_scatter_mempool or something) or just add a new flag
> > (USE_SCATTER_MEM) into rte_mempool_create().
> > Though right now it is not there.
> >
> > Another quick alternative: use 1G pages.
> >
> > Konstantin
>


Ok, thanks for the explanation. I understand that this is probably an OS
question more than a DPDK one, but is there a way to again allocate
contiguous memory for the n-th run of my test app?  It seems that the
hugepages get divided/separated into individual 2MB hugepages. Shouldn't the
OS's memory management system try to group those hugepages back into one
contiguous chunk once my app/process is done?  Again, I know very little
about Linux memory management and hugepages, so forgive me if this is a
stupid question.
Is rebooting the OS the only way to deal with this problem?  Or should I
just try to use 1GB hugepages?

p.s. Konstantin, sorry for the double reply, I accidentally forgot to
include dev list in my first reply  :)

Newman

>
> > >
> > > Does anybody have any idea what to check and how running my test app
> > > several times affects hugepages?
> > >
> > > For me, this doesn't make any sense because after the test app exits,
> > > resources should be freed, right?
> > >
> > > This has been driving me crazy for days now. I tried reading a bit more
> > > theory about hugepages, but didn't find anything that could help me.
> > > Maybe it's something else and completely trivial, but I can't figure it
> > > out, so any help is appreciated.
> > >
> > > Thank you!
> > >
> > > BR,
> > > Newman P.
>


[dpdk-dev] rte_mempool_create fails with ENOMEM

2014-12-18 Thread Newman Poborsky
Hi,

could someone please provide any explanation why sometimes mempool creation
fails with ENOMEM?

I run my test app several times without any problems and then I start
getting an ENOMEM error when creating the mempools that are used for
packets. I try to delete everything from /mnt/huge, I increase the number
of hugepages, remount /mnt/huge, but nothing helps.

There is more than enough memory on the server. I tried to debug the
rte_mempool_create() call and it seems that after the server is restarted
the free mem segments are bigger than 2MB, but after running the test app
several times, it seems that all free mem segments have a size of 2MB, and
since I am requesting 8MB for my packet mempool, this fails.  I'm not
really sure that this conclusion is correct.

Does anybody have any idea what to check and how running my test app
several times affects hugepages?

For me, this doesn't make any sense because after the test app exits,
resources should be freed, right?

This has been driving me crazy for days now. I tried reading a bit more
theory about hugepages, but didn't find anything that could help me.
Maybe it's something else and completely trivial, but I can't figure it
out, so any help is appreciated.

Thank you!

BR,
Newman P.


[dpdk-dev] one worker reading multiple ports

2014-11-21 Thread Newman Poborsky
Nice guess :)  After adding a check with rte_mempool_empty(), as soon as I
enable the second port for reading, it shows that the mempool is empty.
Thank you for the help!
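
A back-of-the-envelope sizing note fits here: every RX descriptor of every
queue pins an mbuf for as long as it sits in the ring, so a pool shared by
several ports must cover all rings plus bursts in flight and per-lcore
caches. A hedged sketch with illustrative counts:

#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_PORTS   2
#define NB_RXD     512          /* RX ring size per port */
#define NB_TXD     512          /* TX ring size per port */
#define MAX_BURST  32
#define CACHE_SIZE 256
#define NB_LCORES  2

/* Minimum mbuf count so that enabling a second port does not starve
 * the shared pool (all constants are assumptions, not measurements). */
#define NB_MBUF (NB_PORTS * (NB_RXD + NB_TXD + MAX_BURST) + \
                 NB_LCORES * CACHE_SIZE)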

On Fri, Nov 21, 2014 at 3:44 PM, Bruce Richardson <
bruce.richardson at intel.com> wrote:

> On Fri, Nov 21, 2014 at 03:03:25PM +0100, Newman Poborsky wrote:
> > So, since the mempool is multi-consumer (by default), if one is used to
> > configure queues on multiple NICs that have different socket owners, then
> > mbuf allocation will fail? But if 2 NICs have the same socket owner,
> > everything should work fine?  Since I'm talking about 2 ports on the same
> > NIC, they must have the same owner, so RX should work with RX queues
> > configured with the same mempool, right? But in my case it doesn't, so I
> > guess I'm missing something.
>
> Actually, the mempools will work with NICs on multiple sockets - it's just
> that performance is likely to suffer due to QPI usage. The mempools being
> on one socket or the other is not going to break your application.
>
> >
> > Any idea how I can troubleshoot why allocation fails with one mempool and
> > works fine with each queue having its own mempool?
>
> At a guess, I'd say that your mempools just aren't big enough. Try doubling
> the size of the mempool in the single-pool case and see if it helps things.
>
> /Bruce
>
> >
> > Thank you,
> >
> > Newman
> >
> > On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall 
> > wrote:
> >
> > > On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > > > Thank you for your answer.
> > > >
> > > > I just realized that the reason rte_eth_rx_burst() returns 0 is
> > > > because inside ixgbe_recv_pkts() this fails:
> > > > nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL
> > > >
> > > > Does this mean that every RX queue should have its own rte_mempool?
> > > > If so, are there any optimal values for: number of RX descriptors,
> > > > per-queue rte_mempool size, number of hugepages (from what I
> > > > understand, these 3 are correlated)?
> > > >
> > > > If I'm wrong, please explain why.
> > > >
> > > > Thanks!
> > > >
> > > > BR,
> > > > Newman
> > >
> > > Newman,
> > >
> > > Mempools are created per NUMA node (ordinarily this means per
> > > processor socket if sockets > 1).
> > >
> > > When doing Tx / Rx Queue Setup, one should determine the socket which
> > > owns the given PCI NIC, and try to use memory on that same socket to
> > > handle traffic for that NIC and Queues.
> > >
> > > So, for N cards with Q * N Tx / Rx queues, you only need S mempools.
> > >
> > > Then each of the Q * N queues will use the mempool from the socket
> > > closest to the card.
> > >
> > > Matthew.
> > >
>


[dpdk-dev] one worker reading multiple ports

2014-11-21 Thread Newman Poborsky
So, since the mempool is multi-consumer (by default), if one is used to
configure queues on multiple NICs that have different socket owners, then
mbuf allocation will fail? But if 2 NICs have the same socket owner,
everything should work fine?  Since I'm talking about 2 ports on the same
NIC, they must have the same owner, so RX should work with RX queues
configured with the same mempool, right? But in my case it doesn't, so I
guess I'm missing something.

Any idea how I can troubleshoot why allocation fails with one mempool and
works fine with each queue having its own mempool?

Thank you,

Newman

On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall 
wrote:

> On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > Thank you for your answer.
> >
> > I just realized that the reason rte_eth_rx_burst() returns 0 is because
> > inside ixgbe_recv_pkts() this fails:
> > nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL
> >
> > Does this mean that every RX queue should have its own rte_mempool?  If
> > so, are there any optimal values for: number of RX descriptors, per-queue
> > rte_mempool size, number of hugepages (from what I understand, these 3
> > are correlated)?
> >
> > If I'm wrong, please explain why.
> >
> > Thanks!
> >
> > BR,
> > Newman
>
> Newman,
>
> Mempools are created per NUMA node (ordinarily this means per processor
> socket if sockets > 1).
>
> When doing Tx / Rx Queue Setup, one should determine the socket which owns
> the given PCI NIC, and try to use memory on that same socket to handle
> traffic for that NIC and Queues.
>
> So, for N cards with Q * N Tx / Rx queues, you only need S mempools.
>
> Then each of the Q * N queues will use the mempool from the socket closest
> to the card.
>
> Matthew.
>
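
A sketch of Matthew's per-socket scheme, assuming the DPDK 1.8 ethdev API
(the pools[] array and its population are illustrative): at queue setup,
pick the mempool belonging to the NUMA node that owns the NIC.

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* One pool per NUMA node, assumed to be created elsewhere. */
static struct rte_mempool *pools[RTE_MAX_NUMA_NODES];

static int
setup_rx_queue(uint8_t port_id, uint16_t queue_id, uint16_t nb_rxd,
               const struct rte_eth_rxconf *rx_conf)
{
        int socket = rte_eth_dev_socket_id(port_id);

        if (socket < 0)
                socket = 0;     /* socket unknown: fall back to node 0 */
        return rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd,
                                      socket, rx_conf, pools[socket]);
}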


[dpdk-dev] one worker reading multiple ports

2014-11-20 Thread Newman Poborsky
Thank you for your answer.

I just realized that the reason rte_eth_rx_burst() returns 0 is because
inside ixgbe_recv_pkts() this fails:
nmb = rte_rxmbuf_alloc(rxq->mb_pool);  => nmb is NULL

Does this mean that every RX queue should have its own rte_mempool?  If so,
are there any optimal values for: number of RX descriptors, per-queue
rte_mempool size, number of hugepages (from what I understand, these 3 are
correlated)?

If I'm wrong, please explain why.

Thanks!

BR,
Newman

On Thu, Nov 20, 2014 at 9:56 AM, De Lara Guarch, Pablo <
pablo.de.lara.guarch at intel.com> wrote:

> Hi Newman,
>
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Newman Poborsky
> > Sent: Thursday, November 20, 2014 8:34 AM
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] one worker reading multiple ports
> >
> > Hi,
> >
> > is it possible to use one worker thread (one lcore) to read packets from
> > multiple ports?
> >
> > When I start 2 workers and assign each one to read from different ports
> > (with rte_eth_rx_burst()) everything works fine, but if I assign one
> > worker to read packets from 2 ports, rte_eth_rx_burst() returns 0 as if
> > no packets are read.
>
> Yes, it is totally possible. The only problem would be if you try to use
> multiple threads to read/write on one port, in which case you should use
> multiple queues.
> Look at the l3fwd app, for instance. You can use just a single core to
> handle packets on multiple ports.
>
> Pablo
> >
> > Is there any reason for this kind of behaviour?
> >
> > Thanks!
> >
> > Br,
> > Newman P.
>


[dpdk-dev] one worker reading multiple ports

2014-11-20 Thread Newman Poborsky
Hi,

is it possible to use one worker thread (one lcore) to read packets from
multiple ports?

When I start 2 workers and assign each one to read from different ports
(with rte_eth_rx_burst()) everything works fine, but if I assign one
worker to read packets from 2 ports, rte_eth_rx_burst() returns 0 as if no
packets are read.

Is there any reason for this kind of behaviour?

Thanks!

Br,
Newman P.
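
For reference, a minimal single-lcore loop over several ports, along the
lines of Pablo's answer in the reply above (burst size and the
free-instead-of-forward processing are illustrative; DPDK 1.8 API assumed):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MAX_BURST 32

static void
rx_loop(const uint8_t *ports, int nb_ports)
{
        struct rte_mbuf *bufs[MAX_BURST];

        for (;;) {
                int p;

                /* One lcore polls queue 0 of each port in turn. */
                for (p = 0; p < nb_ports; p++) {
                        uint16_t nb = rte_eth_rx_burst(ports[p], 0,
                                                       bufs, MAX_BURST);
                        uint16_t i;

                        for (i = 0; i < nb; i++)
                                rte_pktmbuf_free(bufs[i]); /* placeholder */
                }
        }
}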


[dpdk-dev] building shared library

2014-11-11 Thread Newman Poborsky
It works!!!  Thanks everybody!

I wasn't using '-Wl,--no-as-needed' while linking, so no PMD driver was
linked in and hence no constructor was called. After adding this option, it
finally works.

Again, thank you very much, I could never have figured out all these steps
on my own!

BR,
Newman
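
For context, a simplified sketch of the mechanism involved (modeled on what
DPDK 1.8's PMD_REGISTER_DRIVER macro expands to; names are illustrative):
each PMD registers itself from a constructor that runs when its library is
loaded. With --as-needed, the linker drops the "unused" PMD library, so the
constructor never runs and the EAL driver list stays empty.

#include <rte_dev.h>

static struct rte_driver my_pmd_drv = {
        .type = PMD_PDEV,
        .name = "my_pmd",       /* illustrative */
};

/* Runs before main() whenever the object is linked in or dlopen()ed. */
static void __attribute__((constructor))
devinitfn_my_pmd_drv(void)
{
        rte_eal_driver_register(&my_pmd_drv);
}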

On Tue, Nov 11, 2014 at 4:54 PM, Neil Horman  wrote:

> On Tue, Nov 11, 2014 at 03:26:04PM +, De Lara Guarch, Pablo wrote:
> >
> >
> > > -Original Message-
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Newman Poborsky
> > > Sent: Tuesday, November 11, 2014 3:17 PM
> > > To: Gonzalez Monroy, Sergio
> > > Cc: dev at dpdk.org
> > > Subject: Re: [dpdk-dev] building shared library
> > >
> > > Hi,
> > >
> > > after building the DPDK libs as shared libraries and linking against
> > > them, I'm back to my first problem: rte_eal_driver_register() never
> > > gets called and my app crashes since there are no drivers registered.
> > > As previously mentioned, in a regular DPDK user app this function is
> > > called for every driver before main(). How?
> >
> > If I am not wrong here, you have to use the -d option to specify the
> > driver you want to use.
> >
> > Btw, the option you were looking for can be found in
> config/common_linuxapp or config/common_bsdapp.
> >
>
> Alternatively, when you link your application you can specify
> -llibrte_pmd_ and your application should call all the constructors when
> the dynamic loader hits your binary's DT_NEEDED table.  That's how you can
> avoid the command line specification.
>
> Neil
>
> > Pablo
> > >
> > > BR,
> > > Newman
> > >
> > > On Tue, Nov 11, 2014 at 3:44 PM, Newman Poborsky
> > > 
> > > wrote:
> > >
> > > > Hi Sergio,
> > > >
> > > > no, that sounds good, thank you.  Since I'm not that familiar with
> > > > the DPDK build system, where should this option be set? In the 'lib'
> > > > folder's Makefile?
> > > >
> > > > Thank you once again!
> > > >
> > > > BR,
> > > > Newman
> > > >
> > > > On Tue, Nov 11, 2014 at 3:18 PM, Sergio Gonzalez Monroy <
> > > > sergio.gonzalez.monroy at intel.com> wrote:
> > > >
> > > >> On Tue, Nov 11, 2014 at 01:10:29PM +0100, Newman Poborsky wrote:
> > > >> > Hi,
> > > >> >
> > > >> > I want to build one .so file with my app (it contains an API that
> > > >> > I want to call through JNI) and all the DPDK libs that I use in my
> > > >> > app.
> > > >> >
> > > >> > As I've already mentioned, when I build and start my dpdk app as a
> > > >> > standalone application, I can see that before main() is called,
> > > >> > there is a call to the 'rte_eal_driver_register()' function for
> > > >> > every driver. When I build the .so file, this does not happen and
> > > >> > no driver is registered, so everything after rte_eal_init() fails.
> > > >> >
> > > >> Hi Newman,
> > > >>
> > > >> AFAIK the current build system does not support that.
> > > >>
> > > >> You can build DPDK as shared libs by setting the following config
> > > >> option: CONFIG_RTE_BUILD_SHARED_LIB=y
> > > >>
> > > >> Then build your app as an .so that links against the DPDK libs, so
> > > >> you have explicit dependencies (such dependencies should show with
> > > >> ldd).
> > > >>
> > > >> Is there any reason why you want everything to be a single .so ?
> > > >>
> > > >> I don't know much about how Java loads DSOs but I reckon that it
> > > >> must resolve explicit dependencies such as libc.
> > > >>
> > > >> Thanks,
> > > >> Sergio
> > > >>
> > > >>
> > > >> >
> > > >> > BR,
> > > >> > Newman
> > > >> >
> > > >>
> > > >
> > > >
>


[dpdk-dev] building shared library

2014-11-11 Thread Newman Poborsky
Hi,

after building the DPDK libs as shared libraries and linking against them,
I'm back to my first problem: rte_eal_driver_register() never gets called
and my app crashes since there are no drivers registered.  As previously
mentioned, in a regular DPDK user app this function is called for every
driver before main(). How?

BR,
Newman

On Tue, Nov 11, 2014 at 3:44 PM, Newman Poborsky 
wrote:

> Hi Sergio,
>
> no, that sounds good, thank you.  Since I'm not that familiar with the DPDK
> build system, where should this option be set? In the 'lib' folder's Makefile?
>
> Thank you once again!
>
> BR,
> Newman
>
> On Tue, Nov 11, 2014 at 3:18 PM, Sergio Gonzalez Monroy <
> sergio.gonzalez.monroy at intel.com> wrote:
>
>> On Tue, Nov 11, 2014 at 01:10:29PM +0100, Newman Poborsky wrote:
>> > Hi,
>> >
>> > I want to build one .so file with my app (it contains an API that I want
>> > to call through JNI) and all the DPDK libs that I use in my app.
>> >
>> > As I've already mentioned, when I build and start my dpdk app as a
>> > standalone application, I can see that before main() is called, there is
>> > a call to the 'rte_eal_driver_register()' function for every driver.
>> > When I build the .so file, this does not happen and no driver is
>> > registered, so everything after rte_eal_init() fails.
>> >
>> Hi Newman,
>>
>> AFAIK the current build system does not support that.
>>
>> You can build DPDK as shared libs by setting the following config option:
>> CONFIG_RTE_BUILD_SHARED_LIB=y
>>
>> Then build your app as an .so that links against DPDK libs, so you have
>> explicit dependencies (such dependencies should show with ldd).
>>
>> Is there any reason why you want everything to be a single .so ?
>>
>> I don't know much about how Java loads DSOs but I reckon that it must
>> resolve explicit dependencies such as libc.
>>
>> Thanks,
>> Sergio
>>
>>
>> >
>> > BR,
>> > Newman
>> >
>>
>
>


[dpdk-dev] building shared library

2014-11-11 Thread Newman Poborsky
Hi Sergio,

no, that sounds good, thank you.  Since I'm not that familiar with the DPDK
build system, where should this option be set? In the 'lib' folder's Makefile?

Thank you once again!

BR,
Newman

On Tue, Nov 11, 2014 at 3:18 PM, Sergio Gonzalez Monroy <
sergio.gonzalez.monroy at intel.com> wrote:

> On Tue, Nov 11, 2014 at 01:10:29PM +0100, Newman Poborsky wrote:
> > Hi,
> >
> > I want to build one .so file with my app (it contains an API that I want
> > to call through JNI) and all the DPDK libs that I use in my app.
> >
> > As I've already mentioned, when I build and start my dpdk app as a
> > standalone application, I can see that before main() is called, there is
> > a call to the 'rte_eal_driver_register()' function for every driver.
> > When I build the .so file, this does not happen and no driver is
> > registered, so everything after rte_eal_init() fails.
> >
> Hi Newman,
>
> AFAIK the current build system does not support that.
>
> You can build DPDK as shared libs by setting the following config option:
> CONFIG_RTE_BUILD_SHARED_LIB=y
>
> Then build your app as an .so that links against DPDK libs, so you have
> explicit dependencies (such dependencies should show with ldd).
>
> Is there any reason why you want everything to be a single .so ?
>
> I don't know much about how Java loads DSOs but I reckon that it must
> resolve explicit dependencies such as libc.
>
> Thanks,
> Sergio
>
>
> >
> > BR,
> > Newman
> >
>


[dpdk-dev] building shared library

2014-11-11 Thread Newman Poborsky
Hi,

I want to build one .so file with my app (it contains an API that I want to
call through JNI) and all the DPDK libs that I use in my app.

As I've already mentioned, when I build and start my dpdk app as a
standalone application, I can see that before main() is called, there is a
call to the 'rte_eal_driver_register()' function for every driver. When I
build the .so file, this does not happen and no driver is registered, so
everything after rte_eal_init() fails.


BR,
Newman

On Tue, Nov 11, 2014 at 11:37 AM, Gonzalez Monroy, Sergio <
sergio.gonzalez.monroy at intel.com> wrote:

> Hi  Newman,
>
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Newman Poborsky
> > Sent: Monday, November 10, 2014 2:23 PM
> >
> > Hi,
> >
> > is it possible to build a dpdk app as a shared library?
> >
> > I tried to put 'include $(RTE_SDK)/mk/rte.extshared.mk' in my Makefile
> > (and define SHARED) and it builds the .so lib, but all rte_* symbols are
> > undefined.
> >
> Can you elaborate a bit on how you are building DPDK and your app?
> Is your objective to build a single .so containing your app and all DPDK
> libs?
> Or do you want your app to have a link dependency on DPDK shared libs?
>
>


[dpdk-dev] building shared library

2014-11-11 Thread Newman Poborsky
Hi,

sure, here it is:
ldd libdpdk-api.so
linux-vdso.so.1 =>  (0x7fff3fffe000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f583dd99000)
/lib64/ld-linux-x86-64.so.2 (0x7f583e5d4000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
(0x7f583db7a000)

This is a library built with a Makefile that has the following options:
RTE_BUILD_SHARED_LIB=y
CFLAGS += -fPIC
LDLIBS += -lrte_eal -lrte_mbuf -lrte_cmdline -lrte_timer  -lrte_mempool
-lrte_ring  -lrte_pmd_ring -lethdev -lrte_malloc
include $(RTE_SDK)/mk/rte.extshared.mk

There are no missing libraries.

I also had to add the '-fPIC' flag to the Makefiles of all the lrte_* libs
above.  Is this the correct way to build a shared lib? Am I missing something?

When I build it as a regular dpdk app (like the helloworld example), the ldd
output is this:
ldd dpdk-api
linux-vdso.so.1 =>  (0x7fffacbfe000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7ffe91b2b000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7ffe91922000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7ffe9171e000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7ffe91139000)
/lib64/ld-linux-x86-64.so.2 (0x7ffe92042000)
libpcap.so.1 => /usr/local/lib/libpcap.so.1 (0x7ffe90ef8000)

Thank you for any help!

BR,
Newman P.


On Tue, Nov 11, 2014 at 4:28 AM, Chi, Xiaobo (NSN - CN/Hangzhou) <
xiaobo.chi at nsn.com> wrote:

> Hi,
> I am using a DPDK-based shared lib, but never met such problems. Can you
> please share the result of "ldd x.so" and check whether all the
> libraries it depends on are available?
>
> brgs,
> chi xiaobo
>
> -Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of ext Newman Poborsky
> Sent: Monday, November 10, 2014 10:23 PM
> To: dev at dpdk.org
> Subject: [dpdk-dev] building shared library
>
> Hi,
>
> is it possible to build a dpdk app as a shared library?
>
> I tried to put 'include $(RTE_SDK)/mk/rte.extshared.mk' in my Makefile
> (and define SHARED) and it builds the .so lib, but all rte_* symbols are
> undefined.
>
> After that i tried adding:
> LDLIBS += -lrte_eal -lrte_mbuf -lrte_cmdline -lrte_timer  -lrte_mempool
> -lrte_ring  -lrte_pmd_ring -lethdev -lrte_malloc
>
> And now almost all symbols in .so file are defined (missing only
> rte_hexdump).
>
> I thought this was gonna be it. But after using this library, PCI probing
> fails since I don't have any PMD drivers registered, and
> rte_eth_dev_count() returns 0.
>
> But how are drivers supposed to be registered?
>
> When I use gdb with regular dpdk app (not shared library), I can see this:
> #0  0x0046fab0 in rte_eal_driver_register ()
> #1  0x00418fb7 in devinitfn_bond_drv ()
> #2  0x004f15ed in __libc_csu_init ()
> #3  0x76efee55 in __libc_start_main (main=0x41ee65 <main>, argc=1,
> argv=0x7fffe4f8, init=0x4f15a0 <__libc_csu_init>, fini=<optimized out>,
> rtld_fini=<optimized out>, stack_end=0x7fffe4e8) at
> libc-start.c:246
> #4  0x0041953c in _start ()
>
>
> Ok, if I'm not mistaken, it seems driver registration is called before
> main(). How is this accomplished? Because in the shared library build, I
> don't have this before main(), and after rte_eal_init() (since the driver
> list is empty) everything else fails.
>
> Any suggestions please? I'd really appreciate it...
>
> BR,
> Newman P.
>


[dpdk-dev] building shared library

2014-11-10 Thread Newman Poborsky
Hi,

is it possible to build a dpdk app as a shared library?

I tried to put 'include $(RTE_SDK)/mk/rte.extshared.mk' in my Makefile (and
define SHARED) and it builds .so lib, but all rte_* symbols are undefined.

After that i tried adding:
LDLIBS += -lrte_eal -lrte_mbuf -lrte_cmdline -lrte_timer  -lrte_mempool
-lrte_ring  -lrte_pmd_ring -lethdev -lrte_malloc

And now almost all symbols in .so file are defined (missing only
rte_hexdump).

I thought this was gonna be it. But after using this library, PCI probing
fails since I don't have any PMD drivers registered, and
rte_eth_dev_count() returns 0.

But how are drivers supposed to be registered?

When I use gdb with regular dpdk app (not shared library), I can see this:
#0  0x0046fab0 in rte_eal_driver_register ()
#1  0x00418fb7 in devinitfn_bond_drv ()
#2  0x004f15ed in __libc_csu_init ()
#3  0x76efee55 in __libc_start_main (main=0x41ee65 <main>, argc=1,
argv=0x7fffe4f8, init=0x4f15a0 <__libc_csu_init>, fini=<optimized out>,
rtld_fini=<optimized out>, stack_end=0x7fffe4e8) at
libc-start.c:246
#4  0x0041953c in _start ()


Ok, if I'm not mistaken, it seems driver registration is called before
main(). How is this accomplished? Because in the shared library build, I
don't have this before main(), and after rte_eal_init() (since the driver
list is empty) everything else fails.

Any suggestions please? I'd really appreciate it...

BR,
Newman P.


[dpdk-dev] flow director - perfect match filter

2014-10-31 Thread Newman Poborsky
Hi,

after setting filter.l4type to RTE_FDIR_L4TYPE_UDP, packets that were
supposed to be matched were (finally!) matched, so I am more than grateful!
Thank you!!!

What do you mean by 'sending an IP packet'? Is ICMP an IP packet (has no
layer 4), or?

Once again, thank you!

BR,
Newman P.
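
For reference, the working combination reported above as one compact sketch
(legacy DPDK 1.x flow director API; port, queue and soft_id values are
illustrative):

#include <string.h>
#include <arpa/inet.h>
#include <rte_ethdev.h>

static int
add_src_ip_udp_filter(uint8_t portid, uint32_t ipv4_src, uint8_t queue)
{
        struct rte_fdir_masks fdir_masks;
        struct rte_fdir_filter filter;
        int ret;

        /* Mask: compare the source IP only. */
        memset(&fdir_masks, 0, sizeof(fdir_masks));
        fdir_masks.src_ipv4_mask = 0xFFFFFFFF;
        ret = rte_eth_dev_fdir_set_masks(portid, &fdir_masks);
        if (ret != 0)
                return ret;

        /* Filter: UDP (the missing piece), IPv4, given source address. */
        memset(&filter, 0, sizeof(filter));
        filter.ip_src.ipv4_addr = htonl(ipv4_src);
        filter.l4type = RTE_FDIR_L4TYPE_UDP;
        filter.iptype = RTE_FDIR_IPTYPE_IPV4;
        return rte_eth_dev_fdir_add_perfect_filter(portid, &filter,
                                                   3 /* soft_id */, queue, 0);
}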

On Fri, Oct 31, 2014 at 1:48 AM, Wu, Jingjing  wrote:

> Hi, Poborsky
>
> Please try sending an IP packet (not UDP, TCP or SCTP).
>
> Or you can try to set filter.l4type = RTE_FDIR_L4TYPE_TCP when adding a
> filter.
>
> I guess it is because the NIC already classifies your packet as a TCP one.
>
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Newman Poborsky
> > Sent: Thursday, October 30, 2014 11:18 PM
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] flow director - perfect match filter
> >
> > Hi,
> >
> > I'm not sure this is the right place to post a question like this, but
> > I've been stuck with the same problem for days now.
> >
> > I'm trying to use flow director perfect match filters and so far I
> > haven't been able to get them working. I have tried writing my own
> > simple app (based on the given examples) and I tried adding a filter
> > using something as simple as:
> >
> > //setting the mask to watch for src IP
> > memset(&fdir_masks, 0x00, sizeof(struct rte_fdir_masks));
> > fdir_masks.src_ipv4_mask = 0xFFFFFFFF;
> > ret = rte_eth_dev_fdir_set_masks(portid, &fdir_masks);
> > //adding filter
> > memset(&filter, 0, sizeof(struct rte_fdir_filter));
> > filter.ip_src.ipv4_addr = htonl(ipv4_src);
> > filter.l4type = RTE_FDIR_L4TYPE_NONE;
> > filter.iptype = RTE_FDIR_IPTYPE_IPV4;
> > ret = rte_eth_dev_fdir_add_perfect_filter(portid, &filter, 3, queue, 0);
> > ...
> >
> >
> > After running this code and using tcpreplay to push traffic to the
> > interface, I get all misses in the stats and no matches.  As far as I
> > understand, to match only on src IP, all rte_fdir_filter elements should
> > be set to 0, and only src_ip should be masked (with 1's).
> >
> > After this I tried running the testpmd application, but also no luck.
> > The commands I used are:
> > set_masks_filter 0 only_ip_flow 0 src_mask 0x 0x dst_mask
> > 0x 0x flexbytes 0 vlan_id 0 vlan_prio 0
> > add_perfect_filter 0 ip src 10.10.10.10 0 dst 0.0.0.0 0 flexbytes 0
> > vlan 0 queue 1 soft 3
> >
> > The filter is added, but after sending packets to this interface, I only
> > see misses in the stats.
> >
> > What am I doing wrong?  I've read the docs and looked at the API
> > reference but didn't find anything that could help me.
> >
> > Configuring the flow director filter using ethtool and sending the same
> > packets results in matches, so the packets I'm sending should be matched.
> >
> > Thank you for any help!
> >
> > Newman P.
>


[dpdk-dev] flow director - perfect match filter

2014-10-30 Thread Newman Poborsky
Hi,

I'm not sure this is the right place to post a question like this, but I've
been stuck with the same problem for days now.

I'm trying to use flow director perfect match filters and so far I haven't
been able to get them working. I have tried writing my own simple app (based
on the given examples) and I tried adding a filter using something as simple as:

//setting the mask to watch for src IP
memset(&fdir_masks, 0x00, sizeof(struct rte_fdir_masks));
fdir_masks.src_ipv4_mask = 0xFFFFFFFF;
ret = rte_eth_dev_fdir_set_masks(portid, &fdir_masks);
//adding filter
memset(&filter, 0, sizeof(struct rte_fdir_filter));
filter.ip_src.ipv4_addr = htonl(ipv4_src);
filter.l4type = RTE_FDIR_L4TYPE_NONE;
filter.iptype = RTE_FDIR_IPTYPE_IPV4;
ret = rte_eth_dev_fdir_add_perfect_filter(portid, &filter, 3, queue, 0);
...


After running this code and using tcpreplay to push traffic to the
interface, I get all misses in the stats and no matches.  As far as I
understand, to match only on src IP, all rte_fdir_filter elements should be
set to 0, and only src_ip should be masked (with 1's).

After this I tried running the testpmd application, but also no luck. The
commands I used are:
set_masks_filter 0 only_ip_flow 0 src_mask 0x 0x dst_mask
0x 0x flexbytes 0 vlan_id 0 vlan_prio 0
add_perfect_filter 0 ip src 10.10.10.10 0 dst 0.0.0.0 0 flexbytes 0 vlan 0
queue 1 soft 3

The filter is added, but after sending packets to this interface, I only see
misses in the stats.

What am I doing wrong?  I've read the docs and looked at the API reference
but didn't find anything that could help me.

Configuring the flow director filter using ethtool and sending the same
packets results in matches, so the packets I'm sending should be matched.

Thank you for any help!

Newman P.