Hi all,
I am writing a DPDK application that will receive packets from one
interface and process them. It does not forward packets in the traditional
sense. However, I do need to process them at full line rate and therefore
need more than one core. The packets can be somewhat generic in nature a
queue per core into the NIC and then let NIC do round
> robin on those queues blindly. What's the harm if this feature is added,
> let those who want to use it, use, and those who hate it or think it is
> useless, ignore.
>
> Regards
> -Prashant
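For what it's worth, what the ixgbe hardware gives you out of the box is RSS
rather than blind round robin: it hashes the packet headers and spreads flows
across per-core RX queues. A minimal sketch of the rte_eth_conf setup (macro
names are the old 1.x ones and gain an RTE_ETH_ prefix in newer DPDK;
NB_RX_QUEUES and the hash fields are just placeholder choices):

#include <rte_ethdev.h>

#define NB_RX_QUEUES 4  /* assumption: one RX queue per worker lcore */

static struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,                        /* use the driver's default key */
            .rss_hf  = ETH_RSS_IPV4 | ETH_RSS_IPV6, /* hash on IP addresses */
        },
    },
};

/* rte_eth_dev_configure(port_id, NB_RX_QUEUES, 1, &port_conf);
 * then call rte_eth_rx_queue_setup() once per queue and have each
 * worker lcore poll its own queue with rte_eth_rx_burst(). */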
Has anyone used the "port config all reta (hash,queue)" command of testpmd
with any success?
I haven't found much documentation on it.
Can someone provide an example of why and how it was used?
Regards and Happy New Year,
Michael Quicquaro
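In case it helps: as far as I can tell the command just rewrites entries of
the RSS redirection table, i.e. "port config all reta (hash,queue)" makes
RETA bucket 'hash' deliver to RX queue 'queue'; people use it to rebalance
flows when the default table leaves some queues idle. Below is a rough C
equivalent against the rte_eth_rss_reta_entry64 flavour of the API (older
releases used a single rte_eth_rss_reta struct, so treat this as a sketch,
not gospel); it spreads all RETA buckets round-robin over nb_queues:

#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta_round_robin(uint16_t port_id, uint16_t nb_queues)
{
    struct rte_eth_dev_info info;
    uint16_t i;

    rte_eth_dev_info_get(port_id, &info);

    /* 128 buckets on 82599/X520-class NICs */
    struct rte_eth_rss_reta_entry64 reta[info.reta_size / RTE_RETA_GROUP_SIZE];
    memset(reta, 0, sizeof(reta));

    for (i = 0; i < info.reta_size; i++) {
        reta[i / RTE_RETA_GROUP_SIZE].mask |= 1ULL << (i % RTE_RETA_GROUP_SIZE);
        reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] = i % nb_queues;
    }
    return rte_eth_dev_rss_reta_update(port_id, reta, info.reta_size);
}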
Hello all,
I have built DPDK (and pktgen-dpdk) on a couple of RHEL 6.4 servers. This
distribution ships with glibc 2.12.
I have read that glibc 2.7 is needed for coreset() functionality. My
testing, at the moment, only requires one CPU core.
I assume that upgrading to glibc 2.7 would probably brea
1.2.
> Regards,
> /Bruce
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Michael Quicquaro
> > Sent: Thursday, November 21, 2013 6:26 PM
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] glibc 2.1
> >
> > Hello
wrote:
> On 12/31/2013 08:45 PM, Michael Quicquaro wrote:
>
>> Has anyone used the "port config all reta (hash,queue)" command of testpmd
>> with any success?
>>
>> I haven't found much documentation on it.
>>
>> Can someone provide an example
Hello,
My hardware is a Dell PowerEdge R820:
4x Intel Xeon E5-4620 2.20GHz 8 core
16GB RDIMM 1333 MHz Dual Rank, x4 - Quantity 16
Intel X520 DP 10Gb DA/SFP+
So in summary 32 cores @ 2.20GHz and 256GB RAM
... plenty of horsepower.
I've reserved 16 x 1GB hugepages.
I am configuring only one interfac
Why are there so many RX-errors?
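One thing that helps narrow this kind of problem down is checking which
counter is actually climbing: missed packets (RX ring overrun because the
cores aren't draining fast enough), real hardware errors, or mbuf pool
exhaustion. A small sketch, using the field names from current rte_ethdev
(very old releases fold missed packets into ierrors):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
dump_rx_drop_counters(uint16_t port_id)
{
    struct rte_eth_stats st;

    if (rte_eth_stats_get(port_id, &st) != 0)
        return;
    /* imissed: ring overrun; ierrors: bad packets; rx_nombuf: mempool empty */
    printf("ipackets=%" PRIu64 " imissed=%" PRIu64 " ierrors=%" PRIu64
           " rx_nombuf=%" PRIu64 "\n",
           st.ipackets, st.imissed, st.ierrors, st.rx_nombuf);
}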
On Thu, Jan 9, 2014 at 9:35 PM, Daniel Kan wrote:
> Thanks, Sy Jong. I couldn't reproduce your outcome on dpdk 1.5.1 with
> ixgbe. As I mentioned in the earlier email, rxmode.mq_mode defaults to 0
> (i.e. ETH_MQ_RX_NONE); it should be set to ETH_MQ_RX_RSS.
>
> Da
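A quick way to catch this is to read the hash configuration back after
configuring the port; if you forget to set mq_mode the device quietly runs
with RSS off. A sketch (assuming a driver that implements
rss_hash_conf_get, which ixgbe does):

#include <stdio.h>
#include <rte_ethdev.h>

static void
check_rss_enabled(uint16_t port_id)
{
    struct rte_eth_rss_conf rss = { .rss_key = NULL };

    if (rte_eth_dev_rss_hash_conf_get(port_id, &rss) == 0)
        printf("port %u rss_hf=0x%llx (0 means RSS is effectively off)\n",
               (unsigned)port_id, (unsigned long long)rss.rss_hf);
}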
Thank you, everyone, for all of your suggestions, but unfortunately I'm
still having the problem.
I have reduced the test down to using 2 cores (one is the master core), both
of which are on the socket to which the NIC's PCI slot is attached. I am
running in rxonly mode, so I am basically just co
more you have, more it loses, actually). In
> my case this situation occurred when the average burst size was less than 20
> packets or so. I'm not sure of the reason for this behavior, but I
> observed it on several applications on Intel 82599 10Gb cards.
>
> Regards, Dmitry
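If anyone wants to check whether they are in that small-burst regime, a crude
way is to log the average rte_eth_rx_burst() return size in the polling loop.
Sketch only (rxonly-style loop; BURST_SIZE and the queue id are placeholders):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
rx_measure_loop(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    uint64_t calls = 0, pkts = 0;
    uint16_t i, n;

    for (;;) {
        n = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);
        if (n == 0)
            continue;
        calls++;
        pkts += n;
        for (i = 0; i < n; i++)
            rte_pktmbuf_free(bufs[i]);   /* rxonly: just drop */
        if ((calls & 0xFFFFF) == 0)      /* print every ~1M non-empty polls */
            printf("avg burst = %.1f\n", (double)pkts / calls);
    }
}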