[dpdk-dev] Huge pages to be allocated based on number of mbufs

2016-03-14 Thread Saurabh Mishra
Hi, We are planning to support virtio, vmxnet3, ixgbe, i40e, bnx2x and SR-IOV on some of them with DPDK. We have seen that even if we give the correct number of mbufs for the number of hugepages reserved, rte_eth_tx_queue_setup() may still fail with not enough memory (I saw this on i40evf but worked
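A minimal sketch of the failing call under discussion, assuming DPDK 2.2-era APIs; the port id, queue id, and descriptor count are illustrative. The relevant point is that the TX descriptor ring is allocated from hugepage memory separately from the mbuf pool, so the setup can return -ENOMEM even when the pool itself was sized correctly:

#include <stdio.h>
#include <errno.h>
#include <rte_ethdev.h>

static int
setup_tx(uint8_t port_id)
{
        /* The ring memory comes from hugepages, not from the mbuf pool. */
        int ret = rte_eth_tx_queue_setup(port_id, 0 /* queue */, 512 /* nb_txd */,
                        rte_eth_dev_socket_id(port_id), NULL);

        if (ret == -ENOMEM)
                printf("not enough hugepage memory for the TX ring\n");
        return ret;
}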

[dpdk-dev] [dpdk-users] DPDK i40evf problem in receiving packet

2016-02-10 Thread Saurabh Mishra
...strictly prohibited. If you have received this communication in error, please delete it and email confirmation to the sender. Thank You. > On Wed, Feb 10, 2016 at 6:30 AM, Saurabh Mishra wrote: > Hi Qian --

[dpdk-dev] DPDK i40evf problem in receiving packet

2016-02-09 Thread Saurabh Mishra
Hi Qian -- Any suggestions? This is a bit urgent. /Saurabh On Sat, Feb 6, 2016 at 9:22 AM, Saurabh Mishra wrote: > Hi Qian -- > > Here's the data from the host: > [root at oscompute3 ~]# ethtool -i p3p1 > driver: i40e > version: 1.0.11-k > firmware-

[dpdk-dev] DPDK i40evf problem in receiving packet

2016-02-06 Thread Saurabh Mishra
> Thanks > Qian > > -----Original Message----- > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Saurabh Mishra > Sent: Saturday, February 06, 2016 6:33 AM > To: dev at dpdk.org; users at dpdk.org > Subject: [dpdk-dev] DPDK i40evf problem in receiving packet > > Hi, >

[dpdk-dev] DPDK i40evf problem in receiving packet

2016-02-05 Thread Saurabh Mishra
Hi, I'm seeing two problems: 1) when we use our kernel '3.10.88-8.0.0.0.6', we only receive the first packet and none after that. However, when I use centos7.0, l2fwd is able to receive all the packets. 2) I've also seen that on centos7.0, symmetric_mp itself is not

[dpdk-dev] DPDK ixgbevf multi-queue disabled

2016-02-03 Thread Saurabh Mishra
Regards, > Choi, Sy Jong > Platform Application Engineer > > -----Original Message----- > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Saurabh Mishra > Sent: Thursday, February 04, 2016 3:47 AM > To: dev at dpdk.org; users at dpdk.org > Subject: [dpdk-dev] DPDK ixg

[dpdk-dev] DPDK ixgbevf multi-queue disabled

2016-02-03 Thread Saurabh Mishra
Is there any way to enable multi-queue for SR-IOV on ixgbe? I've seen that the PF driver automatically disables multi-queue when VFs are created from the host. We want to use multiple queues with DPDK in the ixgbevf case too. [781203.692378] ixgbe 0000:06:00.0: Multiqueue Disabled: Rx Queue count = 1,
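For reference, a sketch of how an application requests multiple queues in the first place, using DPDK 2.2-era config names; the queue counts and RSS hash types are illustrative, and on ixgbevf it is still the PF that decides whether the VF actually gets them:

#include <rte_ethdev.h>

static int
configure_multiqueue(uint8_t port_id)
{
        /* Ask for 4 RX and 4 TX queues, with RSS distributing flows. */
        static const struct rte_eth_conf conf = {
                .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
                .rx_adv_conf = { .rss_conf = { .rss_hf = ETH_RSS_IP } },
        };

        return rte_eth_dev_configure(port_id, 4, 4, &conf);
}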

[dpdk-dev] i40evf DPDK init_adminq failed: -53

2016-02-01 Thread Saurabh Mishra
Hi, on a KVM system, after doing the NVM upgrade to new firmware, I no longer see the init_adminq failed messages. Thanks, /Saurabh On Mon, Feb 1, 2016 at 11:49 AM, Saurabh Mishra wrote: > Hi, > > So I tried to update the firmware and it says "Update not available" for > i40e >

[dpdk-dev] i40evf DPDK init_adminq failed: -53

2016-02-01 Thread Saurabh Mishra
8 0000:82:00.0 ixgbe Up 10000Mbps Full 00:1b:21:90:f9:f8 1500 Intel(R) 82599 10 Gigabit Dual Port Network Connection [root] ethtool -i vmnic6 driver: i40e version: 1.3.38 firmware-version: 4.41 0x80001866 16.5.20 bus-info: 0000:07:00.0 On Mon, Feb 1, 2016 at 10:25 AM, Saurabh Mis

[dpdk-dev] i40evf DPDK init_adminq failed: -53

2016-02-01 Thread Saurabh Mishra
> Hope it works for you. > > Thanks, > Michael > > On 1/30/2016 4:35 AM, Saurabh Mishra wrote: > > Has anybody seen this before? What's the workaround or fix? We are using > > dpdk-2.2.0 on KVM CentOS: > > Host PF version: 1.0.11-k on Centos

[dpdk-dev] i40evf DPDK init_adminq failed: -53

2016-01-29 Thread Saurabh Mishra
Has anybody seen this before? What's the workaround or fix? We are using dpdk-2.2.0 on KVM centos: Host PF version: 1.0.11-k on Centos7 [root@ ~]# ./symmetric_mp fakeelf -c 2 -m2048 -n4 --proc-type=primary -- -p 3 --num-procs=2 --proc-id=0 [.] EAL: Virtual area found at 0x7fff7580 (size =

[dpdk-dev] DPDK mbuf pool in SR-IOV env and one RX/TX queue

2016-01-27 Thread Saurabh Mishra
, 2016 12:19 PM, "Bruce Richardson" wrote: > On Mon, Jan 25, 2016 at 04:15:28PM -0800, Saurabh Mishra wrote: > > Hi Bruce -- > > > > > The sharing of the mbuf pool is not an issue, but sharing of rx/tx queues is. > > > The ethdev queues ar

[dpdk-dev] DPDK bnx2x driver link problem

2016-01-27 Thread Saurabh Mishra
Looks like bnx2x has a link problem: sometimes it sees link up, and most of the time it sees link down even though the RX/TX counters are going up. Has anybody seen this type of problem? If I don't use DPDK then I don't see this type of link-related problem. The counter shows that it's receiving and
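A sketch of the kind of polling loop that would show this flapping, assuming nothing beyond the standard ethdev link API; the 1-second interval is arbitrary:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_cycles.h>

static void
watch_link(uint8_t port_id)
{
        struct rte_eth_link link, prev = { .link_status = 0 };

        for (;;) {
                /* Non-blocking read of the current link state. */
                rte_eth_link_get_nowait(port_id, &link);
                if (link.link_status != prev.link_status)
                        printf("port %u: link %s, %u Mbps\n",
                               (unsigned)port_id,
                               link.link_status ? "up" : "down",
                               (unsigned)link.link_speed);
                prev = link;
                rte_delay_ms(1000);
        }
}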

[dpdk-dev] rte_mbuf size for jumbo frame

2016-01-26 Thread Saurabh Mishra
Copying the data into a larger buffer will definitely cause the application to be slower. > Lawrence > > This one time (01/26/2016 09:40 AM), at band camp, Saurabh Mishra wrote: > Hi, > Since we do full content inspection, we will end up coalescing mbuf chains in
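A sketch of the coalescing copy being weighed here, under the assumption that inspection needs one contiguous buffer; the helper name and error handling are illustrative:

#include <string.h>
#include <rte_mbuf.h>

static int
flatten_mbuf(const struct rte_mbuf *m, uint8_t *buf, size_t buf_len)
{
        size_t off = 0;

        /* Walk the segment chain, copying each piece into one flat buffer. */
        for (; m != NULL; m = m->next) {
                if (off + rte_pktmbuf_data_len(m) > buf_len)
                        return -1;      /* frame larger than the buffer */
                memcpy(buf + off, rte_pktmbuf_mtod(m, const void *),
                       rte_pktmbuf_data_len(m));
                off += rte_pktmbuf_data_len(m);
        }
        return (int)off;
}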

[dpdk-dev] rte_mbuf size for jumbo frame

2016-01-26 Thread Saurabh Mishra
>> Mike >> >> -----Original Message----- >> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Masaru OKI >> Sent: Monday, January 25, 2016 2:41 PM >> To: Saurabh Mishra; users at dpdk.org; dev at dpdk.org >> Subject: Re: [dpdk-dev] rte_mbuf

[dpdk-dev] DPDK mbuf pool in SR-IOV env and one RX/TX queue

2016-01-25 Thread Saurabh Mishra
s not able to send the packets -- rte_eth_tx_burst() succeeds but the recipient does not receive the packet. Thanks, /Saurabh On Sat, Jan 23, 2016 at 8:09 AM, Bruce Richardson <bruce.richardson at intel.com> wrote: > On Thu, Jan 21, 2016 at 08:35:20PM -0800, Saurabh Mishra wrote: > > Hi,

[dpdk-dev] rte_mbuf size for jumbo frame

2016-01-25 Thread Saurabh Mishra
Hi, We wanted to use a 10400-byte size for each rte_mbuf to enable jumbo frames. Do you guys see any problem with that? Would all the drivers like ixgbe, i40e, vmxnet3, virtio and bnx2x work with a larger rte_mbuf size? We would want to avoid dealing with chained mbufs. /Saurabh
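A sketch of what that setup looks like, assuming DPDK 2.2-era rxmode flags; the pool name, mbuf count, and cache size are illustrative:

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define JUMBO_FRAME_LEN 10400

/* Tell the PMD to accept jumbo frames and not to scatter them across
 * segments, so each frame must fit in a single mbuf's data room. */
static const struct rte_eth_conf port_conf = {
        .rxmode = {
                .max_rx_pkt_len = JUMBO_FRAME_LEN,
                .jumbo_frame    = 1,
                .enable_scatter = 0,    /* no chained mbufs on RX */
        },
};

static struct rte_mempool *
create_jumbo_pool(void)
{
        /* Data room = frame size + headroom, so one segment per frame. */
        return rte_pktmbuf_pool_create("jumbo_pool", 8192, 256, 0,
                        JUMBO_FRAME_LEN + RTE_PKTMBUF_HEADROOM,
                        rte_socket_id());
}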

[dpdk-dev] bnx2x assertion failure sc->bar[0]

2016-01-23 Thread Saurabh Mishra
Hi, We are seeing an assertion failure in bnx2x with the DPDK example code. [root at VM ~]# ./symmetric_mp fakeelf -c 2 -m2048 -n2 --proc-type=secondary -- -p 3 --num-procs=2 --proc-id=1 [.] [.] EAL: PCI device 0000:0b:00.0 on NUMA socket 0 EAL: probe driver: 14e4:168e rte_bnx2x_pmd EAL: PCI

[dpdk-dev] DPDK mbuf pool in SR-IOV env and one RX/TX queue

2016-01-21 Thread Saurabh Mishra
Hi, Is it possible for two or more processes to share the same mbuf_pool in SR-IOV with a single rx/tx queue? char *eal_argv[] = {"fakeelf", "-c2", "-n4", "--proc-type=primary",}; int ret = rte_eal_init(4, eal_argv);
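A sketch of the usual multi-process answer, following the standard primary/secondary pattern; the pool name and sizing are illustrative. The primary creates the pool in shared hugepage memory and a secondary attaches to it by name:

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *
get_shared_pool(void)
{
        if (rte_eal_process_type() == RTE_PROC_PRIMARY)
                /* Created once; lives in hugepage memory both sides map. */
                return rte_pktmbuf_pool_create("shared_pool", 8192, 256, 0,
                                RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

        /* Secondary process: look the pool up by the primary's name. */
        return rte_mempool_lookup("shared_pool");
}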