[dpdk-dev] [PATCH] pcap: remove test for PCAP_CAN_SEND
The libpcap library has had the ability to send packets since 2004; there's really no need to test for it, especially in the way DPDK does it: according to the libpcap git tree, pcap_sendpacket has never been a #define, yet DPDK tests for its existence with an #ifdef. It's easier to just remove the test entirely.

Signed-off-by: Neil Horman
---
 lib/librte_pmd_pcap/rte_eth_pcap.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/lib/librte_pmd_pcap/rte_eth_pcap.c b/lib/librte_pmd_pcap/rte_eth_pcap.c
index fbafd19..fe94a79 100644
--- a/lib/librte_pmd_pcap/rte_eth_pcap.c
+++ b/lib/librte_pmd_pcap/rte_eth_pcap.c
@@ -217,7 +217,6 @@ eth_pcap_tx_dumper(void *queue,
 	return num_tx;
 }
 
-#ifdef PCAP_CAN_SEND
 /*
  * Callback to handle sending packets through a real NIC.
  */
@@ -248,17 +247,6 @@ eth_pcap_tx(void *queue,
 	tx_queue->err_pkts += nb_pkts - num_tx;
 	return num_tx;
 }
-#else
-static uint16_t
-eth_pcap_tx(__rte_unused void *queue,
-	__rte_unused struct rte_mbuf **bufs,
-	__rte_unused uint16_t nb_pkts)
-{
-	RTE_LOG(ERR, PMD, "pcap library cannot send packets, please rebuild "
-		"with a more up to date libpcap\n");
-	return -1;
-}
-#endif
 
 static int
 eth_dev_start(struct rte_eth_dev *dev)
-- 
1.8.3.1
[dpdk-dev] VM does not receive any packets from testpmd using SR-IOV VF PMD
Hi folks,

I have two Intel Xeon servers connected back-to-back using Intel 82599 NICs (a sender and a receiver). I followed the steps in DPDK manual section 9.3: I pass a single VF directly through to the VM and run testpmd on the VF at the receiver. At the sender side, I run Pktgen, program the VF's MAC address as the destination MAC, and send to the VM at a 50% rate.

After booting the VM, according to step 10 of the manual, I need to take over the physical function by running testpmd on the host side. Then, in the VM, I start testpmd. The testpmd in the VM starts without any error, and I also checked "show port info all". However, while I keep sending packets to the VM, "show port stats all" always reports 0 RX-packets; actually there is no packet rx/tx at all. I've checked that without running the VM, the sender can indeed send packets to the receiver server.

I'm wondering how to debug this issue; any comments are very much appreciated. By the way, one weird thing: I run "set promisc all on", but when I check the port info, promiscuous mode is always disabled.

Thank you.
Regards,
William
[dpdk-dev] testpmd only receive 127 packets, then becomes all RX-errors
After trial and error, I found it's because I only enabled one port, so once the packets fill up all my RX descriptors, the rest of the incoming packets become RX-errors. If I set rxd to 2048, then the RX-packets count will be 2047.

./testpmd -c 0xff -n2 -v -m 128M -- --burst=128 -i --rxq 2 --txq 2 --rxd 2048 --txd 2048

(Please correct me if I'm wrong.) So I think this is the right behaviour. Thanks.

On Fri, Mar 28, 2014 at 3:26 PM, William Tu wrote:
> Hi folks,
>
> I'm using two servers connected back to back using 2 Intel 82599 10G NICs
> to test the dpdk. At the sender side, I use dpdk Pktgen to generate UDP
> packet. I program the destination mac statically to make sure no ARP is
> necessary.
>
> My first experiments using Pktgen on both sender and receiver shows around
> 6M pkt/sec, with each packet having 128 byte. Then I replace the receiver
> side with testpmd
> " x86_64-default-linuxapp-gcc/app/testpmd -c ff -n 4 -- -i "
> and the sender side remains the same setting using Pktgen.
>
> As shown below, once my testpmd receives 127 packets, the rest of all
> packets becomes RX-errors. I try lower down the sender's rate to only 10%
> but still the same.
>
> NIC statistics for port 0
>   RX-packets: 127  RX-errors: 145675959  RX-bytes: 16256
>   TX-packets: 0    TX-errors: 0          TX-bytes: 0
>
> Can anyone gives me some comments?
> Thank you.
>
> William (Cheng-Chun Tu)
>
> My system does not have numa, below is a full log at rx side:
> bitmask: ff
> Launching app
> EAL: Detected lcore 0 as core 0 on socket 0
> EAL: Detected lcore 1 as core 1 on socket 0
> EAL: Detected lcore 2 as core 2 on socket 0
> EAL: Detected lcore 3 as core 3 on socket 0
> EAL: Detected lcore 4 as core 0 on socket 0
> EAL: Detected lcore 5 as core 1 on socket 0
> EAL: Detected lcore 6 as core 2 on socket 0
> EAL: Detected lcore 7 as core 3 on socket 0
> EAL: Setting up hugepage memory...
[dpdk-dev] [memnic PATCH 1/5] pmd: fix race condition
2014-03-28 09:49, Hiroshi Shimamoto:
> Do you want me resubmit update one?
> If so, will do next week.

Yes, please submit a v2.

Thank you
-- 
Thomas
[dpdk-dev] Achieve maximum transmit rate using Intel DPDK
Hi all,

Does anyone know the optimal settings needed to achieve the maximum transmit rate for a NIC using Intel DPDK?
[dpdk-dev] testpmd only receive 127 packets, then becomes all RX-errors
Hi folks,

I'm using two servers connected back to back with 2 Intel 82599 10G NICs to test DPDK. At the sender side, I use DPDK Pktgen to generate UDP packets. I program the destination MAC statically to make sure no ARP is necessary.

My first experiment, using Pktgen on both sender and receiver, shows around 6M pkt/sec with each packet having 128 bytes. Then I replace the receiver side with testpmd:

x86_64-default-linuxapp-gcc/app/testpmd -c ff -n 4 -- -i

and the sender side keeps the same Pktgen settings.

As shown below, once my testpmd receives 127 packets, all the remaining packets become RX-errors. I tried lowering the sender's rate to only 10%, but the result is the same.

NIC statistics for port 0
  RX-packets: 127  RX-errors: 145675959  RX-bytes: 16256
  TX-packets: 0    TX-errors: 0          TX-bytes: 0

Can anyone give me some comments?
Thank you.

William (Cheng-Chun Tu)

My system does not have NUMA; below is the full log at the RX side:

bitmask: ff
Launching app
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 1 on socket 0
EAL: Detected lcore 6 as core 2 on socket 0
EAL: Detected lcore 7 as core 3 on socket 0
EAL: Setting up hugepage memory...
EAL: Ask a virtual area of 0x2097152 bytes
EAL: Virtual area found at 0x7f577ea0 (size = 0x20)
EAL: Ask a virtual area of 0x2097152 bytes
EAL: Virtual area found at 0x7f577e60 (size = 0x20)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f577e00 (size = 0x40)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f577da0 (size = 0x40)
EAL: Ask a virtual area of 0x8388608 bytes
EAL: Virtual area found at 0x7f577d00 (size = 0x80)
EAL: Ask a virtual area of 0x8388608 bytes
EAL: Virtual area found at 0x7f577c60 (size = 0x80)
EAL: Ask a virtual area of 0x12582912 bytes
EAL: Virtual area found at 0x7f577b80 (size = 0xc0)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f577b20 (size = 0x40)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f577ac0 (size = 0x40)
EAL: Ask a virtual area of 0x12582912 bytes
EAL: Virtual area found at 0x7f5779e0 (size = 0xc0)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f577980 (size = 0x40)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f577920 (size = 0x40)
EAL: Ask a virtual area of 0x12582912 bytes
EAL: Virtual area found at 0x7f577840 (size = 0xc0)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f5777e0 (size = 0x40)
EAL: Ask a virtual area of 0x12582912 bytes
EAL: Virtual area found at 0x7f577700 (size = 0xc0)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f5776a0 (size = 0x40)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f577640 (size = 0x40)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f5775e0 (size = 0x40)
EAL: Ask a virtual area of 0x138412032 bytes
EAL: Virtual area found at 0x7f576d80 (size = 0x840)
EAL: Ask a virtual area of 0x12582912 bytes
EAL: Virtual area found at 0x7f576ca0 (size = 0xc0)
EAL: Ask a virtual area of 0x4194304 bytes
EAL: Virtual area found at 0x7f576c40 (size = 0x40)
EAL: Requesting 128 pages of size 2MB from socket 0
EAL: TSC frequency is ~3400025 KHz
EAL: Master core 0 is ready (tid=7ecef820)
EAL: Core 3 is ready (tid=6abfc700)
EAL: Core 6 is ready (tid=63fff700)
EAL: Core 4 is ready (tid=6a3fb700)
EAL: Core 7 is ready (tid=693f9700)
EAL: Core 5 is ready (tid=69bfa700)
EAL: Core 1 is ready (tid=6bbfe700)
EAL: Core 2 is ready (tid=6b3fd700)
EAL: PCI device :01:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f577ec3c000
EAL:   PCI memory mapped at 0x7f577ed02000
EAL: PCI device :01:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f577e8fe000
EAL:   PCI memory mapped at 0x7f577e8fa000
EAL: PCI device :02:10.0 on NUMA socket -1
EAL:   probe driver: 8086:10ed rte_ixgbevf_pmd
EAL:   :02:10.0 not managed by UIO driver, skipping
EAL: PCI device :02:10.2 on NUMA socket -1
EAL:   probe driver: 8086:10ed rte_ixgbevf_pmd
EAL:   :02:10.2 not managed by UIO driver, skipping
EAL: PCI device :04:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10d3 rte_em_pmd
EAL:   :04:00.0 not managed by UIO driver, skipping
EAL: PCI device :05:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10d3 rte_em_pmd
EAL:   :05:00.0 not managed by
[dpdk-dev] Questions on use of multiple NIC interfaces
1. Yes.
2. Yes. Look at the Programmer's Guide, section 16, "Multi-process Support".
3. You can use the blacklist EAL option.

Regards,
Vladimir

2014-03-28 13:25 GMT+04:00 Sujith Sankar (ssujith):
> Hi all,
>
> Could someone answer the following questions about the usage of multiple
> NIC interfaces with DPDK?
>
> 1. If my server has two identical Intel NICs, could I use both with DPDK
> and its applications?
> 2. If both the NIC cards could be used with DPDK, could I use them with
> separate instances of applications? E.g., NIC1 used by App1 and NIC2 used
> by App2.
> 3. If answer to qn no 2 is yes, does the driver take care to avoid
> reinitialising NIC1 when App2 tries to initialise NIC2? From what I've
> seen, DPDK calls the driver init for all the matching devices (vendor id
> and device id).
>
> Thanks,
> -Sujith
[dpdk-dev] [memnic PATCH 1/5] pmd: fix race condition
Hi,

> Subject: Re: [dpdk-dev] [memnic PATCH 1/5] pmd: fix race condition
>
> Hi Hiroshi-san,
>
> Please see my comments below.
>
> On 03/11/2014 06:37 AM, Hiroshi Shimamoto wrote:
> > From: Hiroshi Shimamoto
> >
> > There is a race condition, on transmit to vSwitch.
>
> I think we should not talk specifically about vSwitch, as
> another implementation of host memnic is possible. Maybe using
> the term "host" is more appropriate?
>
> > +	if (idx != ACCESS_ONCE(adapter->down_idx)) {
> > +		/*
> > +		 * vSwitch freed this and got false positive,
> > +		 * need to recover the status and retry.
> > +		 */
> > +		p->status = MEMNIC_PKT_ST_FREE;
> > +		goto retry;
> > +	}
> > +
>
> The patch indeed looks to improve reliability, even if it's
> difficult to me to be sure that there is no other race condition.
> Again, I would replace "vSwitch" by "host".

okay, I'm fine with that.

Do you want me resubmit update one?
If so, will do next week.

> By the way, I guess the Linux code in linux/memnic_net.c should be
> modified in the same way.

Hm, yes, we should check kernel driver too.

thanks,
Hiroshi

> Regards,
> Olivier
[dpdk-dev] Questions on use of multiple NIC interfaces
Hi all,

Could someone answer the following questions about the usage of multiple NIC interfaces with DPDK?

1. If my server has two identical Intel NICs, could I use both with DPDK and its applications?
2. If both the NIC cards could be used with DPDK, could I use them with separate instances of applications? E.g., NIC1 used by App1 and NIC2 used by App2.
3. If the answer to question 2 is yes, does the driver take care to avoid reinitialising NIC1 when App2 tries to initialise NIC2? From what I've seen, DPDK calls the driver init for all the matching devices (vendor id and device id).

Thanks,
-Sujith
[dpdk-dev] memory barriers in rte_ring
One caveat - a compiler_barrier should be enough when both sides are using strongly-ordered memory operations (as in the case of the rings). Weakly-ordered operations will still need fencing.

-Venky

-----Original Message-----
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Stephen Hemminger
Sent: Thursday, March 27, 2014 1:20 PM
To: Olivier MATZ
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] memory barriers in rte_ring

On Thu, 27 Mar 2014 20:47:37 +0100
Olivier MATZ wrote:

> Hi Stephen,
>
> On 03/27/2014 08:06 PM, Stephen Hemminger wrote:
> > Long answer: for the multple CPU access ring, it is equivalent to
> > smp_wmb and smp_rmb in Linux kernel. For x86 where DPDK is used,
> > this can normally be replaced by simpler compiler barrier. In kernel
> > there is a special flage X86_OOSTORE which is only enabled for a few
> > special cases, for most cases it is not. When cpu doesnt do out of
> > order stores, there are no cases where other cpu will see wrong state.
>
> Thank you for this clarification.
>
> So, if I understand properly, all usages of rte_*mb() sequencing
> memory operations between CPUs could be replaced by a compiler
> barrier. On the other hand, if the memory is also accessed by a
> device, a memory barrier has to be used.
>
> Olivier

I think so for the current architecture that DPDK runs on. It might be good to abstract this in some way for eventual users in other environments.