Hi, I ran a test capturing packets with the testpmd app through the Linux kernel interface (pcap PMD). I used the following command:
# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c '0xfc' -n 4 --vdev 'eth_pcap0,rx_iface=eth0,tx_pcap=/tmp/file.pcap' -- --port-topology=chained

I show the output below for this case:

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
PMD: Initializing pmd_pcap for eth_pcap0
PMD: Creating pcap-backed ethdev on numa socket 0
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=187456, size=2176, socket=0
Configuring Port 0 (socket 0)
Port 0: XX:XX:XX:XX:XX:XX
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support disabled, MP over anonymous pages disabled
Logical Core 3 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  RX queues=1 - RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1591270        RX-dropped: 0             RX-total: 1591270
  TX-packets: 1591270        TX-dropped: 0             TX-total: 1591270
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
  RX-packets: 1591270        RX-dropped: 0             RX-total: 1591270
  TX-packets: 1591270        TX-dropped: 0             TX-total: 1591270
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Shutting down port 0...
Stopping ports...
Done
Closing ports...
Done
Bye...

Once I interrupt the app I can see that packets are captured and the pcap file is generated, so I receive the traffic correctly. It seems that the problem happens when I bind the NIC to the uio_pci_generic driver.

Thanks a lot!

José

On 10/11/16 at 14:20, Pattan, Reshma wrote:
> Hi,
>
> Comments below.
>
> Thanks,
> Reshma
>
>> -----Original Message-----
>> From: jose suarez [mailto:jsuarezv at ac.upc.edu]
>> Sent: Thursday, November 10, 2016 12:32 PM
>> To: Pattan, Reshma <reshma.pattan at intel.com>
>> Cc: users at dpdk.org
>> Subject: Re: [dpdk-users] Capture traffic with DPDK-dump
>>
>> Hi,
>>
>> Thank you very much for your response. I followed your comment about the
>> full PCI id and now the pdump application is working fine :). It creates
>> the pcap file.
>>
>> My problem now is that I noticed that the testpmd app doesn't receive any
>> packets. I write below the commands that I use to execute both apps:
>>
>> # sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4
>> --proc-type primary --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0
>> -- -i --port-topology=chained
>>
>> # sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 4
>> --proc-type auto --socket-mem 1000 --file-prefix pg2 -w 0000:01:00.0
>> -- --pdump 'device_id=0000:01:00.0,queue=*,rx-dev=/tmp/file.pcap'
>>
>> Before I execute these commands, I ensure that all the hugepages are free
>> (sudo rm -R /dev/hugepages/*)
>>
>> In this way I split up the hugepages (I have 2048 in total) between both
>> processes, as Keith Wiles advised me. Also I don't overlap any cores in
>> the masks used (0x06 and 0xf8).
>>
>> My NIC (Intel 82599ES), whose PCI id is 0000:01:00.0, is connected to a
>> 10G link that receives traffic from a mirrored port.
>> I show you the network device settings related to this NIC:
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
>> drv=igb_uio unused=ixgbe
>>
>> When I run the testpmd app and check the port stats, I get the following
>> output:
>>
>> # sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4
>> --proc-type=auto --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0
>> -- -i --port-topology=chained
>> EAL: Detected 8 lcore(s)
>> EAL: Auto-detected process type: PRIMARY
>> EAL: Probing VFIO support...
>> PMD: bnxt_rte_pmd_init() called for (null)
>> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> Interactive-mode selected
>> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176,
>> socket=0
>> Configuring Port 0 (socket 0)
>> Port 0: XX:XX:XX:XX:XX:XX
>> Checking link statuses...
>> Port 0 Link Up - speed 10000 Mbps - full-duplex
>> Done
>> testpmd> show port stats 0
>
> After testpmd comes to the prompt, you need to execute the command "start".
> This will start traffic forwarding.
>
>>   ######################## NIC statistics for port 0  ########################
>>   RX-packets: 0          RX-missed: 0          RX-bytes: 0
>>   RX-errors: 0
>>   RX-nombuf: 0
>>   TX-packets: 0          TX-errors: 0          TX-bytes: 0
>>
>>   Throughput (since last show)
>>   Rx-pps: 0
>>   Tx-pps: 0
>>   ############################################################################
>>
>> It doesn't receive any packets. Did I miss any step in the configuration of
>> the testpmd app?
>
> I am wondering: you should see all packets hitting Rx-missed.
> So I suggest you just stop everything. Unbind the port back to Linux. Then
> check if the port is receiving packets from the other end using tcpdump.
> If not, then you may need to debug the issue.
>
> After everything is fine, bind the port back to DPDK, run testpmd and see
> if you can receive packets or not.
> If you are seeing packets against "Rx-missed", then run the "start" command
> in the testpmd prompt to start packet forwarding. After that you will be
> able to see the packets in the capture file.
>
> Thanks,
> Reshma
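[Editor's note] The thread relies on the primary (testpmd) and secondary (dpdk-pdump) processes using disjoint coremasks. A quick way to decode a mask and check for overlap is a small shell sketch; the mask values below are the ones used in the thread, and the script itself is illustrative, not part of DPDK:

```shell
#!/bin/bash
# Decode which lcores a DPDK coremask (-c) selects, and verify that the
# testpmd mask (0x06) and the dpdk-pdump mask (0xf8) do not overlap.

decode_mask() {
    local mask=$1 i cores=""
    for i in $(seq 0 31); do
        if (( (mask >> i) & 1 )); then
            cores+="$i "
        fi
    done
    echo "${cores% }"
}

echo "0xfc selects lcores: $(decode_mask 0xfc)"   # lcores 2-7
echo "0x06 selects lcores: $(decode_mask 0x06)"   # lcores 1-2
echo "0xf8 selects lcores: $(decode_mask 0xf8)"   # lcores 3-7

# A bitwise AND of zero means no shared lcores between the two processes.
if (( 0x06 & 0xf8 )); then
    echo "masks overlap - pick different coremasks"
else
    echo "masks do not overlap"
fi
```

As the script shows, 0x06 and 0xf8 share no bits, so the core split in the thread is correct.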
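[Editor's note] The hugepage split mentioned above (2048 pages shared between two processes, each started with --socket-mem 1000) can be sanity-checked with simple arithmetic. This sketch assumes the default 2 MB hugepage size, which the thread does not state explicitly:

```shell
#!/bin/bash
# Check that two --socket-mem 1000 processes fit in the available hugepages.

pages=2048        # total hugepages, as stated in the thread
page_mb=2         # ASSUMPTION: default 2 MB hugepage size
total_mb=$(( pages * page_mb ))

testpmd_mb=1000   # --socket-mem 1000 for the primary (testpmd)
pdump_mb=1000     # --socket-mem 1000 for the secondary (dpdk-pdump)
needed_mb=$(( testpmd_mb + pdump_mb ))

echo "available: ${total_mb} MB, requested: ${needed_mb} MB"
if (( needed_mb <= total_mb )); then
    echo "hugepage budget OK"
else
    echo "not enough hugepages"
fi
```

With 2 MB pages the pool is 4096 MB, so the 2000 MB requested by the two processes fits comfortably.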
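[Editor's note] Reshma's debug procedure (unbind the port back to Linux, verify with tcpdump, then rebind to DPDK) can be sketched as a command sequence. The script path (tools/dpdk-devbind.py, its location in DPDK releases of this era), the kernel driver name (ixgbe, matching the "unused=ixgbe" line above) and the interface name are assumptions to adapt to your setup; these commands touch real hardware, so this is an operational sketch only:

```shell
# 1. Stop testpmd and dpdk-pdump, then hand the port back to the
#    kernel ixgbe driver (driver name taken from the devbind output above).
sudo ./tools/dpdk-devbind.py --unbind 0000:01:00.0
sudo ./tools/dpdk-devbind.py --bind=ixgbe 0000:01:00.0

# 2. Find the interface name the kernel assigned, bring it up, and check
#    whether the mirrored traffic actually arrives (interface name is an
#    assumption; use the one reported by 'ip link').
ip link show
sudo ip link set eth0 up
sudo tcpdump -i eth0 -c 10

# 3. If tcpdump sees packets, bind the port back to DPDK and retry testpmd,
#    remembering to type "start" at the testpmd> prompt.
sudo ./tools/dpdk-devbind.py --bind=igb_uio 0000:01:00.0
```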