Hi,

Thank you very much for your response. I followed your comment about the full PCI id, and now the dpdk-pdump application is working fine :). It creates the pcap file.
My problem now is that I noticed that the testpmd app doesn't receive any packets. These are the commands I use to execute both apps:

# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-type primary --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i --port-topology=chained

# sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 4 --proc-type auto --socket-mem 1000 --file-prefix pg2 -w 0000:01:00.0 -- --pdump 'device_id=0000:01:00.0,queue=*,rx-dev=/tmp/file.pcap'

Before I execute these commands, I ensure that all the hugepages are free (sudo rm -R /dev/hugepages/*). In this way I split the hugepages (I have 2048 in total) between both processes, as Keith Wiles advised me. Also, I don't overlap any cores between the two masks used (0x06 and 0xf8).

My NIC (Intel 82599ES), whose PCI id is 0000:01:00.0, is connected to a 10G link that receives traffic from a mirrored port. This is the network device status for this NIC:

Network devices using DPDK-compatible driver
============================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe

When I run the testpmd app and check the port stats, I get the following output:

# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-type=auto --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i --port-topology=chained
EAL: Detected 8 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
Configuring Port 0 (socket 0)
Port 0: XX:XX:XX:XX:XX:XX
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0
  Tx-pps:            0
  ############################################################################

It doesn't receive any packets. Did I miss any step in the configuration of the testpmd app?

Thanks!

José

On 10/11/16 at 11:56, Pattan, Reshma wrote:
> Hi,
>
> Comments below.
>
> Thanks,
> Reshma
>
>
>> -----Original Message-----
>> From: users [mailto:users-bounces at dpdk.org] On Behalf Of jose suarez
>> Sent: Monday, November 7, 2016 5:50 PM
>> To: users at dpdk.org
>> Subject: [dpdk-users] Capture traffic with DPDK-dump
>>
>> Hello everybody!
>>
>> I am new to DPDK. I'm simply trying to capture traffic from a 10G physical
>> NIC. I installed DPDK from the source files and activated the following
>> options in the common_base file:
>>
>> CONFIG_RTE_LIBRTE_PMD_PCAP=y
>> CONFIG_RTE_LIBRTE_PDUMP=y
>> CONFIG_RTE_PORT_PCAP=y
>>
>> Then I built the distribution using the dpdk-setup.sh script. I also added
>> hugepages and checked that they are configured successfully:
>>
>> AnonHugePages:      4096 kB
>> HugePages_Total:    2048
>> HugePages_Free:        0
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:       2048 kB
>>
>> To capture the traffic I guess I can use the dpdk-pdump application, but I
>> don't know how to use it. First of all, does it work if I bind the
>> interfaces using the uio_pci_generic driver? I guess that if I capture the
>> traffic using the Linux kernel driver (ixgbe) I will lose a lot of packets.
>>
>> To bind the NIC I run this command:
>>
>> sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic eth0
>>
>> When I check the interfaces I can see that the NIC was bound successfully.
>> I also checked that my NIC (Intel 82599) is compatible with DPDK:
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
>> drv=uio_pci_generic unused=ixgbe,vfio-pci
>>
>> To capture packets, I read in the mailing list that it is necessary to run
>> the testpmd application and then dpdk-pdump using different cores.
>> So I used the following commands:
>>
>> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i
>>
>> sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xff -n 2 -- --pdump
>> 'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'
>
> 1) Please pass the full PCI id, i.e. "0000:01:00.0", in the command instead
> of "01:00.0". In the latest DPDK 16.11 code, the full PCI id is used by the
> EAL layer to identify the device.
>
> 2) Also note that you should not use the same core mask for both the primary
> and secondary processes in a multi-process context,
> e.g. -c 0x6 for testpmd and -c 0x2 for dpdk-pdump.
>
> Please let me know how you are proceeding.
>
> Thanks,
> Reshma
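One step worth double-checking (a sketch, assuming the standard testpmd interactive commands): when testpmd is launched with `-i`, it sits at the prompt and does not begin forwarding, so the RX counters stay at zero until `start` is issued. A minimal session would look like:

```
testpmd> start
testpmd> show port stats 0
testpmd> stop
```

After `start`, traffic arriving on the mirrored port should show up in `RX-packets`, and dpdk-pdump should then see packets to write into the pcap file.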
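The claim above that the two core masks do not overlap can be verified with a quick bitwise AND in plain POSIX shell (no DPDK needed): if the AND of the masks is zero, no lcore is shared between the primary and secondary process.

```shell
# Two processes may not share lcores: the AND of their coremasks must be 0.
# 0x06 = cores 1,2 (testpmd); 0xf8 = cores 3-7 (dpdk-pdump).
printf '0x%x\n' $(( 0x06 & 0xf8 ))
```

An output of `0x0` confirms the masks are disjoint; any non-zero value names the contested cores.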