[dpdk-users] Capture traffic with DPDK-dump

2016-11-10 Thread jose suarez
Hi,

I made a test capturing packets with the testpmd app through the Linux 
kernel driver (NIC bound to ixgbe, using the pcap vdev). I used the 
following command:

# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c '0xfc' -n 4 --vdev 
'eth_pcap0,rx_iface=eth0,tx_pcap=/tmp/file.pcap' -- --port-topology=chained


I show below the output in this case:

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
PMD: Initializing pmd_pcap for eth_pcap0
PMD: Creating pcap-backed ethdev on numa socket 0
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=187456, size=2176, 
socket=0
Configuring Port 0 (socket 0)
Port 0: XX:XX:XX:XX:XX:XX
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support 
disabled, MP over anonymous pages disabled
Logical Core 3 (socket 0) forwards packets on 1 streams:
   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

   io packet forwarding - CRC stripping disabled - packets/burst=32
   nb forwarding cores=1 - nb forwarding ports=1
   RX queues=1 - RX desc=128 - RX free threshold=0
   RX threshold registers: pthresh=0 hthresh=0 wthresh=0
   TX queues=1 - TX desc=512 - TX free threshold=0
   TX threshold registers: pthresh=0 hthresh=0 wthresh=0
   TX RS bit threshold=0 - TXQ flags=0x0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

   ---------------------- Forward statistics for port 0  ----------------------
   RX-packets: 1591270        RX-dropped: 0             RX-total: 1591270
   TX-packets: 1591270        TX-dropped: 0             TX-total: 1591270


   +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
   RX-packets: 1591270        RX-dropped: 0             RX-total: 1591270
   TX-packets: 1591270        TX-dropped: 0             TX-total: 1591270


Done.

Shutting down port 0...
Stopping ports...
Done
Closing ports...
Done

Bye...

Once I interrupt the app I can see that packets were forwarded and the 
pcap file was generated, so I receive the traffic correctly through the 
kernel driver. It seems that the problem only appears when I bind the 
NIC to the uio_pci_generic driver.
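
For reference, this is roughly how I move the NIC between the kernel 
ixgbe driver and uio_pci_generic when testing (just a sketch; I am 
assuming the full PCI address 0000:01:00.0 and the eth0 name used above):

# check the current binding
sudo ./tools/dpdk-devbind.py --status

# give the port back to the kernel ixgbe driver
sudo ./tools/dpdk-devbind.py --bind=ixgbe 0000:01:00.0

# bind it to uio_pci_generic again (the interface must be down first)
sudo ifconfig eth0 down
sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic 0000:01:00.0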


Thanks a lot!

José.


On 10/11/16 at 14:20, Pattan, Reshma wrote:
> Hi,
>
> Comments below.
>
> Thanks,
> Reshma
>

[dpdk-users] Capture traffic with DPDK-dump

2016-11-10 Thread jose suarez
Hi,

Thank you very much for your response. I followed your comment about the 
full PCI id and now the PDUMP application is working fine :). It creates 
the pcap file.

My problem now is that the testpmd app does not receive any packets. 
Below are the commands that I use to run both apps:

# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-type 
primary --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i 
--port-topology=chained

# sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 4 
--proc-type auto --socket-mem 1000 --file-prefix pg2 -w 0000:01:00.0 -- 
--pdump 'device_id=0000:01:00.0,queue=*,rx-dev=/tmp/file.pcap'

Before I execute these commands, I ensure that all the hugepage files 
are removed (sudo rm -R /dev/hugepages/*).

In this way I split the hugepages (I have 2048 in total) between both 
processes, as Keith Wiles advised me. I also make sure that the core 
masks (0x06 and 0xf8) do not overlap.
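
For reference, this is how I sanity-check that the two --socket-mem 1000 
reservations fit in the reserved hugepages (a rough sketch, assuming 2 MB 
hugepages mounted at /dev/hugepages):

# 2048 pages x 2 MB = 4096 MB total, so two reservations of 1000 MB
# (about 500 pages each) fit comfortably
grep Huge /proc/meminfo
ls /dev/hugepages | wc -l   # hugepage files created by the running processes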

My NIC (Intel 82599ES), whose PCI id is 0000:01:00.0, is connected to a 
10G link that receives traffic from a mirrored port. Below is the 
dpdk-devbind status entry for this NIC:

Network devices using DPDK-compatible driver
============================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' 
drv=igb_uio unused=ixgbe


When I run the testpmd app and check the port stats, I get the following 
output:

# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 
--proc-type=auto --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- 
-i --port-topology=chained
EAL: Detected 8 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, 
socket=0
Configuring Port 0 (socket 0)
Port 0: XX:XX:XX:XX:XX:XX
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> show port stats 0

   ######################## NIC statistics for port 0  ########################
   RX-packets: 0  RX-missed: 0  RX-bytes:  0
   RX-errors: 0
   RX-nombuf:  0
   TX-packets: 0  TX-errors: 0  TX-bytes:  0

   Throughput (since last show)
   Rx-pps:0
   Tx-pps:0


It doesn't receive any packets. Did I miss any step in the 
configuration of the testpmd app?
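
In case it matters, this is the interactive sequence I use once testpmd 
is up (just a sketch, assuming the default io forwarding mode):

testpmd> start
testpmd> show port stats 0
testpmd> stop
testpmd> quit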


Thanks!

José.


On 10/11/16 at 11:56, Pattan, Reshma wrote:
> Hi,
>
> Comments below.
>
> Thanks,
> Reshma
>
>

[dpdk-users] Capture traffic with DPDK-dump

2016-11-07 Thread jose suarez
Hello everybody!

I am new to DPDK. I am simply trying to capture traffic from a 10G 
physical NIC. I installed DPDK from source and enabled the following 
options in the common_base config file:

CONFIG_RTE_LIBRTE_PMD_PCAP=y

CONFIG_RTE_LIBRTE_PDUMP=y

CONFIG_RTE_PORT_PCAP=y

Then I built the distribution using the dpdk-setup.sh script. I also 
added hugepages and checked that they were configured successfully:

AnonHugePages:  4096 kB
HugePages_Total:2048
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
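
For reference, this is roughly how I reserved and mounted them (just a 
sketch, assuming 2 MB hugepages and the /dev/hugepages mount point):

echo 2048 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
sudo mkdir -p /dev/hugepages
sudo mount -t hugetlbfs nodev /dev/hugepages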

To capture the traffic I guess I can use the dpdk-pdump application, but 
I don't know how to use it. First of all, does it work if I bind the 
interfaces using the uio_pci_generic driver? I guess that if I capture 
the traffic using the Linux kernel driver (ixgbe) I will lose a lot of 
packets.

To bind the NIC I write this command:

sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic eth0
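
For completeness, the equivalent bind by PCI address plus the status 
check I run afterwards (a sketch; the interface usually needs to be down 
before it can be unbound from the kernel driver):

sudo ifconfig eth0 down
sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic 0000:01:00.0
./tools/dpdk-devbind.py --status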


When I check the interfaces I can see that the NIC was bound 
successfully. I also checked that my NIC is compatible with DPDK (Intel 
82599).

Network devices using DPDK-compatible driver
============================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' 
drv=uio_pci_generic unused=ixgbe,vfio-pci


To capture packets, I read in the mailing list that it is necessary to 
run the testpmd application and then dpdk-pdump using different cores. 
So I used the following commands:

sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i

sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xff -n 2 -- --pdump 
'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'

Did I miss any step? Is it necessary to execute any more commands when 
running the testpmd app in interactive mode?


When I execute the pdump application I get the following error:

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in 
the kernel.
EAL:    This may cause issues with mapping memory into secondary processes
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
PMD: Initializing pmd_pcap for eth_pcap_rx_0
PMD: Creating pcap-backed ethdev on numa socket 0
Port 2 MAC: 00 00 00 01 02 03
PDUMP: client request for pdump enable/disable failed
PDUMP: client request for pdump enable/disable failed
EAL: Error - exiting with code: 1
   Cause: Unknown error -22


In the testpmd app I get the following info:

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, 
socket=0
Configuring Port 0 (socket 0)
Port 0: 00:E0:ED:FF:60:5C
Configuring Port 1 (socket 0)
Port 1: 00:E0:ED:FF:60:5D
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> PDUMP: failed to get potid for device id=01:00.0
PDUMP: failed to get potid for device id=01:00.0


Could you please help me?

Thank you!