[dpdk-users] How to test l3fwd?

2016-07-20 Thread Charlie Li
I am also attaching the full logs from l3fwd and l2fwd.

On Wed, Jul 20, 2016 at 3:02 PM, Charlie Li  wrote:

> Hello,
>
> My setup is dpdk-2.2.0 on Fedora 23 Server with kernel 4.5.7.
>
> I have been testing L2 throughput with l2fwd and an Ixia traffic
> generator. It works as expected.
>
> Command: ./l2fwd -c 0xf -n 4 -- -p 0x3
> Ixia traffic: MAC (Ethernet frames)
>
>
> Now I am moving on to test L3 throughput with l3fwd, but I cannot start
> traffic from Ixia.
>
> Command: ./l3fwd -c 0xf -n 4 -- -p 0x3 --config="(0,0,2)(1,0,3)"
> Ixia traffic: IPv4 (IP packets)
>
> My question is:
>
> What are the IP addresses of the two ports?
>
> "LPM: Adding route 0x01010100 / 24 (0)
> LPM: Adding route 0x02010100 / 24 (1)"
>
> Does it mean the IP addresses are 1.1.1.0 (netmask 255.255.255.0) for
> port0 and 2.1.1.0 (netmask 255.255.255.0) for port1?
>
> I set up the following two flows, but Ixia complains "unreachable"
>
> Flow1: Ixia PortA (1.1.1.100) -> DPDK Port0 (1.1.1.0) (l3fwd)
>  DPDK Port1 (2.1.1.0) -> Ixia PortB (2.1.1.100)
>   Src IP: 1.1.1.100; Dst IP: 2.1.1.100; Gateway: 1.1.1.0
>
> Flow2: Ixia PortB (2.1.1.100) -> DPDK Port1 (2.1.1.0) (l3fwd)
>  DPDK Port0 (1.1.1.0) -> Ixia PortA (1.1.1.100)
>   Src IP: 2.1.1.100; Dst IP: 1.1.1.100; Gateway: 2.1.1.0
>
> Thanks,
> Charlie
>
>
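
Regarding the route question above: in the l3fwd sample those "Adding route" 
lines are static LPM routes, not interface addresses, so the DPDK ports have 
no IP address of their own; traffic whose destination falls in 1.1.1.0/24 
exits port 0 and traffic in 2.1.1.0/24 exits port 1. Note also that l3fwd does 
not answer ARP, so the Ixia gateway usually has to be pointed at the DPDK 
port's MAC statically. A sketch of the array behind those log lines, as in the 
l3fwd sources quoted later in this digest (file and array names vary slightly 
across DPDK versions):

static struct ipv4_l3fwd_lpm_route ipv4_l3fwd_lpm_route_array[] = {
	/* { destination network, prefix depth, output port } */
	{IPv4(1, 1, 1, 0), 24, 0},	/* "Adding route 0x01010100 / 24 (0)" */
	{IPv4(2, 1, 1, 0), 24, 1},	/* "Adding route 0x02010100 / 24 (1)" */
};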
-- next part --
[cli at cli-desktop l3fwd]$ sudo -E ./build/l3fwd -c 0xf -n 4 -- -p 0x3 
--config="(0,0,2)(1,0,3)"
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x4000 bytes
EAL: Virtual area found at 0x7f4e8000 (size = 0x4000)
EAL: Requesting 1 pages of size 1024MB from socket 0
EAL: TSC frequency is ~2096067 KHz
EAL: Master lcore 0 is ready (tid=5486e8c0;cpuset=[0])
EAL: lcore 1 is ready (tid=53149700;cpuset=[1])
EAL: lcore 2 is ready (tid=52948700;cpuset=[2])
EAL: lcore 3 is ready (tid=52147700;cpuset=[3])
EAL: PCI device :02:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f4ec000
EAL:   PCI memory mapped at 0x7f4ec008
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device :02:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f4ec0084000
EAL:   PCI memory mapped at 0x7f4ec0104000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 6
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=4...  
Address:90:E2:BA:4F:3F:B0, Destination:02:00:00:00:00:00, Allocated mbuf pool 
on socket 0
LPM: Adding route 0x01010100 / 24 (0)
LPM: Adding route 0x02010100 / 24 (1)
LPM: Adding route IPV6 / 48 (0)
LPM: Adding route IPV6 / 48 (1)
txq=0,0,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb48d69c0 
hw_ring=0x7f4eb48d8a00 dma_addr=0x1b48d8a00
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=1,1,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb48c4840 
hw_ring=0x7f4eb48c6880 dma_addr=0x1b48c6880
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=2,2,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb48b26c0 
hw_ring=0x7f4eb48b4700 dma_addr=0x1b48b4700
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=3,3,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb48a0540 
hw_ring=0x7f4eb48a2580 dma_addr=0x1b48a2580
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.

Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=4...  
Address:90:E2:BA:4F:3F:B1, Destination:02:00:00:00:00:01, txq=0,0,0 PMD: 
ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb488e2c0 hw_ring=0x7f4eb4890300 
dma_addr=0x1b4890300
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=1,1,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb487c140 
hw_ring=0x7f4eb487e180 dma_addr=0x1b487e180
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=2,2,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb4869fc0 
hw_ring=0x7f4eb486c000 dma_addr=0x1b486c000
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
txq=3,3,0 PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f4eb4857e40 
hw_ring=0x7f4eb4859e80 dma_addr=0x1b4859e80
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.


Initializing rx 

[dpdk-users] [dpdk-dev] capture packets on VM

2016-07-20 Thread Raja Jayapal
Hi Reshma/All,

Please find the "show config fwd" output below.

testpmd> show config fwd
Warning! Cannot handle an odd number of ports with the current port topology. 
Configuration must be changed to have an even number of ports, or relaunch 
application with --port-topology=chained
io packet forwarding - ports=3 - cores=1 - streams=3 - NUMA support disabled, 
MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 3 streams:
  RX P=0/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=2/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

As described above, I have modified the topology to have an even number of ports.

testpmd> show config fwd
io packet forwarding - ports=4 - cores=1 - streams=4 - NUMA support disabled, 
MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 4 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=2/Q=0 (socket 0) -> TX P=3/Q=0 (socket 0) peer=02:00:00:00:00:03
  RX P=3/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02

I could confirm that the packets are forwarded as per the config, and setting 
the IP and MAC in the packet generator has no effect.
In the testpmd app, whatever MAC/IP is configured on the traffic generator, the 
packets are forwarded to the adjacent ports.

If I want to forward packets based on IP, can this be achieved by running 
l2fwd or l3fwd?

Kindly suggest.

Thanks,
Raja


-"Pattan, Reshma"  wrote: -
To: Raja Jayapal 
From: "Pattan, Reshma" 
Date: 07/19/2016 07:12PM
Cc: "users at dpdk.org" , "De Lara Guarch, Pablo" 

Subject: RE: [dpdk-dev] capture packets on VM


 Hi Raja,

 Since this is a usability question this should be discussed under the users at 
dpdk.org mailing list. Hence I removed dev at dpdk.org.
 Yes, actually packets received from port 0 should be transmitted to port 1 and 
packets received from port 1 should go to port 2.

 Can you paste the output of "show config fwd" and also check whether traffic 
on your board is flowing as per the existing flow rules?

 Ex:
 testpmd> show config fwd
 io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support disabled, 
MP over anonymous pages disabled
 Logical Core 5 (socket 0) forwards packets on 2 streams:
   RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01 ==> 
this means streams received on port 0 will be sent to port 1.
   RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00 ==> 
this means streams received on port 1 will be sent to port 0.

 Thanks,
 Reshma



 From: Raja Jayapal [mailto:raja.jayapal at tcs.com] 
 Sent: Tuesday, July 19, 2016 12:38 PM
 To: Pattan, Reshma 
 Cc: dev at dpdk.org
 Subject: RE: [dpdk-dev] capture packets on VM

 Hi Reshma,

 Thanks for your information.
 I have been trying to run the testpmd app and would like to get some idea on 
the packet flow in testpmd.

 br0 -vnet0- (port0)VM NIC

 br1--vnet1--(port1)VM NIC

 br2--vnet2--(port2)VM NIC

 br0 IP and MAC:
 fe:54:00:0d:af:af - 192.168.100.10
 br1 IP and MAC:
 fe:54:00:4e:5b:df - 192.168.100.20
 br2 IP and MAC:
 fe:54:00:93:78:6d - 192.168.100.30

 I ran the testpmd application on the VM and sent packets from the host using PackETH.

 Using the PackETH generator, I sent traffic from br0 destined to br1 
(modifying the source/destination MAC and IP in the PackETH tool), but I could 
see that the packets were received on port0 and transmitted on port2.

 Sending packets from br0 to br1:

 ./testpmd -c 3 -n 4 -- -i --total-num-mbufs=3000
 testpmd> show port stats all
  ######################## NIC statistics for port 0  ########################
  RX-packets: 4          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0
  ############################################################################

  ######################## NIC statistics for port 2  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 4          TX-errors: 0          TX-bytes:  0
  ############################################################################
 testpmd> 

 The second time, I sent traffic from br1 to br2, but the packets were received 
on port2 and transmitted on port0.

 Sending packets from br1 to br2:

 testpmd> show port stats all
  ######################## NIC statistics for port 0  ########################

[dpdk-users] Issue with OpenStack SR-IOV performance when using poll-mode DPDK ixgbevf driver on DPDK 2.2

2016-07-20 Thread Ewan Stephens
Hi There

I am experiencing issues with an application that uses DPDK.

I'm trying to upgrade the version of DPDK used by the application. 
Unfortunately, this is causing the application to perform worse than on 
previous versions when running on a KVM/OpenStack virtual machine.

Test Setup (Host):
Chassis - Dell R730
CPU - Haswell (Intel Xeon E5-2690 v3 @2.60 GHz)
NIC - 10GbE Intel Niantic NIC (Intel Corporation Ethernet 10G 2P X520 Adapter 
(rev 01))
NIC driver - ixgbe
OS - Mirantis OpenStack 7.0 (OpenStack Kilo) on Ubuntu 14.04
Kernel - 3.13.0-73
Hypervisor - KVM

Test setup (guest):
Number of cores - 8
Networking - SR-IOV
NIC driver - ixgbevf
OS - Centos 6.6
Kernel -  2.6.32-504

Tests undertaken
Three tests were run, with DPDK 1.6, 2.0, and 2.2; all other parameters were 
controlled. The maximum number of packets that the application could process 
was 17 million/s+ on DPDK 1.6 and 2.0 (we did not test higher than 17 million/s 
but suspect it would have maxed out between 20 and 25 million/s), but with DPDK 
2.2 the test failed at 13 million/s. The test pass criterion was that fewer 
than 1 in 100,000 packets were dropped.

Notes:

* The packets sent into the application were roughly 100 bytes each.

* We do not experience a similar performance regression when testing 
non-virtualised (i.e. application running directly on a Dell R730 server 
without any virtualization) between DPDK 1.6 and 2.2.

Conclusion
As previously stated, all other parameters in the tests were controlled, so I 
conclude there is some kind of regression in the DPDK code between 2.0 and 2.2 
that is causing the performance drop. This could be in the ixgbevf poll-mode 
driver provided with DPDK or elsewhere in the DPDK code.

Work so far to debug issue
I've tried various things to debug the issue but with no success so far:

* While running the tests, we used the Linux utility "perf" on the 
guest to check what proportion of CPU cycles were being consumed by different 
functions on 1.6 and 2.2. This didn't show any significant differences between 
the two tests.

* Took a look at the diffs between the ixgbevf driver code in 1.6 and 
2.2. Nothing obviously suspicious.

* Attempted to run the DPDK sample packet forwarder application on both 
1.6 and 2.2 on an OpenStack VM. The results of this testing were unreliable. 
Packets dropped varied from test to test on both versions of DPDK.

Questions

* Are there any known issues introduced between DPDK 2.0 and DPDK 2.2 
which might cause this type of performance issue?

* Has anyone else experienced a similar issue?

* What further steps could we take to debug the issue? (One possible check is 
sketched below.)

  - Note that I'm reluctant to try upgrading to DPDK 16.04 unless there is a 
good reason to believe this will fix the issue, as integrating new versions of 
DPDK with the application is a time-consuming process.
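
One low-effort check, sketched under assumptions (this is not from the 
original tests): compare the port-level drop counters on 2.0 vs 2.2 while the 
test runs, to see whether losses show up as imissed (RX ring overflow in the 
NIC) or rx_nombuf (mbuf pool exhaustion), which narrows the regression to the 
RX path or to the application side. Exact stats fields vary slightly between 
DPDK releases.

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Print where packets are being lost on a port. Field availability
 * (e.g. imissed) differs slightly across DPDK versions. */
static void
print_drop_counters(uint8_t port_id)
{
	struct rte_eth_stats stats;

	rte_eth_stats_get(port_id, &stats);
	printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
	       " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
	       port_id, stats.ipackets, stats.imissed,
	       stats.ierrors, stats.rx_nombuf);
}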

I'd really appreciate some help on this as I'm pretty stumped right now.

Thanks for your help
Ewan



[dpdk-users] [dpdk-dev] capture packets on VM

2016-07-20 Thread Pattan, Reshma
Hi Raja,

You can use l3fwd to forward packets to specific ports based on destination IP.
l3fwd uses an LPM lookup to do destination-based forwarding. As described in 
the paragraph quoted below from the given link, you need to add routing entries 
for the destination IPs of your interest and recompile the code.

The routing entries go in the following struct in the file 
dpdk/examples/l3fwd/l3fwd_lpm.c:
static struct ipv4_l3fwd_lpm_route ipv4_l3fwd_lpm_route_array[] = {
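/* Each entry is { destination network, prefix depth, output port }. */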
{IPv4(1, 1, 1, 0), 24, 0},
{IPv4(2, 1, 1, 0), 24, 1},
{IPv4(3, 1, 1, 0), 24, 2},
{IPv4(4, 1, 1, 0), 24, 3},
{IPv4(5, 1, 1, 0), 24, 4},
{IPv4(6, 1, 1, 0), 24, 5},
{IPv4(7, 1, 1, 0), 24, 6},
{IPv4(8, 1, 1, 0), 24, 7},
};
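
For instance (a hypothetical edit for your bridge addresses, which all sit in 
the same 192.168.100.0/24, so /32 host routes would be needed to split them 
across ports):

{IPv4(192, 168, 100, 20), 32, 1}, /* traffic to br1's IP -> port 1 */
{IPv4(192, 168, 100, 30), 32, 2}, /* traffic to br2's IP -> port 2 */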

http://dpdk.org/doc/guides-16.04/sample_app_ug/l3_forward.html
"The LPM lookup key is represented by the Destination IP Address field read 
from the input packet.
The ID of the output interface for the input packet is the next hop returned by 
the LPM lookup.
The set of LPM rules used by the application is statically configured and 
loaded into the LPM object at initialization time."
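
If you later want routes that are not compiled in, the same mechanism is 
available directly through librte_lpm. A minimal sketch, assuming the DPDK 2.x 
API (where the looked-up next hop is a uint8_t port id; later releases widened 
it to uint32_t):

#include <rte_ip.h>
#include <rte_lpm.h>
#include <rte_memory.h>

/* Create a small LPM table holding the same first two routes l3fwd
 * installs. Sketch only: error handling is trimmed, and the table would
 * normally be created once at init time. */
static struct rte_lpm *
build_demo_table(void)
{
	struct rte_lpm *lpm = rte_lpm_create("demo_lpm", SOCKET_ID_ANY, 1024, 0);

	if (lpm != NULL) {
		rte_lpm_add(lpm, IPv4(1, 1, 1, 0), 24, 0); /* 1.1.1.0/24 -> port 0 */
		rte_lpm_add(lpm, IPv4(2, 1, 1, 0), 24, 1); /* 2.1.1.0/24 -> port 1 */
	}
	return lpm;
}

/* Resolve the output port for a destination IP (host byte order);
 * returns -1 if no route matches. */
static int
lookup_out_port(struct rte_lpm *lpm, uint32_t dst_ip)
{
	uint8_t next_hop;

	if (rte_lpm_lookup(lpm, dst_ip, &next_hop) == 0)
		return next_hop;
	return -1;
}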

@Pablo: Do you know how MAC-based forwarding can be done using L2?

Thanks,
Reshma


From: Raja Jayapal [mailto:raja.jaya...@tcs.com]
Sent: Wednesday, July 20, 2016 7:10 AM
To: Pattan, Reshma 
Cc: users at dpdk.org; De Lara Guarch, Pablo 
Subject: RE: [dpdk-dev] capture packets on VM

Hi Reshma/All,

Please find the "show config fwd" output below.

 testpmd> show config fwd
Warning! Cannot handle an odd number of ports with the current port topology. 
Configuration must be changed to have an even number of ports, or relaunch 
application with --port-topology=chained
io packet forwarding - ports=3 - cores=1 - streams=3 - NUMA support disabled, 
MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 3 streams:
  RX P=0/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=2/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

As described above, I have modified the topology to have an even number of ports.

testpmd> show config fwd
io packet forwarding - ports=4 - cores=1 - streams=4 - NUMA support disabled, 
MP over anonymous pages disabled
Logical Core 1 (socket 0) forwards packets on 4 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=2/Q=0 (socket 0) -> TX P=3/Q=0 (socket 0) peer=02:00:00:00:00:03
  RX P=3/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02

I could confirm that the packets are forwarded as per the config, and setting 
the IP and MAC in the packet generator has no effect.
In the testpmd app, whatever MAC/IP is configured on the traffic generator, the 
packets are forwarded to the adjacent ports.

If I want to forward packets based on IP, can this be achieved by running 
l2fwd or l3fwd?

Kindly suggest.

Thanks,
Raja


-"Pattan, Reshma" mailto:reshma.pattan at 
intel.com>> wrote: -
To: Raja Jayapal mailto:raja.jayapal at tcs.com>>
From: "Pattan, Reshma" mailto:reshma.pat...@intel.com>>
Date: 07/19/2016 07:12PM
Cc: "users at dpdk.org" mailto:users at dpdk.org>>, "De Lara Guarch, Pablo" 
mailto:pablo.de.lara.guarch at intel.com>>
Subject: RE: [dpdk-dev] capture packets on VM
Hi Raja,

Since this is a usability question this should be discussed under the users at 
dpdk.org mailing list. Hence I removed dev at dpdk.org.
Yes, actually packets received from port 0 should be transmitted to port 1 and 
packets received from port 1 should go to port 2.

Can you paste the output of "show config fwd" and also check whether traffic 
on your board is flowing as per the existing flow rules?

Ex:
testpmd> show config fwd
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support disabled, 
MP over anonymous pages disabled
Logical Core 5 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01 ==> 
this means streams received on port 0 will be sent to port 1.
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00 ==> 
this means streams received on port 1 will be sent to port 0.

Thanks,
Reshma

From: Raja Jayapal [mailto:raja.jaya...@tcs.com]
Sent: Tuesday, July 19, 2016 12:38 PM
To: Pattan, Reshma <reshma.pattan at intel.com>
Cc: dev at dpdk.org
Subject: RE: [dpdk-dev] capture packets on VM

Hi Reshma,

Thanks for your information.
I have been trying to run the testpmd app and would like to get some idea on 
the packet flow in testpmd.

br0 -vnet0- (port0)VM NIC

br1--vnet1--(port1)VM NIC

br2--vnet2--(port2)VM NIC

br0 IP and MAC:
fe:54:00:0d:af:af - 192.168.100.10
br1 IP and MAC:
fe:54:00:4e:5b:df - 192.168.100.20
br2 IP and MAC:
fe:54:00:93:78:6d - 192.168.100.30

[dpdk-users] How to get instant pmd rx queue occupancy after rte_eth_rx_burst()

2016-07-20 Thread Ali Volkan Atli
Hi 

I'm trying to use the DPDK RED dropper via rte_red_enqueue() and I need to know 
the queue size after rte_eth_rx_burst(). Is there any sample application or a 
way to get the queue occupancy? Thanks in advance.

- Volkan
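
A possible starting point (a sketch, assuming the PMD implements the optional 
rx_queue_count callback, which the ixgbe driver does): 
rte_eth_rx_queue_count() reports how many RX descriptors are currently in use 
on a queue, which approximates the instantaneous occupancy right after a burst 
is drained.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Hypothetical RX loop fragment: drain a burst, then query how many
 * descriptors are still occupied on the queue. rte_eth_rx_queue_count()
 * returns a negative errno (e.g. -ENOTSUP) if the PMD lacks the callback. */
static void
poll_once(uint8_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);
	int backlog = rte_eth_rx_queue_count(port_id, queue_id);

	if (backlog >= 0)
		printf("burst=%u, descriptors still queued=%d\n", nb_rx, backlog);

	/* ... feed the occupancy into the rte_red_enqueue() decision, then
	 * process or free bufs[0..nb_rx-1] ... */
}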