[dpdk-users] DPDK on Alternate Architectures

2016-03-10 Thread Wiles, Keith
>Hi all,
>
>I am trying to get started with DPDK. Just wanted to know if DPDK has been 
>ported to any architecture other than Intel. I saw that DPDK has been ported 
>to Cavium Nitrox, but I am not really sure about the architecture of the Nitrox 
>cores, so any info on the architecture of Nitrox or any alternate architecture 
>implementation of DPDK would help (ideally a MIPS port).

DPDK runs on IA, ARM and PPC architectures. Please have a look at the Docs for 
DPDK.

http://dpdk.readthedocs.org/en/v2.2.0/
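For example, a DPDK 2.2 tree carries build targets for ARM and POWER alongside 
the IA ones; roughly (target names taken from the 2.2 config/ directory, 
toolchain setup is assumed and will differ per system):

   make config T=arm64-armv8a-linuxapp-gcc && make    # native ARMv8 build
   make config T=ppc_64-power8-linuxapp-gcc && make   # native POWER8 build

As far as those shipped config targets go, there is no MIPS port in the tree.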


>
>Thank you
>
>+--+
>Sirshak Das
>Research Assistant @ Indiana University
>Sent via Outlook
>
>


Regards,
Keith






[dpdk-users] DPDK on Alternate Architectures

2016-03-10 Thread Das, Sirshak
Hi all,


I am trying to get started with DPDK. Just wanted to know if DPDK has been 
ported to any architecture other than Intel. I saw that DPDK has been ported 
to Cavium Nitrox, but I am not really sure about the architecture of the Nitrox 
cores, so any info on the architecture of Nitrox or any alternate architecture 
implementation of DPDK would help (ideally a MIPS port).

Thank you

+--+
Sirshak Das
Research Assistant @ Indiana University
Sent via Outlook



[dpdk-users] DPDK ip-pipeline error when using virtual function interface

2016-03-10 Thread Murad Kablan
Hi, I would like to try the ip_pipeline sample application and I'm getting the
error below. The virtual functions work fine with the l2fwd application, though.

I found this thread, which is close to my problem; however, that solution didn't
work out for me.
http://dpdk.org/ml/archives/users/2015-November/72.html

My server is Ubuntu 14.04 with kernel 3.19.

dpdk_nic_bind.py --status
Network devices using DPDK-compatible driver

0000:82:10.0 '82599 Ethernet Controller Virtual Function' drv=vfio-pci
unused=
0000:82:10.2 '82599 Ethernet Controller Virtual Function' drv=vfio-pci
unused=
0000:82:10.4 '82599 Ethernet Controller Virtual Function' drv=vfio-pci
unused=
0000:82:10.6 '82599 Ethernet Controller Virtual Function' drv=vfio-pci
unused=
0000:82:11.0 '82599 Ethernet Controller Virtual Function' drv=vfio-pci
unused=

This is the error when I run
./build/ip_pipeline -p 0x01

EAL:   probe driver: 8086:10ed rte_ixgbevf_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:82:1e.4 on NUMA socket 1
EAL:   probe driver: 8086:10ed rte_ixgbevf_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:82:1e.6 on NUMA socket 1
EAL:   probe driver: 8086:10ed rte_ixgbevf_pmd
EAL:   Not managed by a supported kernel driver, skipped
[APP] Initializing MEMPOOL0 ...
[APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
PMD: ixgbevf_dev_configure(): Configured Virtual Function port id: 0
PMD: ixgbevf_dev_configure(): VF can't disable HW CRC Strip
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fbdbffef6c0
sw_sc_ring=0x7fbdbffef180 hw_ring=0x7fbdbffefc00 dma_addr=0xc7ffefc00
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fbdbffdcfc0
hw_ring=0x7fbdbffdf000 dma_addr=0xc7ffdf000
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst
size no less than 4 (port=0).
PANIC in app_link_up_internal():
LINK0 (0): PMD set up error -95
7: [./build/ip_pipeline() [0x42fd83]]
6: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)
[0x7fc25546cec5]]
5: [./build/ip_pipeline(main+0x5f) [0x42e92f]]
4: [./build/ip_pipeline(app_init+0xcf9) [0x43e6c9]]
3: [./build/ip_pipeline(app_link_up_internal+0x52f) [0x43d1cf]]
2: [./build/ip_pipeline(__rte_panic+0xc9) [0x4293ad]]
1: [./build/ip_pipeline(rte_dump_stack+0x1a) [0x4b54da]]
Aborted
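For what it's worth, the panic string can be traced back to its source to see
which PMD call is failing; a rough check against a DPDK 2.2 source tree (paths
assumed) is:

   grep -rn "PMD set up error" examples/ip_pipeline/
   # this appears to point at app_link_up_internal() in examples/ip_pipeline/init.c;
   # error -95 is -ENOTSUP, i.e. the link bring-up call made there is one the
   # ixgbevf virtual-function PMD does not support (an assumption, not a confirmed fix)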



Thanks in advance

Murad


[dpdk-users] L2fwd very slow throughput

2016-03-10 Thread Simon Jouet
Hi everyone,

I went through previous messages on the mailing list to look for people with a 
similar issue; even though I found some users with slower than expected 
throughput, it is still significantly higher than what I'm getting 
(http://dpdk.org/ml/archives/dev/2014-July/004114.html). Therefore I would like 
to know if some people have experienced similar problems or would be able to 
point out what might be wrong with the setup.

The setup is fairly standard, I have two hosts with Intel X710 quad port 10G 
NICs. One host is running l2fwd and the other one is running MoonGen with the 
rfc2544 throughput test. The two hosts are connected with two SFP+ cables on 
port 0 and 1 of the NICs.

Both machines are similar, with an Intel 6700K Skylake processor (4 cores at 
4GHz with hyperthreading) and 32GB of DDR4 RAM (clocked at 3GHz), running Linux 
with kernel 4.4.3 and eight 1G hugepages. On both hosts DPDK 2.2.0 is used and 
the uio_pci_generic driver is loaded. I think that's it for the setup.

If I run the throughput test using the Linux bridge with a 64-byte packet size, 
I achieve a slow but expected throughput of 1505.14 Mbit/s:

   maximal rate for packetsize 64: 2.24 Mpps, 1146.77 MBit/s, 1505.14 
MBit/s wire rate

Running the same test with l2fwd instead of the Linux bridge ends up with a 
very slow throughput

   maximal rate for packetsize 64: 1.07 Mpps, 546.08 MBit/s, 716.74 MBit/s 
wire rate

L2fwd is started using the command below; I tried giving it more or fewer cores, 
changing the number of memory channels and also the number of RX queues, but the 
result is always pretty much the same.
   sudo ./build/l2fwd -c 0xff -n 3 -- -p 3

I tried to use testpmd in iofwd and macfwd mode with the default configuration, 
and with exactly the same test I reach 10Gbit/s without problems. However, with 
l2fwd and l3fwd the performance is slow as described above. I tried to change 
the rx buffer size in the l2fwd example from 32 to 64 but the result is 
the same.
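For reference, the testpmd io-forward run described above would look roughly
like this (binary path and options are assumptions here, not taken from the
original message):

   sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xff -n 4 -- --forward-mode=io --burst=32

which makes the comparison essentially one between testpmd's forwarding loop and
l2fwd's, on a comparable burst size.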

Finally I tried the netmap_compat bridge and the result is half the speed of 
l2fwd

   maximal rate for packetsize 64: 0.75 Mpps, 382.26 MBit/s, 501.71 MBit/s 
wire rate

Am I missing something obvious?

Best regards,
Simon



[dpdk-users] Not able to access uio0/device/config from LXC

2016-03-10 Thread Nagaprabhanjan Bellaru
I have followed the steps mentioned in this thread:


to create a uio0, but EAL complains about the following:
--
EAL: Cannot open /sys/class/uio/uio0/device/config: Permission denied
EAL: Error - exiting with code: 1
  Cause: Requested device 0000:00:19.0 cannot be used
--

even though I run as the root user and have all the rw permissions. Can
anybody tell me what else I could be missing?
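A couple of things worth checking (guesses about a typical LXC setup, not a
confirmed fix):

   # inside the container: is sysfs mounted read-write and is the file reachable?
   mount | grep sysfs
   ls -l /sys/class/uio/uio0/device/config
   # on an Ubuntu host, the default LXC AppArmor profile restricts much of /sys
   # even for root; as an (isolation-reducing) test, the container config can use:
   #   lxc.aa_profile = unconfined
   # and the uio char device should be whitelisted (major number is an example):
   #   lxc.cgroup.devices.allow = c 243:* rwm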

Thanks,
-nagp


[dpdk-users] Minimal dpdk configuration for 2 hosts

2016-03-10 Thread dawid_jurek
Hello Harold,
I did some investigation and unidirectional forwarding between 2 hosts, each 
with a single-port NIC, is indeed possible. testpmd with the following command 
line arguments can do that:

On host1 (as sender):
./testpmd -c 0x3 -n4 -- -i --forward-mode=txonly --port-topology=chained
On host2 (as receiver):
./testpmd -c 0x3 -n4 -- -i --forward-mode=rxonly --port-topology=chained

Still, I don't know how to run bidirectional communication. Harold, could you 
provide the commands/command line options for this? (A possible sketch follows 
after this paragraph.) Also, it seems that a 2-port NIC on every host is 
required to run basicfwd, rxtx_callbacks and the other examples 
(DPDK prints that the number of ports must be even).
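One possible way to get traffic flowing in both directions with a single port
per host (a sketch only, not verified on this setup) is to run testpmd in io
forward mode on both hosts and let each side inject an initial burst from the
interactive prompt:

   ./testpmd -c 0x3 -n4 -- -i --forward-mode=io --port-topology=chained
   testpmd> start tx_first

Each host then transmits the initial burst, forwards whatever it receives back
out the same port, and so both sides keep sending and receiving, which matches
Harold's point below that one port can serve both TX and RX.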
Regards,
Dawid
On 2016-03-06 10:30:51, Harold Demure wrote:
Hello Dawid,
I am no expert but a single port should be able to take care of both TX and RX 
queues. For example, I am currently running two hosts with only one port each 
and they are able to both send and receive messages.
Regards,
Harold
2016-03-04 21:47 GMT+01:00 dawid_jurek :
Hello,
I wonder what the minimal configuration (in terms of the number of NIC ports) is 
to run basic DPDK examples like basicfwd, rxtx_callbacks or forwarding with 
testpmd between 2 hosts connected directly by Ethernet.
Is it possible to perform unidirectional transmission in a sender-receiver 
scenario (2 hosts, each with one port)?
It seems that for any kind of transmission between 2 machines I need at least 
4 ports (because every port may handle either TX or RX, but not both at the 
same time). Is that correct?
Regards,
Dawid