[dpdk-users] [announce] driverctl: utility for persistent alternative driver binding

2015-12-04 Thread Panu Matilainen
Hi all,

While this is not directly related to DPDK or OVS, it is potentially 
useful for users of both, so excuse me for cross-posting.

Quoting from the project README (for the full text see
http://laiskiainen.org/git/?p=driverctl.git;a=blob_plain;f=README)

 > driverctl is a tool for manipulating and inspecting the system
 > device driver choices.
 >
 > Devices are normally assigned to their sole designated kernel driver
 > by default. However, in some situations it may be desirable to
 > override that default, for example to try an older driver to
 > work around a regression, or to try an experimental
 > alternative driver. Another common use case is pass-through
 > drivers and driver stubs that allow userspace to drive the device,
 > such as in the case of virtualization.
 >
 > driverctl integrates with udev to support overriding
 > driver selection for both cold- and hotplugged devices from the
 > moment of discovery, but can also change already assigned drivers,
 > assuming they are not in use by the system. The driver overrides
 > created by driverctl are persistent across system reboots
 > by default.
 >
 > Usage
 > -----
 >
 > Find devices currently driven by ixgbe driver:
 >
 > # driverctl -v list-devices | grep ixgbe
 > 0000:01:00.0 ixgbe (Ethernet 10G 4P X520/I350 rNDC)
 > 0000:01:00.1 ixgbe (Ethernet 10G 4P X520/I350 rNDC)
 >
 > Change them to use the vfio-pci driver:
 > # driverctl set-override 0000:01:00.0 vfio-pci
 > # driverctl set-override 0000:01:00.1 vfio-pci
 >
 > Find devices with driver overrides:
 > # driverctl -v list-devices | grep \\*
 > 0000:01:00.0 vfio-pci [*] (Ethernet 10G 4P X520/I350 rNDC)
 > 0000:01:00.1 vfio-pci [*] (Ethernet 10G 4P X520/I350 rNDC)
 >
 > Remove the override from slot 0000:01:00.1:
 > # driverctl unset-override 0000:01:00.1
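
As background: overrides like the above are built on the kernel's
per-device "driver_override" sysfs attribute (Linux 3.16 and later).
A minimal, non-persistent sketch of roughly what set-override amounts
to at the sysfs level, using the same device as above:

# echo vfio-pci > /sys/bus/pci/devices/0000:01:00.0/driver_override
# echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
# echo 0000:01:00.0 > /sys/bus/pci/drivers_probe

What driverctl adds on top of this mechanism is the udev integration
and the persistence across reboots.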

DPDK of course has its own dpdk_nic_bind(.py) tool for this purpose; the
main differences from driverctl are:
- driverctl bindings are persistent across system boots
- driverctl bindings take place immediately on cold- and hotplug
- driverctl is a generic tool not limited to network adapters
- dpdk_nic_bind, being a special-purpose tool, has many more
   sanity checks for its supported use cases
- dpdk_nic_bind supports binding multiple NICs at once (see the
   sketch below)
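
For comparison, a typical dpdk_nic_bind invocation binding both ports
in one go (assuming the tools/ path and options of the DPDK 2.x
releases):

# ./tools/dpdk_nic_bind.py --bind=vfio-pci 0000:01:00.0 0000:01:00.1
# ./tools/dpdk_nic_bind.py --status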

The project currently lives at
 http://laiskiainen.org/git/?p=driverctl.git

Feedback, patches, etc. are welcome.

- Panu -


[dpdk-users] DPDK KNI Issue

2015-12-04 Thread Pattan, Reshma
Hi,

I tried KNI ping testing on Fedora with DPDK 2.2, using one loopback
connection; it works fine, and that was without steps 9 and 10.
I am not sure why steps 9 & 10 are needed in your case, but you can try
without those two steps and see the results.
Also, after you start the ping, make sure there is no core dump in dmesg for
the KNI module.
If you are running tcpdump with an icmp filter, try running without the filter
and first see whether ARP packets are reaching KNI at all.
Also, can you check whether the packet drop stats of the KNI interface are
increasing? A couple of commands for both checks are sketched below.
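
For example (vEth0 being the interface name from your report; adjust
as needed):

tcpdump -i vEth0 -e arp or icmp   # ARP and ICMP together, not icmp only
ip -s link show vEth0             # RX/TX packet and drop counters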

Thanks,
Reshma

> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Ilir Iljazi
> Sent: Thursday, December 3, 2015 9:55 PM
> To: users at dpdk.org
> Subject: [dpdk-users] DPDK KNI Issue
> 
> Hi,
> I have been having an issue with DPDK KNI whereby I can't send or receive
> packets from the KNI interface. I spent about a week trying to figure out
> the issue myself, to no avail. Although I did find articles with a similar
> signature to mine, none of the proposed solutions solved the problem.
> 
> Environment:
> Ubuntu Server 14.04
> DPDK Package 2.1.0 (Latest)
> Network Card: 10GbE (ixgbe driver)
> 
> 06:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection
> 06:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection
> 
> 06:00.0 (port 0, connected to switch)
> 06:00.1 (port 1, not connected to switch)
> 
> Configuration:
> 1.) DPDK built without issue
> 2.) Modules Loaded:
> 
> insmod $RTE_TARGET/kmod/igb_uio.ko
> insmod $RTE_TARGET/kmod/rte_kni.ko kthread_mode=multiple
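> 
> As a sanity check, something like the following confirms both modules
> are loaded:
> 
> lsmod | grep -e igb_uio -e rte_kni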
> 
> 
> 3.) Reserved Huge Pages:
> 
> echo 4096 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> echo 4096 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
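> 
> The reservation can be verified with, for example:
> 
> grep -i huge /proc/meminfo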
> 
> 
> 4.) Mounted huge page partition
> 
> echo ">>> Mounting huge page partition"
> mkdir -p /mnt/huge
> mount -t hugetlbfs nodev /mnt/huge
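> 
> As a side note, an /etc/fstab line along these lines would make the
> mount persistent across reboots:
> 
> nodev /mnt/huge hugetlbfs defaults 0 0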
> 
> 
> 5.) Interfaces 06:00.0/1 bound to igb_uio module (option 19 on setup)
> 
> Network devices using DPDK-compatible driver
> 
> 0000:06:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=
> 0000:06:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=
> 
> 
> 6.) Started kni test application:
> 
> Command: ./examples/kni/build/app/kni -n 4 -c 0xff -- -p 0x1 -P --config="(0,5,7)" &
> 
> Output:
> 
> EAL: PCI device 0000:06:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   PCI memory mapped at 0x7fcda5c0
> EAL:   PCI memory mapped at 0x7fcda5c8
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:06:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   PCI memory mapped at 0x7fcda5c84000
> EAL:   PCI memory mapped at 0x7fcda5d04000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 6
> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
> APP: Port ID: 0
> APP: Rx lcore ID: 5, Tx lcore ID: 7
> APP: Initialising port 0 ...
> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7fcd5c1adcc0 sw_sc_ring=0x7fcd5c1ad780 hw_ring=0x7fcd5c1ae200 dma_addr=0xe5b1ae200
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7fcd5c19b5c0 hw_ring=0x7fcd5c19d600 dma_addr=0xe5b19d600
> PMD: ixgbe_set_tx_function(): Using simple tx code path
> PMD: ixgbe_set_tx_function(): Vector tx enabled.
> PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 32.
> KNI: pci: 06:00:00  8086:10fb
> 
> 
> Checking link status
> done
> Port 0 Link Up - speed 10000 Mbps - full-duplex
> APP: Lcore 1 has nothing to do
> APP: Lcore 2 has nothing to do
> APP: Lcore 3 has nothing to do
> APP: Lcore 4 has nothing to do
> APP: Lcore 5 is reading from port 0
> APP: Lcore 6 has nothing to do
> APP: Lcore 7 is writing to port 0
> APP: Lcore 0 has nothing to do
> 
> 
> 7.) KNI interface configured and brought up:
> 
> root at l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0 192.168.13.95 netmask 255.255.248.0 up
> APP: Configure network interface of 0 up
> PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 32.
> 
> root at l3sys2-acc2-3329:~/dpdk-2.1.0# ifconfig vEth0
> 
> vEth0 Link encap:Ethernet  HWaddr 90:e2:ba:55:fd:c4
>   inet addr:192.168.13.95  Bcast:192.168.15.255  Mask:255.255.248.0
>   inet6 addr: fe80::92e2:baff:fe55:fdc4/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:0 errors:0 dropped:8 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
> 
> Note also that dmesg is clean, not pointing to any issues:
> [ 1770.113952] KNI: /dev/kni opened