[ovs-discuss] Bug in passing values : OVS QOS HFSC

2018-08-28 Thread Suprajith HS
Hi,

I have been trying to pass values of m1, d, and m2 into the hfsc_setup_class__
function. I have changed the HFSC_Class to include the following:

[image: image.png]

So now the hfsc_setup_class__ function looks like this:

[image: image.png]

I am unable to pass the values of m1, d, and m2 from the CLI. It looks like
other-config supports only min and max rates.
Is the absence of the other variables a bug, or am I doing something wrong?
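
For context, the standard way to attach an HFSC QoS from the CLI only exposes
min/max rates (a minimal sketch; eth0 and the rates are just placeholders):

ovs-vsctl set port eth0 qos=@newqos -- \
  --id=@newqos create qos type=linux-hfsc other-config:max-rate=100000000 \
      queues:0=@q0 -- \
  --id=@q0 create queue other-config:min-rate=10000000 other-config:max-rate=50000000

As far as I can tell from vswitchd/vswitch.xml, min-rate and max-rate are the
only keys documented for linux-hfsc queues, so any new m1/d/m2 keys would also
have to be read out of other_config in netdev-linux.c before they could reach
hfsc_setup_class__.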


Thank you,

Best Regards,

Suprajith HS


[ovs-discuss] Multi-Node RSPAN Mirroring

2018-08-28 Thread Bryan Richardson
Hello-

I have a multi-node cluster, each node with a physical NIC connected to a
trunk port on a switch, and with OVS on each node that has the physical NIC
as a port on a bridge. I'm able to run multiple VMs on each node using
multiple VLANs scheduled throughout the cluster, and I have no problems at
all with connectivity between VMs on the same VLANs but on different nodes
(i.e., the VLANs and trunks are working as expected).

I'm attempting to get mirroring of all the traffic in a VLAN to another
VLAN working, but I've only been partially successful. Here's a contrived
scenario that explains my test setup:

3 nodes: A, B, and C
3 experiment VMs: X, Y, and Z
1 capture VM: CAP
1 experiment VLAN: 101
1 RSPAN VLAN: 201

Each experiment VM is scheduled on a different node: X -> A, Y -> B, Z -> C
The capture VM is scheduled on node C: CAP -> C

Each experiment VM has a tap on the OVS bridge for the experiment VLAN 101.
The capture VM has a tap on the OVS bridge for the RSPAN VLAN 201.

All 3 of the experiment VMs can successfully communicate with each other
over experiment VLAN 101 across nodes.

On each node, I add an OVS mirror to SPAN experiment VLAN 101 traffic to
RSPAN VLAN 201 using the command below. In the command, the eno1 interface
is the physical interface on the node that is trunked to the physical
switch.

ovs-vsctl \
  -- --id=@trunk get port eno1 \
  -- --id=@m create mirror name=m0 select-src-port=@trunk
select-dst-port=@trunk select-vlan=101 output-vlan=201 \
  -- set bridge br0 mirrors=@m
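
For reference, the mirror row keeps its own counters, which should show whether
m0 is selecting any traffic at all on a given node:

ovs-vsctl list mirror m0
ovs-vsctl get mirror m0 statistics:tx_packets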

After configuring the above mirror on each node and running tcpdump on the
capture VM interface connected to the RSPAN VLAN 201, I only see traffic
sourced by and destined to the experiment VM running on the same node (i.e.,
VM "Z" on node "C").

Since each node is dumping mirrored traffic onto the RSPAN VLAN 201, I was
hoping to see all experiment VLAN 101 traffic across all 3 nodes in the
capture VM.

Does anyone know why this isn't working as expected? Or perhaps it is
working as expected and I'm just out of luck? Is it the case that the
RSPAN VLAN 201 has to be configured as an RSPAN VLAN on the physical switch
as well? I cannot test this theory right now because I do not have a
physical switch capable of RSPAN configuration.
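
A capture on the trunk NIC itself (eno1 in the command above) might help narrow
this down, e.g.:

tcpdump -i eno1 -e -nn vlan 201

If the VLAN-201 copies show up on eno1 of nodes A and B but never arrive at
node C, that would point to the physical switch not carrying (or not flooding)
VLAN 201, i.e. to switch-side RSPAN configuration.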


[ovs-discuss] Multicast snooping turned off on interfaces with a multicast querier/router

2018-08-28 Thread Ajit Warrier
I have a setup where an embedded device runs OVS on a bridge connecting two
interfaces (1 and 2). Interface 1 is connected to a multicast router
sending IGMP queries periodically. Now if I open a multicast socket on a
device on interface 1, I see the IGMP join going into the embedded device
with OVS, but the command:

ovs-appctl mdb/show br0

does not list that multicast flow. Opening a multicast socket on a device
connected to interface 2 works as expected - I get an entry for that flow
in the above command.

Any idea why this happens?
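
For reference, these are the snooping-related knobs I know of on the bridge
(the unregistered-flood option and the learned router ports are the parts I am
least sure about in this scenario):

ovs-vsctl list bridge br0 | grep mcast
ovs-vsctl set bridge br0 mcast_snooping_enable=true
ovs-vsctl set bridge br0 other_config:mcast-snooping-disable-flood-unregistered=true
ovs-appctl mdb/show br0    # should also list learned router/querier ports, not just groups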

Thanks,
Ajit.


[ovs-discuss] VM does not receive packets when using ovs-dpdk

2018-08-28 Thread Kashyap Thimmaraju
Hi Everybody,

I'm using a Mellanox Connectx4-Ln NIC (OFED 4.3-1.0.1.0) with ovs-2.9.0,
dpdk 17.11 and qemu 3.0. I've followed the ovs documentation [1] and
mellanox documentation [2] to configure ovs, the dpdk ports connected to
the physical ports and the dpdkvhostuserclient ports for the VM. I also
inserted appropriate flow rules to send the packets to the VM. However,
I do not receive any packets in the VM. Packets are sent and received if
I simply send the packets from one dpdk port to the other (phy-phy). So
I'm sure there is something strange going on when sending the packets
from the dpdk port to the dpdkvhostuserclient port.
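
For reference, the usual places to check where the packets stop are the
per-port counters and the datapath flows, e.g.:

./ovs-ofctl dump-ports br0 vhost-user1     # rx/tx and drop counters for the port facing the VM
./ovs-vsctl get interface vhost-user1 statistics
./ovs-appctl dpctl/dump-flows              # confirm a datapath flow actually forwards to the vhost port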

Below is some configuration/system information. Any help is much
appreciated.

**

Interface/port/flow configuration

root@havel:/usr/local/src/ovs-dpdk/openvswitch-2.9.0/utilities#
./ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
options:dpdk-devargs=:03:00.0,n_rxq_desc=1024,n_txq_desc=1024,n_rxq=1,pmd-rxq-affinity="2"
ofport_request=1
root@havel:/usr/local/src/ovs-dpdk/openvswitch-2.9.0/utilities#
./ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
options:dpdk-devargs=:03:00.1,n_rxq_desc=1024,n_txq_desc=1024,n_rxq=1,pmd-rxq-affinity="2"
ofport_request=2
root@havel:/usr/local/src/ovs-dpdk/openvswitch-2.9.0/utilities#
./ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1
type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost-user1
root@havel:/usr/local/src/ovs-dpdk/openvswitch-2.9.0/utilities#
./ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2
type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost-user2

root@havel:/usr/local/src/ovs-dpdk/openvswitch-2.9.0/utilities#
./ovs-vsctl show
3fcbc293-ad6d-4f5c-be97-5236d75ce47a
    Bridge "br0"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs=":03:00.0,n_rxq_desc=1024,n_txq_desc=1024,n_rxq=1,pmd-rxq-affinity=2"}
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs=":03:00.1,n_rxq_desc=1024,n_txq_desc=1024,n_rxq=1,pmd-rxq-affinity=2"}
        Port "br0"
            Interface "br0"
                type: internal
        Port "vhost-user2"
            Interface "vhost-user2"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhost-user2"}
        Port "vhost-user1"
            Interface "vhost-user1"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhost-user1"}

root@havel:/usr/local/src/ovs-dpdk/openvswitch-2.9.0/utilities#
./ovs-vsctl list interface dpdk0
_uuid   : 501edcef-76f6-4909-8eb8-df0f62ba3203
admin_state : up
bfd : {}
bfd_status  : {}
cfm_fault   : []
cfm_fault_status    : []
cfm_flap_count  : []
cfm_health  : []
cfm_mpid    : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex  : full
error   : []
external_ids    : {}
ifindex : 10816857
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current    : []
link_resets : 0
link_speed  : 10
link_state  : up
lldp    : {}
mac : []
mac_in_use  : "ec:0d:9a:cc:a5:2a"
mtu : 1500
mtu_request : []
name    : "dpdk0"
ofport  : 1
ofport_request  : 1
options :
{dpdk-devargs=":03:00.0,n_rxq_desc=1024,n_txq_desc=1024,n_rxq=1,pmd-rxq-affinity=2"}
other_config    : {}
statistics  : {rx_bytes=6523520, rx_dropped=0, rx_errors=0,
rx_mbuf_allocation_errors=0, rx_missed_errors=0, rx_packets=101930,
tx_bytes=0, tx_dropped=0, tx_errors=0, tx_packets=0}
status  : {driver_name="net_mlx5", if_descr="DPDK 17.11.1
net_mlx5", if_type="6", max_hash_mac_addrs="0", max_mac_addrs="128",
max_rx_pktlen="1518", max_rx_queues="65535", max_tx_queues="65535",
max_vfs="0", max_vmdq_pools="0", min_rx_bufsize="32", numa_id="0",
pci-device_id="0x1015", pci-vendor_id="0x", port_no="0"}
type    : dpdk


root@havel:/usr/local/src/ovs-dpdk/openvswitch-2.9.0/utilities#
./ovs-vsctl list interface dpdk1
_uuid   : 93589e22-b258-4d15-96de-cf8cac4f2816
admin_state : up
bfd : {}
bfd_status  : {}
cfm_fault   : []
cfm_fault_status    : []
cfm_flap_count  : []
cfm_health  : []
cfm_mpid    : []
cfm_remote_mpids    : []
cfm_remote_opstate  : []
duplex  : full
error   : []
external_ids    : {}
ifindex : 6054325
ingress_policing_burst: 0
ingress_policing_rate: 0
lacp_current    : []
link_resets : 0
link_speed  : 10
link_state  : up
lldp    : {}
mac : []
mac_in_use  : 

[ovs-discuss] DPDK vxlan not work

2018-08-28 Thread menkeyi
[root@compute01 ~]# uname -r
3.10.0-862.el7.x86_64

1) OVS VXLAN mode works.
2) OVS + DPDK VXLAN mode does not work.

DPDK OVS info:

[root@compute01 ~]# ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.9.0
DPDK 17.11.0

[root@compute01 ~]# dpdk-devbind --status|head -n10
Network devices using DPDK-compatible driver

:81:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=vfio-pci 
unused=
:84:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=vfio-pci 
unused=

[root@compute01 ~]# ovs-vsctl show
b79a7e81-2d68-4ecd-8cd8-bc2b7d1c52ef
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-33000169"
            Interface "vxlan-33000169"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="51.0.1.101", out_key=flow, remote_ip="51.0.1.105"}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-330001ca"
            Interface "vxlan-330001ca"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="51.0.1.101", out_key=flow, remote_ip="51.0.1.202"}
    Bridge br-provider
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-provider
            Interface br-provider
                type: internal
        Port phy-br-provider
            Interface phy-br-provider
                type: patch
                options: {peer=int-br-provider}
        Port "Team1"
            Interface "team1-enp132s0f0"
                type: dpdk
                options: {dpdk-devargs=":84:00.0"}
            Interface "team1-enp129s0f0"
                type: dpdk
                options: {dpdk-devargs=":81:00.0"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vhu15e890ef-36"
            tag: 1
            Interface "vhu15e890ef-36"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/run/openvswitch/vhu15e890ef-36"}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-provider
            Interface int-br-provider
                type: patch
                options: {peer=phy-br-provider}
        Port br-int
            Interface br-int
                type: internal
        Port "vhu82caa59c-d3"
            tag: 1
            Interface "vhu82caa59c-d3"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/run/openvswitch/vhu82caa59c-d3"}
        Port "tapadb8cafa-a0"
            tag: 4095
            Interface "tapadb8cafa-a0"
                type: internal
    ovs_version: "2.9.0"

When I create a virtual machine, I find the following error in ovs-vswitchd.log.
I don't know whether it is related:

on vxlan_sys_4789 device failed: No such device
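
As far as I understand, with the userspace (DPDK) datapath the kernel
vxlan_sys_4789 device is not used at all; the tunnel is handled inside
ovs-vswitchd, so the state worth checking would be something like:

ovs-appctl tnl/ports/show    # VXLAN tunnel ports known to the userspace datapath
ovs-appctl ovs/route/show    # routes used to reach the remote VTEPs (51.0.1.105 / 51.0.1.202)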

2018-08-25T02:34:54.453Z|01178|dpdk|INFO|VHOST_CONFIG: vhost-user client: 
socket created, fd: 83
2018-08-25T02:34:54.453Z|01179|netdev_dpdk|INFO|vHost User device 
'vhu82caa59c-d3' created in 'client' mode, using client socket 
'/var/run/openvswitch/vhu82caa59c-d3'
2018-08-25T02:34:54.453Z|01180|dpdk|WARN|VHOST_CONFIG: failed to connect to 
/var/run/openvswitch/vhu82caa59c-d3: No such file or directory
2018-08-25T02:34:54.453Z|01181|dpdk|INFO|VHOST_CONFIG: 
/var/run/openvswitch/vhu82caa59c-d3: reconnecting...
2018-08-25T02:34:54.634Z|01182|dpif_netdev|INFO|Core 8 on numa node 1 assigned 
port 'team1-enp129s0f0' rx queue 0 (measured processing cycles 587888).
2018-08-25T02:34:54.634Z|01183|dpif_netdev|INFO|Core 8 on numa node 1 assigned 
port 'team1-enp132s0f0' rx queue 0 (measured processing cycles 402684).
2018-08-25T02:34:54.634Z|01184|dpif_netdev|INFO|Core 0 on numa node 0 assigned 
port 'vhu82caa59c-d3' rx queue 0 (measured processing cycles 0).
2018-08-25T02:34:54.635Z|01185|bridge|INFO|bridge br-int: added interface 
vhu82caa59c-d3 on port 1
2018-08-25T02:34:54.723Z|01186|dpif_netdev|INFO|Core 8 on numa node 1 assigned 
port 'team1-enp129s0f0' rx queue 0 (measured processing cycles 587888).
2018-08-25T02:34:54.723Z|01187|dpif_netdev|INFO|Core 8 on numa node 1 assigned 
port 'team1-enp132s0f0' rx queue 0 (measured processing cycles 402684).
2018-08-25T02:34:54.723Z|01188|dpif_netdev|INFO|Core 0 on numa node 0 assigned 
port 'vhu82caa59c-d3' rx queue 0 (measured processing cycles 0).
2018-08-25T02:34:54.770Z|01189|dpdk|INFO|VHOST_CONFIG: vhost-user client: 
socket created, fd: 84
2018-08-25T02:34:54.770Z|01190|netdev_dpdk|INFO|vHost User device 
'vhu15e890ef-36' created in 

Re: [ovs-discuss] Requested device cannot be used

2018-08-28 Thread O Mahony, Billy
> -Original Message-
> From: 聶大鈞 [mailto:tcn...@iii.org.tw]
> Sent: Tuesday, August 28, 2018 11:52 AM
> To: O Mahony, Billy ; ovs-discuss@openvswitch.org
> Subject: RE: [ovs-discuss] Requested device cannot be used
> 
> Hello Billy,
> 
> Thanks for your suggestion, It does solve my problem.
> 
> Here comes a further question, I did notice that this NIC card is allocated to
> numa one(my second numa node).
> Hence, I setup socket-mem by trying following commands:
> ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-
> mem="0,1024", or ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-
> socket-mem=1024, or ovs-vsctl --no-wait set Open_vSwitch .
> other_config:dpdk-socket-mem="1024,1024"
> All these commands could not solve the problem.
> 
[[BO'M]] I would have thought that 1024,1024 should work, as you do have 2048 
pages configured. However, it could be that the allocation was not even across 
the two NUMA nodes, and on one of them not all the requested pages were 
allocated (due to not enough sufficiently large contiguous physical memory 
being free).

I usually do 'echo 1024 > 
/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages' to make 
my hugepage allocations, then read back from that file and .../free_hugepages to 
verify the actual allocations (note those figures are denominated in pages, NOT 
MB like dpdk-socket-mem).
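
Something like this, per node (a sketch for 2 MB pages; adjust the count and
repeat for node1):

echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages

If nr_hugepages reads back lower than requested on either node, that node simply
could not allocate all the pages.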

> Could you please tell me what the difference between yours (512,512) and
> mine (1024,1024) is?
> 
> Finally, thanks for your help again. When things are stable, I'll adjust
> pmd-cpu-mask for performance.
> 
> Best Regard
> 
> Tcnieh
> 
> 
> -Original Message-
> From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
> Sent: Tuesday, August 28, 2018 5:09 PM
> To: tcn...@iii.org.tw; ovs-discuss@openvswitch.org
> Subject: RE: [ovs-discuss] Requested device cannot be used
> 
> Hi Tcnieh,
> 
> 
> 
> Looks like your nics are on NUMA1 (second numa node) – as their pci bus
> number is > 80.
> 
> 
> 
> But you have not told OvS to allocate hugepage memory on the second numa
> node – the 0 in “--socket-mem 1024,0).”
> 
> 
> 
> So you need to change your line to something like:
> 
> ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-
> mem="512,512"
> 
> 
> 
> to have Hugepages available on both nodes.
> 
> 
> 
> Also you have allocated just a single core (core 0) for DPDK PMDs. It is also
> unusual to allocate core zero. That should work but with reduced performance
> as the PMD (on NUMA0) will have to access the packet data on NUMA1.
> 
> 
> 
> Have a look at your cpu topology. And modify your core-mask to allocate a core
> from NUMA1 also.
> 
> 
> 
> The details are in the docs: Documentation/topics/dpdk/* and
> Documentation/howto/dpdk.rst.
> 
> 
> 
> Regards,
> 
> Billy
> 
> 
> 
> 
> 
> From: ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss-
> boun...@openvswitch.org] On Behalf Of ???
> Sent: Tuesday, August 28, 2018 3:37 AM
> To: ovs-discuss@openvswitch.org
> Subject: [ovs-discuss] Requested device cannot be used
> 
> 
> 
> Hello all,
>   I am trying to get the performance of intel x520 10G NIC over Dell 
> R630/R730,
> but I keep getting an unexpected error, please see below.
> 
> I followed the instruction of https://goo.gl/T7iTuk   
> to
> compiler the DPDK and OVS code. I've successfully binded both my x520 NIC
> ports to DPDK, using either igb_uio or vfio_pci:
> 
> ~~
> Network devices using DPDK-compatible driver
> 
> :82:00.0 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=vfio-pci
> :82:00.1 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=vfio-pci
> 
> Network devices using kernel driver
> ===
> :01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3
> unused=igb_uio,vfio-pci
> :01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno2 drv=tg3
> unused=igb_uio,vfio-pci
> :02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno3 drv=tg3
> unused=igb_uio,vfio-pci
> :02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno4 drv=tg3
> unused=igb_uio,vfio-pci *Active*
> 
> Other Network devices
> =
> 
> ~~~
> 
> And the hugepage was set to 2048 * 2M
> ~~~
> HugePages_Total:2048
> HugePages_Free: 1024
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:   2048 kB
> ~~~
> 
> Here comes the problem, while I tried to init the ovsdb-server and 
> ovs-vswitch, I
> got the following error:
> ~~~
>2018-08-27T09:54:05.548Z|2|ovs_numa|INFO|Discovered 16 CPU cores
> on NUMA node 0
>2018-08-27T09:54:05.548Z|3|ovs_numa|INFO|Discovered 16 CPU cores
> on NUMA node 1
>

Re: [ovs-discuss] Requested device cannot be used

2018-08-28 Thread 聶大鈞
Hello Billy,

Thanks for your suggestion; it does solve my problem.

Here comes a further question: I did notice that this NIC is allocated to 
NUMA node one (my second NUMA node).
Hence, I set up socket-mem by trying the following commands:
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="0,1024", or
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=1024, or
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
None of these commands solved the problem.

Could you please tell me what the difference between yours (512,512) and 
mine (1024,1024) is?

Finally, thanks for your help again. When things are stable, I'll adjust 
pmd-cpu-mask for performance.

Best Regards,

Tcnieh


-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com] 
Sent: Tuesday, August 28, 2018 5:09 PM
To: tcn...@iii.org.tw; ovs-discuss@openvswitch.org
Subject: RE: [ovs-discuss] Requested device cannot be used

Hi Tcnieh,

 

Looks like your nics are on NUMA1 (second numa node) – as their pci bus number 
is > 80.

 

But you have not told OvS to allocate hugepage memory on the second numa node – 
the 0 in “--socket-mem 1024,0).”

 

So you need to change your line to something like:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="512,512"

 

to have Hugepages available on both nodes.

 

Also you have allocated just a single core (core 0) for DPDK PMDs. It is also 
unusual to allocate core zero. That should work but with reduced performance as 
the PMD (on NUMA0) will have to access the packet data on NUMA1.

 

Have a look at your cpu topology. And modify your core-mask to allocate a core 
from NUMA1 also.

 

The details are in the docs: Documentation/topics/dpdk/* and 
Documentation/howto/dpdk.rst.

 

Regards,

Billy

 

 

From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of 聶大鈞
Sent: Tuesday, August 28, 2018 3:37 AM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] Requested device cannot be used

 

Hello all,
  I am trying to evaluate the performance of an Intel X520 10G NIC on Dell R630/R730 
servers, but I keep getting an unexpected error; please see below.

I followed the instructions at https://goo.gl/T7iTuk to 
compile the DPDK and OVS code. I've successfully bound both my X520 NIC ports 
to DPDK, using either igb_uio or vfio-pci:
 
~~
Network devices using DPDK-compatible driver 

:82:00.0 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=vfio-pci
:82:00.1 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=vfio-pci
 
Network devices using kernel driver
===
:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 
unused=igb_uio,vfio-pci
:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno2 drv=tg3 
unused=igb_uio,vfio-pci
:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno3 drv=tg3 
unused=igb_uio,vfio-pci
:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno4 drv=tg3 
unused=igb_uio,vfio-pci *Active*
 
Other Network devices
=

~~~
 
And the hugepage was set to 2048 * 2M
~~~
HugePages_Total:2048
HugePages_Free: 1024
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
~~~
 
Here comes the problem: when I tried to start ovsdb-server and ovs-vswitchd, 
I got the following error:
~~~
   2018-08-27T09:54:05.548Z|2|ovs_numa|INFO|Discovered 16 CPU cores on NUMA 
node 0
   2018-08-27T09:54:05.548Z|3|ovs_numa|INFO|Discovered 16 CPU cores on NUMA 
node 1
   2018-08-27T09:54:05.548Z|4|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 
CPU cores
   2018-08-27T09:54:05.548Z|5|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connecting...
   2018-08-27T09:54:05.549Z|6|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connected
   2018-08-27T09:54:05.552Z|7|dpdk|INFO|DPDK Enabled - initializing...
   2018-08-27T09:54:05.552Z|8|dpdk|INFO|No vhost-sock-dir provided - 
defaulting to /usr/local/var/run/openvswitch
   2018-08-27T09:54:05.552Z|9|dpdk|INFO|EAL ARGS: ovs-vswitchd --socket-mem 
1024,0 -c 0x0001
   2018-08-27T09:54:05.553Z|00010|dpdk|INFO|EAL: Detected 32 lcore(s)
   2018-08-27T09:54:05.558Z|00011|dpdk|WARN|EAL: No free hugepages reported in 
hugepages-1048576kB
   2018-08-27T09:54:05.559Z|00012|dpdk|INFO|EAL: Probing VFIO support...
   2018-08-27T09:54:06.700Z|00013|dpdk|INFO|EAL: PCI device :82:00.0 on 
NUMA socket 1
   2018-08-27T09:54:06.700Z|00014|dpdk|INFO|EAL:   probe driver: 8086:154d 
net_ixgbe
2018-08-27T09:54:06.700Z|00015|dpdk|ERR|EAL: Requested device :82:00.0 

Re: [ovs-discuss] Mega-flow generation

2018-08-28 Thread Sara Gittlin
Thank you Ben and Billy
-Sara

On Tue, Aug 28, 2018 at 11:27 AM O Mahony, Billy 
wrote:

> Hi Sara,
>
> This article
> https://software.intel.com/en-us/articles/ovs-dpdk-datapath-classifier
> gives practical overview of how megaflows, aka wildcarded or datapath
> flows, work at least in the ovs-dpdk (userspace datapath) context.
>
> Regards,
> Billy
>
> > -Original Message-
> > From: ovs-discuss-boun...@openvswitch.org [mailto:ovs-discuss-
> > boun...@openvswitch.org] On Behalf Of Ben Pfaff
> > Sent: Monday, August 27, 2018 5:10 PM
> > To: Sara Gittlin 
> > Cc: ovs-discuss@openvswitch.org
> > Subject: Re: [ovs-discuss] Mega-flow generation
> >
> > On Mon, Aug 27, 2018 at 02:46:19PM +0300, Sara Gittlin wrote:
> > > Can someone refer me to the code of the megaflow generation process ?
> > > Is this process  invoked by an upcall from the kernel module ? like in
> > > microflow ?
> >
> > Did you read the OVS paper?  It's all about megaflows.
> > http://www.openvswitch.org/support/papers/nsdi2015.pdf
> > ___
> > discuss mailing list
> > disc...@openvswitch.org
> > https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>


Re: [ovs-discuss] Requested device cannot be used

2018-08-28 Thread O Mahony, Billy
Hi Tcnieh,

Looks like your NICs are on NUMA 1 (the second NUMA node), as their PCI bus number 
is > 80.

But you have not told OvS to allocate hugepage memory on the second NUMA node – 
the 0 in "--socket-mem 1024,0".

So you need to change your line to something like:
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="512,512"

to have Hugepages available on both nodes.

Also, you have allocated just a single core (core 0) for DPDK PMDs. It is also 
unusual to allocate core zero. That should work, but with reduced performance, as 
the PMD (on NUMA 0) will have to access the packet data on NUMA 1.

Have a look at your CPU topology, and modify your core mask to allocate a core 
from NUMA 1 as well.
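
For example (just a sketch; the core numbers depend on your topology, so check 
lscpu or numactl -H first):

# suppose core 1 is on NUMA 0 and core 17 is on NUMA 1
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x20002

That gives one PMD thread on each NUMA node, so the ports on NUMA 1 get a local PMD.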

The details are in the docs: Documentation/topics/dpdk/* and 
Documentation/howto/dpdk.rst.

Regards,
Billy


From: ovs-discuss-boun...@openvswitch.org 
[mailto:ovs-discuss-boun...@openvswitch.org] On Behalf Of 聶大鈞
Sent: Tuesday, August 28, 2018 3:37 AM
To: ovs-discuss@openvswitch.org
Subject: [ovs-discuss] Requested device cannot be used


Hello all,

  I am trying to evaluate the performance of an Intel X520 10G NIC on Dell R630/R730 
servers, but I keep getting an unexpected error; please see below.


I followed the instructions at https://goo.gl/T7iTuk to compile the DPDK and 
OVS code. I've successfully bound both my X520 NIC ports to DPDK, using either 
igb_uio or vfio-pci:



~~

Network devices using DPDK-compatible driver



:82:00.0 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=vfio-pci

:82:00.1 'Ethernet 10G 2P X520 Adapter 154d' drv=igb_uio unused=vfio-pci



Network devices using kernel driver

===

:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 
unused=igb_uio,vfio-pci

:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno2 drv=tg3 
unused=igb_uio,vfio-pci

:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno3 drv=tg3 
unused=igb_uio,vfio-pci

:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno4 drv=tg3 
unused=igb_uio,vfio-pci *Active*



Other Network devices

=



~~~



And the hugepage was set to 2048 * 2M

~~~

HugePages_Total:2048

HugePages_Free: 1024

HugePages_Rsvd:0

HugePages_Surp:0

Hugepagesize:   2048 kB

~~~



Here comes the problem: when I tried to start ovsdb-server and ovs-vswitchd, 
I got the following error:

~~~

   2018-08-27T09:54:05.548Z|2|ovs_numa|INFO|Discovered 16 CPU cores on NUMA 
node 0

   2018-08-27T09:54:05.548Z|3|ovs_numa|INFO|Discovered 16 CPU cores on NUMA 
node 1

   2018-08-27T09:54:05.548Z|4|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 
CPU cores

   2018-08-27T09:54:05.548Z|5|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connecting...

   2018-08-27T09:54:05.549Z|6|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connected

   2018-08-27T09:54:05.552Z|7|dpdk|INFO|DPDK Enabled - initializing...

   2018-08-27T09:54:05.552Z|8|dpdk|INFO|No vhost-sock-dir provided - 
defaulting to /usr/local/var/run/openvswitch

   2018-08-27T09:54:05.552Z|9|dpdk|INFO|EAL ARGS: ovs-vswitchd --socket-mem 
1024,0 -c 0x0001

   2018-08-27T09:54:05.553Z|00010|dpdk|INFO|EAL: Detected 32 lcore(s)

   2018-08-27T09:54:05.558Z|00011|dpdk|WARN|EAL: No free hugepages reported in 
hugepages-1048576kB

   2018-08-27T09:54:05.559Z|00012|dpdk|INFO|EAL: Probing VFIO support...

   2018-08-27T09:54:06.700Z|00013|dpdk|INFO|EAL: PCI device :82:00.0 on 
NUMA socket 1

   2018-08-27T09:54:06.700Z|00014|dpdk|INFO|EAL:   probe driver: 8086:154d 
net_ixgbe

2018-08-27T09:54:06.700Z|00015|dpdk|ERR|EAL: Requested device :82:00.0 
cannot be used

   2018-08-27T09:54:06.700Z|00016|dpdk|INFO|EAL: PCI device :82:00.1 on 
NUMA socket 1

   2018-08-27T09:54:06.700Z|00017|dpdk|INFO|EAL:   probe driver: 8086:154d 
net_ixgbe

2018-08-27T09:54:06.700Z|00018|dpdk|ERR|EAL: Requested device :82:00.1 
cannot be used

   2018-08-27T09:54:06.701Z|00019|dpdk|INFO|DPDK Enabled - initialized

   2018-08-27T09:54:06.705Z|00020|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath 
supports recirculation

~~~



Therefore, I also got the same error when I added a dpdk-port:

~~~

2018-08-27T09:54:06.709Z|00036|dpdk|INFO|EAL: PCI device :82:00.0 on NUMA 
socket 1

2018-08-27T09:54:06.709Z|00037|dpdk|INFO|EAL:   probe driver: 8086:154d 
net_ixgbe

2018-08-27T09:54:06.710Z|00038|dpdk|WARN|EAL: Requested device :82:00.0 
cannot be used

2018-08-27T09:54:06.710Z|00039|dpdk|ERR|EAL: Driver cannot attach the device 
(:82:00.0)