[ovs-discuss] Performance of VF and DPDK port on multiport card

2020-04-28 Thread Pradeep K.S
Hi All,

I have a 4-port X710 card (i40e driver). SR-IOV is configured on 3 of the ports,
and the remaining physical port is bound to OVS-DPDK. I am observing some
strange performance behaviour: if I run traffic on the VFs and the DPDK port
together, performance on the VFs is better.

If I remove the DPDK port and run the same traffic, the SR-IOV VFs start
dropping packets.

The card is running in a mixed-driver mode: 3 ports on the kernel driver and 1
on OVS-DPDK. I have a few questions:

1) What exactly does OVS-DPDK (the i40e PMD) set in the card's global (physical
NIC) registers that improves performance of the card as a whole (the VFs as
well as the DPDK port)?
2) To isolate kernel and DPDK usage, DPDK provides a multi-driver option; is
there an equivalent option in OVS-DPDK? At least a quick look through the
code/docs didn't point me to anything.
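
In case it is relevant to question 2: I was wondering whether the i40e PMD's
"support-multi-driver" devarg can simply be passed through the interface's
dpdk-devargs string, roughly as below (the port name and PCI address are
placeholders; whether the DPDK build behind my OVS honours this devarg is an
assumption on my part):

  # hypothetical: append the i40e devarg to the PCI address in dpdk-devargs
  ovs-vsctl set Interface dpdk-p0 \
      options:dpdk-devargs="0000:3b:00.3,support-multi-driver=1"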


-- 
Thanks and Regards,
Pradeep.K.S.


[ovs-discuss] pmd segfault

2019-08-08 Thread Pradeep K.S
Hi All,

Current software version:
=
openvswitch-2.9.0-19.el7fdp.1.x86_64
dpdk-17.11-7.el7.x86_64
3.10.0-957.21.3.el7.x86_64


VM Configuration:
=
I have a VM with 2 regular virtio ports, 1 vhost-user (DPDK) port, and 1 SR-IOV
port attached to it. At times the PMD crashes with the following backtrace:


(gdb) bt
#0  rte_vhost_dequeue_burst (vid=, queue_id=,
mbuf_pool=0x7f42bf7b6940, pkts=pkts@entry=0x7f45467f3770,
count=count@entry=32) at
/usr/src/debug/openvswitch-2.9.0/dpdk-17.11/lib/librte_vhost/virtio_net.c:1567
#1  0x564cce70a2e4 in netdev_dpdk_vhost_rxq_recv (rxq=,
batch=0x7f45467f3760) at lib/netdev-dpdk.c:1849
#2  0x564cce656671 in netdev_rxq_recv (rx=,
batch=batch@entry=0x7f45467f3760) at lib/netdev.c:701
#3  0x564cce62fc1f in dp_netdev_process_rxq_port
(pmd=pmd@entry=0x7f46b0358010,
rxq=0x564cd007ba90, port_no=3) at lib/dpif-netdev.c:3279
#4  0x564cce63002a in pmd_thread_main (f_=) at
lib/dpif-netdev.c:4145
#5  0x564cce6accb6 in ovsthread_wrapper (aux_=) at
lib/ovs-thread.c:348
#6  0x7f46c6804dd5 in start_thread () from /lib64/libpthread.so.0
#7  0x7f46c5c01ead in clone () from /lib64/libc.so.6
(gdb) f 3
#3  0x564cce62fc1f in dp_netdev_process_rxq_port
(pmd=pmd@entry=0x7f46b0358010,
rxq=0x564cd007ba90, port_no=3) at lib/dpif-netdev.c:3279
3279        error = netdev_rxq_recv(rxq->rx, &batch);
(gdb) p *(rxq->rx)
Cannot access memory at address 0x7f42bfa59b80

(gdb) f 4
(gdb) p /x *poll_list[0]->rxq
$5 = {port = 0x564cd0240670, rx = 0x7f42bfa5b200, core_id = 0x7fff,
intrvl_idx = 0x1e, pmd = 0x7f46b0358010, cycles = {0x1b870,
0x746dc}, cycles_intrvl = {0x48c76, 0x11b16, 0xe7e8, 0x180de, 0x10388,
0x124e8}}


pmd_load_queues_and_ports(pmd, &poll_list);

Let me know if you need more info; I have the core file too.
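
In case it helps, this is roughly how I pull information out of the core file
(assuming the openvswitch and dpdk debuginfo packages are installed; the core
path is a placeholder):

  gdb /usr/sbin/ovs-vswitchd /path/to/core
  (gdb) info threads               # list the main and PMD threads
  (gdb) thread apply all bt        # backtrace of every thread
  (gdb) frame 3                    # the dp_netdev_process_rxq_port frame
  (gdb) print *poll_list[0]->rxq   # inspect the polled rx queue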

Regards,
Pradeep.K.S


[ovs-discuss] Faster way to load ovs-dpdk or isolation zones ?

2019-03-04 Thread Pradeep K.S
My management port and data ports are on different OVS bridges. Since OVS-DPDK
is the common ovs-vswitchd process behind all the bridges, if it goes down for
some reason I lose management connectivity to the system.

OVS-DPDK does restart after a crash, but in my case I have 2M huge pages and a
large amount of system memory (256G), so the restart takes a long time while it
maps the hugepages. One solution is to move to 1G huge pages.

Is there a faster way to map hugepages when using 2M pages? (I saw someone
present a solution to this at an OVS-DPDK conference; are there any plans for
it to become part of OVS?)
Or is there an isolation mechanism, e.g. running management bridges and data
bridges on different instances of OVS-DPDK, availability zones, or anything
like that?
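
For what it's worth, the 1G alternative I am evaluating looks roughly like this
(the page count and socket-mem values are just examples for my 256G box):

  # kernel command line: pre-allocate 1G pages at boot instead of 2M
  default_hugepagesz=1G hugepagesz=1G hugepages=16

  # point OVS-DPDK at the pre-allocated memory, per NUMA socket
  ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="4096,4096"
  ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true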

-Pradeep.K.S


[ovs-discuss] vhost_net driver version and ovs-dpdk versions needs to compatible ?

2019-01-18 Thread Pradeep K.S
Hi,

I have an OVS-DPDK setup and have spawned a VM with 2 vhost-user ports. I am
using vhost in dpdkvhostuserclient mode. If I reboot the guest VM by issuing
the reboot command, I get a SEGV. I looked at the stack trace; my guess is that
it might be accessing old queue memory.

Is there a compatibility matrix for the "vhost_net" driver used in the guest
and the OVS-DPDK version?

dpdk-17.11-7
openvswitch-2.9.0


#0  0x5625f482f34c in rte_vhost_dequeue_burst ()   <-- it is trying to
access VM memory on reboot
#1  0x5625f49762e4 in netdev_dpdk_vhost_rxq_recv ()
#2  0x5625f48c2671 in netdev_rxq_recv ()
#3  0x5625f489bc1f in dp_netdev_process_rxq_port ()
#4  0x5625f489c02a in pmd_thread_main ()
#5  0x5625f4918cb6 in ovsthread_wrapper ()
#6  0x7f817384bdc5 in start_thread () from /lib64/libpthread.so.0
#7  0x7f8172c4e73d in clone () from /lib64/libc.so.6
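
For completeness, this is how I have been checking the versions on both sides
(the guest interface name is from my setup):

  # inside the guest: driver behind the vhost-user port
  ethtool -i eth0      # reports driver: virtio_net and its version
  uname -r             # guest kernel, which provides the virtio/vhost bits

  # on the host
  ovs-vswitchd --version
  ovs-vsctl get Open_vSwitch . dpdk_version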


-- 
Thanks and Regards,
Pradeep.K.S.


Re: [ovs-discuss] Packet capturing on vHost User Ports

2018-12-20 Thread Pradeep K.S
Hi Tobias,

You can use ovs-tcpdump to capture packets on vhost/DPDK/bond ports.
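
For example, something like this should work (the port name is just an example;
ovs-tcpdump creates a mirror port backed by a local tap device and runs tcpdump
on it):

  # capture on a vhost-user port and write to a pcap file
  ovs-tcpdump -i vhost-user-1 -w /tmp/vhost-user-1.pcap

  # or watch live, stopping after 100 packets
  ovs-tcpdump -i vhost-user-1 -nn -c 100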

Regards,
Pradeep.K.S

On Thu, Dec 20, 2018 at 4:35 PM Tobias Hofmann -T (tohofman - AAP3 INC at
Cisco) via discuss  wrote:

> Hello,
>
>
>
> I am trying to capture packets with *tcpdump* which unfortunately does
> not work with *vhost user ports. *It fails saying that there is “*no such
> device”* which is probably due to the fact that the vhost user ports are
> not recognized as proper interfaces by Linux.
>
>
>
> Has anyone already thought about packet capturing on vhost user ports and
> has a solution?
>
>
>
> Thanks for your help!
>
> Tobias
> ___
> discuss mailing list
> disc...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
>


-- 
Thanks and Regards,
Pradeep.K.S.


Re: [ovs-discuss] Ovs with DPDK:Error attaching device '0000:02:00.0' to DPDK

2018-07-30 Thread Pradeep K.S
DPDK looks for hugepages of both 2M and 1G sizes, so if the system has no 1G
huge pages configured this message is expected and harmless:

2018-07-30T02:12:05.443Z|00013|dpdk|WARN|EAL: No free hugepages reported in hugepages-1048576kB

This is the part to look at:

2018-07-30T02:13:51.690Z|00041|dpdk|ERR|EAL: Unable to find a bus for the device '0000:02:00.0'

2018-07-30T02:13:51.690Z|00042|netdev_dpdk|WARN|Error attaching device '0000:02:00.0' to DPDK


Which driver was this device (0000:02:00.0) originally bound to? Can you also
paste the "dpdk-devbind -s" output?
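
For reference, something along these lines shows/fixes the binding (the PCI
address is taken from your log; whether vfio-pci is the right driver depends on
your setup):

  # show which driver each NIC is currently bound to
  dpdk-devbind --status

  # if needed, bind the device to vfio-pci before adding it to OVS
  modprobe vfio-pci
  dpdk-devbind --bind=vfio-pci 0000:02:00.0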









On Sun, Jul 29, 2018 at 9:00 PM, Vancasola <616923...@qq.com> wrote:

> Thanks for your reply, I
>
> load/inserted  vfio-pci driver
>
> and bind NICs before starting OVS-DPDK and try again, but it still doesn't
> work.
>
> And this is the latest
>
> ovs-vswitchd.log:
>
>
>
>   
>   
>
> 2018-07-30T02:12:05.416Z|1|vlog|INFO|opened log file
> /usr/local/var/log/openvswitch/ovs-vswitchd.log
>
> 2018-07-30T02:12:05.439Z|2|ovs_numa|INFO|Discovered 8 CPU cores on
> NUMA node 0
>
> 2018-07-30T02:12:05.439Z|3|ovs_numa|INFO|Discovered 1 NUMA nodes and
> 8 CPU cores
>
> 2018-07-30T02:12:05.439Z|4|reconnect|INFO|unix:/
> usr/local/var/run/openvswitch/db.sock: connecting...
>
> 2018-07-30T02:12:05.439Z|5|reconnect|INFO|unix:/
> usr/local/var/run/openvswitch/db.sock: connected
>
> 2018-07-30T02:12:05.441Z|6|dpdk|INFO|Using DPDK 17.11.3
>
> 2018-07-30T02:12:05.441Z|7|dpdk|INFO|DPDK Enabled - initializing...
>
> 2018-07-30T02:12:05.441Z|8|dpdk|INFO|No vhost-sock-dir provided -
> defaulting to /usr/local/var/run/openvswitch
>
> 2018-07-30T02:12:05.441Z|9|dpdk|INFO|IOMMU support for
> vhost-user-client disabled.
>
> 2018-07-30T02:12:05.441Z|00010|dpdk|INFO|Per port memory for DPDK devices
> disabled.
>
> 2018-07-30T02:12:05.441Z|00011|dpdk|INFO|EAL ARGS: ovs-vswitchd
> --socket-mem 1024 -c 0x0001
>
> 2018-07-30T02:12:05.442Z|00012|dpdk|INFO|EAL: Detected 8 lcore(s)
>
> 2018-07-30T02:12:05.443Z|00013|dpdk|WARN|EAL: No free hugepages reported
> in hugepages-1048576kB
>
> 2018-07-30T02:12:05.443Z|00014|dpdk|INFO|EAL: Probing VFIO support...
>
> 2018-07-30T02:12:05.443Z|00015|dpdk|INFO|EAL: VFIO support initialized
>
> 2018-07-30T02:12:05.703Z|00016|dpdk|INFO|DPDK Enabled - initialized
>
> 2018-07-30T02:12:05.705Z|00017|bridge|INFO|ovs-vswitchd (Open vSwitch)
> 2.10.90
>
> 2018-07-30T02:12:40.234Z|00018|memory|INFO|45056 kB peak resident set
> size after 34.8 seconds
>
> 2018-07-30T02:12:40.241Z|00019|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports recirculation
>
> 2018-07-30T02:12:40.241Z|00020|ofproto_dpif|INFO|netdev@ovs-netdev: VLAN
> header stack length probed as 1
>
> 2018-07-30T02:12:40.241Z|00021|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS
> label stack length probed as 3
>
> 2018-07-30T02:12:40.241Z|00022|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports truncate action
>
> 2018-07-30T02:12:40.241Z|00023|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports unique flow ids
>
> 2018-07-30T02:12:40.241Z|00024|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports clone action
>
> 2018-07-30T02:12:40.241Z|00025|ofproto_dpif|INFO|netdev@ovs-netdev: Max
> sample nesting level probed as 10
>
> 2018-07-30T02:12:40.241Z|00026|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports eventmask in conntrack action
>
> 2018-07-30T02:12:40.241Z|00027|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports ct_clear action
>
> 2018-07-30T02:12:40.241Z|00028|ofproto_dpif|INFO|netdev@ovs-netdev: Max
> dp_hash algorithm probed to be 1
>
> 2018-07-30T02:12:40.241Z|00029|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports ct_state
>
> 2018-07-30T02:12:40.241Z|00030|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports ct_zone
>
> 2018-07-30T02:12:40.241Z|00031|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports ct_mark
>
> 2018-07-30T02:12:40.241Z|00032|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports ct_label
>
> 2018-07-30T02:12:40.241Z|00033|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports ct_state_nat
>
> 2018-07-30T02:12:40.241Z|00034|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports ct_orig_tuple
>
> 2018-07-30T02:12:40.241Z|00035|ofproto_dpif|INFO|netdev@ovs-netdev:
> Datapath supports ct_orig_tuple6
>
> 2018-07-30T02:12:40.259Z|00036|bridge|INFO|bridge br0: added interface
> br0 on port 65534
>
> 2018-07-30T02:12:40.259Z|00037|bridge|INFO|bridge br0: using datapath ID
> a2cd30da744a
>
> 2018-07-30T02:12:40.259Z|00038|connmgr|INFO|br0: added service controller
> "punix:/usr/local/var/run/openvswitch/br0.mgmt"
>
> 2018-07-30T02:12:50.272Z|00039|memory|INFO|peak resident set size grew
> 188% in last 10.0 seconds, from 45056 kB to 129736 kB
>
> 2018-07-30T02:12:50.272Z|00040|memory|INFO|handlers:5 ports:1
> revalidators:3 rules:5
>
> 2018-07-30T02:13:51.690Z|00041|dpdk|ERR|EAL: Unable to find a bus for the
> device ':02:00.0'
>
> 2018-07-30T02:13:51.690Z|00042|netdev_dpdk|W

Re: [ovs-discuss] Ovs with DPDK:Error attaching device '0000:02:00.0' to DPDK

2018-07-28 Thread Pradeep K.S
Did you load/insert the vfio-pci driver before starting OVS? Otherwise OVS-DPDK
won't initialize with support for vfio-pci. If not, restart the openvswitch
process after loading the vfio module in the kernel.
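
Concretely, something like this (the service name is an assumption and may
differ depending on how you installed OVS):

  # load the VFIO driver first
  modprobe vfio-pci

  # then (re)start ovs-vswitchd so DPDK probes VFIO support at init
  systemctl restart openvswitch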

ovs-vswitchd.log will help to debug further if the above doesn't resolve your
error.




On Sat, Jul 28, 2018 at 3:26 AM, Vancasola <616923...@qq.com> wrote:

>
> Hi,
> When I try to add a port with dpdk, I met an error: "Error attaching device
> '0000:02:00.0' to DPDK". This is the instruction:
>
> ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:02:00.0
>
> I have searched GitHub and many mails, but I still cannot find the solution
> (for a long time) :(:(. So can you tell me how to solve it?
>
> I have set CONFIG_RTE_BUILD_SHARED_LIB=y in $DPDK_DIR/config/common_base.
>
>
>
> http://docs.openvswitch.org/en/latest/intro/install/dpdk/
>
> http://docs.openvswitch.org/en/latest/intro/install/
> general/#general-build-reqs
>
> http://docs.openvswitch.org/en/latest/howto/dpdk/
>
> These are my main tutorials.
>
>
> This is a partial display:
>
>
>
>
>
> about hugepages
>
>
> root@sdn:/usr/src/ovs# echo 'vm.nr_hugepages=2048' >
> /etc/sysctl.d/hugepages.conf
>
> root@sdn:/usr/src/ovs# sysctl -w vm.nr_hugepages=1024
>
> vm.nr_hugepages = 1024
>
> root@sdn:/usr/src/ovs# grep HugePages_ /proc/meminfo
>
> HugePages_Total: 1024
>
> HugePages_Free: 1024
>
> HugePages_Rsvd: 0
>
> HugePages_Surp: 0
>
> root@sdn:/usr/src/ovs# mount -t hugetlbfs none /dev/hugepages``
>
>
> about vfio’s configuration
>
>
> root@sdn:/usr/src/ovs# dmesg | grep -e DMAR -e IOMMU
>
> [ 0.00] ACPI: DMAR 0xCECF8578 70 (v01 LENOVO TC-FW
> 1790 INTL 0001)
>
> [ 0.00] DMAR: IOMMU enabled
>
> [ 0.004000] DMAR: Host address width 39
>
> [ 0.004000] DMAR: DRHD base: 0x00fed9 flags: 0x1
>
> [ 0.004000] DMAR: dmar0: reg_base_addr fed9 ver 1:0 cap d2008c40660462
> ecap f050da
>
> [ 0.004000] DMAR: RMRR base: 0x00cea19000 end: 0x00cea38fff
>
> [ 0.004000] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed9 IOMMU 0
>
> [ 0.004000] DMAR-IR: HPET id 0 under DRHD base 0xfed9
>
> [ 0.004000] DMAR-IR: Queued invalidation will be enabled to support x2apic
> and Intr-remapping.
>
> [ 0.004000] DMAR-IR: Enabled IRQ remapping in x2apic mode
>
> [ 0.744604] DMAR: No ATSR found
>
> [ 0.744632] DMAR: dmar0: Using Queued invalidation
>
> [ 0.744680] DMAR: Hardware identity mapping for device :00:00.0
>
> [ 0.744681] DMAR: Hardware identity mapping for device :00:01.0
>
> [ 0.744682] DMAR: Hardware identity mapping for device :00:14.0
>
> [ 0.744682] DMAR: Hardware identity mapping for device :00:16.0
>
> [ 0.744683] DMAR: Hardware identity mapping for device :00:16.3
>
> [ 0.744684] DMAR: Hardware identity mapping for device :00:17.0
>
> [ 0.744685] DMAR: Hardware identity mapping for device :00:1d.0
>
> [ 0.744686] DMAR: Hardware identity mapping for device :00:1f.0
>
> [ 0.744687] DMAR: Hardware identity mapping for device :00:1f.2
>
> [ 0.744688] DMAR: Hardware identity mapping for device :00:1f.3
>
> [ 0.744688] DMAR: Hardware identity mapping for device :00:1f.4
>
> [ 0.744689] DMAR: Hardware identity mapping for device :00:1f.6
>
> [ 0.744691] DMAR: Hardware identity mapping for device :01:00.0
>
> [ 0.744692] DMAR: Hardware identity mapping for device :01:00.1
>
> [ 0.744694] DMAR: Hardware identity mapping for device :02:00.0
>
> [ 0.744695] DMAR: Hardware identity mapping for device :02:00.1
>
> [ 0.744695] DMAR: Setting RMRR:
>
> [ 0.744696] DMAR: Ignoring identity map for HW passthrough device
> :00:14.0 [0xcea19000 - 0xcea38fff]
>
> [ 0.744697] DMAR: Prepare 0-16MiB unity mapping for LPC
>
> [ 0.744697] DMAR: Ignoring identity map for HW passthrough device
> :00:1f.0 [0x0 - 0xff]
>
> [ 0.744699] DMAR: Intel(R) Virtualization Technology for Directed I/O
>
>
> root@sdn:/usr/src/ovs# cat /proc/cmdline | grep iommu=pt
>
> BOOT_IMAGE=/boot/vmlinuz-4.15.0-29-generic 
> root=UUID=bdc617a6-3bfc-4b7d-bf4b-72f5cb5dbaff
> ro quiet splash iommu=pt intel_iommu=on vt.handoff=1
>
>
>
> And I bind devices to VFIO:
>
>
>
> Network devices using DPDK-compatible driver
>
> 
>
> :02:00.0 'Ethernet Controller 10-Gigabit X540-AT2 1528' drv=vfio-pci
> unused=ixgbe
>
> :02:00.1 'Ethernet Controller 10-Gigabit X540-AT2 1528' drv=vfio-pci
> unused=ixgbe
>
>
>
> And I ensure them initialized
>
>
> root@sdn:/usr/src/ovs# ovs-vsctl get Open_vSwitch . dpdk_initialized
>
> true
>
> root@sdn:/usr/src/ovs# ovs-vswitchd --version
>
> ovs-vswitchd (Open vSwitch) 2.10.90
>
> DPDK 17.11.3
>
> root@sdn:/usr/src/ovs# ovs-vsctl get Open_vSwitch . dpdk_version
>
> "DPDK 17.11.3"
>
>
>
> Then I add a bridge
>
>
>  ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
>
>
> Add a port:
>
>
> root@sdn:/usr/src/ovs# ovs-vsctl add-port br0 dpdk-p0 -- set Interface
> dpdk-p0 type=dpdk op

[ovs-discuss] Reclaim huge pages after disabling dpdk (dpdk-init=false)

2018-07-03 Thread Pradeep K.S
Hi All,

I am allocating OVS-DPDK socket memory during DPDK initialization
[dpdk-socket-mem="3072,2048", dpdk-init=true]. If I then disable OVS-DPDK, the
memory is not released; I am relying on the hugepage counters in /proc/meminfo
to confirm it is reclaimed, and I don't see the pages coming back.
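
For reference, this is roughly the sequence I am using (the standard
other_config keys), followed by the check:

  # enable DPDK with per-socket hugepage memory
  ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="3072,2048"
  ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

  # later, disable it again
  ovs-vsctl set Open_vSwitch . other_config:dpdk-init=false

  # the free/total hugepage counters never recover
  grep HugePages /proc/meminfo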

Any pointers will be helpful.

-- 
Thanks and Regards,
Pradeep.K.S.


[ovs-discuss] OVS-DPDK vm-vm performance

2018-05-21 Thread Pradeep K.S
Hi,

I switched from OVS to OVS-DPDK and changed the VNFs to use the vhost-user
backend. I used iperf to compare the performance of the vhost-net and
vhost-user backends. Despite tuning on all fronts [more queues, more PMD
threads, changing affinities, socket memory, more CPUs] I get the results
below:

1) VM-VM: lower throughput than vhost-net, a lot less (about 1/4 of vhost-net).
After googling I found a few links which pointed to TSO offload being needed in
OVS-DPDK, but I couldn't find an option in OVS-DPDK to change that.

2) VM->PNIC->VM: slightly better performance, but not by much.

I can look further into tuning. Has anyone faced the same issue? Any pointers
would be helpful.
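
For context, these are roughly the knobs I have been adjusting (the mask, queue
count and port name here are illustrative, not my exact values):

  # dedicate cores to PMD threads
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3C

  # more rx queues on the physical DPDK port
  ovs-vsctl set Interface dpdk-p0 options:n_rxq=4

  # check how rx queues are distributed across PMD threads, and their load
  ovs-appctl dpif-netdev/pmd-rxq-show
  ovs-appctl dpif-netdev/pmd-stats-show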



-- 
Thanks and Regards,
Pradeep.K.S.


[ovs-discuss] ovs-vswitch with dpdk and non dpdk bridge.

2018-05-09 Thread Pradeep K.S
Hi,

I have installed OVS-DPDK and created 2 bridges: one with DPDK
(datapath_type=netdev) and the other a regular OVS bridge.

I am able to create them, but is this a supported scenario? I mean, can
ovs-vswitchd work with 2 different types of bridges (one with DPDK and one
regular OVS bridge)?
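
For clarity, this is roughly what I did (the bridge/port names and PCI address
are placeholders):

  # DPDK-enabled bridge on the userspace (netdev) datapath
  ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev
  ovs-vsctl add-port br-dpdk dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
      options:dpdk-devargs=0000:01:00.0

  # regular bridge on the kernel datapath
  ovs-vsctl add-br br-kernel
  ovs-vsctl add-port br-kernel eth1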


Regards,
Pradeep.K.S


[ovs-discuss] Blocking traffic between 2 ports

2017-12-12 Thread Pradeep K.S
Hi,

I want to restrict traffic between 2 ports. Is there any direct way of
achieving that in OVS?

I can do it by adding multiple flows (blocking by destination MAC, redirecting
broadcast/multicast traffic to the uplink, and adding flows to flood during
unicast MAC resolution).
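
For reference, the destination-MAC blocking part looks roughly like this (the
port numbers and MAC addresses are placeholders):

  # VM A on ofport 1 (MAC aa:...), VM B on ofport 2 (MAC bb:...)
  ovs-ofctl add-flow br0 "priority=100,in_port=1,dl_dst=bb:bb:bb:bb:bb:bb,actions=drop"
  ovs-ofctl add-flow br0 "priority=100,in_port=2,dl_dst=aa:aa:aa:aa:aa:aa,actions=drop"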

Any pointers would be helpful.

Regards,
Pradeep.K.S


[ovs-discuss] Programming flows to bond port

2017-12-04 Thread Pradeep K.S
I have created a bond interface with multiple slaves. I want to program flows
such that packets arriving on port X are directed to the bond port. The problem
is that the bond port doesn't have an OpenFlow port ID (ofport), so how do I
program such a flow using ovs-ofctl?

I tried setting bond_fake_iface, but I still don't get an ofport to use in
ovs-ofctl flows.
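
For reference, this is how I have been looking for OpenFlow port numbers (the
bridge and interface names are from my setup):

  # list every port on the bridge with its OpenFlow port number
  ovs-ofctl show br0

  # the bond's member interfaces each report their own ofport
  ovs-vsctl get Interface eth2 ofport
  ovs-vsctl get Interface eth3 ofport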

-- 
Thanks and Regards,
Pradeep.K.S.


[ovs-discuss] Matching fields with GRE+NSH

2017-08-23 Thread Pradeep K.S
Hi,

I have a packet with the following structure:

eth | Outer IP | GRE | NSH | Inner IP packet


A few questions:

1) Does OpenFlow matching support a raw-data match over the whole
[Eth+OuterIP+GRE+NSH+InnerIP] packet? At least from my reading of the ovs-ofctl
and ovs-fields documentation, I did not find such a thing.

2) Currently I can match up to GRE [outer + inner packet], but the diff for NSH
matching is not upstreamed yet. I see some threads discussing upstreaming this
change:
https://github.com/yyang13/ovs_nsh_patches/issues/new
Will the upstream version support VXLAN+NSH, and are there plans for other
transport headers for NSH, such as GRE+NSH?
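
For context, the GRE-only matching that works for me today looks roughly like
this (the tunnel port name, tunnel key and addresses are placeholders):

  # flow on a bridge with a GRE tunnel port: match tunnel metadata plus inner IP
  ovs-ofctl add-flow br0 "in_port=gre0,tun_id=100,ip,nw_dst=10.0.0.0/24,actions=output:2"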


Regards,
Pradeep.K.S.