[ovs-discuss] (no subject)
Hi all, I am using OVS version 2.8.2. I have a problem when executing "insert-buckets": I created a group of type "select", and when I try to insert a bucket using the "insert-buckets" command supported by OpenFlow 1.5, the group type gets changed from "select" to "all" (type=select -> type=all). Can I keep the group's properties, such as type, selection_method, and fields, when inserting new buckets? Your help would be appreciated. Thanks, Shivani. ___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
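For context, a command sketch of the kind of sequence being described. Everything here is illustrative: the bridge name (br0), output ports, and the hash fields are placeholders, and this assumes ovs-ofctl's OpenFlow 1.5 group support:

```
# Create a select group with a selection method (names and ports are examples):
ovs-ofctl -O OpenFlow15 add-group br0 \
    'group_id=1,type=select,selection_method=hash,fields(ip_src,ip_dst),bucket=bucket_id:1,actions=output:1'

# Insert an extra bucket; per the report above, on 2.8.2 this is where the
# group unexpectedly comes back as type=all:
ovs-ofctl -O OpenFlow15 insert-buckets br0 \
    'group_id=1,command_bucket_id=last,bucket=bucket_id:2,actions=output:2'

# Check the resulting group's type and properties:
ovs-ofctl -O OpenFlow15 dump-groups br0
```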
Re: [ovs-discuss] Discrepancy between ofproto/trace output and dpctl dump-flows output
Yup thanks we are unblocked! - Amar On 12/6/17, 12:36 PM, "Ben Pfaff" wrote: OK. In the meantime, you can add "eth()" to the flows you're tracing to get the expected results. On Wed, Dec 06, 2017 at 07:02:42PM +, Amar Padmanabhan wrote: > Thanks Ben for taking a look, > Regards, > - Amar > > On 12/6/17, 10:17 AM, "Ben Pfaff" wrote: > > On Wed, Dec 06, 2017 at 05:58:41AM +, Amar Padmanabhan wrote: > > We are debugging a setup and are seeing something that we are finding it hard to explain. > > > > 1 - Here is the ovs-dpctl dump-flows output. > > recirc_id(0),in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no), packets:550, bytes:53900, used:0.364s, actions:userspace(pid=3276048382,slow_path(controller)) > > OK, the above datapath flow just says that packets in this flow have to > be handled in the userspace slow path because > > > 2 - We are now trying to trace this flow and are not seeing the output to controller flow getting hit in the trace. > > sudo ovs-appctl ofproto/trace "in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)" > > Flow: packet_type=(1,0x800),in_port=32770,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0 > > bridge("gtp_br0") > > - > > 0. priority 0 resubmit(,1) > > 1. in_port=32770, priority 10 set_field:0->metadata resubmit(,2) > > 2. priority 0 resubmit(,3) > > 3. No match. drop Final flow: unchanged Megaflow: recirc_id=0,packet_type=(1,0x800),in_port=32770,nw_frag=no Datapath actions: drop ---> Why isn’t the output to controller flow getting hit? 
> > > > > > 3 - We are also seeing the flow counts go up for the output to controller flow per the ofctl dump-flows output (pasting relevant flows) > > > > NXST_FLOW reply (xid=0x4): cookie=0x0, duration=1482.245s, table=0, n_packets=1408, n_bytes=148464, idle_age=1, priority=0 actions=resubmit(,1) > > cookie=0x0, duration=1482.244s, table=1, n_packets=1283, n_bytes=123662, idle_age=1, priority=10,in_port=32770 actions=set_field:0->metadata,resubmit(,2) > > cookie=0x0, duration=1482.244s, table=2, n_packets=1247, n_bytes=122150, idle_age=1, priority=0 actions=resubmit(,3) > > cookie=0x0, duration=1482.245s, table=3, n_packets=1245, n_bytes=122010, idle_age=1, priority=0,ip,metadata=0,nw_dst=192.168.128.0/24 actions=CONTROLLER:65535 ---> Notice that this is getting hit as well > > OK, I spent a few minutes to mock up your environment (thanks for all > the details!) and experiment. It looks like the problem is actually a > mismatch between the formatting and parsing code for datapath flows. If > I run: > > ovs-appctl ofproto/trace "in_port(3),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)" > > that is, add "eth()" to the datapath flow, then I get the expected > results: > > $ ovs-appctl ofproto/trace "in_port(1),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)" > Flow: ip,in_port=32770,vlan_tci=0x,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0 > > bridge("br0") > - > 0. priority 0 > resubmit(,1) > 1. in_port=32770, priority 10 > load:0->OXM_OF_METADATA[] > resubmit(,2) > 2. priority 0 > resubmit(,3) > 3. ip,metadata=0,nw_dst=192.168.128.0/24, priority 0 > CONTROLLER:65535 > > Final flow: unchanged > Megaflow: recirc_id=0,eth,ip,in_port=32770,nw_dst=192.168.128.0/24,nw_frag=no > Datapath actions: drop > This flow is handled by the userspace slow path because it: > - Sends "packet-in" messages to the OpenFlow controller. > > Clearly that's a bug. 
I'll see what I can do about it. > > > Also, Whoever improved the output of ofproto/trace thanks a ton! > > That was me :-) You're welcome. > > ___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
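Until that's fixed, Ben's "eth()" workaround can be applied mechanically before tracing. A tiny shell sketch (the sed expression and variable names are mine, not part of OVS) that splices an explicit "eth()" field in after in_port(...):

```shell
# Insert "eth()" right after the in_port(...) field of a datapath flow
# string, then hand the result to ofproto/trace.
flow='in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)'
fixed=$(printf '%s' "$flow" | sed 's/in_port(\([0-9]*\)),/in_port(\1),eth(),/')
echo "$fixed"
# in_port(3),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)

# Then, on a host running OVS:
#   sudo ovs-appctl ofproto/trace "$fixed"
```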
Re: [ovs-discuss] ovn-kubernetes the search goes on
On 07/12/2017 at 00:55, Sébastien Bernard wrote: Hello again, I'm still trying to make OVN work with k8s bootstrapped by kubeadm. I've been successful at starting all daemons without errors. Watchers are running; however, no MAC addresses are shown when doing ovn-nbctl show, only the ports. kube-dns fails to start because it cannot reach the kube-apiserver. I'm wondering how to debug this. Here's an example of the output; only the infrastructure set up by the scripts shows MAC addresses:

switch e832fd69-0e71-49f7-930b-4d005ae3a853 (join)
    port jtor-GR_km1
        type: router
        addresses: ["00:00:00:B4:C3:00"]
        router-port: rtoj-GR_km1
    port jtor-km1
        type: router
        addresses: ["00:00:00:45:2B:BE"]
        router-port: rtoj-km1
switch 67de0349-cd5e-46a6-b952-56c198c07cef (km1)
    port stor-km1
        type: router
        addresses: ["00:00:00:FC:B8:C2"]
        router-port: rtos-km1
    port kube-system_kube-proxy-c9nfg
        addresses: ["dynamic"]
    port kube-system_kube-controller-manager-km1
        addresses: ["dynamic"]
    port kube-system_etcd-km1
        addresses: ["dynamic"]
    port kube-system_kube-apiserver-km1
        addresses: ["dynamic"]
    port kube-system_kube-dns-545bc4bfd4-zpjj6
        addresses: ["dynamic"]
    port k8s-km1
        addresses: ["22:d5:cc:fa:14:b1 10.10.0.2"]
    port kube-system_kube-scheduler-km1
        addresses: ["dynamic"]
switch 6ade5db3-a6dd-45c1-b7ce-5a0e9d608471 (ext_km1)
    port etor-GR_km1
        type: router
        addresses: ["00:0c:29:1f:93:48"]
        router-port: rtoe-GR_km1
    port br-ens34_km1
        addresses: ["unknown"]
router d7d20e30-6505-4848-8361-d80253520a43 (km1)
    port rtoj-km1
        mac: "00:00:00:45:2B:BE"
        networks: ["100.64.1.1/24"]
    port rtos-km1
        mac: "00:00:00:FC:B8:C2"
        networks: ["10.10.0.1/24"]
router aa6e86cf-2fa2-4cad-a301-97b35bed7df9 (GR_km1)
    port rtoj-GR_km1
        mac: "00:00:00:B4:C3:00"
        networks: ["100.64.1.2/24"]
    port rtoe-GR_km1
        mac: "00:0c:29:1f:93:48"
        networks: ["172.16.229.128/24"]
nat d3767114-dc49-48d0-b462-8c41ba7c5243
    external ip: "172.16.229.128"
    logical ip: "10.10.0.0/16"
    type: "snat"

___ discuss mailing list disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss

A quick followup to show you the error given in the logs:
---
déc. 07 02:02:39 km1 NetworkManager[634]: [1512608559.2058] manager: (3c97e930c02f0_c): new Veth device (/org/freedesktop/NetworkManager/Devices/15)
déc. 07 02:02:39 km1 NetworkManager[634]: [1512608559.2065] manager: (3c97e930c02f011): new Veth device (/org/freedesktop/NetworkManager/Devices/16)
déc. 07 02:02:39 km1 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 3c97e930c02f011: link becomes ready
déc. 07 02:02:39 km1 NetworkManager[634]: [1512608559.3285] device (3c97e930c02f011): link connected
déc. 07 02:02:39 km1 ovn_cni[70900]: ovs| 8 | ovn-k8s-cni-overlay | WARN | Failed to setup veth pair for pod: (101, 'Network is unreachable')
déc. 07 02:02:39 km1 kubelet[69796]: 2017-12-07T01:02:39Z | 8 | ovn-k8s-cni-overlay | WARN | Failed to setup veth pair for pod: (101, 'Network is unreachable')
déc. 07 02:02:39 km1 kubelet[69796]: 2017-12-07T01:02:39Z | 9 | ovn-k8s-cni-overlay | ERR | {"cniVersion": "0.1.0", "code": 100, "message": "container interface setup failure"}
déc. 07 02:02:39 km1 ovn_cni[70900]: ovs| 9 | ovn-k8s-cni-overlay | ERR | {"cniVersion": "0.1.0", "code": 100, "message": "container interface setup failure"}
déc. 07 02:02:39 km1 kubelet[69796]: E1207 02:02:39.375747 69796 cni.go:301] Error adding network:
déc. 07 02:02:39 km1 kubelet[69796]: E1207 02:02:39.375764 69796 cni.go:250] Error while adding to cni network:
déc. 07 02:02:39 km1 dockerd-current[69844]: time="2017-12-07T02:02:39.376073780+01:00" level=info msg="{Action=stop, LoginUID=4294967295, PID=69796}"
déc. 07 02:02:39 km1 dockerd-current[69844]: shutting down, got signal: Terminated
déc. 07 02:02:39 km1 systemd-machined[4406]: Machine 3c97e930c02f01123d36e9b4ee7b833d terminated.
---
The same error keeps repeating rapidly in the logs. Seb ___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
[ovs-discuss] ovn-kubernetes the search goes on
Hello again, I'm still trying to make OVN work with k8s bootstrapped by kubeadm. I've been successful at starting all daemons without errors. Watchers are running; however, no MAC addresses are shown when doing ovn-nbctl show, only the ports. kube-dns fails to start because it cannot reach the kube-apiserver. I'm wondering how to debug this. Here's an example of the output; only the infrastructure set up by the scripts shows MAC addresses:

switch e832fd69-0e71-49f7-930b-4d005ae3a853 (join)
    port jtor-GR_km1
        type: router
        addresses: ["00:00:00:B4:C3:00"]
        router-port: rtoj-GR_km1
    port jtor-km1
        type: router
        addresses: ["00:00:00:45:2B:BE"]
        router-port: rtoj-km1
switch 67de0349-cd5e-46a6-b952-56c198c07cef (km1)
    port stor-km1
        type: router
        addresses: ["00:00:00:FC:B8:C2"]
        router-port: rtos-km1
    port kube-system_kube-proxy-c9nfg
        addresses: ["dynamic"]
    port kube-system_kube-controller-manager-km1
        addresses: ["dynamic"]
    port kube-system_etcd-km1
        addresses: ["dynamic"]
    port kube-system_kube-apiserver-km1
        addresses: ["dynamic"]
    port kube-system_kube-dns-545bc4bfd4-zpjj6
        addresses: ["dynamic"]
    port k8s-km1
        addresses: ["22:d5:cc:fa:14:b1 10.10.0.2"]
    port kube-system_kube-scheduler-km1
        addresses: ["dynamic"]
switch 6ade5db3-a6dd-45c1-b7ce-5a0e9d608471 (ext_km1)
    port etor-GR_km1
        type: router
        addresses: ["00:0c:29:1f:93:48"]
        router-port: rtoe-GR_km1
    port br-ens34_km1
        addresses: ["unknown"]
router d7d20e30-6505-4848-8361-d80253520a43 (km1)
    port rtoj-km1
        mac: "00:00:00:45:2B:BE"
        networks: ["100.64.1.1/24"]
    port rtos-km1
        mac: "00:00:00:FC:B8:C2"
        networks: ["10.10.0.1/24"]
router aa6e86cf-2fa2-4cad-a301-97b35bed7df9 (GR_km1)
    port rtoj-GR_km1
        mac: "00:00:00:B4:C3:00"
        networks: ["100.64.1.2/24"]
    port rtoe-GR_km1
        mac: "00:0c:29:1f:93:48"
        networks: ["172.16.229.128/24"]
nat d3767114-dc49-48d0-b462-8c41ba7c5243
    external ip: "172.16.229.128"
    logical ip: "10.10.0.0/16"
    type: "snat"

___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
Re: [ovs-discuss] ovs_assert(classifier_is_empty(&table->cls)) failed when restart openvswitch service.
On Wed, Dec 06, 2017 at 08:28:30PM +, Zhanghaibo (Euler) wrote:
> Hello all,
>
> I ran into an abort issue when restarting the openvswitch service; the coredump file shows that ovs_assert() failed in the function oftable_destroy() in ofproto.c.
>
> The problem is pretty hard to reproduce. Do you have any idea about this? The OVS release is v2.7.0; any suggestion would be appreciated.
>
> Source code and gdb information are copied below.

I'd first recommend upgrading to the latest on the 2.7 branch, which has over 230 bug fixes since 2.7.0 was released. ___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
Re: [ovs-discuss] Discrepancy between ofproto/trace output and dpctl dump-flows output
OK. In the meantime, you can add "eth()" to the flows you're tracing to get the expected results. On Wed, Dec 06, 2017 at 07:02:42PM +, Amar Padmanabhan wrote: > Thanks Ben for taking a look, > Regards, > - Amar > > On 12/6/17, 10:17 AM, "Ben Pfaff" wrote: > > On Wed, Dec 06, 2017 at 05:58:41AM +, Amar Padmanabhan wrote: > > We are debugging a setup and are seeing something that we are finding > it hard to explain. > > > > 1 - Here is the ovs-dpctl dump-flows output. > > > recirc_id(0),in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no), > packets:550, bytes:53900, used:0.364s, > actions:userspace(pid=3276048382,slow_path(controller)) > > OK, the above datapath flow just says that packets in this flow have to > be handled in the userspace slow path because > > > 2 - We are now trying to trace this flow and are not seeing the output > to controller flow getting hit in the trace. > > sudo ovs-appctl ofproto/trace > "in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)" > > Flow: > packet_type=(1,0x800),in_port=32770,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0 > > bridge("gtp_br0") > > - > > 0. priority 0 resubmit(,1) > > 1. in_port=32770, priority 10 set_field:0->metadata resubmit(,2) > > 2. priority 0 resubmit(,3) > > 3. No match. drop Final flow: unchanged Megaflow: > recirc_id=0,packet_type=(1,0x800),in_port=32770,nw_frag=no Datapath actions: > drop ---> Why isn’t the output to controller flow getting hit? 
> > > > > > 3 - We are also seeing the flow counts go up for the output to > controller flow per the ofctl dump-flows output (pasting relevant flows) > > > > NXST_FLOW reply (xid=0x4): cookie=0x0, duration=1482.245s, table=0, > n_packets=1408, n_bytes=148464, idle_age=1, priority=0 actions=resubmit(,1) > > cookie=0x0, duration=1482.244s, table=1, n_packets=1283, > n_bytes=123662, idle_age=1, priority=10,in_port=32770 > actions=set_field:0->metadata,resubmit(,2) > > cookie=0x0, duration=1482.244s, table=2, n_packets=1247, > n_bytes=122150, idle_age=1, priority=0 actions=resubmit(,3) > > cookie=0x0, duration=1482.245s, table=3, n_packets=1245, > n_bytes=122010, idle_age=1, priority=0,ip,metadata=0,nw_dst=192.168.128.0/24 > actions=CONTROLLER:65535 ---> Notice that this is getting hit as well > > OK, I spent a few minutes to mock up your environment (thanks for all > the details!) and experiment. It looks like the problem is actually a > mismatch between the formatting and parsing code for datapath flows. If > I run: > > ovs-appctl ofproto/trace > "in_port(3),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)" > > that is, add "eth()" to the datapath flow, then I get the expected > results: > > $ ovs-appctl ofproto/trace > "in_port(1),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)" > Flow: > ip,in_port=32770,vlan_tci=0x,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0 > > bridge("br0") > - > 0. priority 0 > resubmit(,1) > 1. in_port=32770, priority 10 > load:0->OXM_OF_METADATA[] > resubmit(,2) > 2. priority 0 > resubmit(,3) > 3. ip,metadata=0,nw_dst=192.168.128.0/24, priority 0 > CONTROLLER:65535 > > Final flow: unchanged > Megaflow: > recirc_id=0,eth,ip,in_port=32770,nw_dst=192.168.128.0/24,nw_frag=no > Datapath actions: drop > This flow is handled by the userspace slow path because it: > - Sends "packet-in" messages to the OpenFlow controller. 
> > Clearly that's a bug. I'll see what I can do about it. > > > Also, Whoever improved the output of ofproto/trace thanks a ton! > > That was me :-) You're welcome. > > ___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
[ovs-discuss] ovs_assert(classifier_is_empty(&table->cls)) failed when restart openvswitch service.
Hello all,

I ran into an abort issue when restarting the openvswitch service; the coredump file shows that ovs_assert() failed in the function oftable_destroy() in ofproto.c. The problem is pretty hard to reproduce. Do you have any idea about this? The OVS release is v2.7.0; any suggestion would be appreciated. Source code and gdb information are copied below.

-----
/* Destroys 'table', including its classifier and eviction groups.
 *
 * The caller is responsible for freeing 'table' itself. */
static void
oftable_destroy(struct oftable *table)
{
    ovs_assert(classifier_is_empty(&table->cls));
    ovs_mutex_lock(&ofproto_mutex);
    oftable_configure_eviction(table, 0, NULL, 0);
    ovs_mutex_unlock(&ofproto_mutex);
    hmap_destroy(&table->eviction_groups_by_id);
    heap_destroy(&table->eviction_groups_by_size);
    classifier_destroy(&table->cls);
    free(table->name);
}
-----
(gdb) bt
#0 0x7fc6f51d6157 in raise () from /usr/lib64/libc.so.6
#1 0x7fc6f51d7848 in abort () from /usr/lib64/libc.so.6
#2 0x0055b52e in ovs_abort_valist (err_no=err_no@entry=0, format=format@entry=0x669f30 "%s: assertion %s failed in %s()", args=args@entry=0x7fc6a37fd7e8) at lib/util.c:337
#3 0x005620d0 in vlog_abort_valist (function=, line=, module_=, message=0x669f30 "%s: assertion %s failed in %s()", args=args@entry=0x7fc6a37fd7e8) at lib/vlog.c:1229
#4 0x0056214d in vlog_abort (function=function@entry=0x66a610 <__func__.7912> "ovs_assert_failure", line=line@entry=74, module=module@entry=0x934560 , message=message@entry=0x669f30 "%s: assertion %s failed in %s()") at lib/vlog.c:1243
#5 0x0055b2f9 in ovs_assert_failure (where=where@entry=0x63751e "ofproto/ofproto.c:8446", function=function@entry=0x638f70 <__func__.29749> "oftable_destroy", condition=condition@entry=0x637de8 "classifier_is_empty(&table->cls)") at lib/util.c:73
#6 0x004720ce in oftable_destroy (table=0x6c59480) at ofproto/ofproto.c:8446
#7 ofproto_destroy__ (ofproto=0x6c4fe50) at ofproto/ofproto.c:1653
#8 0x0052d3a6 in ovsrcu_call_postponed () at lib/ovs_rcu.c:323
#9 0x0052d574 in ovsrcu_postpone_thread (arg=) at lib/ovs_rcu.c:338
#10 0x0052f8e1 in ovsthread_wrapper (aux_=) at lib/ovs_thread.c:651
#11 0x7fc6f66e7dc5 in start_thread (arg=0x7fc6a37fe700) at pthread_create.c:308
#12 0x7fc6f529875d in clone () from /usr/lib64/libc.so.6

(gdb) print *table
$15 = {flags = (unknown: 0), cls = {n_rules = 1, n_flow_segments = 3 '\003', flow_segments = "<@J", subtables_map = {impl = {p = 0xa5a1680}}, subtables = {impl = {p = 0x7fc690003db0}, temp = 0x0}, partitions = {impl = {p = 0x0}}, tries = {{field = 0x64a440 , root = {p = 0x0}}, {field = 0x64a408 , root = {p = 0x0}}, {field = 0x0, root = {p = 0x0}}}, n_tries = 2, publish = true}, name = 0x0, max_flows = 4294967295, n_flows = 0, eviction_fields = 0x0, n_eviction_fields = 0, eviction_group_id_basis = 0, eviction_groups_by_id = {buckets = 0x6c59518, one = 0x0, mask = 0, n = 0}, eviction_groups_by_size = {array = 0x0, n = 0, allocated = 0}, miss_config = OFPUTIL_TABLE_MISS_DEFAULT, eviction = 0, vacancy_event = (unknown: 0), vacancy_down = 0 '\000', vacancy_up = 0 '\000', n_matched = 33714, n_missed = 412}

(gdb) p *(struct cmap_impl *)(struct cmap)(table->cls.subtables_map).impl
$16 = {n = 1, max_n = 8, min_n = 1, mask = 1, basis = 3512155531, pad = '\000' , buckets = {{counter = 0, hashes = {0, 0, 0, 0, 0}, nodes = {{next = {p = 0x0}}, {next = {p = 0x0}}, {next = {p = 0x0}}, {next = {p = 0x0}}, {next = {p = 0x0}}

(gdb) p *(struct pvector_impl *)(struct pvector)(table->cls.subtables).impl
$17 = {size = 1, allocated = 7, vector = 0x7fc690003dc0}
(gdb)

___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
Re: [ovs-discuss] Discrepancy between ofproto/trace output and dpctl dump-flows output
Thanks Ben for taking a look, Regards, - Amar On 12/6/17, 10:17 AM, "Ben Pfaff" wrote: On Wed, Dec 06, 2017 at 05:58:41AM +, Amar Padmanabhan wrote: > We are debugging a setup and are seeing something that we are finding it hard to explain. > > 1 - Here is the ovs-dpctl dump-flows output. > recirc_id(0),in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no), packets:550, bytes:53900, used:0.364s, actions:userspace(pid=3276048382,slow_path(controller)) OK, the above datapath flow just says that packets in this flow have to be handled in the userspace slow path because > 2 - We are now trying to trace this flow and are not seeing the output to controller flow getting hit in the trace. > sudo ovs-appctl ofproto/trace "in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)" > Flow: packet_type=(1,0x800),in_port=32770,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0 > bridge("gtp_br0") > - > 0. priority 0 resubmit(,1) > 1. in_port=32770, priority 10 set_field:0->metadata resubmit(,2) > 2. priority 0 resubmit(,3) > 3. No match. drop Final flow: unchanged Megaflow: recirc_id=0,packet_type=(1,0x800),in_port=32770,nw_frag=no Datapath actions: drop ---> Why isn’t the output to controller flow getting hit? 
> > > 3 - We are also seeing the flow counts go up for the output to controller flow per the ofctl dump-flows output (pasting relevant flows) > > NXST_FLOW reply (xid=0x4): cookie=0x0, duration=1482.245s, table=0, n_packets=1408, n_bytes=148464, idle_age=1, priority=0 actions=resubmit(,1) > cookie=0x0, duration=1482.244s, table=1, n_packets=1283, n_bytes=123662, idle_age=1, priority=10,in_port=32770 actions=set_field:0->metadata,resubmit(,2) > cookie=0x0, duration=1482.244s, table=2, n_packets=1247, n_bytes=122150, idle_age=1, priority=0 actions=resubmit(,3) > cookie=0x0, duration=1482.245s, table=3, n_packets=1245, n_bytes=122010, idle_age=1, priority=0,ip,metadata=0,nw_dst=192.168.128.0/24 actions=CONTROLLER:65535 ---> Notice that this is getting hit as well OK, I spent a few minutes to mock up your environment (thanks for all the details!) and experiment. It looks like the problem is actually a mismatch between the formatting and parsing code for datapath flows. If I run: ovs-appctl ofproto/trace "in_port(3),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)" that is, add "eth()" to the datapath flow, then I get the expected results: $ ovs-appctl ofproto/trace "in_port(1),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)" Flow: ip,in_port=32770,vlan_tci=0x,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0 bridge("br0") - 0. priority 0 resubmit(,1) 1. in_port=32770, priority 10 load:0->OXM_OF_METADATA[] resubmit(,2) 2. priority 0 resubmit(,3) 3. ip,metadata=0,nw_dst=192.168.128.0/24, priority 0 CONTROLLER:65535 Final flow: unchanged Megaflow: recirc_id=0,eth,ip,in_port=32770,nw_dst=192.168.128.0/24,nw_frag=no Datapath actions: drop This flow is handled by the userspace slow path because it: - Sends "packet-in" messages to the OpenFlow controller. Clearly that's a bug. I'll see what I can do about it. 
> Also, Whoever improved the output of ofproto/trace thanks a ton! That was me :-) You're welcome. ___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
Re: [ovs-discuss] [Potential Spoof] Discrepancy between ofproto/trace output and dpctl dump-flows output
For packets that go to the slow path, the trace is supposed to show what happens when the packet is processed in the slow path and then note why it goes to the slow path. That wasn't happening in your case because of that formatting/parsing mismatch. On Wed, Dec 06, 2017 at 06:38:23AM +, Amar Padmanabhan wrote: > oh, I think I remember that output to slow_path doesn’t show up in the > ofproto/trace. Don’t remember why though > - Amar > > From: on behalf of Amar Padmanabhan > > Date: Tuesday, December 5, 2017 at 9:59 PM > To: "ovs-discuss@openvswitch.org" , Ben Pfaff > > Cc: Jacky Tian > Subject: [Potential Spoof] [ovs-discuss] Discrepancy between ofproto/trace > output and dpctl dump-flows output > > Hi, > We are debugging a setup and are seeing something that we are finding it hard > to explain. > > 1 - Here is the ovs-dpctl dump-flows output. > recirc_id(0),in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no), > packets:550, bytes:53900, used:0.364s, > actions:userspace(pid=3276048382,slow_path(controller)) > > 2 - We are now trying to trace this flow and are not seeing the output to > controller flow getting hit in the trace. > sudo ovs-appctl ofproto/trace > "in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)" > Flow: > packet_type=(1,0x800),in_port=32770,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0 > bridge("gtp_br0") > - > 0. priority 0 resubmit(,1) > 1. in_port=32770, priority 10 set_field:0->metadata resubmit(,2) > 2. priority 0 resubmit(,3) > 3. No match. drop Final flow: unchanged Megaflow: > recirc_id=0,packet_type=(1,0x800),in_port=32770,nw_frag=no Datapath actions: > drop ---> Why isn’t the output to controller flow getting hit? 
> > > 3 - We are also seeing the flow counts go up for the output to controller > flow per the ofctl dump-flows output (pasting relevant flows) > > NXST_FLOW reply (xid=0x4): cookie=0x0, duration=1482.245s, table=0, > n_packets=1408, n_bytes=148464, idle_age=1, priority=0 actions=resubmit(,1) > cookie=0x0, duration=1482.244s, table=1, n_packets=1283, n_bytes=123662, > idle_age=1, priority=10,in_port=32770 > actions=set_field:0->metadata,resubmit(,2) > cookie=0x0, duration=1482.244s, table=2, n_packets=1247, n_bytes=122150, > idle_age=1, priority=0 actions=resubmit(,3) > cookie=0x0, duration=1482.245s, table=3, n_packets=1245, n_bytes=122010, > idle_age=1, priority=0,ip,metadata=0,nw_dst=192.168.128.0/24 > actions=CONTROLLER:65535 ---> Notice that this is getting hit as well > > 4 - Misc info: > ovs-vsctl (Open vSwitch) 2.8.0 > DB Schema 7.15.0 > > ovs-appctl dpif/show > gtp_br0: > gtp0 32768/4: (gtp: key=flow, remote_ip=flow) > gtp_br0 65534/1: (internal) > int_nat_peer 32770/3: (system) > veth0_ovs 32769/2: (system) > > Also, Whoever improved the output of ofproto/trace thanks a ton! > > Thanks in advance! > Amar ___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
Re: [ovs-discuss] Discrepancy between ofproto/trace output and dpctl dump-flows output
On Wed, Dec 06, 2017 at 05:58:41AM +, Amar Padmanabhan wrote:
> We are debugging a setup and are seeing something that we are finding it hard to explain.
>
> 1 - Here is the ovs-dpctl dump-flows output.
> recirc_id(0),in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no), packets:550, bytes:53900, used:0.364s, actions:userspace(pid=3276048382,slow_path(controller))

OK, the above datapath flow just says that packets in this flow have to be handled in the userspace slow path because

> 2 - We are now trying to trace this flow and are not seeing the output to controller flow getting hit in the trace.
> sudo ovs-appctl ofproto/trace "in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)"
> Flow: packet_type=(1,0x800),in_port=32770,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
>
> bridge("gtp_br0")
> -
> 0. priority 0 resubmit(,1)
> 1. in_port=32770, priority 10 set_field:0->metadata resubmit(,2)
> 2. priority 0 resubmit(,3)
> 3. No match. drop
>
> Final flow: unchanged
> Megaflow: recirc_id=0,packet_type=(1,0x800),in_port=32770,nw_frag=no
> Datapath actions: drop ---> Why isn’t the output to controller flow getting hit?
>
> 3 - We are also seeing the flow counts go up for the output to controller flow per the ofctl dump-flows output (pasting relevant flows)
>
> NXST_FLOW reply (xid=0x4):
> cookie=0x0, duration=1482.245s, table=0, n_packets=1408, n_bytes=148464, idle_age=1, priority=0 actions=resubmit(,1)
> cookie=0x0, duration=1482.244s, table=1, n_packets=1283, n_bytes=123662, idle_age=1, priority=10,in_port=32770 actions=set_field:0->metadata,resubmit(,2)
> cookie=0x0, duration=1482.244s, table=2, n_packets=1247, n_bytes=122150, idle_age=1, priority=0 actions=resubmit(,3)
> cookie=0x0, duration=1482.245s, table=3, n_packets=1245, n_bytes=122010, idle_age=1, priority=0,ip,metadata=0,nw_dst=192.168.128.0/24 actions=CONTROLLER:65535 ---> Notice that this is getting hit as well

OK, I spent a few minutes to mock up your environment (thanks for all the details!) and experiment. It looks like the problem is actually a mismatch between the formatting and parsing code for datapath flows. If I run:

ovs-appctl ofproto/trace "in_port(3),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)"

that is, add "eth()" to the datapath flow, then I get the expected results:

$ ovs-appctl ofproto/trace "in_port(1),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)"
Flow: ip,in_port=32770,vlan_tci=0x,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0

bridge("br0")
-
 0. priority 0
    resubmit(,1)
 1. in_port=32770, priority 10
    load:0->OXM_OF_METADATA[]
    resubmit(,2)
 2. priority 0
    resubmit(,3)
 3. ip,metadata=0,nw_dst=192.168.128.0/24, priority 0
    CONTROLLER:65535

Final flow: unchanged
Megaflow: recirc_id=0,eth,ip,in_port=32770,nw_dst=192.168.128.0/24,nw_frag=no
Datapath actions: drop
This flow is handled by the userspace slow path because it:
- Sends "packet-in" messages to the OpenFlow controller.

Clearly that's a bug. I'll see what I can do about it.
> Also, Whoever improved the output of ofproto/trace thanks a ton! That was me :-) You're welcome. ___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
Re: [ovs-discuss] L2 messages with proprietary ethertype dropped
On Tue, Dec 05, 2017 at 07:24:10PM -0500, Raghavendra Hegde wrote:
> I am using a 3rd-party application that runs on two VMs and uses L2 messages for health checking between the VMs. The L2 messages use a proprietary ethertype (0x8e9f) for messaging. I see the L2 message going out of the VM. However, I don’t see the messages going out of the host. I disabled port security on the neutron port, and the L2 messages go through fine. All the L3 messages between the two VMs go through fine even with port-security enabled. My guess is that OVS is dropping the L2 messages. Is it possible that OVS is dropping the L2 messages because it doesn’t understand the new ethertype? Do we have to add any new flows to the OVS tables to allow the proprietary-ethertype L2 messages?

OVS does not drop proprietary Ethertypes by default, but the flow table that Neutron installs might. You can use "ofproto/trace" to find out what's happening. See ovs-vswitchd(8) for more information. ___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
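Following Ben's suggestion, a sketch of what that debugging session could look like. Everything here is illustrative: the bridge name (br-int), port number, and MAC addresses are placeholders for the actual Neutron integration bridge and ports, and the final add-flow is only an example of how an allow rule for this Ethertype might look, not a recommended permanent fix:

```
# Trace a frame with the proprietary Ethertype through the flow table to
# see which rule drops it:
ovs-appctl ofproto/trace br-int \
    in_port=1,dl_src=fa:16:3e:00:00:01,dl_dst=fa:16:3e:00:00:02,dl_type=0x8e9f

# If the trace points at a port-security rule, a higher-priority flow
# matching the Ethertype could let these frames through, e.g.:
ovs-ofctl add-flow br-int 'priority=100,dl_type=0x8e9f,actions=NORMAL'
```

Note that hand-added flows on a Neutron-managed bridge may be overwritten by the agent; the trace output is the reliable part of this sketch.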
[ovs-discuss] L2 messages with proprietary ethertype dropped
Hi, I am using a 3rd-party application that runs on two VMs and uses L2 messages for health checking between the VMs. The L2 messages use a proprietary ethertype (0x8e9f) for messaging. I see the L2 message going out of the VM. However, I don’t see the messages going out of the host. I disabled port security on the neutron port, and the L2 messages go through fine. All the L3 messages between the two VMs go through fine even with port-security enabled. My guess is that OVS is dropping the L2 messages. Is it possible that OVS is dropping the L2 messages because it doesn’t understand the new ethertype? Do we have to add any new flows to the OVS tables to allow the proprietary-ethertype L2 messages? Thoughts? Raghav ___ discuss mailing list disc...@openvswitch.org https://mail.openvswitch.org/mailman/listinfo/ovs-discuss