Reviewed:  https://review.openstack.org/587244
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=379a9faf6206039903555ce7e3fc4221e5f06a7a
Submitter: Zuul
Branch:    master
commit 379a9faf6206039903555ce7e3fc4221e5f06a7a
Author: Arjun Baindur <xag...@gmail.com>
Date:   Mon Jul 30 15:31:50 2018 -0700

    Change duplicate OVS bridge datapath-ids

    The native OVS/ofctl controllers talk to the bridges using a
    datapath-id instead of the bridge name. The datapath ID is
    auto-generated from the MAC address of the bridge's NIC. When bridges
    sit on VLAN interfaces of the same NIC, they share a MAC and
    therefore a datapath-id, causing flows intended for one physical
    bridge to be programmed on the others.

    The datapath-id is a 64-bit field, with the lower 48 bits being the
    MAC. We set the upper 12 unused bits to identify each unique
    physical bridge.

    This could also be fixed manually with ovs-vsctl set, but it is
    beneficial to automate it in the code:

        ovs-vsctl set bridge <mybr> other-config:datapath-id=<datapathid>

    You can change the datapath-id yourself using the command above, and
    view/verify the current value via:

        ovs-vsctl get Bridge br-vlan datapath-id
        "00006ea5a4b38a4a"

    (Note that "other-config" is needed in the set, but not in the get.)

    Closes-Bug: #1697243
    Co-Authored-By: Rodolfo Alonso Hernandez <ralon...@redhat.com>
    Change-Id: I575ddf0a66e2cfe745af3874728809cf54e37745

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697243

Title:
  ovs bridge flow table is dropped by unknown cause

Status in neutron:
  Fix Released

Bug description:
  Hi,

  My OpenStack deployment has a provider network whose OVS bridge is
  "provision". It had been running fine, but after several hours the
  network broke down and I found the bridge's flow table was empty. Is
  there a way to trace changes to a bridge's flow table?
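  As an aside, the scheme the commit describes (keep the NIC MAC in the
  lower 48 bits, pack a per-bridge index into the otherwise-unused upper
  bits of the 64-bit field) can be sketched as follows. This is a
  hypothetical illustration, not the actual neutron code: the helper name
  is made up, and for simplicity it uses the free upper 16 bits rather
  than the 12 the patch mentions.

  ```python
  def generate_datapath_id(mac: str, bridge_index: int) -> str:
      """Return a 16-hex-digit datapath-id with the MAC in the low 48 bits
      and a per-bridge index in the upper bits (illustrative only)."""
      mac_int = int(mac.replace(":", ""), 16)      # lower 48 bits: the MAC
      if not 0 <= mac_int < (1 << 48):
          raise ValueError("MAC must fit in 48 bits")
      if not 0 <= bridge_index < (1 << 16):
          raise ValueError("bridge index must fit in the upper 16 bits")
      return "%016x" % ((bridge_index << 48) | mac_int)

  # Two bridges on VLAN interfaces of the same NIC share a MAC, yet get
  # distinct datapath-ids (MAC taken from the bond0 output below):
  mac = "24:8a:07:55:41:e8"
  print(generate_datapath_id(mac, 0))  # 0000248a075541e8
  print(generate_datapath_id(mac, 1))  # 0001248a075541e8
  ```

  The resulting string is what you would pass to
  "ovs-vsctl set bridge <mybr> other-config:datapath-id=<datapathid>".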
  [root@cloud-sz-master-b12-01 neutron]# ovs-ofctl dump-flows provision
  NXST_FLOW reply (xid=0x4):
  [root@cloud-sz-master-b12-02 nova]# ovs-ofctl dump-flows provision
  NXST_FLOW reply (xid=0x4):
  [root@cloud-sz-master-b12-02 nova]#
  [root@cloud-sz-master-b12-02 nova]# ip r
  ...
  10.53.33.0/24 dev provision proto kernel scope link src 10.53.33.11
  10.53.128.0/24 dev docker0 proto kernel scope link src 10.53.128.1
  169.254.0.0/16 dev br-ex scope link metric 1055
  169.254.0.0/16 dev provision scope link metric 1056
  ...
  [root@cloud-sz-master-b12-02 nova]# ovs-ofctl show provision
  OFPT_FEATURES_REPLY (xid=0x2): dpid:0000248a075541e8
  n_tables:254, n_buffers:256
  capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
  actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
   1(bond0): addr:24:8a:07:55:41:e8
       config:     0
       state:      0
       speed: 0 Mbps now, 0 Mbps max
   2(phy-provision): addr:76:b5:88:cc:a6:74
       config:     0
       state:      0
       speed: 0 Mbps now, 0 Mbps max
   LOCAL(provision): addr:24:8a:07:55:41:e8
       config:     0
       state:      0
       speed: 0 Mbps now, 0 Mbps max
  OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
  [root@cloud-sz-master-b12-02 nova]# ifconfig bond0
  bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
          inet6 fe80::268a:7ff:fe55:41e8  prefixlen 64  scopeid 0x20<link>
          ether 24:8a:07:55:41:e8  txqueuelen 1000  (Ethernet)
          RX packets 93588032  bytes 39646246456 (36.9 GiB)
          RX errors 0  dropped 0  overruns 0  frame 0
          TX packets 8655257217  bytes 27148795388 (25.2 GiB)
          TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
  [root@cloud-sz-master-b12-02 nova]# cat /proc/net/bonding/bond0
  Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

  Bonding Mode: IEEE 802.3ad Dynamic link aggregation
  Transmit Hash Policy: layer2 (0)
  MII Status: up
  MII Polling Interval (ms): 0
  Up Delay (ms): 0
  Down Delay (ms): 0

  802.3ad info
  LACP rate: slow
  Min links: 0
  Aggregator selection policy (ad_select): stable
  System priority: 65535
  System MAC address: 24:8a:07:55:41:e8
  Active Aggregator Info:
          Aggregator ID: 19
          Number of ports: 2
          Actor Key: 13
          Partner Key: 11073
          Partner Mac Address: 38:bc:01:c2:26:a1

  Slave Interface: enp4s0f0
  MII Status: up
  Speed: 10000 Mbps
  Duplex: full
  Link Failure Count: 0
  Permanent HW addr: 24:8a:07:55:41:e8
  Slave queue ID: 0
  Aggregator ID: 19
  Actor Churn State: none
  Partner Churn State: none
  Actor Churned Count: 0
  Partner Churned Count: 0
  details actor lacp pdu:
      system priority: 65535
      system mac address: 24:8a:07:55:41:e8
      port key: 13
      port priority: 255
      port number: 1
      port state: 61
  details partner lacp pdu:
      system priority: 32768
      system mac address: 38:bc:01:c2:26:a1
      oper key: 11073
      port priority: 32768
      port number: 43
      port state: 61

  Slave Interface: enp5s0f0
  MII Status: up
  Speed: 10000 Mbps
  Duplex: full
  Link Failure Count: 0
  Permanent HW addr: 24:8a:07:55:44:64
  Slave queue ID: 0
  Aggregator ID: 19
  Actor Churn State: none
  Partner Churn State: none
  Actor Churned Count: 0
  Partner Churned Count: 0
  details actor lacp pdu:
      system priority: 65535
      system mac address: 24:8a:07:55:41:e8
      port key: 13
      port priority: 255
      port number: 2
      port state: 61
  details partner lacp pdu:
      system priority: 32768
      system mac address: 38:bc:01:c2:26:a1
      oper key: 11073
      port priority: 32768
      port number: 91
      port state: 61

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1697243/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp