Re: [ovs-discuss] ovs route refresh problem
> On Thu, Mar 01, 2018 at 10:21:38AM -0800, Ben Pfaff wrote:
> > On Fri, Feb 23, 2018 at 03:30:59AM +, Yinpeijun wrote:
> > > >> On Sun, Feb 11, 2018 at 07:27:34AM +, Yinpeijun wrote:
> > > >> Hi all,
> > > >> Recently I ran a test of communication between two VMs with VXLAN
> > > >> and OVS+DPDK networking (OVS 2.7.2). When I add 200 virtual devices
> > > >> on the physical server of the communicating VM and then check the
> > > >> ping result, the statistics show packet loss. When I use VLAN and
> > > >> OVS+DPDK networking and do the same thing, there is no packet loss.
> > > >> I read the source code and added some logging to confirm the
> > > >> problem. I believe the root cause is an unreasonable route refresh:
> > > >> route_table_reset() deletes all entries before re-adding them, so
> > > >> during that interval ovs_router_lookup() cannot find the route,
> > > >> which causes packet drops. I think the implementation needs
> > > >> optimizing. Any advice on how to optimize it?
> > >
> > > > I don't fully understand your use case. However, if you're not using
> > > > DPDK, then OVS isn't doing routing in userspace, so this is probably
> > > > not the problem.
> > >
> > > Thank you for your reply. The test case was just to reproduce the
> > > problem. The actual scenario is creating or migrating virtual machines
> > > in an OpenStack environment, which creates corresponding Linux bridges
> > > and other virtual devices.
> > >
> > > The problem also exists in the netdev datapath without DPDK. A VXLAN
> > > tunnel needs a route in userspace, and OVS maintains the route table
> > > as follows:
> > > ovs-appctl ovs/route/show
> > > Route Table:
> > > Cached: x.xx.1.10/32 dev eth0 SRC x.xx.1.10
> > > Cached: 10.0.0.10/32 dev brcps SRC 10.0.0.10
> > > Cached: 127.0.0.1/32 dev lo SRC 127.0.0.1
> > >
> > > So when I create a virtual device, it triggers OVS to refresh the
> > > route table, which affects communication for the already existing
> > > virtual machines.
> >
> > This is the same datapath, really; it's just that most people use it
> > with DPDK, and so the solution would be the same.
> >
> > I think that the problem you're talking about can be fixed by holding
> > the mutex in route_table_reset() across the entire update, instead of
> > just for each individual operation on the routing table. Does that
> > make sense?
>
> I sent a patch. Will you test it?
> https://patchwork.ozlabs.org/patch/884064/
>
> Thanks

OK, I am glad to. I will run this test as soon as possible and report the
result.

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
Re: [ovs-discuss] ovs route refresh problem
On Thu, Mar 01, 2018 at 10:21:38AM -0800, Ben Pfaff wrote:
> On Fri, Feb 23, 2018 at 03:30:59AM +, Yinpeijun wrote:
> > >> On Sun, Feb 11, 2018 at 07:27:34AM +, Yinpeijun wrote:
> > >> Hi all,
> > >> Recently I ran a test of communication between two VMs with VXLAN and
> > >> OVS+DPDK networking (OVS 2.7.2). When I add 200 virtual devices on
> > >> the physical server of the communicating VM and then check the ping
> > >> result, the statistics show packet loss. When I use VLAN and OVS+DPDK
> > >> networking and do the same thing, there is no packet loss.
> > >> I read the source code and added some logging to confirm the problem.
> > >> I believe the root cause is an unreasonable route refresh:
> > >> route_table_reset() deletes all entries before re-adding them, so
> > >> during that interval ovs_router_lookup() cannot find the route, which
> > >> causes packet drops. I think the implementation needs optimizing.
> > >> Any advice on how to optimize it?
> >
> > > I don't fully understand your use case. However, if you're not using
> > > DPDK, then OVS isn't doing routing in userspace, so this is probably
> > > not the problem.
> >
> > Thank you for your reply. The test case was just to reproduce the
> > problem. The actual scenario is creating or migrating virtual machines
> > in an OpenStack environment, which creates corresponding Linux bridges
> > and other virtual devices.
> >
> > The problem also exists in the netdev datapath without DPDK. A VXLAN
> > tunnel needs a route in userspace, and OVS maintains the route table as
> > follows:
> > ovs-appctl ovs/route/show
> > Route Table:
> > Cached: x.xx.1.10/32 dev eth0 SRC x.xx.1.10
> > Cached: 10.0.0.10/32 dev brcps SRC 10.0.0.10
> > Cached: 127.0.0.1/32 dev lo SRC 127.0.0.1
> >
> > So when I create a virtual device, it triggers OVS to refresh the route
> > table, which affects communication for the already existing virtual
> > machines.

This is the same datapath, really; it's just that most people use it with
DPDK, and so the solution would be the same.

I think that the problem you're talking about can be fixed by holding the
mutex in route_table_reset() across the entire update, instead of just for
each individual operation on the routing table. Does that make sense?

I sent a patch. Will you test it?
https://patchwork.ozlabs.org/patch/884064/

Thanks,
Ben.
Re: [ovs-discuss] OVN load balancing on same subnet failing
On 9 March 2018 at 11:19, Ben Pfaff wrote:
> On Fri, Mar 02, 2018 at 09:40:07AM -0800, Guru Shetty wrote:
> > On 1 March 2018 at 21:09, Anil Venkata wrote:
> > > On Fri, Mar 2, 2018 at 7:23 AM, Guru Shetty wrote:
> > >> On 27 February 2018 at 03:13, Anil Venkata wrote:
> > >>> For example, I have a 10.1.0.0/24 network, and a load balancer is
> > >>> added to it with 10.1.0.10 as the VIP and 10.1.0.2 (MAC
> > >>> 50:54:00:00:00:01) and 10.1.0.3 (MAC 50:54:00:00:00:02) as members:
> > >>> ovn-nbctl create load_balancer vips:10.1.0.10="10.1.0.2,10.1.0.3"
> > >>
> > >> We currently need the VIP to be in a different subnet. You should
> > >> connect the switch to a dummy logical router (or connect it to an
> > >> external router). Since the VIP is in a different subnet, the sender
> > >> ARPs for the logical router IP, and then things will work.
> > >
> > > Thanks, Guru. Any reason for introducing this constraint (i.e., the
> > > VIP being in a different subnet)? Can we address this limitation?
> >
> > It was just easy to implement with the constraint. You would need an
> > ARP responder for the VIP, and you would have to specify the MAC
> > address for each VIP in the schema. So that is a bit of work - but not
> > hard.
>
> Do we document the constraint? If we do not, then that would be helpful.

I sent a patch: https://patchwork.ozlabs.org/patch/884054/
Re: [ovs-discuss] OVN load balancing on same subnet failing
On Fri, Mar 02, 2018 at 09:40:07AM -0800, Guru Shetty wrote:
> On 1 March 2018 at 21:09, Anil Venkata wrote:
> > On Fri, Mar 2, 2018 at 7:23 AM, Guru Shetty wrote:
> >> On 27 February 2018 at 03:13, Anil Venkata wrote:
> >>> For example, I have a 10.1.0.0/24 network, and a load balancer is
> >>> added to it with 10.1.0.10 as the VIP and 10.1.0.2 (MAC
> >>> 50:54:00:00:00:01) and 10.1.0.3 (MAC 50:54:00:00:00:02) as members:
> >>> ovn-nbctl create load_balancer vips:10.1.0.10="10.1.0.2,10.1.0.3"
> >>
> >> We currently need the VIP to be in a different subnet. You should
> >> connect the switch to a dummy logical router (or connect it to an
> >> external router). Since the VIP is in a different subnet, the sender
> >> ARPs for the logical router IP, and then things will work.
> >
> > Thanks, Guru. Any reason for introducing this constraint (i.e., the VIP
> > being in a different subnet)? Can we address this limitation?

It was just easy to implement with the constraint. You would need an ARP
responder for the VIP, and you would have to specify the MAC address for
each VIP in the schema. So that is a bit of work - but not hard.

Do we document the constraint? If we do not, then that would be helpful.
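The workaround Guru describes above can be sketched as the following ovn-nbctl sequence. This is a hypothetical topology: the switch, router, and port names (sw0, lr0, lr0-sw0, sw0-lr0), the router MAC, and the VIP address are all invented for illustration; the point is only that the VIP is chosen outside the backends' 10.1.0.0/24 subnet, so clients ARP for the router IP and the load balancer can intercept the traffic:

```shell
# Backends 10.1.0.2 and 10.1.0.3 live on logical switch sw0 (10.1.0.0/24).
ovn-nbctl ls-add sw0

# Attach sw0 to a (possibly dummy) logical router lr0, which answers ARP
# for the subnet gateway 10.1.0.1.
ovn-nbctl lr-add lr0
ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.1.0.1/24
ovn-nbctl lsp-add sw0 sw0-lr0
ovn-nbctl lsp-set-type sw0-lr0 router
ovn-nbctl lsp-set-addresses sw0-lr0 00:00:00:00:ff:01
ovn-nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0

# The VIP (172.16.255.1) is deliberately OUTSIDE 10.1.0.0/24, satisfying
# the different-subnet constraint discussed in the thread.
ovn-nbctl create load_balancer vips:172.16.255.1="10.1.0.2,10.1.0.3"
```

These commands only configure the OVN northbound database, so they are a config fragment rather than a runnable program.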
Re: [ovs-discuss] Including bfd status for tunnel endpoints on ovs-vsctl show
Seems reasonable to me; feel free to submit a patch.

On Fri, Mar 09, 2018 at 05:43:00PM +, Miguel Angel Ajo Pelayo wrote:
> OK, I have modified it to show just bfd_status. For example, in a
> 3-controller + 1-compute environment, with a router and a VM on the
> compute:
>
> [heat-admin@overcloud-controller-2 ovs]$ sudo utilities/ovs-vsctl show
> a4cf68a7-07e9-4ed4-a317-016cb610c821
>     Bridge br-int
>         fail_mode: secure
>         Port "ovn-7e8b8a-0"
>             Interface "ovn-7e8b8a-0"
>                 type: geneve
>                 options: {csum="true", key=flow, remote_ip="172.16.0.5"}
>                 bfd_status: {diagnostic="No Diagnostic", flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
>         Port "ovn-0c37dd-0"
>             Interface "ovn-0c37dd-0"
>                 type: geneve
>                 options: {csum="true", key=flow, remote_ip="172.16.0.7"}
>                 bfd_status: {diagnostic="No Diagnostic", flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4"
>             Interface "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4"
>                 type: patch
>                 options: {peer="patch-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4-to-br-int"}
>         Port "ovn-9f2335-0"
>             Interface "ovn-9f2335-0"
>                 type: geneve
>                 options: {csum="true", key=flow, remote_ip="172.16.0.11"}
>                 bfd_status: {diagnostic="No Diagnostic", flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
>     Bridge br-ex
>         fail_mode: standalone
>         Port br-ex
>             Interface br-ex
> ...
>
> If I admin-disable that single router with "neutron router-update router1
> --admin-state-up false" (BFD should be disabled all around because it is
> no longer necessary, and ovs-vswitchd takes care of clearing bfd_status):
>
> [heat-admin@overcloud-controller-2 ovs]$ sudo utilities/ovs-vsctl show
> a4cf68a7-07e9-4ed4-a317-016cb610c821
>     Bridge br-int
>         fail_mode: secure
>         Port "ovn-7e8b8a-0"
>             Interface "ovn-7e8b8a-0"
>                 type: geneve
>                 options: {csum="true", key=flow, remote_ip="172.16.0.5"}
>         Port "ovn-0c37dd-0"
>             Interface "ovn-0c37dd-0"
>                 type: geneve
>                 options: {csum="true", key=flow, remote_ip="172.16.0.7"}
>         Port br-int
>             Interface br-int
>                 type: internal
>         Port "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4"
>             Interface "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4"
>                 type: patch
>                 options: {peer="patch-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4-to-br-int"}
>         Port "ovn-9f2335-0"
>             Interface "ovn-9f2335-0"
>                 type: geneve
>                 options: {csum="true", key=flow, remote_ip="172.16.0.11"}
>     Bridge br-ex
>         fail_mode: standalone
> ...
>
> On Fri, Mar 9, 2018 at 9:32 AM Miguel Angel Ajo Pelayo wrote:
>
> > Thank you, Ben. I'll finish it, test it properly, and submit the patch.
> >
> > I don't know if it makes sense to add a filter for the case where the
> > dictionary has only the key "enabled" set to "false", or whether it's
> > really not worth it. I'll check how it looks with a real deployment and
> > get back here with the results.
> >
> > On Thu, Mar 8, 2018 at 7:12 PM Ben Pfaff wrote:
> >
> >> On Thu, Mar 08, 2018 at 04:43:50PM +, Miguel Angel Ajo Pelayo wrote:
> >> > Ok, looking at the code, it seems like we may only need to do this?
> >> >
> >> > diff --git a/utilities/ovs-vsctl.c b/utilities/ovs-vsctl.c
> >> > index 21fa18d..2ac60bf 100644
> >> > --- a/utilities/ovs-vsctl.c
> >> > +++ b/utilities/ovs-vsctl.c
> >> > @@ -1018,7 +1018,9 @@ static struct cmd_show_table cmd_show_tables[] = {
> >> >      &ovsrec_interface_col_name,
> >> >      {&ovsrec_interface_col_type,
> >> >       &ovsrec_interface_col_options,
> >> > -     &ovsrec_interface_col_error},
> >> > +     &ovsrec_interface_col_error,
> >> > +     &ovsrec_interface_col_bfd,
> >> > +     &ovsrec_interface_col_bfd_status},
> >> >      {NULL, NULL, NULL}
> >> >     },
> >>
> >> Yes, you also need to increase the size of columns[] in cmd_show_table:
> >>
> >> diff --git a/lib/db-ctl-base.h b/lib/db-ctl-base.h
> >> index eb444270535b..5d8532a7bde2 100644
> >> --- a/lib/db-ctl-base.h
> >> +++ b/lib/db-ctl-base.h
> >> @@ -197,7 +197,7 @@ struct weak_ref_table {
> >>  struct cmd_show_table {
> >>      const struct ovsdb_idl_table_class *table;
> >>      const struct ovsdb_idl_column *name_column;
> >> -    const struct ovsdb_idl_column *columns[3]; /* Seems like a good number. */
> >> +    const struct ovsdb_idl_column *columns[5]; /* Seems like a good number. */
> >>      const struct weak_ref_table wref_table;
> >>  };
Re: [ovs-discuss] Including bfd status for tunnel endpoints on ovs-vsctl show
OK, I have modified it to show just bfd_status. For example, in a
3-controller + 1-compute environment, with a router and a VM on the
compute:

[heat-admin@overcloud-controller-2 ovs]$ sudo utilities/ovs-vsctl show
a4cf68a7-07e9-4ed4-a317-016cb610c821
    Bridge br-int
        fail_mode: secure
        Port "ovn-7e8b8a-0"
            Interface "ovn-7e8b8a-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.5"}
                bfd_status: {diagnostic="No Diagnostic", flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
        Port "ovn-0c37dd-0"
            Interface "ovn-0c37dd-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.7"}
                bfd_status: {diagnostic="No Diagnostic", flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
        Port br-int
            Interface br-int
                type: internal
        Port "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4"
            Interface "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4"
                type: patch
                options: {peer="patch-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4-to-br-int"}
        Port "ovn-9f2335-0"
            Interface "ovn-9f2335-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.11"}
                bfd_status: {diagnostic="No Diagnostic", flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
    Bridge br-ex
        fail_mode: standalone
        Port br-ex
            Interface br-ex
...

If I admin-disable that single router with "neutron router-update router1
--admin-state-up false" (BFD should be disabled all around because it is
no longer necessary, and ovs-vswitchd takes care of clearing bfd_status):

[heat-admin@overcloud-controller-2 ovs]$ sudo utilities/ovs-vsctl show
a4cf68a7-07e9-4ed4-a317-016cb610c821
    Bridge br-int
        fail_mode: secure
        Port "ovn-7e8b8a-0"
            Interface "ovn-7e8b8a-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.5"}
        Port "ovn-0c37dd-0"
            Interface "ovn-0c37dd-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.7"}
        Port br-int
            Interface br-int
                type: internal
        Port "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4"
            Interface "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4"
                type: patch
                options: {peer="patch-provnet-b47ffac1-1704-4e97-85ef-f4fb478e18c4-to-br-int"}
        Port "ovn-9f2335-0"
            Interface "ovn-9f2335-0"
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.11"}
    Bridge br-ex
        fail_mode: standalone
...

On Fri, Mar 9, 2018 at 9:32 AM Miguel Angel Ajo Pelayo wrote:

> Thank you, Ben. I'll finish it, test it properly, and submit the patch.
>
> I don't know if it makes sense to add a filter for the case where the
> dictionary has only the key "enabled" set to "false", or whether it's
> really not worth it. I'll check how it looks with a real deployment and
> get back here with the results.
>
> On Thu, Mar 8, 2018 at 7:12 PM Ben Pfaff wrote:
>
>> On Thu, Mar 08, 2018 at 04:43:50PM +, Miguel Angel Ajo Pelayo wrote:
>> > Ok, looking at the code, it seems like we may only need to do this?
>> >
>> > diff --git a/utilities/ovs-vsctl.c b/utilities/ovs-vsctl.c
>> > index 21fa18d..2ac60bf 100644
>> > --- a/utilities/ovs-vsctl.c
>> > +++ b/utilities/ovs-vsctl.c
>> > @@ -1018,7 +1018,9 @@ static struct cmd_show_table cmd_show_tables[] = {
>> >      &ovsrec_interface_col_name,
>> >      {&ovsrec_interface_col_type,
>> >       &ovsrec_interface_col_options,
>> > -     &ovsrec_interface_col_error},
>> > +     &ovsrec_interface_col_error,
>> > +     &ovsrec_interface_col_bfd,
>> > +     &ovsrec_interface_col_bfd_status},
>> >      {NULL, NULL, NULL}
>> >     },
>>
>> Yes, you also need to increase the size of columns[] in cmd_show_table:
>>
>> diff --git a/lib/db-ctl-base.h b/lib/db-ctl-base.h
>> index eb444270535b..5d8532a7bde2 100644
>> --- a/lib/db-ctl-base.h
>> +++ b/lib/db-ctl-base.h
>> @@ -197,7 +197,7 @@ struct weak_ref_table {
>>  struct cmd_show_table {
>>      const struct ovsdb_idl_table_class *table;
>>      const struct ovsdb_idl_column *name_column;
>> -    const struct ovsdb_idl_column *columns[3]; /* Seems like a good number. */
>> +    const struct ovsdb_idl_column *columns[5]; /* Seems like a good number. */
>>      const struct weak_ref_table wref_table;
>>  };
>>
>> > But that would render something like:
>> >
>> > [heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-vsctl show
>> > 5f35518a-ab34-49a4-a227-e487e251b7e3
Re: [ovs-discuss] bond-rebalance-interval
Jeez - first thing in the morning, with fresh eyes, I found it. I had
copied and pasted my scripts back and forth so much that I missed a space
in the command line I wrote. False alarm. I fixed that and rebooted.
Problem solved!

On Thu, Mar 8, 2018 at 11:44 PM, Justin Pettit wrote:

> Can you provide the command and options that you used? I assume it was
> ovs-vsctl.
>
> --Justin
>
> > On Mar 8, 2018, at 8:11 PM, Chris Boley wrote:
> >
> > Justin, I set the value to 5000. I tried it via my interfaces file
> > stanza and also by way of extra-options in a live command. Neither
> > method produced errors, but when I look at the bridge, there it is:
> > 1 every time.
>
> On Thu, Mar 8, 2018 at 9:06 PM Justin Pettit wrote:
>
>> It should work. How are you setting it?
>>
>> --Justin
>>
>> > On Mar 8, 2018, at 5:57 PM, Chris Boley wrote:
>> >
>> > bond-rebalance-interval=1
>> >
>> > If I set this option to any other value, such as 5000, it always
>> > shows 1 in the output of "sudo ovs-vsctl list port vbond0".
>> >
>> > My version is 2.5.2. Is this rebalance interval negotiated with the
>> > Cisco EtherChannel that I am peering with, so that it depends on what
>> > the peer is willing to do, or am I missing something?
>> >
>> > Thank you,
>> > CB
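For reference, bond-rebalance-interval is a per-Port other_config key (in milliseconds), so a correctly spaced ovs-vsctl invocation looks like the following. The bond port name "vbond0" follows the thread; the value is just an example:

```shell
# Set the rebalance interval to 5000 ms on the bond port, then verify.
ovs-vsctl set port vbond0 other_config:bond-rebalance-interval=5000
ovs-vsctl list port vbond0 | grep other_config
```

The interval is purely local to ovs-vswitchd's bond rebalancing logic; it is not negotiated with the Cisco EtherChannel peer.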
Re: [ovs-discuss] Including bfd status for tunnel endpoints on ovs-vsctl show
Thank you, Ben. I'll finish it, test it properly, and submit the patch.

I don't know if it makes sense to add a filter for the case where the
dictionary has only the key "enabled" set to "false", or whether it's
really not worth it. I'll check how it looks with a real deployment and
get back here with the results.

On Thu, Mar 8, 2018 at 7:12 PM Ben Pfaff wrote:

> On Thu, Mar 08, 2018 at 04:43:50PM +, Miguel Angel Ajo Pelayo wrote:
> > Ok, looking at the code, it seems like we may only need to do this?
> >
> > diff --git a/utilities/ovs-vsctl.c b/utilities/ovs-vsctl.c
> > index 21fa18d..2ac60bf 100644
> > --- a/utilities/ovs-vsctl.c
> > +++ b/utilities/ovs-vsctl.c
> > @@ -1018,7 +1018,9 @@ static struct cmd_show_table cmd_show_tables[] = {
> >      &ovsrec_interface_col_name,
> >      {&ovsrec_interface_col_type,
> >       &ovsrec_interface_col_options,
> > -     &ovsrec_interface_col_error},
> > +     &ovsrec_interface_col_error,
> > +     &ovsrec_interface_col_bfd,
> > +     &ovsrec_interface_col_bfd_status},
> >      {NULL, NULL, NULL}
> >     },
>
> Yes, you also need to increase the size of columns[] in cmd_show_table:
>
> diff --git a/lib/db-ctl-base.h b/lib/db-ctl-base.h
> index eb444270535b..5d8532a7bde2 100644
> --- a/lib/db-ctl-base.h
> +++ b/lib/db-ctl-base.h
> @@ -197,7 +197,7 @@ struct weak_ref_table {
>  struct cmd_show_table {
>      const struct ovsdb_idl_table_class *table;
>      const struct ovsdb_idl_column *name_column;
> -    const struct ovsdb_idl_column *columns[3]; /* Seems like a good number. */
> +    const struct ovsdb_idl_column *columns[5]; /* Seems like a good number. */
>      const struct weak_ref_table wref_table;
>  };
>
> > But that would render something like:
> >
> > [heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-vsctl show
> > 5f35518a-ab34-49a4-a227-e487e251b7e3
> >     Manager "ptcp:6640:127.0.0.1"
> >         is_connected: true
> >     Bridge br-int
> >         fail_mode: secure
> >         Port "ovn-14d60a-0"
> >             Interface "ovn-14d60a-0"
> >                 type: geneve
> >                 options: {csum="true", key=flow, remote_ip="172.16.0.12"}
> >                 bfd: {enable="true"}
> >                 bfd_status: {diagnostic="No Diagnostic", flap_count="1", forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up, state=up}
> >
> > I'm partly guessing here based on what I see around lib/db-ctl-base.c
> > and doing a little bit of debugging.
> >
> > @Ben, is there any way of filtering out those columns when
> > bfd:enabled="false" or not set?
>
> It appears that that's already what happens, at least for the "not set"
> case. I doubt there are many controllers that explicitly set enable to
> false.