I recommend trying the patches that I posted:
https://mail.openvswitch.org/pipermail/ovs-dev/2021-June/383783.html
https://mail.openvswitch.org/pipermail/ovs-dev/2021-June/383784.html
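If you want to try them, here is a minimal sketch of one way to apply patches from the archive to an OVS source tree (the branch and patch file names below are illustrative, not taken from the posts):

    # save each message from the archive as an mbox/patch file, then:
    git checkout -b probe-fix origin/master          # branch name is illustrative
    git am 0001-fail-open.patch 0002-rconn.patch     # file names are illustrative
    ./boot.sh && ./configure && make                 # standard OVS build steps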

On Tue, Jun 15, 2021 at 07:24:06AM +0000, Saurabh Deokate wrote:
> Hi Ben,
> 
> Here is the output of "ovs-vsctl list controller":
> 
> [root@172-31-64-26-aws-eu-central-1c ~]# ovs-vsctl list controller
> _uuid                 : eb56176a-ad32-4eb0-9cd8-7ab3bd448a68
> connection_mode       : out-of-band
> controller_burst_limit: []
> controller_queue_size : []
> controller_rate_limit : []
> enable_async_messages : []
> external_ids          : {}
> inactivity_probe      : 0
> is_connected          : true
> local_gateway         : []
> local_ip              : []
> local_netmask         : []
> max_backoff           : []
> other_config          : {}
> role                  : other
> status                : {last_error="Connection refused", sec_since_connect="42606", sec_since_disconnect="42614", state=ACTIVE}
> target                : "tcp:127.0.0.1:6653"
> type                  : []
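As a side note, the same check can be narrowed to the relevant columns with ovs-vsctl's --columns option (a general idiom, not something requested in this thread):

    # show only the probe setting and connection state for all controllers
    ovs-vsctl --columns=inactivity_probe,is_connected,status list controller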
> 
> Let me know if you need any other details.
> 
> ~Saurabh.
> 
> On 11/06/21, 4:03 AM, "Ben Pfaff" <b...@ovn.org> wrote:
> 
>     On Mon, Jun 07, 2021 at 02:51:58PM +0000, Saurabh Deokate wrote:
>     > Hi Team,
>     > 
>     > We are seeing an issue in OVS 2.14.0 after moving from 2.8.0. We first
>     > set the controller on the bridge and then set the inactivity probe for
>     > our controller to 0 to disable new connection attempts by OVS. After
>     > this we start our controller to serve requests. But in the new version
>     > of OVS we still see the inactivity probe kicking in every 5 s and
>     > triggering reconnects. The issue shows up when we are in the middle of
>     > handling a packet in our controller (i.e. OFController), which is
>     > blocked for almost 40 s.
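For reference, a minimal sketch of the setup described above, assuming a bridge named br0 (the bridge name is an assumption; the controller target matches the log below):

    # point the bridge at the local controller...
    ovs-vsctl set-controller br0 tcp:127.0.0.1:6653
    # ...then disable the inactivity probe on that controller record (0 = off)
    ovs-vsctl set controller br0 inactivity_probe=0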
>     > 
>     > 
>     > OS: CentOS Linux release 7.9.2009
>     > The output of the "ovs-vsctl list controller" command shows inactivity_probe: 0.
>     > 
>     > Below is a snippet from ovs-vswitchd.log:
>     > 
>     > 2021-05-11T22:32:55.378Z|00608|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connected
>     > 2021-05-11T22:33:05.382Z|00609|connmgr|INFO|br0.uvms<->tcp:127.0.0.1:6653: 44 flow_mods 10 s ago (44 adds)
>     > 2021-05-11T22:33:05.386Z|00610|rconn|ERR|br0.uvms<->tcp:127.0.0.1:6653: no response to inactivity probe after 5 seconds, disconnecting
>     > 2021-05-11T22:33:06.406Z|00611|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connecting...
>     > 2021-05-11T22:33:06.438Z|00612|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connected
>     > 2021-05-11T22:33:16.438Z|00613|rconn|ERR|br0.uvms<->tcp:127.0.0.1:6653: no response to inactivity probe after 5 seconds, disconnecting
>     > 2021-05-11T22:33:17.921Z|00614|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connecting...
>     > 2021-05-11T22:33:18.108Z|00615|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connected
>     > 2021-05-11T22:33:28.110Z|00616|rconn|ERR|br0.uvms<->tcp:127.0.0.1:6653: no response to inactivity probe after 5 seconds, disconnecting
>     > 2021-05-11T22:33:29.433Z|00617|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connecting...
>     > 2021-05-11T22:33:29.933Z|00618|rconn|INFO|br0.uvms<->tcp:127.0.0.1:6653: connected
>     > 
>     > 
>     > Can you please help us find out what could be wrong with this
>     > configuration, and what is the expected behaviour of the OVS switch
>     > when the receiver on the controller side is blocked for a long time?
> 
>     Hmm, I can't reproduce this with current OVS.  I do see a problem with
>     the fail-open implementation; I'll send a patch for that.
> 
>     Can you show the output of "ovs-vsctl list controller"?
> 