[ovs-discuss] Issue with setting ovs in a vm

2018-03-14 Thread Ashish Karki
Hi,
We are creating an OVS bridge and adding a port to it inside a VM. Usually
when we add a port to an OVS bridge, the MAC of the port gets attached to
the bridge, but this is not the case when done inside a VM: the bridge
keeps its random MAC instead.
For reference, we are trying to deploy OpenStack via kolla-ansible inside
a VM, and br-ex keeps getting a random MAC instead of the MAC of the
neutron external interface.
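
For reference, a minimal sketch of what we do (eth1 here stands in for the
real interface name):

  ovs-vsctl add-br br-ex
  ovs-vsctl add-port br-ex eth1
  ip link show br-ex    # on bare metal this shows eth1's MAC

Pinning the MAC by hand works around it, but we would rather understand
why the inheritance does not happen inside the VM:

  # workaround: copy the port's MAC onto the bridge explicitly
  ovs-vsctl set bridge br-ex other-config:hwaddr="$(cat /sys/class/net/eth1/address)"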
Any help would be appreciated.

Br,
Ashish Karki


Re: [ovs-discuss] Way to get average time spent

2018-03-14 Thread Krish
Justin,

Thanks for telling me about the perf tool. I think it's a really good tool
for finding hotspots, but I don't know how to test the OVS caches with it.

Greg,
Can you please shed some light on this? Is perf the right tool for
measuring the time spent in the EMC, the datapath classifier, and packet
header extraction?
If yes, please tell me how to do that. Which part should I profile?
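
To make the question concrete, here is the kind of invocation I am
imagining (emc_lookup, dpcls_lookup, and miniflow_extract are my guesses
at the relevant function names from skimming the source; they may be
inlined or named differently in other versions):

  # sample the running daemon for 10 seconds, with call graphs
  perf record -g -p $(pidof ovs-vswitchd) -- sleep 10
  # then look for the cache and header-extraction functions in the report
  perf report --stdio | grep -E 'emc_lookup|dpcls_lookup|miniflow_extract'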


Thanks and Regards


On Wed, Mar 14, 2018 at 4:31 PM, Krish  wrote:

> On Tue, Mar 13, 2018 at 2:08 AM, Justin Pettit  wrote:
>
>> Greg (cc'd) could probably provide you a better answer, but I suspect the
>> perf tool is a good place to start.
>>
>> --Justin
>>
>>
>> > On Mar 12, 2018, at 3:24 AM, Krish  wrote:
>> >
>> > Hi users,
>> >
>> > I need to get the average time spent in packet extraction, then in the
>> > first-level cache and the second-level cache. I don't know how to
>> > measure that.
>> >
>> > Can anyone please help me by pointing me in the right direction?
>> > I think I need to modify some code. Please guide me if I am right or
>> > wrong.
>> >
>> > Looking forward to a response from someone.
>> >
>> > Thanks
>>
>


Re: [ovs-discuss] Discrepancy between ofproto/trace output and dpctl dump-flows output

2018-03-14 Thread Amar Padmanabhan
Awesome, thanks Ben!
- Amar

On 3/14/18, 4:49 PM, "Ben Pfaff"  wrote:

I finally fixed this upstream in Git, and the fix will be in the next
releases on the 2.8 and 2.9 branches.


Re: [ovs-discuss] Discrepancy between ofproto/trace output and dpctl dump-flows output

2018-03-14 Thread Ben Pfaff
I finally fixed this upstream in Git, and the fix will be in the next
releases on the 2.8 and 2.9 branches.

On Thu, Dec 07, 2017 at 06:06:22AM +0000, Amar Padmanabhan wrote:
> Yup thanks we are unblocked!
> - Amar
> 
> On 12/6/17, 12:36 PM, "Ben Pfaff"  wrote:
> 
> OK.
> 
> In the meantime, you can add "eth()" to the flows you're tracing to get
> the expected results.
> 
> On Wed, Dec 06, 2017 at 07:02:42PM +0000, Amar Padmanabhan wrote:
> > Thanks Ben for taking a look,
> > Regards,
> > - Amar
> > 
> > On 12/6/17, 10:17 AM, "Ben Pfaff"  wrote:
> > 
> > On Wed, Dec 06, 2017 at 05:58:41AM +0000, Amar Padmanabhan wrote:
> > > We are debugging a setup and are seeing something that we are finding it hard to explain.
> > > 
> > > 1 - Here is the ovs-dpctl dump-flows output.
> > > recirc_id(0),in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no), packets:550, bytes:53900, used:0.364s, actions:userspace(pid=3276048382,slow_path(controller))
> > 
> > OK, the above datapath flow just says that packets in this flow have to
> > be handled in the userspace slow path because 
> > 
> > > 2 - We are now trying to trace this flow and are not seeing the output to controller flow getting hit in the trace.
> > > sudo ovs-appctl ofproto/trace "in_port(3),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)"
> > > Flow: packet_type=(1,0x800),in_port=32770,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> > > bridge("gtp_br0")
> > > -
> > > 0. priority 0 resubmit(,1)
> > > 1. in_port=32770, priority 10 set_field:0->metadata resubmit(,2)
> > > 2. priority 0 resubmit(,3)
> > > 3. No match. drop
> > > Final flow: unchanged
> > > Megaflow: recirc_id=0,packet_type=(1,0x800),in_port=32770,nw_frag=no
> > > Datapath actions: drop ---> Why isn't the output to controller flow getting hit?
> > > 
> > > 
> > > 3 - We are also seeing the flow counts go up for the output to controller flow per the ofctl dump-flows output (pasting relevant flows)
> > > 
> > > NXST_FLOW reply (xid=0x4): cookie=0x0, duration=1482.245s, table=0, n_packets=1408, n_bytes=148464, idle_age=1, priority=0 actions=resubmit(,1)
> > > cookie=0x0, duration=1482.244s, table=1, n_packets=1283, n_bytes=123662, idle_age=1, priority=10,in_port=32770 actions=set_field:0->metadata,resubmit(,2)
> > > cookie=0x0, duration=1482.244s, table=2, n_packets=1247, n_bytes=122150, idle_age=1, priority=0 actions=resubmit(,3)
> > > cookie=0x0, duration=1482.245s, table=3, n_packets=1245, n_bytes=122010, idle_age=1, priority=0,ip,metadata=0,nw_dst=192.168.128.0/24 actions=CONTROLLER:65535 ---> Notice that this is getting hit as well
> > 
> > OK, I spent a few minutes to mock up your environment (thanks for all
> > the details!) and experiment.  It looks like the problem is actually a
> > mismatch between the formatting and parsing code for datapath flows.  If
> > I run:
> > 
> > ovs-appctl ofproto/trace "in_port(3),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)"
> > 
> > that is, add "eth()" to the datapath flow, then I get the expected
> > results:
> > 
> > $ ovs-appctl ofproto/trace "in_port(1),eth(),eth_type(0x0800),ipv4(dst=192.168.128.0/255.255.255.0,frag=no)"
> > Flow: ip,in_port=32770,vlan_tci=0x0000,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,nw_src=0.0.0.0,nw_dst=192.168.128.0,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=0
> > 
> > bridge("br0")
> > -
> >  0. priority 0
> >     resubmit(,1)
> >  1. in_port=32770, priority 10
> >     load:0->OXM_OF_METADATA[]
> >     resubmit(,2)
> >  2. priority 0
> >     resubmit(,3)
> >  3. ip,metadata=0,nw_dst=192.168.128.0/24, priority 0
> >     CONTROLLER:65535
> > 
> > Final flow: unchanged
> > Megaflow: recirc_id=0,eth,ip,in_port=32770,nw_dst=192.168.128.0/24,nw_frag=no
> > Datapath actions: drop
> > This flow is handled by the userspace slow path because it:
> > - Sends "packet-in" messages to the OpenFlow controller.
> > 
> > Clearly that's a bug.  I'll see what I can do about it.
> > 
> > > Also, whoever improved the output of ofproto/trace: thanks a ton!
> > 
> > That was me :-)  You're welcome.
> > 
> > 
> 
> 

[ovs-discuss] Fwd: Including bfd status for tunnel endpoints on ovs-vsctl show

2018-03-14 Thread Eran Kuris
Ben,
Please take a look at my question to Miguel and at his comment and
question for you.

Thanks

-- Forwarded message --
From: Miguel Angel Ajo Pelayo 
Date: Sun, Mar 11, 2018 at 11:23 AM
Subject: Re: [ovs-discuss] Including bfd status for tunnel endpoints on
ovs-vsctl show
To: Eran Kuris 


No, I agree it would be great to have something like that, but it's not
straightforward, because the code dumps tables and dictionaries from the
database as they are, and it is not specific to ovs-vsctl: it's abstracted
for all the command-line utilities in one place. Very nice to maintain,
but of course it's a trade-off in flexibility.

@ben do you have any idea how to include the host names on tunnel
endpoints (not necessarily in the bfd_status, of course)?
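
(For what it's worth, the raw status can already be pulled per interface
with the generic db commands; the interface name below is just the one
from Eran's example output:

  ovs-vsctl get Interface ovn-7e8b8a-0 bfd_status
  ovs-vsctl --columns=name,bfd_status list Interface

what this thread is about is folding that into the "show" summary.)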

On Sun, Mar 11, 2018, 7:56 AM Eran Kuris  wrote:

> Hi Miguel,
>
> Is it possible to add the node name next to the node IP address?
> Look at the example:
> $ sudo utilities/ovs-vsctl show
> a4cf68a7-07e9-4ed4-a317-016cb610c821
> Bridge br-int
> fail_mode: secure
> Port "ovn-7e8b8a-0"
> Interface "ovn-7e8b8a-0"
> type: geneve
> options: {csum="true", key=flow, remote_ip="172.16.0.5" - *controller-0*}
>
> Port "ovn-0c37dd-0"
> Interface "ovn-0c37dd-0"
> type: geneve
> options: {csum="true", key=flow, remote_ip="172.16.0.7" - *controller-1*}
>
> ERAN KURIS
>
> NEUTRON senior Quality ENGINEER
>
> Red Hat Israel 
>
> Yerushalayim Rd 34, Ra'anana
>
> eku...@redhat.com
> 
>
> On Fri, Mar 9, 2018 at 7:43 PM, Miguel Angel Ajo Pelayo <
> majop...@redhat.com> wrote:
>
>> OK, I have modified it to just show bfd_status. For example, in a
>> 3-controller + 1-compute environment, with a router and a VM on the
>> compute:
>>
>> [heat-admin@overcloud-controller-2 ovs]$ sudo utilities/ovs-vsctl show
>> a4cf68a7-07e9-4ed4-a317-016cb610c821
>> Bridge br-int
>> fail_mode: secure
>> Port "ovn-7e8b8a-0"
>> Interface "ovn-7e8b8a-0"
>> type: geneve
>> options: {csum="true", key=flow, remote_ip="172.16.0.5"}
>> bfd_status: {diagnostic="No Diagnostic", flap_count="1",
>> forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up,
>> state=up}
>> Port "ovn-0c37dd-0"
>> Interface "ovn-0c37dd-0"
>> type: geneve
>> options: {csum="true", key=flow, remote_ip="172.16.0.7"}
>> bfd_status: {diagnostic="No Diagnostic", flap_count="1",
>> forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up,
>> state=up}
>> Port br-int
>> Interface br-int
>> type: internal
>> Port "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-
>> f4fb478e18c4"
>> Interface "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-
>> f4fb478e18c4"
>> type: patch
>> options: {peer="patch-provnet-b47ffac1-
>> 1704-4e97-85ef-f4fb478e18c4-to-br-int"}
>> Port "ovn-9f2335-0"
>> Interface "ovn-9f2335-0"
>> type: geneve
>> options: {csum="true", key=flow, remote_ip="172.16.0.11"}
>> bfd_status: {diagnostic="No Diagnostic", flap_count="1",
>> forwarding="true", remote_diagnostic="No Diagnostic", remote_state=up,
>> state=up}
>> Bridge br-ex
>> fail_mode: standalone
>> Port br-ex
>> Interface br-ex
>> ...
>>
>>
>> If I admin-disable that single router with "neutron router-update
>> router1 --admin-state-up false" (BFD should be disabled all around
>> because it's no longer necessary, and ovs-vswitchd takes care of
>> clearing bfd_status):
>>
>> [heat-admin@overcloud-controller-2 ovs]$ sudo utilities/ovs-vsctl show
>> a4cf68a7-07e9-4ed4-a317-016cb610c821
>> Bridge br-int
>> fail_mode: secure
>> Port "ovn-7e8b8a-0"
>> Interface "ovn-7e8b8a-0"
>> type: geneve
>> options: {csum="true", key=flow, remote_ip="172.16.0.5"}
>> Port "ovn-0c37dd-0"
>> Interface "ovn-0c37dd-0"
>> type: geneve
>> options: {csum="true", key=flow, remote_ip="172.16.0.7"}
>> Port br-int
>> Interface br-int
>> type: internal
>> Port "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-
>> f4fb478e18c4"
>> Interface "patch-br-int-to-provnet-b47ffac1-1704-4e97-85ef-
>> f4fb478e18c4"
>> type: patch
>> options: {peer="patch-provnet-b47ffac1-
>> 1704-4e97-85ef-f4fb478e18c4-to-br-int"}
>> Port "ovn-9f2335-0"
>> Interface "ovn-9f2335-0"
>> type: geneve
>> options: {csum="true", key=flow, remote_ip="172.16.0.11"}
>> 

Re: [ovs-discuss] ovn-controller periodically reporting status

2018-03-14 Thread Ben Pfaff
On Tue, Mar 13, 2018 at 05:46:41PM +0530, Anil Venkata wrote:
> In the OpenStack neutron reference implementation, all agents
> periodically report their status to the neutron server. Similarly, in an
> OpenStack OVN-based deployment, we want ovn-controller to periodically
> report its status to the neutron server.

What kind of monitoring do you need?  If it's just a matter of finding
out that the controllers are still alive, then the existing "nb_cfg"
column in Chassis can suffice.  When neutron wants to check on the
controllers, it can increment nb_cfg in NB_Global and then see which
controllers update nb_cfg in Chassis in response.
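
A rough sketch of that check from the command line (the value 42 is
arbitrary; neutron would presumably do the equivalent through its OVSDB
library):

  # bump the sequence number in the northbound database
  ovn-nbctl set NB_Global . nb_cfg=42
  # after giving the controllers a moment, see which chassis caught up
  ovn-sbctl --columns=name,nb_cfg list Chassis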


Re: [ovs-discuss] raft ovsdb clustering

2018-03-14 Thread aginwala
Hi Numan,

I tried on new nodes (kernel: 4.4.0-104-generic, Ubuntu 16.04) with a fresh
installation, and it worked fine for both the sb and nb dbs. It seems there
was some kernel issue on the previous nodes when I re-installed the raft
patch, as I was running a different OVS version on those nodes before.


For the 2 HVs, I now set ovn-remote="tcp:10.169.125.152:6642,
tcp:10.169.125.131:6642, tcp:10.148.181.162:6642", started ovn-controller,
and it works fine.


Did some failover testing by rebooting/killing the leader (10.169.125.152)
and bringing it back up, and it works as expected. Nothing weird noted so
far.

# check-cluster gives the data below on one of the nodes (10.148.181.162)
after the leader failure:

ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db
ovsdb-tool: leader /etc/openvswitch/ovnsb_db.db for term 2 has log entries
only up to index 18446744073709551615, but index 9 was committed in a
previous term (e.g. by /etc/openvswitch/ovnsb_db.db)


For check-cluster, are we planning to add more output in upcoming versions,
e.g. showing which node is the active leader?
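
(If there is already an appctl command along these lines, a pointer would
help; the ctl-socket path below is a guess based on my install:

  ovs-appctl -t /var/run/openvswitch/ovnsb_db.ctl cluster/status OVN_Southbound

i.e. something that reports the server's role (leader/follower), term, and
connections.)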


Thanks a ton for helping sort this out. I think the patch looks good to be
merged once Justin's comments are addressed, along with the man page
details for ovsdb-tool.


I will do some more crash testing of the cluster along with the scale test,
and will keep you posted if anything unexpected is noted.



Regards,



On Tue, Mar 13, 2018 at 11:07 PM, Numan Siddique 
wrote:

>
>
> On Wed, Mar 14, 2018 at 7:51 AM, aginwala  wrote:
>
>> Sure.
>>
>> To add on, I also ran it for the nb db using a different port, and Node 2
>> crashes with the same error:
>> # Node 2
>> /usr/share/openvswitch/scripts/ovn-ctl --db-nb-addr=10.99.152.138
>> --db-nb-port=6641 --db-nb-cluster-remote-addr="tcp:10.99.152.148:6645"
>> --db-nb-cluster-local-addr="tcp:10.99.152.138:6645" start_nb_ovsdb
>> ovsdb-server: ovsdb error: /etc/openvswitch/ovnnb_db.db: cannot identify
>> file type
>>
>>
>>
> Hi Aliasgar,
>
> It worked for me. Can you delete the old db files in /etc/openvswitch/ and
> try running the commands again?
>
> Below are the commands I ran in my setup.
>
> Node 1
> ---
> sudo /usr/share/openvswitch/scripts/ovn-ctl  --db-sb-addr=192.168.121.91
> --db-sb-port=6642 --db-sb-create-insecure-remote=yes
> --db-sb-cluster-local-addr=tcp:192.168.121.91:6644 start_sb_ovsdb
>
> Node 2
> -
> sudo /usr/share/openvswitch/scripts/ovn-ctl  --db-sb-addr=192.168.121.87
> --db-sb-port=6642 --db-sb-create-insecure-remote=yes
> --db-sb-cluster-local-addr="tcp:192.168.121.87:6644"
> --db-sb-cluster-remote-addr="tcp:192.168.121.91:6644"  start_sb_ovsdb
>
> Node 3
> -
> sudo /usr/share/openvswitch/scripts/ovn-ctl  --db-sb-addr=192.168.121.78
> --db-sb-port=6642 --db-sb-create-insecure-remote=yes
> --db-sb-cluster-local-addr="tcp:192.168.121.78:6644"
> --db-sb-cluster-remote-addr="tcp:192.168.121.91:6644"  start_sb_ovsdb
>
>
>
> Thanks
> Numan
>
>
>
>
>
>>
>> On Tue, Mar 13, 2018 at 9:40 AM, Numan Siddique 
>> wrote:
>>
>>>
>>>
>>> On Tue, Mar 13, 2018 at 9:46 PM, aginwala  wrote:
>>>
 Thanks Numan for the response.

 There is no start_cluster_sb_ovsdb command in the source code either. Is
 that in a separate commit somewhere? Hence I used start_sb_ovsdb,
 which I think may not be the right choice?

>>>
>>> Sorry, I meant start_sb_ovsdb. Strange that it didn't work for you. Let
>>> me try it out again and update this thread.
>>>
>>> Thanks
>>> Numan
>>>
>>>

 # Node1  came up as expected.
 ovn-ctl --db-sb-addr=10.99.152.148 --db-sb-port=6642
 --db-sb-create-insecure-remote=yes
 --db-sb-cluster-local-addr="tcp:10.99.152.148:6644" start_sb_ovsdb

 # verifying it's a clustered db with ovsdb-tool db-local-address
 /etc/openvswitch/ovnsb_db.db
 tcp:10.99.152.148:6644
 # ovn-sbctl show works fine and chassis are being populated correctly.

 # Node 2 fails with an error:
 /usr/share/openvswitch/scripts/ovn-ctl --db-sb-addr=10.99.152.138
 --db-sb-port=6642 --db-sb-create-insecure-remote=yes
 --db-sb-cluster-remote-addr="tcp:10.99.152.148:6644"
 --db-sb-cluster-local-addr="tcp:10.99.152.138:6644" start_sb_ovsdb
 ovsdb-server: ovsdb error: /etc/openvswitch/ovnsb_db.db: cannot
 identify file type

 # So I started the sb db the usual way using start_ovsdb just to get
 the db file created, killed the sb pid, and re-ran the command, which gave
 the actual error: it complains about the join-cluster command that is
 called internally
 /usr/share/openvswitch/scripts/ovn-ctl --db-sb-addr=10.99.152.138
 --db-sb-port=6642 --db-sb-create-insecure-remote=yes
 --db-sb-cluster-remote-addr="tcp:10.99.152.148:6644"
 --db-sb-cluster-local-addr="tcp:10.99.152.138:6644" start_sb_ovsdb
 ovsdb-tool: /etc/openvswitch/ovnsb_db.db: not a clustered database
  * Backing up database to