Re: [ovs-discuss] raft ovsdb clustering

2018-03-13 Thread aginwala
Sure.

To add on, I also ran the same setup for the NB DB using a different port,
and Node 2 crashes with the same error:
# Node 2
/usr/share/openvswitch/scripts/ovn-ctl --db-nb-addr=10.99.152.138
--db-nb-port=6641 --db-nb-cluster-remote-addr="tcp:10.99.152.148:6645"
--db-nb-cluster-local-addr="tcp:10.99.152.138:6645" start_nb_ovsdb
ovsdb-server: ovsdb error: /etc/openvswitch/ovnnb_db.db: cannot identify
file type
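
In case it matters, here is what I plan to try next on Node 2 (this assumes
the existing ovnnb_db.db is an empty or stale standalone file that is safe to
throw away; please correct me if that is the wrong approach):

# Move the stale NB database aside so the join can create a clustered one
mv /etc/openvswitch/ovnnb_db.db /etc/openvswitch/ovnnb_db.db.bak
# Re-run the join on Node 2
/usr/share/openvswitch/scripts/ovn-ctl --db-nb-addr=10.99.152.138 \
    --db-nb-port=6641 --db-nb-cluster-remote-addr="tcp:10.99.152.148:6645" \
    --db-nb-cluster-local-addr="tcp:10.99.152.138:6645" start_nb_ovsdb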



On Tue, Mar 13, 2018 at 9:40 AM, Numan Siddique  wrote:

>
>
> On Tue, Mar 13, 2018 at 9:46 PM, aginwala  wrote:
>
>> Thanks Numan for the response.
>>
>> There is no start_cluster_sb_ovsdb command in the source code either. Is
>> that in a separate commit somewhere? Hence I used start_sb_ovsdb, which I
>> suspect is not the right choice.
>>
>
> Sorry, I meant start_sb_ovsdb. Strange that it didn't work for you. Let me
> try it out again and update this thread.
>
> Thanks
> Numan
>
>
>>
>> # Node 1 came up as expected.
>> ovn-ctl --db-sb-addr=10.99.152.148 --db-sb-port=6642
>> --db-sb-create-insecure-remote=yes --db-sb-cluster-local-addr="tcp:
>> 10.99.152.148:6644" start_sb_ovsdb
>>
>> # verifying it's a clustered db with ovsdb-tool db-local-address
>> /etc/openvswitch/ovnsb_db.db
>> tcp:10.99.152.148:6644
>> # ovn-sbctl show works fine and chassis are being populated correctly.
>>
>> #Node 2 fails with error:
>> /usr/share/openvswitch/scripts/ovn-ctl --db-sb-addr=10.99.152.138
>> --db-sb-port=6642 --db-sb-create-insecure-remote=yes
>> --db-sb-cluster-remote-addr="tcp:10.99.152.148:6644"
>> --db-sb-cluster-local-addr="tcp:10.99.152.138:6644" start_sb_ovsdb
>> ovsdb-server: ovsdb error: /etc/openvswitch/ovnsb_db.db: cannot identify
>> file type
>>
>> # So I started the SB DB the usual way using start_ovsdb, just to get the
>> db file created, then killed the SB pid and re-ran the command. This time
>> it gave the actual error: it complains about the join-cluster command that
>> is invoked internally:
>> /usr/share/openvswitch/scripts/ovn-ctl --db-sb-addr=10.99.152.138
>> --db-sb-port=6642 --db-sb-create-insecure-remote=yes
>> --db-sb-cluster-remote-addr="tcp:10.99.152.148:6644"
>> --db-sb-cluster-local-addr="tcp:10.99.152.138:6644" start_sb_ovsdb
>> ovsdb-tool: /etc/openvswitch/ovnsb_db.db: not a clustered database
>>  * Backing up database to /etc/openvswitch/ovnsb_db.db.backup1.15.0-70426956
>> ovsdb-tool: 'join-cluster' command requires at least 4 arguments
>>  * Creating cluster database /etc/openvswitch/ovnsb_db.db from existing
>> one
>>
>>
>> # Based on the above error I killed the SB DB pid again, tried to create a
>> local cluster on the node, and then re-ran the join operation as per the
>> source code function:
>> ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
>> 10.99.152.138:6644 tcp:10.99.152.148:6644
>> which still complains:
>> ovsdb-tool: I/O error: /etc/openvswitch/ovnsb_db.db: create failed (File
>> exists)
>>
>>
>> # Node 3: I did not try it, as I am assuming the same failure as on Node 2.
>>
>>
>> Let me know if you need any further details.
>>
>>
>> On Tue, Mar 13, 2018 at 3:08 AM, Numan Siddique 
>> wrote:
>>
>>> Hi Aliasgar,
>>>
>>> On Tue, Mar 13, 2018 at 7:11 AM, aginwala  wrote:
>>>
 Hi Ben/Numan:

 I am trying to set up a 3-node southbound db cluster using the raft10
 patches that are in review.

 # Node 1 create-cluster
 ovsdb-tool create-cluster /etc/openvswitch/ovnsb_db.db
 /root/ovs-reviews/ovn/ovn-sb.ovsschema tcp:10.99.152.148:6642

>>>
>>> A different port is used for RAFT. So you have to choose another port
>>> like 6644 for example.
>>>
>>

 # Node 2
 ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
 10.99.152.138:6642 tcp:10.99.152.148:6642 --cid
 5dfcb678-bb1d-4377-b02d-a380edec2982

 #Node 3
 ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
 10.99.152.101:6642 tcp:10.99.152.138:6642 tcp:10.99.152.148:6642 --cid
 5dfcb678-bb1d-4377-b02d-a380edec2982

 # ovn remote is set to all 3 nodes
 external_ids:ovn-remote="tcp:10.99.152.148:6642, tcp:10.99.152.138:6642,
 tcp:10.99.152.101:6642"

>>>
 # Starting the SB DB on node 1 using the command below:

 ovsdb-server --detach --monitor -vconsole:off -vraft -vjsonrpc
 --log-file=/var/log/openvswitch/ovsdb-server-sb.log
 --pidfile=/var/run/openvswitch/ovnsb_db.pid
 --remote=db:OVN_Southbound,SB_Global,connections
 --unixctl=ovnsb_db.ctl --private-key=db:OVN_Southbound,SSL,private_key
 --certificate=db:OVN_Southbound,SSL,certificate
 --ca-cert=db:OVN_Southbound,SSL,ca_cert 
 --ssl-protocols=db:OVN_Southbound,SSL,ssl_protocols
 --ssl-ciphers=db:OVN_Southbound,SSL,ssl_ciphers
 --remote=punix:/var/run/openvswitch/ovnsb_db.sock
 /etc/openvswitch/ovnsb_db.db

 # check-cluster is returning nothing
 ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db

Re: [ovs-discuss] raft ovsdb clustering

2018-03-13 Thread Numan Siddique
On Tue, Mar 13, 2018 at 9:46 PM, aginwala  wrote:

> Thanks Numan for the response.
>
> There is no start_cluster_sb_ovsdb command in the source code either. Is
> that in a separate commit somewhere? Hence I used start_sb_ovsdb, which I
> suspect is not the right choice.
>

Sorry, I meant start_sb_ovsdb. Strange that it didn't work for you. Let me
try it out again and update this thread.

Thanks
Numan


>
> # Node 1 came up as expected.
> ovn-ctl --db-sb-addr=10.99.152.148 --db-sb-port=6642
> --db-sb-create-insecure-remote=yes --db-sb-cluster-local-addr="tcp:
> 10.99.152.148:6644" start_sb_ovsdb
>
> # verifying it's a clustered db with ovsdb-tool db-local-address
> /etc/openvswitch/ovnsb_db.db
> tcp:10.99.152.148:6644
> # ovn-sbctl show works fine and chassis are being populated correctly.
>
> #Node 2 fails with error:
> /usr/share/openvswitch/scripts/ovn-ctl --db-sb-addr=10.99.152.138
> --db-sb-port=6642 --db-sb-create-insecure-remote=yes
> --db-sb-cluster-remote-addr="tcp:10.99.152.148:6644"
> --db-sb-cluster-local-addr="tcp:10.99.152.138:6644" start_sb_ovsdb
> ovsdb-server: ovsdb error: /etc/openvswitch/ovnsb_db.db: cannot identify
> file type
>
> # So I started the SB DB the usual way using start_ovsdb, just to get the
> db file created, then killed the SB pid and re-ran the command. This time
> it gave the actual error: it complains about the join-cluster command that
> is invoked internally:
> /usr/share/openvswitch/scripts/ovn-ctl --db-sb-addr=10.99.152.138
> --db-sb-port=6642 --db-sb-create-insecure-remote=yes
> --db-sb-cluster-remote-addr="tcp:10.99.152.148:6644"
> --db-sb-cluster-local-addr="tcp:10.99.152.138:6644" start_sb_ovsdb
> ovsdb-tool: /etc/openvswitch/ovnsb_db.db: not a clustered database
>  * Backing up database to /etc/openvswitch/ovnsb_db.db.backup1.15.0-70426956
> ovsdb-tool: 'join-cluster' command requires at least 4 arguments
>  * Creating cluster database /etc/openvswitch/ovnsb_db.db from existing one
>
>
> # Based on the above error I killed the SB DB pid again, tried to create a
> local cluster on the node, and then re-ran the join operation as per the
> source code function:
> ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
> 10.99.152.138:6644 tcp:10.99.152.148:6644
> which still complains:
> ovsdb-tool: I/O error: /etc/openvswitch/ovnsb_db.db: create failed (File
> exists)
>
>
> # Node 3: I did not try it, as I am assuming the same failure as on Node 2.
>
>
> Let me know if you need any further details.
>
>
> On Tue, Mar 13, 2018 at 3:08 AM, Numan Siddique 
> wrote:
>
>> Hi Aliasgar,
>>
>> On Tue, Mar 13, 2018 at 7:11 AM, aginwala  wrote:
>>
>>> Hi Ben/Numan:
>>>
>>> I am trying to set up a 3-node southbound db cluster using the raft10
>>> patches that are in review.
>>>
>>> # Node 1 create-cluster
>>> ovsdb-tool create-cluster /etc/openvswitch/ovnsb_db.db
>>> /root/ovs-reviews/ovn/ovn-sb.ovsschema tcp:10.99.152.148:6642
>>>
>>
>> A different port is used for RAFT. So you have to choose another port
>> like 6644 for example.
>>
>
>>>
>>> # Node 2
>>> ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
>>> 10.99.152.138:6642 tcp:10.99.152.148:6642 --cid
>>> 5dfcb678-bb1d-4377-b02d-a380edec2982
>>>
>>> #Node 3
>>> ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
>>> 10.99.152.101:6642 tcp:10.99.152.138:6642 tcp:10.99.152.148:6642 --cid
>>> 5dfcb678-bb1d-4377-b02d-a380edec2982
>>>
>>> # ovn remote is set to all 3 nodes
>>> external_ids:ovn-remote="tcp:10.99.152.148:6642, tcp:10.99.152.138:6642,
>>> tcp:10.99.152.101:6642"
>>>
>>
>>> # Starting the SB DB on node 1 using the command below:
>>>
>>> ovsdb-server --detach --monitor -vconsole:off -vraft -vjsonrpc
>>> --log-file=/var/log/openvswitch/ovsdb-server-sb.log
>>> --pidfile=/var/run/openvswitch/ovnsb_db.pid
>>> --remote=db:OVN_Southbound,SB_Global,connections --unixctl=ovnsb_db.ctl
>>> --private-key=db:OVN_Southbound,SSL,private_key
>>> --certificate=db:OVN_Southbound,SSL,certificate
>>> --ca-cert=db:OVN_Southbound,SSL,ca_cert 
>>> --ssl-protocols=db:OVN_Southbound,SSL,ssl_protocols
>>> --ssl-ciphers=db:OVN_Southbound,SSL,ssl_ciphers
>>> --remote=punix:/var/run/openvswitch/ovnsb_db.sock
>>> /etc/openvswitch/ovnsb_db.db
>>>
>>> # check-cluster is returning nothing
>>> ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db
>>>
>>> # ovsdb-server-sb.log below shows the leader is elected with only one
>>> server; there are RBAC-related debug logs with RPC replies and empty
>>> params, but no errors.
>>>
>>> 2018-03-13T01:12:02Z|2|raft|DBG|server 63d1 added to configuration
>>> 2018-03-13T01:12:02Z|3|raft|INFO|term 6: starting election
>>> 2018-03-13T01:12:02Z|4|raft|INFO|term 6: elected leader by 1+ of 1
>>> servers
>>>
>>>
>>> Now starting ovsdb-server on the other cluster members fails with:
>>> ovsdb-server: ovsdb error: /etc/openvswitch/ovnsb_db.db: cannot identify
>>> file type
>>>
>>>
>>> Also noticed that 

Re: [ovs-discuss] raft ovsdb clustering

2018-03-13 Thread aginwala
Thanks Numan for the response.

There is no start_cluster_sb_ovsdb command in the source code either. Is that
in a separate commit somewhere? Hence I used start_sb_ovsdb, which I suspect
is not the right choice.

# Node 1 came up as expected.
ovn-ctl --db-sb-addr=10.99.152.148 --db-sb-port=6642
--db-sb-create-insecure-remote=yes --db-sb-cluster-local-addr="tcp:
10.99.152.148:6644" start_sb_ovsdb

# verifying it's a clustered db with ovsdb-tool db-local-address
/etc/openvswitch/ovnsb_db.db
tcp:10.99.152.148:6644
# ovn-sbctl show works fine and chassis are being populated correctly.
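
Some other checks I plan to run on Node 1 in case they help with debugging
(db-cid/db-sid are assumed to be part of the raft10 series alongside
db-local-address; db-cid should print the cluster id that the joining nodes
pass via --cid):

ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db
ovsdb-tool db-cid /etc/openvswitch/ovnsb_db.db
ovsdb-tool db-sid /etc/openvswitch/ovnsb_db.db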

#Node 2 fails with error:
/usr/share/openvswitch/scripts/ovn-ctl --db-sb-addr=10.99.152.138
--db-sb-port=6642 --db-sb-create-insecure-remote=yes
--db-sb-cluster-remote-addr="tcp:10.99.152.148:6644"
--db-sb-cluster-local-addr="tcp:10.99.152.138:6644" start_sb_ovsdb
ovsdb-server: ovsdb error: /etc/openvswitch/ovnsb_db.db: cannot identify
file type

# So I started the SB DB the usual way using start_ovsdb, just to get the
db file created, then killed the SB pid and re-ran the command. This time
it gave the actual error: it complains about the join-cluster command that
is invoked internally:
/usr/share/openvswitch/scripts/ovn-ctl --db-sb-addr=10.99.152.138
--db-sb-port=6642 --db-sb-create-insecure-remote=yes
--db-sb-cluster-remote-addr="tcp:10.99.152.148:6644"
--db-sb-cluster-local-addr="tcp:10.99.152.138:6644" start_sb_ovsdb
ovsdb-tool: /etc/openvswitch/ovnsb_db.db: not a clustered database
 * Backing up database to /etc/openvswitch/ovnsb_db.db.backup1.15.0-70426956
ovsdb-tool: 'join-cluster' command requires at least 4 arguments
 * Creating cluster database /etc/openvswitch/ovnsb_db.db from existing one


# Based on the above error I killed the SB DB pid again, tried to create a
local cluster on the node, and then re-ran the join operation as per the
source code function:
ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
10.99.152.138:6644 tcp:10.99.152.148:6644
which still complains:
ovsdb-tool: I/O error: /etc/openvswitch/ovnsb_db.db: create failed (File
exists)
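
If I am reading the errors right, join-cluster wants to create a brand-new
database file, so my next attempt will be to move the standalone file out of
the way first and give join-cluster its four required arguments (db file,
schema name, local address, and at least one remote address). This is only a
guess based on the messages above:

# Move the standalone SB database aside; join-cluster creates a fresh file
mv /etc/openvswitch/ovnsb_db.db /etc/openvswitch/ovnsb_db.db.standalone
ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound \
    tcp:10.99.152.138:6644 tcp:10.99.152.148:6644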


# Node 3: I did not try it, as I am assuming the same failure as on Node 2.


Let me know if you need any further details.

On Tue, Mar 13, 2018 at 3:08 AM, Numan Siddique  wrote:

> Hi Aliasgar,
>
> On Tue, Mar 13, 2018 at 7:11 AM, aginwala  wrote:
>
>> Hi Ben/Numan:
>>
>> I am trying to set up a 3-node southbound db cluster using the raft10
>> patches that are in review.
>>
>> # Node 1 create-cluster
>> ovsdb-tool create-cluster /etc/openvswitch/ovnsb_db.db
>> /root/ovs-reviews/ovn/ovn-sb.ovsschema tcp:10.99.152.148:6642
>>
>
> A different port is used for RAFT. So you have to choose another port like
> 6644 for example.
>

>>
>> # Node 2
>> ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
>> 10.99.152.138:6642 tcp:10.99.152.148:6642 --cid
>> 5dfcb678-bb1d-4377-b02d-a380edec2982
>>
>> #Node 3
>> ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
>> 10.99.152.101:6642 tcp:10.99.152.138:6642 tcp:10.99.152.148:6642 --cid
>> 5dfcb678-bb1d-4377-b02d-a380edec2982
>>
>> # ovn remote is set to all 3 nodes
>> external_ids:ovn-remote="tcp:10.99.152.148:6642, tcp:10.99.152.138:6642,
>> tcp:10.99.152.101:6642"
>>
>
>> # Starting the SB DB on node 1 using the command below:
>>
>> ovsdb-server --detach --monitor -vconsole:off -vraft -vjsonrpc
>> --log-file=/var/log/openvswitch/ovsdb-server-sb.log
>> --pidfile=/var/run/openvswitch/ovnsb_db.pid
>> --remote=db:OVN_Southbound,SB_Global,connections --unixctl=ovnsb_db.ctl
>> --private-key=db:OVN_Southbound,SSL,private_key
>> --certificate=db:OVN_Southbound,SSL,certificate
>> --ca-cert=db:OVN_Southbound,SSL,ca_cert 
>> --ssl-protocols=db:OVN_Southbound,SSL,ssl_protocols
>> --ssl-ciphers=db:OVN_Southbound,SSL,ssl_ciphers
>> --remote=punix:/var/run/openvswitch/ovnsb_db.sock
>> /etc/openvswitch/ovnsb_db.db
>>
>> # check-cluster is returning nothing
>> ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db
>>
>> # ovsdb-server-sb.log below shows the leader is elected with only one
>> server; there are RBAC-related debug logs with RPC replies and empty
>> params, but no errors.
>>
>> 2018-03-13T01:12:02Z|2|raft|DBG|server 63d1 added to configuration
>> 2018-03-13T01:12:02Z|3|raft|INFO|term 6: starting election
>> 2018-03-13T01:12:02Z|4|raft|INFO|term 6: elected leader by 1+ of 1
>> servers
>>
>>
>> Now starting ovsdb-server on the other cluster members fails with:
>> ovsdb-server: ovsdb error: /etc/openvswitch/ovnsb_db.db: cannot identify
>> file type
>>
>>
>> Also noticed that the ovsdb-tool man page is missing the cluster details.
>> You might want to address that in the same patch or a separate one.
>>
>>
>> Please advise on what is missing here for running ovn-sbctl show, as this
>> command hangs.
>>
>>
>>
>
> I think you can use the ovn-ctl command "start_cluster_sb_ovsdb" for your
> testing (at least for now).
>
> For your setup, I think you can start the cluster as

[ovs-discuss] ovn-controller periodically reporting status

2018-03-13 Thread Anil Venkata
In the OpenStack Neutron reference implementation, all agents periodically
report their status to the Neutron server. Similarly, in an OpenStack OVN
based deployment, we want ovn-controller to periodically report its status
to the Neutron server.

We can follow two approaches for this:

1) ovn-controller periodically writes a timestamp (along with its name and
type) into the SBDB Chassis table:

  smap_add(&ext_ids, "OVN_CONTROLER_TYPE:ovn-controller1", timestamp);

  sbrec_chassis_set_external_ids(chassis_rec, &ext_ids);

Then networking-ovn watches and processes the timestamp and updates the
Neutron DB.
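
A rough sketch of what this could look like in ovn-controller (the function
and variable names are illustrative only, not from an existing patch; it
assumes an SB IDL transaction is open when the function is called):

#include <stdio.h>
#include "smap.h"
#include "timeval.h"
#include "ovn/lib/ovn-sb-idl.h"   /* generated SB IDL; include path may differ */

/* Illustrative sketch only: stamp the Chassis row so that networking-ovn can
 * read the timestamp back from external_ids. */
static void
chassis_report_status(const struct sbrec_chassis *chassis_rec)
{
    struct smap ext_ids;
    char ts[32];

    /* Wall-clock time in milliseconds, stored as a string value. */
    snprintf(ts, sizeof ts, "%lld", (long long) time_wall_msec());

    smap_clone(&ext_ids, &chassis_rec->external_ids);
    smap_replace(&ext_ids, "OVN_CONTROLER_TYPE:ovn-controller1", ts);
    sbrec_chassis_set_external_ids(chassis_rec, &ext_ids);
    smap_destroy(&ext_ids);
}

networking-ovn would then compare the stored timestamp against the current
time to decide whether the controller is still alive.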

2) Alternatively, use OVSDB server monitoring

  As ovn-controller is a client of the OVSDB server (for the SBDB), the OVSDB
server periodically monitors this connection and updates the status in the
"Connection" table. But when the connection method is inbound (e.g. ptcp or
punix), it updates only "n_connections" in the status field and doesn't write
per-connection details into the OVSDB. Pros and cons of this approach:

Pros:

Using the existing ovsdb-server monitoring (no need to spawn a thread in
ovn-controller for reporting the timestamp).

Cons:

a) ovsdb-server will only have the remote IP address and port for the
connection. How will this information be used to identify the remote OVSDB
client (i.e. ovn-controller)?

One approach is that the OVSDB client (ovn-controller), after creating the
connection, adds its IP address and port to a new table in the SBDB. Then
ovsdb-server can search this table by the connection's IP address and port
and update the connection status (only if the status has changed) in the
resulting row. networking-ovn can watch this table and update the Neutron DB
accordingly. This requires changes in all OVSDB clients and the OVSDB server,
though the requirement is only for ovn-controller.

b) If a deployment wants to disable monitoring, it can set "inactivity_probe"
to 0, and then we can't have status reporting. This tightly couples status
reporting to inactivity_probe.
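
For reference, the connection status that ovsdb-server already maintains can
be inspected with the generic list command; for an inbound remote (ptcp or
punix) this is where only n_connections shows up today:

ovn-sbctl list Connection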

Please suggest which approach would be better.

Note: I have proposed a spec [1] in networking-ovn for this. Reviews can be
helpful :)


Thanks

Anil

[1]
https://review.openstack.org/#/c/552447/1/doc/source/contributor/design/status_reporting.rst


Re: [ovs-discuss] raft ovsdb clustering

2018-03-13 Thread Numan Siddique
Hi Aliasgar,

On Tue, Mar 13, 2018 at 7:11 AM, aginwala  wrote:

> Hi Ben/Numan:
>
> I am trying to set up a 3-node southbound db cluster using the raft10
> patches that are in review.
>
> # Node 1 create-cluster
> ovsdb-tool create-cluster /etc/openvswitch/ovnsb_db.db
> /root/ovs-reviews/ovn/ovn-sb.ovsschema tcp:10.99.152.148:6642
>

A different port is used for RAFT. So you have to choose another port like
6644 for example.
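
i.e. something along these lines, keeping 6642 for the ovsdb clients and 6644
for RAFT (this is just your original command with the RAFT port swapped in):

ovsdb-tool create-cluster /etc/openvswitch/ovnsb_db.db \
    /root/ovs-reviews/ovn/ovn-sb.ovsschema tcp:10.99.152.148:6644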

>
>
> # Node 2
> ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
> 10.99.152.138:6642 tcp:10.99.152.148:6642 --cid 5dfcb678-bb1d-4377-b02d-
> a380edec2982
>
> #Node 3
> ovsdb-tool join-cluster /etc/openvswitch/ovnsb_db.db OVN_Southbound tcp:
> 10.99.152.101:6642 tcp:10.99.152.138:6642 tcp:10.99.152.148:6642 --cid
> 5dfcb678-bb1d-4377-b02d-a380edec2982
>
> # ovn remote is set to all 3 nodes
> external_ids:ovn-remote="tcp:10.99.152.148:6642, tcp:10.99.152.138:6642,
> tcp:10.99.152.101:6642"
>

> # Starting the SB DB on node 1 using the command below:
>
> ovsdb-server --detach --monitor -vconsole:off -vraft -vjsonrpc
> --log-file=/var/log/openvswitch/ovsdb-server-sb.log 
> --pidfile=/var/run/openvswitch/ovnsb_db.pid
> --remote=db:OVN_Southbound,SB_Global,connections --unixctl=ovnsb_db.ctl
> --private-key=db:OVN_Southbound,SSL,private_key 
> --certificate=db:OVN_Southbound,SSL,certificate
> --ca-cert=db:OVN_Southbound,SSL,ca_cert 
> --ssl-protocols=db:OVN_Southbound,SSL,ssl_protocols
> --ssl-ciphers=db:OVN_Southbound,SSL,ssl_ciphers 
> --remote=punix:/var/run/openvswitch/ovnsb_db.sock
> /etc/openvswitch/ovnsb_db.db
>
> # check-cluster is returning nothing
> ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db
>
> # ovsdb-server-sb.log below shows the leader is elected with only one
> server; there are RBAC-related debug logs with RPC replies and empty
> params, but no errors.
>
> 2018-03-13T01:12:02Z|2|raft|DBG|server 63d1 added to configuration
> 2018-03-13T01:12:02Z|3|raft|INFO|term 6: starting election
> 2018-03-13T01:12:02Z|4|raft|INFO|term 6: elected leader by 1+ of 1
> servers
>
>
> Now starting ovsdb-server on the other cluster members fails with:
> ovsdb-server: ovsdb error: /etc/openvswitch/ovnsb_db.db: cannot identify
> file type
>
>
> Also noticed that the ovsdb-tool man page is missing the cluster details.
> You might want to address that in the same patch or a separate one.
>
>
> Please advise on what is missing here for running ovn-sbctl show, as this
> command hangs.
>
>
>

I think you can use the ovn-ctl command "start_cluster_sb_ovsdb" for your
testing (at least for now).

For your setup, I think you can start the cluster as follows:

# Node 1
ovn-ctl --db-sb-addr=10.99.152.148 --db-sb-port=6642
--db-sb-create-insecure-remote=yes --db-sb-cluster-local-addr="tcp:
10.99.152.148:6644" start_cluster_sb_ovsdb

# Node 2
ovn-ctl --db-sb-addr=10.99.152.138 --db-sb-port=6642
--db-sb-create-insecure-remote=yes
--db-sb-cluster-local-addr="tcp:10.99.152.138:6644"
--db-sb-cluster-remote-addr="tcp:10.99.152.148:6644" start_cluster_sb_ovsdb

# Node 3
ovn-ctl --db-sb-addr=10.99.152.101 --db-sb-port=6642
--db-sb-create-insecure-remote=yes
--db-sb-cluster-local-addr="tcp:10.99.152.101:6644"
--db-sb-cluster-remote-addr="tcp:10.99.152.148:6644" start_cluster_sb_ovsdb
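
Once all three are up, a quick sanity check on each node would be something
like this (the same ovsdb-tool checks you used earlier, plus pointing
ovn-sbctl at all three remotes):

ovsdb-tool db-local-address /etc/openvswitch/ovnsb_db.db
ovsdb-tool check-cluster /etc/openvswitch/ovnsb_db.db
ovn-sbctl --db="tcp:10.99.152.148:6642,tcp:10.99.152.138:6642,tcp:10.99.152.101:6642" show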


Let me know how it goes.

Thanks
Numan



>
>
>
>
>
>
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss