[ovs-discuss] did ovs support conntrack for ipv6

2017-11-30 Thread 王嵘
hi,
   Does OVS support conntrack for IPv6?
I searched the source tree for "CONFIG_NF_CONNTRACK_IPV6", but found nothing:

[root@localhost ovs]# grep -r "CONFIG_NF_CONNTRACK_IPV6" ./


Thanks a lot.
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] ovndb_servers can't be promoted

2017-11-30 Thread Numan Siddique
On Thu, Nov 30, 2017 at 1:15 PM, Hui Xiang  wrote:

> Hi Numan,
>
> Thanks for helping. I am following your pcs example, but still with no
> luck.
>
> 1. Before running any configuration, I stopped all of the ovsdb-servers
> for OVN, and ovn-northd, and deleted ovnnb_active.conf/ovnsb_active.conf.
>
> 2. Since I already had a VIP in the cluster, I chose to use it;
> its status is OK.
> [root@node-1 ~]# pcs resource show
>  vip__management_old (ocf::es:ns_IPaddr2): Started node-1.domain.tld
>
> 3. Use pcs to create ovndb-servers and constraint
> [root@node-1 ~]# pcs resource create tst-ovndb ocf:ovn:ovndb-servers
> manage_northd=yes master_ip=192.168.0.2 nb_master_port=6641
> sb_master_port=6642 master
>  ([root@node-1 ~]# pcs resource meta tst-ovndb-master notify=true
>   Error: unable to find a resource/clone/master/group:
> tst-ovndb-master) ## returned an error, so I changed to the command below.
>

Hi HuiXiang,
This command is very important. Without it, pacemaker does not notify the
status change, so the ovsdb-servers are never promoted or demoted.
Hence you don't see the notify action getting called in the OVN OCF script.

Can you try the other command which I shared in my previous email?
These commands work fine for me.

Let me know how it goes.

Thanks
Numan


[root@node-1 ~]# pcs resource master tst-ovndb-master tst-ovndb notify=true
> [root@node-1 ~]# pcs constraint colocation add master tst-ovndb-master
> with vip__management_old
>
> 4. pcs status
> [root@node-1 ~]# pcs status
>  vip__management_old (ocf::es:ns_IPaddr2): Started node-1.domain.tld
>  Master/Slave Set: tst-ovndb-master [tst-ovndb]
>  Stopped: [ node-1.domain.tld node-2.domain.tld node-3.domain.tld ]
>
> 5. pcs resource show XXX
> [root@node-1 ~]# pcs resource show  vip__management_old
>  Resource: vip__management_old (class=ocf provider=es type=ns_IPaddr2)
>   Attributes: nic=br-mgmt base_veth=br-mgmt-hapr ns_veth=hapr-m
> ip=192.168.0.2 iflabel=ka cidr_netmask=24 ns=haproxy gateway=none
> gateway_metric=0 iptables_start_rules=false iptables_stop_rules=false
> iptables_comment=default-comment
>   Meta Attrs: migration-threshold=3 failure-timeout=60
> resource-stickiness=1
>   Operations: monitor interval=3 timeout=30 (vip__management_old-monitor-
> 3)
>   start interval=0 timeout=30 (vip__management_old-start-0)
>   stop interval=0 timeout=30 (vip__management_old-stop-0)
> [root@node-1 ~]# pcs resource show tst-ovndb-master
>  Master: tst-ovndb-master
>   Meta Attrs: notify=true
>   Resource: tst-ovndb (class=ocf provider=ovn type=ovndb-servers)
>Attributes: manage_northd=yes master_ip=192.168.0.2 nb_master_port=6641
> sb_master_port=6642
>Operations: start interval=0s timeout=30s (tst-ovndb-start-timeout-30s)
>stop interval=0s timeout=20s (tst-ovndb-stop-timeout-20s)
>promote interval=0s timeout=50s (tst-ovndb-promote-timeout-
> 50s)
>demote interval=0s timeout=50s
> (tst-ovndb-demote-timeout-50s)
>monitor interval=30s timeout=20s
> (tst-ovndb-monitor-interval-30s)
>monitor interval=10s role=Master timeout=20s
> (tst-ovndb-monitor-interval-10s-role-Master)
>monitor interval=30s role=Slave timeout=20s
> (tst-ovndb-monitor-interval-30s-role-Slave)
>
>
> 6. I have put a log line in every ovndb-servers op; it seems only the monitor
> op is being called, and nothing is promoted by the pacemaker DC:
> <30>Nov 30 15:22:19 node-1 ovndb-servers(tst-ovndb)[2980860]: INFO:
> ovsdb_server_monitor
> <30>Nov 30 15:22:19 node-1 ovndb-servers(tst-ovndb)[2980860]: INFO:
> ovsdb_server_check_status
> <30>Nov 30 15:22:19 node-1 ovndb-servers(tst-ovndb)[2980860]: INFO:
> return OCFOCF_NOT_RUNNINGG
> <30>Nov 30 15:22:20 node-1 ovndb-servers(tst-ovndb)[2980860]: INFO:
> ovsdb_server_master_update: 7}
> <30>Nov 30 15:22:20 node-1 ovndb-servers(tst-ovndb)[2980860]: INFO:
> ovsdb_server_master_update end}
> <30>Nov 30 15:22:20 node-1 ovndb-servers(tst-ovndb)[2980860]: INFO:
> monitor is going to return 7
> <30>Nov 30 15:22:20 node-1 ovndb-servers(undef)[2980970]: INFO: metadata
> exit OCF_SUCCESS}
>
>
> Please take a look,  thank you very much.
> Hui.
>
>
>
>
> On Wed, Nov 29, 2017 at 11:03 PM, Numan Siddique 
> wrote:
>
>>
>>
>> On Wed, Nov 29, 2017 at 4:16 PM, Hui Xiang  wrote:
>>
>>> FYI, If I have configured a good ovndb-server cluster with one active
>>> two slaves, then start pacemaker ovn-servers resource agents, they are all
>>> becoming slaves...
>>>
>>
>> You don't need to start ovndb-servers. When you create the pacemaker
>> resources, it automatically starts them and promotes one of them.
>>
>> One very important thing is to create an IPaddr2 resource beforehand
>> and add a colocation constraint, so that pacemaker promotes the
>> ovsdb-server on the node where the IPaddr2 resource is running. This
>> IPaddr2 resource's IP should be your master IP.
>>
>> Can you please do "pcs resource show " and share
>> the output ?
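For readers following along, the working sequence described above (create the resource, create the master resource with notify=true, then add the colocation constraint) can be sketched as follows. This is a hedged sketch, not a verbatim command log: the resource and VIP names (tst-ovndb, vip__management_old) and the master IP are taken from the thread, and exact syntax may vary with the pcs version.

```shell
# Sketch of the pcs sequence discussed in the thread (pcs 0.9-era syntax;
# adjust for your pcs version).
pcs resource create tst-ovndb ocf:ovn:ovndb-servers \
    manage_northd=yes master_ip=192.168.0.2 \
    nb_master_port=6641 sb_master_port=6642

# notify=true is essential: without it pacemaker never calls the notify
# action, so no node is ever promoted.
pcs resource master tst-ovndb-master tst-ovndb notify=true

# Promote the ovsdb-server on the node that holds the master VIP.
pcs constraint colocation add master tst-ovndb-master with vip__management_old
```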

Re: [ovs-discuss] did ovs support conntrack for ipv6

2017-11-30 Thread Kei Nohguchi
On Nov 30, 2017, at 1:23 AM, 王嵘  wrote:
> 
> hi,
>    Does OVS support conntrack for IPv6?

Yes.

> I searched the source tree for "CONFIG_NF_CONNTRACK_IPV6", but found nothing:
> 
> [root@localhost ovs]# grep -r "CONFIG_NF_CONNTRACK_IPV6" ./
> 
> 
> Thanks a lot.
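For reference (a hedged sketch, not from the thread): OVS delegates connection tracking to the kernel's nf_conntrack, so CONFIG_NF_CONNTRACK_IPV6 lives in the kernel configuration rather than in the OVS tree, which is why grepping the OVS sources finds nothing. The bridge name br0 below is a hypothetical example.

```shell
# Check the running kernel's conntrack support (config path varies by distro).
grep -E 'CONFIG_NF_CONNTRACK(_IPV6)?=' "/boot/config-$(uname -r)"

# A minimal IPv6 conntrack flow pair on a hypothetical bridge br0:
# send untracked IPv6 traffic through the connection tracker, then
# allow established connections.
ovs-ofctl add-flow br0 "table=0, priority=10, ipv6, ct_state=-trk, actions=ct(table=1)"
ovs-ofctl add-flow br0 "table=1, priority=10, ipv6, ct_state=+trk+est, actions=NORMAL"
```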



Re: [ovs-discuss] Tx/Rx count not increasing OVS-DPDK

2017-11-30 Thread Mooney, Sean K


From: abhishek jain [mailto:ashujain9...@gmail.com]
Sent: Thursday, November 30, 2017 5:34 AM
To: Stokes, Ian ; Ben Pfaff 
Cc: ovs-discuss@openvswitch.org; Mooney, Sean K 
Subject: Re: [ovs-discuss] Tx/Rx count not increasing OVS-DPDK

Hello Team,
I was able to solve the issue. I had missed configuring huge pages on Ubuntu.
[Mooney, Sean K] I'm glad you have resolved your issue. The hugepage
requirement is documented here for future reference:
https://docs.openstack.org/ocata/networking-guide/config-ovs-dpdk.html
Unfortunately there is currently no good way to detect this edge case due to
how nova, neutron, and os-vif interact with each other and OVS, so
documenting this requirement is the best we can do.
Thanks for the time.
Regards
Abhishek Jain
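As an illustration only (not from the thread): configuring huge pages on Ubuntu for OVS-DPDK typically involves steps like the following. The page counts and mount point are example values, not a recommendation.

```shell
# Example huge page setup for OVS-DPDK on Ubuntu (illustrative values).
# Reserve 2048 x 2 MB huge pages at runtime:
sysctl -w vm.nr_hugepages=2048

# Mount hugetlbfs if it is not already mounted:
mkdir -p /dev/hugepages
mount -t hugetlbfs none /dev/hugepages

# For a persistent reservation, set it on the kernel command line instead,
# e.g. in /etc/default/grub: hugepagesz=2M hugepages=2048
grep HugePages_Total /proc/meminfo   # verify the reservation
```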

On Thu, Nov 30, 2017 at 10:25 AM, abhishek jain 
mailto:ashujain9...@gmail.com>> wrote:
Hi Sean, Stokes,

Thanks for looking into this; below are the inline answers to your queries.


Could you provide more detail on the components you are running (OVS and DPDK 
versions etc.).

I'm using OVS version 2.4.1 on ubuntu 14.04.5 LTS.



Just to clarify, do you mean the stats for the device within the VM (i.e. you're 
using something like ifconfig to check the rx/tx stats), or do you mean the OVS 
DPDK stats for the ports connected to the VMs themselves (for example vhost 
ports)?

With the Tx/Rx count not increasing on the VM, I'm referring to the VM itself, 
and I'm checking it by running ifconfig inside the VM.



Do you mean you're able to ping the IP of the VMs internally to them, i.e. ping 
localhost essentially?
Yes, I'm able to ping localhost from the respective VMs.

Regards
Abhishek Jain

On Wed, Nov 29, 2017 at 11:36 PM, Stokes, Ian 
mailto:ian.sto...@intel.com>> wrote:
> Hi Team
>
> I'm having 2 VMs running with ovs-dpdk as a networking agent on openstack
> compute node.

Could you provide more detail on the components you are running (OVS and DPDK 
versions etc.).

> When I'm checking the external connectivity of the VMs by pinging to the
> external world, the Tx/Rx count of the VMs is not increasing.
>

Just to clarify, do you mean the stats for the device within the VM (i.e. you're 
using something like ifconfig to check the rx/tx stats) or do you mean the OVS 
DPDK stats for the ports connected to the VMs themselves (for example vhost 
ports)?

>
> However I'm able to ping the local-Ip of the respective Vms.

Do you mean you're able to ping the IP of the VMs internally to them, i.e. ping 
localhost essentially?

CC'ing Sean Mooney as I'm not the most experienced with OpenStack and Sean 
might be able to help.

Thanks
Ian
>
> Let me know the possible solution for this.
> Regards
> Abhishek Jain




[ovs-discuss] connecting two physical ovs to extend ports

2017-11-30 Thread Orabuntu-LXC
Could you please paste the output of "ovs-vsctl show" for the switches
involved?

-- 
Gilbert Standen
Creator Orabuntu-LXC
914-261-4594
gilb...@orabuntu-lxc.com


Re: [ovs-discuss] clarification on ovs-vsctl documentation

2017-11-30 Thread Shivaram Mysore
Thanks Ben for the clarification.

On Wed, Nov 29, 2017 at 10:03 PM, Ben Pfaff  wrote:

> On Wed, Nov 29, 2017 at 05:41:03PM -0800, Shivaram Mysore wrote:
> > Hello,
> >
> > I am referring to http://openvswitch.org/support/dist-docs/ovs-vsctl.8.
> txt
> >
> > sFlow
> >Configure  bridge  br0 to send sFlow records to a collector on
> 10.0.0.1
> >at port 6343, using eth1´s IP address as the source, with
> specific sam‐
> >pling parameters:
> >
> >   ovs-vsctl -- --id=@s create sFlow agent=eth1 tar‐
> >   get=\"10.0.0.1:6343\" header=128 sampling=64 polling=10 \
> >
> >   -- set Bridge br0 sflow=@s
> >
> >Deconfigure sFlow from br0, which also destroys the sFlow record
> (since
> >it is now unreferenced):
> >
> >   ovs-vsctl -- clear Bridge br0 sflow
> >
> >
> > I have 2 questions about the above section:
> >
> >1. shouldn't "tar-get" be "target" ?
>
> nroff hyphenated when it inserted the line break.  Probably there is
> some way to disable that.
>
> >2. shouldn't "Bridge" be "bridge"?
>
> We use a convention of capitalizing names of OVSDB tables.
>
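As an illustration (not from the thread): the same convention appears throughout ovs-vsctl usage, where OVSDB table names such as Bridge, Port, and Interface are capitalized while command keywords stay lowercase. The bridge and interface names below are hypothetical.

```shell
# Hypothetical examples of the capitalization convention:
# OVSDB table names (Bridge, Interface) are capitalized; keywords are not.
ovs-vsctl list Bridge br0
ovs-vsctl set Interface eth1 mtu_request=9000
ovs-vsctl clear Bridge br0 sflow
```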


Re: [ovs-discuss] ovndb_servers can't be promoted

2017-11-30 Thread Numan Siddique
Hi HuiXiang,
Even I am seeing the issue where no node is promoted as master. I will test
more, fix it, and submit patch set v3.

Thanks
Numan


On Thu, Nov 30, 2017 at 4:10 PM, Numan Siddique  wrote:

> [Earlier quoted exchange trimmed; it repeats the messages above verbatim.]

Re: [ovs-discuss] clarification on sflow documentation

2017-11-30 Thread Ben Pfaff
On Wed, Nov 29, 2017 at 05:45:11PM -0800, Shivaram Mysore wrote:
> Hello,
> I am referring to http://docs.openvswitch.org/en/latest/howto/sflow/
> 
> $ ovs-vsctl -- --id=@sflow create sflow agent=${AGENT_IP} \
> target="${COLLECTOR_IP}:${COLLECTOR_PORT}" header=${HEADER_BYTES} \
> sampling=${SAMPLING_N} polling=${POLLING_SECS} \
>   -- set bridge br0 sflow=@sflow
> 
> In the above, the quotes need to be escaped; otherwise the command will not
> work.  I think it should be fixed to:
> 
> $ ovs-vsctl -- --id=@sflow create sflow agent=${AGENT_IP} \
> target=\"${COLLECTOR_IP}:${COLLECTOR_PORT}\" header=${HEADER_BYTES} \
> sampling=${SAMPLING_N} polling=${POLLING_SECS} \
>   -- set bridge br0 sflow=@sflow

Thanks for the report.  I sent out a patch for review:
https://patchwork.ozlabs.org/patch/843145/


[ovs-discuss] OVS with one interface

2017-11-30 Thread S hj
Hello,


I have three Open vSwitch instances, connected as follows:

ovs1 --- ovs2 --- ovs3

Each OVS has only *one interface* to communicate with the other OVS instances.
I want to add some flow rules to ovs2 to forward IP and ARP packets from
ovs1 to ovs3 when I ping "ip address_ovs3" from "ip address_ovs1".

I tried different flows, but they do not work.
For example:
ovs-ofctl add-flows br2 "table=0, arp, nw_dst=10.0.0.3,
actions=goto_table:1"
ovs-ofctl add-flows br2 "table=1, actions=nw_dst=10.0.0.3, in_port(or
normal)"

Do you have any suggestion?
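Not part of the original thread, but one plausible direction, assuming br2 really has a single uplink port and the addresses 10.0.0.1/10.0.0.3 from the question: traffic between ovs1 and ovs3 must then leave on the same port it arrived on, and OpenFlow requires the special in_port action for that, since NORMAL and a plain output to the ingress port will drop the packet.

```shell
# Hypothetical sketch for ovs2 (bridge br2), hairpinning traffic back out
# the single inter-switch port with the reserved in_port action.
ovs-ofctl add-flow br2 "table=0, priority=10, arp, actions=in_port"
ovs-ofctl add-flow br2 "table=0, priority=10, ip, nw_dst=10.0.0.3, actions=in_port"
ovs-ofctl add-flow br2 "table=0, priority=10, ip, nw_dst=10.0.0.1, actions=in_port"
```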


Re: [ovs-discuss] ovndb_servers can't be promoted

2017-11-30 Thread Hui Xiang
Thanks Numan. In my environment it's worse: the servers are not even getting
started, and the monitor is called only once rather than repeatedly, for
either master/slave or none. Do you know what problem could cause pacemaker
to make this decision? The other resources are good.

On Fri, Dec 1, 2017 at 2:08 AM, Numan Siddique  wrote:

> [Earlier quoted exchange trimmed; it repeats the messages above verbatim and
> ends mid-line in the archive.]