Re: [ovs-discuss] limit packet in message number by port, table and bridge

2019-08-28 Thread kai xi fan
Got it. Thanks.

Ben Pfaff wrote on Thursday, August 29, 2019 at 12:06 PM:

> No.  Read the "Controller Rate Limiting" section in
> ovs-vswitchd.conf.db(5).
>
> On Thu, Aug 29, 2019 at 11:33:11AM +0800, kai xi fan wrote:
> > Do you mean via meter rate-limiting?
> >
> > Ben Pfaff wrote on Thursday, August 29, 2019 at 12:38 AM:
> >
> > > On Wed, Aug 28, 2019 at 03:26:31PM +0800, kai xi fan wrote:
> > > There is currently no limit on the number of packet-in messages sent
> > > to the controller.
> > > We have run into many problems with this when a user performs IP
> > > scanning: OVS sends a packet-in message to the controller after every
> > > table-lookup miss.
> > > I think it would be better to adopt the design used in hardware switch
> > > ASICs: OVS could limit the number of packet-in messages by input port,
> > > table, and bridge.
> > >
> > > OVS has had this feature for over 10 years.
> > >
>
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] Three or more switches topology in Daisy chain

2019-08-28 Thread Vikas Kumar
Hello everyone,
Could anyone please suggest how I can achieve the requirement below?

  s1 <---> s2 <---> s3
   |        |        |
  H1       H2       H3

Suppose that in the above scenario s1 is connected to H1 on port s1-eth1;
s1 is connected to s2 on ports s1-eth2 and s2-eth2 respectively; H2 is
connected to s2 on port s2-eth1; s2 and s3 are connected to each other on
ports s2-eth3 and s3-eth2 respectively; and H3 is connected to s3 on port
s3-eth1.

Now, I apply a prio qdisc on the s1-eth1, s1-eth2, s2-eth1, s2-eth2,
s2-eth3, s3-eth1, and s3-eth2 ports.

Suppose I want to send packets from H1 to H3 with different TOS values. I
have observed that the packets are shaped only on switch s1, even though
they also pass through switch s2. Shaping should happen at s2 as well, but
it is not happening.

Could anyone suggest why this shaping is not happening at switch s2, and
how I can achieve shaping at s2 as well? Any suggestion is welcome.
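[Editor's note: a sketch of one way to set up and inspect the per-port prio qdiscs described above, using interface names from the question; this is an illustration, not a verified fix.]

```shell
# Attach a prio qdisc to each of s2's ports and watch its per-band counters.
# prio classifies packets into bands by their ToS/priority via its default
# priomap, so if everything lands in one band at s2, check whether the ToS
# bits actually survived the trip through s1 (e.g. with tcpdump -v).
for dev in s2-eth1 s2-eth2 s2-eth3; do
    tc qdisc replace dev "$dev" root handle 1: prio
done

# Per-band statistics show which band the traffic is really hitting:
tc -s qdisc show dev s2-eth2
```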

Thanks


Re: [ovs-discuss] limit packet in message number by port, table and bridge

2019-08-28 Thread Ben Pfaff
No.  Read the "Controller Rate Limiting" section in
ovs-vswitchd.conf.db(5).
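[Editor's note: the mechanism referenced here is configured per controller connection in the Controller table; a minimal sketch, with a hypothetical bridge name and example values:]

```shell
# Cap packet-in messages sent to br0's controller at roughly 100/s with a
# burst allowance of 200, per the "Controller Rate Limiting" section of
# ovs-vswitchd.conf.db(5) (controller_rate_limit / controller_burst_limit).
ovs-vsctl set controller br0 controller_rate_limit=100 \
                             controller_burst_limit=200
```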

On Thu, Aug 29, 2019 at 11:33:11AM +0800, kai xi fan wrote:
> Do you mean via meter rate-limiting?
> 
> > Ben Pfaff wrote on Thursday, August 29, 2019 at 12:38 AM:
> 
> > On Wed, Aug 28, 2019 at 03:26:31PM +0800, kai xi fan wrote:
> > > There is currently no limit on the number of packet-in messages sent
> > > to the controller.
> > > We have run into many problems with this when a user performs IP
> > > scanning: OVS sends a packet-in message to the controller after every
> > > table-lookup miss.
> > > I think it would be better to adopt the design used in hardware switch
> > > ASICs: OVS could limit the number of packet-in messages by input port,
> > > table, and bridge.
> >
> > OVS has had this feature for over 10 years.
> >


Re: [ovs-discuss] limit packet in message number by port, table and bridge

2019-08-28 Thread kai xi fan
Do you mean via meter rate-limiting?

Ben Pfaff wrote on Thursday, August 29, 2019 at 12:38 AM:

> On Wed, Aug 28, 2019 at 03:26:31PM +0800, kai xi fan wrote:
> > There is currently no limit on the number of packet-in messages sent to
> > the controller.
> > We have run into many problems with this when a user performs IP
> > scanning: OVS sends a packet-in message to the controller after every
> > table-lookup miss.
> > I think it would be better to adopt the design used in hardware switch
> > ASICs: OVS could limit the number of packet-in messages by input port,
> > table, and bridge.
>
> OVS has had this feature for over 10 years.
>


Re: [ovs-discuss] Query on DEC_ttl action implementation in datapath

2019-08-28 Thread bindiya Kurle
Thanks. Is there any forum where I can discuss the design for this?
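[Editor's note: for reference, the datapath behavior discussed in this thread can be reproduced with a flow like the following; the bridge name and port numbers are hypothetical.]

```shell
# Install a flow that decrements the TTL and forwards. In the kernel
# datapath, dec_ttl is realized as set(ipv4(ttl=N-1)), so a separate
# megaflow appears for each distinct incoming TTL value.
ovs-ofctl add-flow br0 "ip,in_port=2,actions=dec_ttl,output:3"

# Dump the kernel datapath flows to observe the per-TTL entries:
ovs-appctl dpctl/dump-flows
```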

Regards,
Bindiya

On Thu, Aug 29, 2019 at 1:22 AM Justin Pettit  wrote:

> I understand.  This has been discussed previously on the mailing list.
> You are welcome to submit a patch upstream to provide that capability, but
> I suspect you will get some resistance--especially from the Linux kernel
> community.
>
> --Justin
>
>
> > On Aug 27, 2019, at 11:25 PM, bindiya Kurle 
> wrote:
> >
> > Hi Justin,
> >
> > Thanks for the clarification. I agree with your point, but consider a
> > use-case: if my routing decision is based only on the destination IP,
> > then under the current implementation OVS will add two flows (in the
> > datapath) for packets arriving from different sources whose TTLs happen
> > to differ. This reduces the number of flows that OVS can support. If the
> > TTL decrement were done in the kernel, it would have resulted in only
> > one flow.
> >
> > Regards,
> > Bindiya
> >
> > Regards,
> > Bindiya
> >
> > On Tue, Aug 27, 2019 at 9:48 PM Justin Pettit  wrote:
> > I think it was considered cleaner from an ABI perspective, since it
> doesn't require another action, since "set" was already supported.  In
> practice, I don't think it's a problem, since usually a TTL decrement is
> associated with a routing decision, and TTLs tend to be fairly static
> between two hosts.
> >
> > --Justin
> >
> >
> > > On Aug 27, 2019, at 1:11 AM, bindiya Kurle 
> wrote:
> > >
> > > Hi,
> > > I have a question about the dec_ttl action as implemented in the
> > > datapath. When a dec_ttl action is configured in OVS, the following
> > > flow is added in the datapath:
> > >
> > > recirc_id(0),in_port(2),eth(),eth_type(0x0800),ipv4(ttl=64,frag=no),
> > > packets:3, bytes:294, used:0.068s, actions:set(ipv4(ttl=63)),3,
> > >
> > > If a packet arrives with a different TTL on the same port, one more
> > > flow is added in the datapath, for example:
> > > recirc_id(0),in_port(2),eth(),eth_type(0x0800),ipv4(ttl=9,frag=no),
> > > packets:3, bytes:294, used:0.068s, actions:set(ipv4(ttl=8)),3,
> > >
> > > Could someone please explain why dec_ttl is implemented as a set
> > > action rather than as a dec_ttl action?
> > >
> > >
> > > I mean, why is one more rule added for each different TTL, rather
> > > than just adding it as follows, as is done in userspace?
> > >
> > > recirc_id(0),in_port(3),eth(),eth_type(0x0800),ipv4(frag=no),
> > > packets:3, bytes:294, used:0.737s, actions:dec_ttl,2
> > >
> > > Regards,
> > > Bindiya
> >
>
>


Re: [ovs-discuss] Hypervisor down during upgrade OVS 2.10.x to 2.10.y

2019-08-28 Thread Jin, Liang via discuss

Hi,
We recently upgraded OVS from one 2.10 version to another. On some
hypervisors the upgrade took the HV down while running force-reload-kmod.
In the ovs-ctl log, killing ovs-vswitchd failed, but the script still went
on to reload the modules.
```
ovsdb-server is running with pid 2431
ovs-vswitchd is running with pid 2507
Thu Aug 22 23:13:49 UTC 2019:stop
2019-08-22T23:13:50Z|1|fatal_signal|WARN|terminating with signal 14 (Alarm 
clock)
Alarm clock
2019-08-22T23:13:51Z|1|fatal_signal|WARN|terminating with signal 14 (Alarm 
clock)
Alarm clock
* Exiting ovs-vswitchd (2507)
* Killing ovs-vswitchd (2507)
* Killing ovs-vswitchd (2507) with SIGKILL
* Killing ovs-vswitchd (2507) failed
* Exiting ovsdb-server (2431)
Thu Aug 22 23:14:58 UTC 2019:load-kmod
Thu Aug 22 23:14:58 UTC 2019:start --system-id=random --no-full-hostname
/usr/share/openvswitch/scripts/ovs-ctl: unknown option "--no-full-hostname" 
(use --help for help)
* Starting ovsdb-server
* Configuring Open vSwitch system IDs
* ovs-vswitchd is already running
* Enabling remote OVSDB managers
ovsdb-server is running with pid 3860447
ovs-vswitchd is running with pid 2507
ovsdb-server is running with pid 3860447
ovs-vswitchd is running with pid 2507
Thu Aug 22 23:15:09 UTC 2019:load-kmod
Thu Aug 22 23:15:09 UTC 2019:force-reload-kmod --system-id=random 
--no-full-hostname
/usr/share/openvswitch/scripts/ovs-ctl: unknown option "--no-full-hostname" 
(use --help for help)
* Detected internal interfaces: br-int
Thu Aug 22 23:37:08 UTC 2019:stop
2019-08-22T23:37:09Z|1|fatal_signal|WARN|terminating with signal 14 (Alarm 
clock)
Alarm clock
2019-08-22T23:37:10Z|1|fatal_signal|WARN|terminating with signal 14 (Alarm 
clock)
Alarm clock
* Exiting ovs-vswitchd (2507)
* Killing ovs-vswitchd (2507)
* Killing ovs-vswitchd (2507) with SIGKILL
* Killing ovs-vswitchd (2507) failed
* Exiting ovsdb-server (3860447)
Thu Aug 22 23:40:42 UTC 2019:load-kmod
* Inserting openvswitch module
Thu Aug 22 23:40:42 UTC 2019:start --system-id=random --no-full-hostname
/usr/share/openvswitch/scripts/ovs-ctl: unknown option "--no-full-hostname" 
(use --help for help)
* Starting ovsdb-server
* Configuring Open vSwitch system IDs
* Starting ovs-vswitchd
* Enabling remote OVSDB managers
ovsdb-server is running with pid 2399
ovs-vswitchd is running with pid 2440
ovsdb-server is running with pid 2399
ovs-vswitchd is running with pid 2440
Thu Aug 22 23:46:18 UTC 2019:load-kmod
Thu Aug 22 23:46:18 UTC 2019:force-reload-kmod --system-id=random 
--no-full-hostname
/usr/share/openvswitch/scripts/ovs-ctl: unknown option "--no-full-hostname" 
(use --help for help)
* Detected internal interfaces: br-int br0
* Saving flows
* Exiting ovsdb-server (2399)
* Starting ovsdb-server
* Configuring Open vSwitch system IDs
* Flush old conntrack entries
* Exiting ovs-vswitchd (2440)
* Saving interface configuration
* Removing datapath: system@ovs-system
* Removing openvswitch module
rmmod: ERROR: Module vxlan is in use by: i40e
* Forcing removal of vxlan module
* Inserting openvswitch module
* Starting ovs-vswitchd
* Restoring saved flows
* Enabling remote OVSDB managers
* Restoring interface configuration
```

But in kern.log we see the messages below: the process could not exit
because it was waiting for br0 to be released. ovs-ctl then tried `kill
-TERM` and `kill -9` on the process, which did not work because the kernel
was stuck in an infinite loop. ovs-ctl next tried to save the flows, and
while saving them a core dump happened in the kernel. The HV then stayed
down until it was restarted.
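[Editor's note: a defensive pre-check of the kind the reload script could perform before touching the module; this is a sketch, not part of ovs-ctl.]

```shell
# Refuse to reload the kernel module while ovs-vswitchd is still alive or
# while the openvswitch module still has users (refcnt != 0), instead of
# proceeding after a failed kill as in the log above.
if pidof ovs-vswitchd >/dev/null; then
    echo "ovs-vswitchd still running; aborting reload" >&2
    exit 1
fi
if [ "$(cat /sys/module/openvswitch/refcnt 2>/dev/null)" != "0" ]; then
    echo "openvswitch module still in use; aborting reload" >&2
    exit 1
fi
modprobe -r openvswitch && modprobe openvswitch
```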
```
Aug 22 16:13:45 slx11c-9gjm kernel: [21177057.998961] device br0 left 
promiscuous mode
Aug 22 16:13:55 slx11c-9gjm kernel: [21177068.044859] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 22 16:14:05 slx11c-9gjm kernel: [21177078.068429] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 22 16:14:15 slx11c-9gjm kernel: [21177088.312001] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 22 16:14:25 slx11c-9gjm kernel: [21177098.359556] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 22 16:14:35 slx11c-9gjm kernel: [21177108.603058] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 22 16:14:45 slx11c-9gjm kernel: [21177118.658643] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 22 16:14:55 slx11c-9gjm kernel: [21177128.678211] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 22 16:15:05 slx11c-9gjm kernel: [21177138.721732] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 22 16:15:15 slx11c-9gjm kernel: [21177148.817317] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 22 16:15:25 slx11c-9gjm kernel: [21177158.828853] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 22 16:15:35 slx11c-9gjm kernel: [21177168.860459] unregister_netdevice: 
waiting for br0 to become free. Usage count = 1
Aug 

Re: [ovs-discuss] Query on DEC_ttl action implementation in datapath

2019-08-28 Thread Justin Pettit
I understand.  This has been discussed previously on the mailing list.  You are 
welcome to submit a patch upstream to provide that capability, but I suspect 
you will get some resistance--especially from the Linux kernel community.

--Justin


> On Aug 27, 2019, at 11:25 PM, bindiya Kurle  wrote:
> 
> Hi Justin,
> 
> Thanks for the clarification. I agree with your point, but consider a
> use-case: if my routing decision is based only on the destination IP, then
> under the current implementation OVS will add two flows (in the datapath)
> for packets arriving from different sources whose TTLs happen to differ.
> This reduces the number of flows that OVS can support. If the TTL
> decrement were done in the kernel, it would have resulted in only one flow.
> 
> Regards,
> Bindiya
> 
> Regards,
> Bindiya
> 
> On Tue, Aug 27, 2019 at 9:48 PM Justin Pettit  wrote:
> I think it was considered cleaner from an ABI perspective, since it doesn't 
> require another action, since "set" was already supported.  In practice, I 
> don't think it's a problem, since usually a TTL decrement is associated with 
> a routing decision, and TTLs tend to be fairly static between two hosts.
> 
> --Justin
> 
> 
> > On Aug 27, 2019, at 1:11 AM, bindiya Kurle  wrote:
> > 
> > Hi,
> > I have a question about the dec_ttl action as implemented in the
> > datapath. When a dec_ttl action is configured in OVS, the following flow
> > is added in the datapath:
> > 
> > recirc_id(0),in_port(2),eth(),eth_type(0x0800),ipv4(ttl=64,frag=no), 
> > packets:3, bytes:294, used:0.068s, actions:set(ipv4(ttl=63)),3,
> > 
> > If a packet arrives with a different TTL on the same port, one more flow
> > is added in the datapath, for example:
> > recirc_id(0),in_port(2),eth(),eth_type(0x0800),ipv4(ttl=9,frag=no), 
> > packets:3, bytes:294, used:0.068s, actions:set(ipv4(ttl=8)),3,
> > 
> > Could someone please explain why dec_ttl is implemented as a set action
> > rather than as a dec_ttl action?
> > 
> > 
> > I mean, why is one more rule added for each different TTL, rather than
> > just adding it as follows, as is done in userspace?
> > 
> > recirc_id(0),in_port(3),eth(),eth_type(0x0800),ipv4(frag=no), packets:3, 
> > bytes:294, used:0.737s, actions:dec_ttl,2
> > 
> > Regards,
> > Bindiya
> 



Re: [ovs-discuss] GRE with link-local remote_ip: "not a valid IPv6 address"

2019-08-28 Thread Ben Pfaff
On Tue, Aug 27, 2019 at 12:09:14PM +0200, Sven Gebauer wrote:
> Hi all,
> 
> I'm trying to create a GRE tunnel with an IPv6 link-local address as the
> remote_ip. According to the changelog, this should be supported since
> v2.8.0. However...

IPv6 link-local addresses are supported for use in contexts where OVS is
creating a socket, most notably for connections to controllers.  In
other contexts, there's nowhere for OVS to put the scope, so it can't
support it.  In this case, the kernel flow structure doesn't have a
scope in it, so there's nothing that userspace OVS can do about it.  If
this feature were to be added, it would require adding the scope
(probably in multiple places) to that kernel flow structure, which would
probably be difficult for reasons of backward compatibility and
coordination with (and justification to) the kernel network stack
developers.
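[Editor's note: concretely, the limitation means a tunnel like the first one below is rejected while the second works; bridge, port, and addresses are illustrative.]

```shell
# Rejected: a link-local remote_ip would need a scope (zone) identifier,
# which the kernel flow structure has no field to carry.
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
    options:remote_ip=fe80::1

# Accepted: a global-scope IPv6 address needs no scope identifier.
ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre \
    options:remote_ip=2001:db8::1
```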


Re: [ovs-discuss] SSL errors with OVS on Alpine Linux

2019-08-28 Thread Ben Pfaff
On Tue, Aug 27, 2019 at 06:12:00PM -0700, Shivaram Mysore wrote:
> I am running OVS on Alpine Linux (ovs-vswitchd (Open vSwitch) 2.10.1) and
> getting SSL errors.  I have a similar setup on Ubuntu and on other COTS
> switches with the same/similar certs, and I don't have any issues there.
> 
> I am getting errors like below
> 
> 2019-08-28T00:55:38.905Z|01201|rconn|INFO|ovs-br0<->ssl:10.22.23.97:6653:
> connecting...
> 2019-08-28T00:55:38.905Z|01202|stream_ssl|ERR|Certificate must be
> configured to use SSL

I'd look farther back in the log to see whether OVS reported a problem
reading or parsing the certificate in question.

I am not aware of anything special SSL-related on Alpine.  I do not
remember other reports of SSL problems specific to Alpine.
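[Editor's note: for reference, the certificate configuration the error message refers to is normally done with `ovs-vsctl set-ssl`; the paths below are examples.]

```shell
# Point ovs-vswitchd at its private key, certificate, and CA certificate.
# "Certificate must be configured to use SSL" typically means this step is
# missing or the files could not be read.
ovs-vsctl set-ssl /etc/openvswitch/sc-privkey.pem \
                  /etc/openvswitch/sc-cert.pem \
                  /etc/openvswitch/cacert.pem

# Verify what is currently configured:
ovs-vsctl get-ssl
```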


Re: [ovs-discuss] limit packet in message number by port, table and bridge

2019-08-28 Thread Ben Pfaff
On Wed, Aug 28, 2019 at 03:26:31PM +0800, kai xi fan wrote:
> There is currently no limit on the number of packet-in messages sent to
> the controller.
> We have run into many problems with this when a user performs IP scanning:
> OVS sends a packet-in message to the controller after every table-lookup
> miss.
> I think it would be better to adopt the design used in hardware switch
> ASICs: OVS could limit the number of packet-in messages by input port,
> table, and bridge.

OVS has had this feature for over 10 years.


Re: [ovs-discuss] [ovs-dev] Regarding TSO using AF_PACKET in OVS

2019-08-28 Thread Ben Pfaff
On Wed, Aug 28, 2019 at 04:40:41PM +0530, Ramana Reddy wrote:
> Hi Ben, Justin, Jesse and All,
> 
> Hope everyone is doing great.

For what it's worth, I'm not the kernel module maintainer and I don't
really have an answer to this question.  I hope that someone else pops
up to answer it.


[ovs-discuss] Regarding TSO using AF_PACKET in OVS

2019-08-28 Thread Ramana Reddy
Hi Ben, Justin, Jesse and All,

Hope everyone is doing great.

During my work, I created a socket using AF_PACKET with a virtio_net_hdr,
filled in all of the virtio_net_hdr fields, and set the GSO type to
VIRTIO_NET_HDR_GSO_TCPV4 to enable TSO over AF_PACKET:

vnet->gso_type = VIRTIO_NET_HDR_GSO_TCPV4;

This does not work when I try to send large packets over an OVS VXLAN
tunnel: the large packets are dropped, and the application retransmits
MTU-sized packets. The same code works fine in the non-OVS case (sending
directly, without OVS).

I understand that the UDP tunnel (VXLAN) offload may not be happening
because the NIC does not support that offload feature. But when I send
iperf (AF_INET) traffic, even though the offload does not happen, the large
packets are fragmented and the VXLAN tunnel sends the fragments. Why do we
see this different behavior?

What is the expected behavior of AF_PACKET in OVS? Does OVS support the
AF_PACKET offload mechanism? If not, how can we avoid the retransmission of
the packets? What needs to be done so that the kernel fragments large
packets and sends them out without dropping them (in case the offload
feature is not available)?
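[Editor's note: one way to check whether the underlying NIC actually advertises the offloads in question; the device name is an example.]

```shell
# Inspect segmentation and tunnel offload support on the underlying NIC.
# tx-udp_tnl-segmentation is the feature that VXLAN-encapsulated TSO
# relies on; if it is "off [fixed]", the hardware cannot do it.
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|tx-udp_tnl-segmentation'
```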

Looking forward to your reply.

Regards,
Ramana


Re: [ovs-discuss] [ovn][clustered] Confusing to create ovsdb-server clustered databases

2019-08-28 Thread Zufar Dhiyaulhaq
Hi Daniel,

Yes, I will move this discussion to openstack-discuss.

Thank you.

Best Regards,
Zufar Dhiyaulhaq
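[Editor's note: for the archive, a sketch of bringing up the clustered northbound database with ovn-ctl, using the controller addresses from this thread; the southbound database takes the analogous --db-sb-* options. This is an illustration, not the exact steps used here.]

```shell
# On the first node: create the NB cluster and listen on a TCP remote.
/usr/share/openvswitch/scripts/ovn-ctl start_nb_ovsdb \
    --db-nb-addr=10.101.101.100 --db-nb-create-insecure-remote=yes \
    --db-nb-cluster-local-addr=10.101.101.100

# On each additional node: join the existing cluster via the first node.
/usr/share/openvswitch/scripts/ovn-ctl start_nb_ovsdb \
    --db-nb-addr=10.101.101.101 --db-nb-create-insecure-remote=yes \
    --db-nb-cluster-local-addr=10.101.101.101 \
    --db-nb-cluster-remote-addr=10.101.101.100
```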


On Wed, Aug 28, 2019 at 10:02 PM Daniel Alvarez Sanchez 
wrote:

> On Wed, Aug 28, 2019 at 4:49 PM Zufar Dhiyaulhaq
>  wrote:
> >
> > Hi Numan,
> >
> > > Yes, it's working. I think the networking-ovn plugin in OpenStack has
> > > some bugs. Let me use a single IP first, or maybe I can use pacemaker
> > > to create the VIP.
>
> Thanks Zufar, mind patching networking-ovn / reporting the bug on
> launchpad / moving the discussion to openstack-discuss mailing list?
>
> Thanks a lot!
> Daniel
>
> >
> > [root@zu-ovn-controller0 ~]# ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:
> 10.101.101.101:6641,tcp:10.101.101.102:6641  ls-add sw0
> > [root@zu-ovn-controller0 ~]# ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:
> 10.101.101.101:6641,tcp:10.101.101.102:6641  show
> > switch 5d3ea060-f92f-41ab-8143-6a6534bbba98 (sw0)
> > [root@zu-ovn-controller0 ~]#
> >
> > [root@zu-ovn-controller1 ~]#  ovn-nbctl --db=tcp:10.101.101.100:6641
> ,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641  show
> > switch 5d3ea060-f92f-41ab-8143-6a6534bbba98 (sw0)
> >
> > Thank you very much :)
> >
> > Best Regards,
> > Zufar Dhiyaulhaq
> >
> >
> > On Wed, Aug 28, 2019 at 7:17 PM Numan Siddique 
> wrote:
> >>
> >>
> >>
> >> On Wed, Aug 28, 2019 at 4:45 PM Zufar Dhiyaulhaq <
> zu...@onf-ambassador.org> wrote:
> >>>
> >>> Hi Numan,
> >>>
> >>> I have tried the command but output nothing.
> >>>
> >>> [root@zu-ovn-controller0 ~]# ovn-nbctl --db=tcp:10.101.101.100:6641
> ,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641 show
> >>
> >>
> >> These commands seem to work. Try creating a logical switch like -
> >> ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:
> 10.101.101.102:6641  ls-add sw0
> >> ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:
> 10.101.101.102:6641  show
> >>
> >>
> >>> [root@zu-ovn-controller0 ~]# ovn-sbctl --db=tcp:10.101.101.100:6642
> ,tcp:10.101.101.101:6642,tcp:10.101.101.102:6642 show
> >>> Chassis "1ee48dd1-d520-476d-82d3-3d4651132f47"
> >>> hostname: "zu-ovn-compute0"
> >>> Encap geneve
> >>> ip: "10.101.101.103"
> >>> options: {csum="true"}
> >>> Chassis "cd1a2535-522a-4571-8eac-8394681846a3"
> >>> hostname: "zu-ovn-compute2"
> >>> Encap geneve
> >>> ip: "10.101.101.105"
> >>> options: {csum="true"}
> >>> Chassis "a5b59592-f511-4a7a-b37d-93f933c35ea5"
> >>> hostname: "zu-ovn-compute1"
> >>> Encap geneve
> >>> ip: "10.101.101.104"
> >>> options: {csum="true"}
> >>> [root@zu-ovn-controller0 ~]# tail -f
> /var/log/openvswitch/ovsdb-server-nb.log
> >>> 2019-08-28T09:12:31.190Z|00031|reconnect|INFO|tcp:10.101.101.102:6643:
> connection attempt failed (No route to host)
> >>> 2019-08-28T09:12:31.190Z|00032|reconnect|INFO|tcp:10.101.101.102:6643:
> waiting 2 seconds before reconnect
> >>> 2019-08-28T09:12:33.191Z|00033|reconnect|INFO|tcp:10.101.101.102:6643:
> connecting...
> >>> 2019-08-28T09:12:33.192Z|00034|reconnect|INFO|tcp:10.101.101.102:6643:
> connection attempt failed (No route to host)
> >>> 2019-08-28T09:12:33.192Z|00035|reconnect|INFO|tcp:10.101.101.102:6643:
> waiting 4 seconds before reconnect
> >>> 2019-08-28T09:12:37.192Z|00036|reconnect|INFO|tcp:10.101.101.102:6643:
> connecting...
> >>> 2019-08-28T09:12:37.192Z|00037|reconnect|INFO|tcp:10.101.101.102:6643:
> connection attempt failed (No route to host)
> >>> 2019-08-28T09:12:37.192Z|00038|reconnect|INFO|tcp:10.101.101.102:6643:
> continuing to reconnect in the background but suppressing further logging
> >>> 2019-08-28T09:22:52.597Z|00039|reconnect|INFO|tcp:10.101.101.101:6643:
> connected
> >>> 2019-08-28T09:23:01.279Z|00040|reconnect|INFO|tcp:10.101.101.102:6643:
> connected
> >>>
> >>> I have tried using ovn-ctl to create the clustered databases, but the
> >>> problem is the same: it gets stuck when creating Neutron resources. I
> >>> think it is because ovn-northd runs on 3 nodes, but Neutron runs on
> >>> only a single controller.
> >>
> >>
> >> I don't think that's the issue.
> >> The issue seems to be that networking-ovn is not tested with connecting
> to clustered db.
> >> Try passing just one remote to neutron server and see if it works.
> >>
> >>
> >> Maybe you can ask this question on the OpenStack ML to get more
> >> attention.
> >>
> >> Numan
> >>
> >>>
> >>> this is the step: http://paste.openstack.org/show/766470/
> >>> Should I try those steps first, but check by passing the remote URLs
> >>> to the commands?
> >>>
> >>> Best Regards,
> >>> Zufar Dhiyaulhaq
> >>>
> >>>
> >>> On Wed, Aug 28, 2019 at 6:06 PM Numan Siddique 
> wrote:
> 
> 
> 
>  On Wed, Aug 28, 2019 at 4:04 PM Zufar Dhiyaulhaq <
> zu...@onf-ambassador.org> wrote:
> >
> > [ovn][clustered] Confusing to create ovsdb-server clustered databases
> >
> > Hi Everyone, I have successfully created OpenStack with OVN enabled.
> But the problem comes when I try to cluster the ovsdb-server. My 

Re: [ovs-discuss] [ovn][clustered] Confusing to create ovsdb-server clustered databases

2019-08-28 Thread Daniel Alvarez Sanchez
On Wed, Aug 28, 2019 at 4:49 PM Zufar Dhiyaulhaq
 wrote:
>
> Hi Numan,
>
> Yes, it's working. I think the networking-ovn plugin in OpenStack has some 
> bugs. Let me use a single IP first, or maybe I can use pacemaker to create 
> the VIP.

Thanks Zufar, mind patching networking-ovn / reporting the bug on
launchpad / moving the discussion to openstack-discuss mailing list?

Thanks a lot!
Daniel

>
> [root@zu-ovn-controller0 ~]# ovn-nbctl 
> --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641  
> ls-add sw0
> [root@zu-ovn-controller0 ~]# ovn-nbctl 
> --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641  
> show
> switch 5d3ea060-f92f-41ab-8143-6a6534bbba98 (sw0)
> [root@zu-ovn-controller0 ~]#
>
> [root@zu-ovn-controller1 ~]#  ovn-nbctl 
> --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641  
> show
> switch 5d3ea060-f92f-41ab-8143-6a6534bbba98 (sw0)
>
> Thank you very much :)
>
> Best Regards,
> Zufar Dhiyaulhaq
>
>
> On Wed, Aug 28, 2019 at 7:17 PM Numan Siddique  wrote:
>>
>>
>>
>> On Wed, Aug 28, 2019 at 4:45 PM Zufar Dhiyaulhaq  
>> wrote:
>>>
>>> Hi Numan,
>>>
>>> I have tried the command but output nothing.
>>>
>>> [root@zu-ovn-controller0 ~]# ovn-nbctl 
>>> --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641
>>>  show
>>
>>
>> These commands seem to work. Try creating a logical switch like -
>> ovn-nbctl 
>> --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641 
>>  ls-add sw0
>> ovn-nbctl 
>> --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641 
>>  show
>>
>>
>>> [root@zu-ovn-controller0 ~]# ovn-sbctl 
>>> --db=tcp:10.101.101.100:6642,tcp:10.101.101.101:6642,tcp:10.101.101.102:6642
>>>  show
>>> Chassis "1ee48dd1-d520-476d-82d3-3d4651132f47"
>>> hostname: "zu-ovn-compute0"
>>> Encap geneve
>>> ip: "10.101.101.103"
>>> options: {csum="true"}
>>> Chassis "cd1a2535-522a-4571-8eac-8394681846a3"
>>> hostname: "zu-ovn-compute2"
>>> Encap geneve
>>> ip: "10.101.101.105"
>>> options: {csum="true"}
>>> Chassis "a5b59592-f511-4a7a-b37d-93f933c35ea5"
>>> hostname: "zu-ovn-compute1"
>>> Encap geneve
>>> ip: "10.101.101.104"
>>> options: {csum="true"}
>>> [root@zu-ovn-controller0 ~]# tail -f 
>>> /var/log/openvswitch/ovsdb-server-nb.log
>>> 2019-08-28T09:12:31.190Z|00031|reconnect|INFO|tcp:10.101.101.102:6643: 
>>> connection attempt failed (No route to host)
>>> 2019-08-28T09:12:31.190Z|00032|reconnect|INFO|tcp:10.101.101.102:6643: 
>>> waiting 2 seconds before reconnect
>>> 2019-08-28T09:12:33.191Z|00033|reconnect|INFO|tcp:10.101.101.102:6643: 
>>> connecting...
>>> 2019-08-28T09:12:33.192Z|00034|reconnect|INFO|tcp:10.101.101.102:6643: 
>>> connection attempt failed (No route to host)
>>> 2019-08-28T09:12:33.192Z|00035|reconnect|INFO|tcp:10.101.101.102:6643: 
>>> waiting 4 seconds before reconnect
>>> 2019-08-28T09:12:37.192Z|00036|reconnect|INFO|tcp:10.101.101.102:6643: 
>>> connecting...
>>> 2019-08-28T09:12:37.192Z|00037|reconnect|INFO|tcp:10.101.101.102:6643: 
>>> connection attempt failed (No route to host)
>>> 2019-08-28T09:12:37.192Z|00038|reconnect|INFO|tcp:10.101.101.102:6643: 
>>> continuing to reconnect in the background but suppressing further logging
>>> 2019-08-28T09:22:52.597Z|00039|reconnect|INFO|tcp:10.101.101.101:6643: 
>>> connected
>>> 2019-08-28T09:23:01.279Z|00040|reconnect|INFO|tcp:10.101.101.102:6643: 
>>> connected
>>>
>>> I have tried using ovn-ctl to create the clustered databases, but the 
>>> problem is the same: it gets stuck when creating Neutron resources. I 
>>> think it is because ovn-northd runs on 3 nodes, but Neutron runs on only 
>>> a single controller.
>>
>>
>> I don't think that's the issue.
>> The issue seems to be that networking-ovn is not tested with connecting to 
>> clustered db.
>> Try passing just one remote to neutron server and see if it works.
>>
>>
>> Maybe you can ask this question on the OpenStack ML to get more attention.
>>
>> Numan
>>
>>>
>>> this is the step: http://paste.openstack.org/show/766470/
>>> Should I try those steps first, but check by passing the remote URLs to 
>>> the commands?
>>>
>>> Best Regards,
>>> Zufar Dhiyaulhaq
>>>
>>>
>>> On Wed, Aug 28, 2019 at 6:06 PM Numan Siddique  wrote:



 On Wed, Aug 28, 2019 at 4:04 PM Zufar Dhiyaulhaq 
  wrote:
>
> [ovn][clustered] Confusing to create ovsdb-server clustered databases
>
> Hi Everyone, I have successfully created OpenStack with OVN enabled. But 
> the problem comes when I try to cluster the ovsdb-server. My scenario is 
> trying to cluster the ovsdb-server databases but only using single 
> ovn-northd.
>
> My cluster:
> - controller0 : 10.100.100.100 / 10.101.101.100 (ovn-northd, 
> ovsdb-server, neutron server)
> - controller1 : 10.100.100.101 / 10.101.101.101 (ovsdb-server)
> - controller2 : 10.100.100.102 / 10.101.101.102 

Re: [ovs-discuss] [ovn][clustered] Confusing to create ovsdb-server clustered databases

2019-08-28 Thread Zufar Dhiyaulhaq
Hi Numan,

Yes, it's working. I think the networking-ovn plugin in OpenStack has some
bugs. Let me use a single IP first, or maybe I can use pacemaker to create
the VIP.

[root@zu-ovn-controller0 ~]# ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:
10.101.101.101:6641,tcp:10.101.101.102:6641  ls-add sw0
[root@zu-ovn-controller0 ~]# ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:
10.101.101.101:6641,tcp:10.101.101.102:6641  show
switch 5d3ea060-f92f-41ab-8143-6a6534bbba98 (sw0)
[root@zu-ovn-controller0 ~]#

[root@zu-ovn-controller1 ~]#  ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:
10.101.101.101:6641,tcp:10.101.101.102:6641  show
switch 5d3ea060-f92f-41ab-8143-6a6534bbba98 (sw0)

Thank you very much :)

Best Regards,
Zufar Dhiyaulhaq


On Wed, Aug 28, 2019 at 7:17 PM Numan Siddique  wrote:

>
>
> On Wed, Aug 28, 2019 at 4:45 PM Zufar Dhiyaulhaq 
> wrote:
>
>> Hi Numan,
>>
>> I have tried the command but output nothing.
>>
>> [root@zu-ovn-controller0 ~]# ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:
>> 10.101.101.101:6641,tcp:10.101.101.102:6641 show
>>
>
> These commands seem to work. Try creating a logical switch like -
> ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:
> 10.101.101.102:6641  ls-add sw0
> ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:
> 10.101.101.102:6641  show
>
>
> [root@zu-ovn-controller0 ~]# ovn-sbctl --db=tcp:10.101.101.100:6642,tcp:
>> 10.101.101.101:6642,tcp:10.101.101.102:6642 show
>> Chassis "1ee48dd1-d520-476d-82d3-3d4651132f47"
>> hostname: "zu-ovn-compute0"
>> Encap geneve
>> ip: "10.101.101.103"
>> options: {csum="true"}
>> Chassis "cd1a2535-522a-4571-8eac-8394681846a3"
>> hostname: "zu-ovn-compute2"
>> Encap geneve
>> ip: "10.101.101.105"
>> options: {csum="true"}
>> Chassis "a5b59592-f511-4a7a-b37d-93f933c35ea5"
>> hostname: "zu-ovn-compute1"
>> Encap geneve
>> ip: "10.101.101.104"
>> options: {csum="true"}
>> [root@zu-ovn-controller0 ~]# tail -f
>> /var/log/openvswitch/ovsdb-server-nb.log
>> 2019-08-28T09:12:31.190Z|00031|reconnect|INFO|tcp:10.101.101.102:6643:
>> connection attempt failed (No route to host)
>> 2019-08-28T09:12:31.190Z|00032|reconnect|INFO|tcp:10.101.101.102:6643:
>> waiting 2 seconds before reconnect
>> 2019-08-28T09:12:33.191Z|00033|reconnect|INFO|tcp:10.101.101.102:6643:
>> connecting...
>> 2019-08-28T09:12:33.192Z|00034|reconnect|INFO|tcp:10.101.101.102:6643:
>> connection attempt failed (No route to host)
>> 2019-08-28T09:12:33.192Z|00035|reconnect|INFO|tcp:10.101.101.102:6643:
>> waiting 4 seconds before reconnect
>> 2019-08-28T09:12:37.192Z|00036|reconnect|INFO|tcp:10.101.101.102:6643:
>> connecting...
>> 2019-08-28T09:12:37.192Z|00037|reconnect|INFO|tcp:10.101.101.102:6643:
>> connection attempt failed (No route to host)
>> 2019-08-28T09:12:37.192Z|00038|reconnect|INFO|tcp:10.101.101.102:6643:
>> continuing to reconnect in the background but suppressing further logging
>> 2019-08-28T09:22:52.597Z|00039|reconnect|INFO|tcp:10.101.101.101:6643:
>> connected
>>
>> 2019-08-28T09:23:01.279Z|00040|reconnect|INFO|tcp:10.101.101.102:6643:
>> connected
>>
>> I have tried ovn-ctl to create the clustered databases, but the problem
>> is the same: it gets stuck when creating neutron resources. I think it is
>> because ovn-northd runs on 3 nodes, but neutron runs on only a single
>> controller.
>>
>
> I don't think that's the issue.
> The issue seems to be that networking-ovn has not been tested against a
> clustered db.
> Try passing just one remote to the neutron server and see if it works.
>
>
> Maybe you can ask this question on the OpenStack ML to get more attention.
>
> Numan
>
>
>> These are the steps: http://paste.openstack.org/show/766470/
>> Should I try these steps first, but verify by passing the remote URLs on
>> the command line?
>>
>> Best Regards,
>> Zufar Dhiyaulhaq
>>
>>
>> On Wed, Aug 28, 2019 at 6:06 PM Numan Siddique 
>> wrote:
>>
>>>
>>>
>>> On Wed, Aug 28, 2019 at 4:04 PM Zufar Dhiyaulhaq <
>>> zu...@onf-ambassador.org> wrote:
>>>
 [ovn][clustered] Confusing to create ovsdb-server clustered databases

 Hi Everyone, I have successfully created OpenStack with OVN enabled.
 But the problem comes when I try to cluster the ovsdb-server. My scenario
 is to cluster the ovsdb-server databases while using only a single
 ovn-northd.

 My cluster:
 - controller0 : 10.100.100.100 / 10.101.101.100 (ovn-northd,
 ovsdb-server, neutron server)
 - controller1 : 10.100.100.101 / 10.101.101.101 (ovsdb-server)
 - controller2 : 10.100.100.102 / 10.101.101.102 (ovsdb-server)
 - compute1 : 10.100.100.103 / 10.101.101.103
 - compute2 : 10.100.100.104 / 10.101.101.104
 - compute3 : 10.100.100.105 / 10.101.101.105

 10.100.100.0/24 : Management Network
 10.101.101.0/24 : Data Network

 I have installed OpenStack using the manual method. Below are the
 steps to create the ovsdb-server and neutron services.

Re: [ovs-discuss] [ovn][clustered] Confusing to create ovsdb-server clustered databases

2019-08-28 Thread Numan Siddique
On Wed, Aug 28, 2019 at 4:45 PM Zufar Dhiyaulhaq 
wrote:

> Hi Numan,
>
> I have tried the command, but it outputs nothing.
>
> [root@zu-ovn-controller0 ~]# ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:
> 10.101.101.101:6641,tcp:10.101.101.102:6641 show
>

These commands seem to work. Try creating a logical switch like -
ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:
10.101.101.102:6641  ls-add sw0
ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:
10.101.101.102:6641  show


[root@zu-ovn-controller0 ~]# ovn-sbctl --db=tcp:10.101.101.100:6642,tcp:
> 10.101.101.101:6642,tcp:10.101.101.102:6642 show
> Chassis "1ee48dd1-d520-476d-82d3-3d4651132f47"
> hostname: "zu-ovn-compute0"
> Encap geneve
> ip: "10.101.101.103"
> options: {csum="true"}
> Chassis "cd1a2535-522a-4571-8eac-8394681846a3"
> hostname: "zu-ovn-compute2"
> Encap geneve
> ip: "10.101.101.105"
> options: {csum="true"}
> Chassis "a5b59592-f511-4a7a-b37d-93f933c35ea5"
> hostname: "zu-ovn-compute1"
> Encap geneve
> ip: "10.101.101.104"
> options: {csum="true"}
> [root@zu-ovn-controller0 ~]# tail -f
> /var/log/openvswitch/ovsdb-server-nb.log
> 2019-08-28T09:12:31.190Z|00031|reconnect|INFO|tcp:10.101.101.102:6643:
> connection attempt failed (No route to host)
> 2019-08-28T09:12:31.190Z|00032|reconnect|INFO|tcp:10.101.101.102:6643:
> waiting 2 seconds before reconnect
> 2019-08-28T09:12:33.191Z|00033|reconnect|INFO|tcp:10.101.101.102:6643:
> connecting...
> 2019-08-28T09:12:33.192Z|00034|reconnect|INFO|tcp:10.101.101.102:6643:
> connection attempt failed (No route to host)
> 2019-08-28T09:12:33.192Z|00035|reconnect|INFO|tcp:10.101.101.102:6643:
> waiting 4 seconds before reconnect
> 2019-08-28T09:12:37.192Z|00036|reconnect|INFO|tcp:10.101.101.102:6643:
> connecting...
> 2019-08-28T09:12:37.192Z|00037|reconnect|INFO|tcp:10.101.101.102:6643:
> connection attempt failed (No route to host)
> 2019-08-28T09:12:37.192Z|00038|reconnect|INFO|tcp:10.101.101.102:6643:
> continuing to reconnect in the background but suppressing further logging
> 2019-08-28T09:22:52.597Z|00039|reconnect|INFO|tcp:10.101.101.101:6643:
> connected
>
> 2019-08-28T09:23:01.279Z|00040|reconnect|INFO|tcp:10.101.101.102:6643:
> connected
>
> I have tried ovn-ctl to create the clustered databases, but the problem
> is the same: it gets stuck when creating neutron resources. I think it is
> because ovn-northd runs on 3 nodes, but neutron runs on only a single
> controller.
>

I don't think that's the issue.
The issue seems to be that networking-ovn has not been tested against a
clustered db.
Try passing just one remote to the neutron server and see if it works.
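That suggestion can be sketched as a configuration fragment (hypothetical file path; the [ovn] option names follow the networking-ovn convention, and the addresses are the ones used in this thread):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (fragment)
# Point neutron at a single NB/SB remote first, to rule out client-side
# multi-remote handling as the cause of the hang.
[ovn]
ovn_nb_connection = tcp:10.101.101.100:6641
ovn_sb_connection = tcp:10.101.101.100:6642
```

If resource creation works with a single remote, the problem is in multi-remote handling on the client side rather than in the cluster itself.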


Maybe you can ask this question on the OpenStack ML to get more attention.

Numan


> These are the steps: http://paste.openstack.org/show/766470/
> Should I try these steps first, but verify by passing the remote URLs on
> the command line?
>
> Best Regards,
> Zufar Dhiyaulhaq
>
>
> On Wed, Aug 28, 2019 at 6:06 PM Numan Siddique 
> wrote:
>
>>
>>
>> On Wed, Aug 28, 2019 at 4:04 PM Zufar Dhiyaulhaq <
>> zu...@onf-ambassador.org> wrote:
>>
>>> [ovn][clustered] Confusing to create ovsdb-server clustered databases
>>>
>>> Hi Everyone, I have successfully created OpenStack with OVN enabled. But
>>> the problem comes when I try to cluster the ovsdb-server. My scenario is
>>> to cluster the ovsdb-server databases while using only a single
>>> ovn-northd.
>>>
>>> My cluster:
>>> - controller0 : 10.100.100.100 / 10.101.101.100 (ovn-northd,
>>> ovsdb-server, neutron server)
>>> - controller1 : 10.100.100.101 / 10.101.101.101 (ovsdb-server)
>>> - controller2 : 10.100.100.102 / 10.101.101.102 (ovsdb-server)
>>> - compute1 : 10.100.100.103 / 10.101.101.103
>>> - compute2 : 10.100.100.104 / 10.101.101.104
>>> - compute3 : 10.100.100.105 / 10.101.101.105
>>>
>>> 10.100.100.0/24 : Management Network
>>> 10.101.101.0/24 : Data Network
>>>
>>> I have installed OpenStack using the manual method. Below are the
>>> steps to create the ovsdb-server and neutron services.
>>> - step 1: bootstrapping ovsdb-server cluster :
>>> http://paste.openstack.org/show/766463/
>>> - step 2: creating neutron service in controller :
>>> http://paste.openstack.org/show/766464/
>>> - step 3: creating neutron service in compute :
>>> http://paste.openstack.org/show/766465/
>>>
>>> But when I try to create a neutron resource, it always hangs (only neutron
>>> resources).
>>>
>>> This is the full logs of all nodes, contain:
>>> http://paste.openstack.org/show/766461/
>>> - all openvswitch logs
>>> - port (via netstat)
>>> - step: bootstrapping ovsdb-server
>>>
>>> Neutron logs in controller0: paste.openstack.org/show/766462/
>>>
>>> Does anyone know why it gets stuck? Are my steps wrong?
>>>
>>>
>> Hi Zufar,
>>
>> Have you tried connecting to the clustered dbs using ovn-nbctl?
>> Does it work fine when you pass the remotes, i.e.
>> $ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:
>> 

Re: [ovs-discuss] [ovn][clustered] Confusing to create ovsdb-server clustered databases

2019-08-28 Thread Zufar Dhiyaulhaq
Hi Numan,

Sorry for the double reply (the earlier one was not CC'd to ovs-discuss).

I am trying to build a cluster following the documentation in the ovn-ctl
manual page, but ovn-nbctl outputs nothing. Ports 6641,
6642, 6643, and 6644 are listening.
Below the logs:

[root@zu-ovn-controller0 ~]# /usr/share/openvswitch/scripts/ovn-ctl
start_northd $OVN_NORTHD_OPTS --db-nb-addr=10.101.101.100
--db-nb-create-insecure-remote=yes --db-sb-addr=10.101.101.100
--db-sb-create-insecure-remote=yes
--db-nb-cluster-local-addr=10.101.101.100
--db-sb-cluster-local-addr=10.101.101.100 --ovn-northd-nb-db=tcp:
10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641
--ovn-northd-sb-db=tcp:10.101.101.100:6642,tcp:10.101.101.101:6642,tcp:
10.101.101.102:6642
Creating cluster database /etc/openvswitch/ovnnb_db.db [  OK  ]
Waiting for OVN_Northbound to come up
2019-08-28T11:28:50Z|1|reconnect|INFO|unix:/var/run/openvswitch/ovnnb_db.sock:
connecting...
2019-08-28T11:28:50Z|2|reconnect|INFO|unix:/var/run/openvswitch/ovnnb_db.sock:
connected
   [  OK  ]
Creating cluster database /etc/openvswitch/ovnsb_db.db [  OK  ]
Waiting for OVN_Southbound to come up
2019-08-28T11:28:51Z|1|reconnect|INFO|unix:/var/run/openvswitch/ovnsb_db.sock:
connecting...
2019-08-28T11:28:51Z|2|reconnect|INFO|unix:/var/run/openvswitch/ovnsb_db.sock:
connected
   [  OK  ]
Starting ovn-northd[  OK  ]
[root@zu-ovn-controller0 ~]# ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:
10.101.101.101:6641,tcp:10.101.101.102:6641 show
[root@zu-ovn-controller0 ~]# ovn-sbctl --db=tcp:10.101.101.100:6642,tcp:
10.101.101.101:6642,tcp:10.101.101.102:6642 show
Chassis "a5b59592-f511-4a7a-b37d-93f933c35ea5"
hostname: "zu-ovn-compute1"
Encap geneve
ip: "10.101.101.104"
options: {csum="true"}
Chassis "cd1a2535-522a-4571-8eac-8394681846a3"
hostname: "zu-ovn-compute2"
Encap geneve
ip: "10.101.101.105"
options: {csum="true"}
Chassis "1ee48dd1-d520-476d-82d3-3d4651132f47"
hostname: "zu-ovn-compute0"
Encap geneve
ip: "10.101.101.103"
options: {csum="true"}

[root@zu-ovn-controller1 ~]# /usr/share/openvswitch/scripts/ovn-ctl
start_northd $OVN_NORTHD_OPTS --db-nb-addr=10.101.101.101
--db-nb-create-insecure-remote=yes --db-sb-addr=10.101.101.101
--db-sb-create-insecure-remote=yes
--db-nb-cluster-local-addr=10.101.101.101
--db-sb-cluster-local-addr=10.101.101.101
--db-nb-cluster-remote-addr=10.101.101.100
--db-sb-cluster-remote-addr=10.101.101.100 --ovn-northd-nb-db=tcp:
10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641
--ovn-northd-sb-db=tcp:10.101.101.100:6642,tcp:10.101.101.101:6642,tcp:
10.101.101.102:6642
Waiting for OVN_Northbound to come up
2019-08-28T11:29:03Z|1|reconnect|INFO|unix:/var/run/openvswitch/ovnnb_db.sock:
connecting...
2019-08-28T11:29:03Z|2|reconnect|INFO|unix:/var/run/openvswitch/ovnnb_db.sock:
connected
   [  OK  ]
Waiting for OVN_Southbound to come up
2019-08-28T11:29:03Z|1|reconnect|INFO|unix:/var/run/openvswitch/ovnsb_db.sock:
connecting...
2019-08-28T11:29:03Z|2|reconnect|INFO|unix:/var/run/openvswitch/ovnsb_db.sock:
connected
   [  OK  ]
Starting ovn-northd[  OK  ]

[root@zu-ovn-controller2 ~]# /usr/share/openvswitch/scripts/ovn-ctl
start_northd $OVN_NORTHD_OPTS --db-nb-addr=10.101.101.102
--db-nb-create-insecure-remote=yes
--db-nb-cluster-local-addr=10.101.101.102 --db-sb-addr=10.101.101.102
--db-sb-create-insecure-remote=yes
--db-sb-cluster-local-addr=10.101.101.102
--db-nb-cluster-remote-addr=10.101.101.100
--db-sb-cluster-remote-addr=10.101.101.100 --ovn-northd-nb-db=tcp:
10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641
--ovn-northd-sb-db=tcp:10.101.101.100:6642,tcp:10.101.101.101:6642,tcp:
10.101.101.102:6642
Waiting for OVN_Northbound to come up
2019-08-28T11:29:13Z|1|reconnect|INFO|unix:/var/run/openvswitch/ovnnb_db.sock:
connecting...
2019-08-28T11:29:13Z|2|reconnect|INFO|unix:/var/run/openvswitch/ovnnb_db.sock:
connected
   [  OK  ]
Waiting for OVN_Southbound to come up
2019-08-28T11:29:14Z|1|reconnect|INFO|unix:/var/run/openvswitch/ovnsb_db.sock:
connecting...
2019-08-28T11:29:14Z|2|reconnect|INFO|unix:/var/run/openvswitch/ovnsb_db.sock:
connected
   [  OK  ]
Starting ovn-northd[  OK  ]

Best Regards,
Zufar Dhiyaulhaq


On Wed, Aug 28, 2019 at 6:14 PM Zufar Dhiyaulhaq 
wrote:

> Hi Numan,
>
> I have tried the command, but it outputs nothing.
>
> [root@zu-ovn-controller0 ~]# ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:
> 

Re: [ovs-discuss] [ovn][clustered] Confusing to create ovsdb-server clustered databases

2019-08-28 Thread Zufar Dhiyaulhaq
Hi Numan,

I have tried the command, but it outputs nothing.

[root@zu-ovn-controller0 ~]# ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:
10.101.101.101:6641,tcp:10.101.101.102:6641 show
[root@zu-ovn-controller0 ~]# ovn-sbctl --db=tcp:10.101.101.100:6642,tcp:
10.101.101.101:6642,tcp:10.101.101.102:6642 show
Chassis "1ee48dd1-d520-476d-82d3-3d4651132f47"
hostname: "zu-ovn-compute0"
Encap geneve
ip: "10.101.101.103"
options: {csum="true"}
Chassis "cd1a2535-522a-4571-8eac-8394681846a3"
hostname: "zu-ovn-compute2"
Encap geneve
ip: "10.101.101.105"
options: {csum="true"}
Chassis "a5b59592-f511-4a7a-b37d-93f933c35ea5"
hostname: "zu-ovn-compute1"
Encap geneve
ip: "10.101.101.104"
options: {csum="true"}
[root@zu-ovn-controller0 ~]# tail -f
/var/log/openvswitch/ovsdb-server-nb.log
2019-08-28T09:12:31.190Z|00031|reconnect|INFO|tcp:10.101.101.102:6643:
connection attempt failed (No route to host)
2019-08-28T09:12:31.190Z|00032|reconnect|INFO|tcp:10.101.101.102:6643:
waiting 2 seconds before reconnect
2019-08-28T09:12:33.191Z|00033|reconnect|INFO|tcp:10.101.101.102:6643:
connecting...
2019-08-28T09:12:33.192Z|00034|reconnect|INFO|tcp:10.101.101.102:6643:
connection attempt failed (No route to host)
2019-08-28T09:12:33.192Z|00035|reconnect|INFO|tcp:10.101.101.102:6643:
waiting 4 seconds before reconnect
2019-08-28T09:12:37.192Z|00036|reconnect|INFO|tcp:10.101.101.102:6643:
connecting...
2019-08-28T09:12:37.192Z|00037|reconnect|INFO|tcp:10.101.101.102:6643:
connection attempt failed (No route to host)
2019-08-28T09:12:37.192Z|00038|reconnect|INFO|tcp:10.101.101.102:6643:
continuing to reconnect in the background but suppressing further logging
2019-08-28T09:22:52.597Z|00039|reconnect|INFO|tcp:10.101.101.101:6643:
connected

2019-08-28T09:23:01.279Z|00040|reconnect|INFO|tcp:10.101.101.102:6643:
connected

I have tried ovn-ctl to create the clustered databases, but the problem
is the same: it gets stuck when creating neutron resources. I think it is
because ovn-northd runs on 3 nodes, but neutron runs on only a single
controller.

These are the steps: http://paste.openstack.org/show/766470/
Should I try these steps first, but verify by passing the remote URLs on
the command line?

Best Regards,
Zufar Dhiyaulhaq


On Wed, Aug 28, 2019 at 6:06 PM Numan Siddique  wrote:

>
>
> On Wed, Aug 28, 2019 at 4:04 PM Zufar Dhiyaulhaq 
> wrote:
>
>> [ovn][clustered] Confusing to create ovsdb-server clustered databases
>>
>> Hi Everyone, I have successfully created OpenStack with OVN enabled. But
>> the problem comes when I try to cluster the ovsdb-server. My scenario is
>> to cluster the ovsdb-server databases while using only a single
>> ovn-northd.
>>
>> My cluster:
>> - controller0 : 10.100.100.100 / 10.101.101.100 (ovn-northd,
>> ovsdb-server, neutron server)
>> - controller1 : 10.100.100.101 / 10.101.101.101 (ovsdb-server)
>> - controller2 : 10.100.100.102 / 10.101.101.102 (ovsdb-server)
>> - compute1 : 10.100.100.103 / 10.101.101.103
>> - compute2 : 10.100.100.104 / 10.101.101.104
>> - compute3 : 10.100.100.105 / 10.101.101.105
>>
>> 10.100.100.0/24 : Management Network
>> 10.101.101.0/24 : Data Network
>>
>> I have installed OpenStack using the manual method. Below are the steps
>> to create the ovsdb-server and neutron services.
>> - step 1: bootstrapping ovsdb-server cluster :
>> http://paste.openstack.org/show/766463/
>> - step 2: creating neutron service in controller :
>> http://paste.openstack.org/show/766464/
>> - step 3: creating neutron service in compute :
>> http://paste.openstack.org/show/766465/
>>
>> But when I try to create a neutron resource, it always hangs (only neutron
>> resources).
>>
>> This is the full logs of all nodes, contain:
>> http://paste.openstack.org/show/766461/
>> - all openvswitch logs
>> - port (via netstat)
>> - step: bootstrapping ovsdb-server
>>
>> Neutron logs in controller0: paste.openstack.org/show/766462/
>>
>> Does anyone know why it gets stuck? Are my steps wrong?
>>
>>
> Hi Zufar,
>
> Have you tried connecting to the clustered dbs using ovn-nbctl?
> Does it work fine when you pass the remotes, i.e.
> $ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:
> 10.101.101.102:6641 show
>
> If so, then it seems to me you have configured the clustered db properly.
>
> Also please take a look at - "man ovn-ctl" and grep for "Creating a
> clustered db on 3 nodes".
>
> Maybe the python IDL code or the ovsdbapp library that networking-ovn uses
> doesn't support connecting with multiple remotes?
>
> I remember adding support in the python IDL to accept multiple remotes
> [1], but I am not sure what the status is now.
>
> @Daniel - Do you have any comments/pointers ?
>
> [1] -
> https://github.com/openvswitch/ovs/commit/31e434fc985c682708f5d92bde2ceae452bdaa4f
>
> https://github.com/openvswitch/ovs/commit/257edb1ae07c150023575dfb38ea8e539ad713de
>
> Thanks
> Numan
>
> Thank you.
>>
>> Best Regards,
>> Zufar Dhiyaulhaq
>> 

Re: [ovs-discuss] [ovn][clustered] Confusing to create ovsdb-server clustered databases

2019-08-28 Thread Numan Siddique
On Wed, Aug 28, 2019 at 4:04 PM Zufar Dhiyaulhaq 
wrote:

> [ovn][clustered] Confusing to create ovsdb-server clustered databases
>
> Hi Everyone, I have successfully created OpenStack with OVN enabled. But
> the problem comes when I try to cluster the ovsdb-server. My scenario is
> to cluster the ovsdb-server databases while using only a single
> ovn-northd.
>
> My cluster:
> - controller0 : 10.100.100.100 / 10.101.101.100 (ovn-northd, ovsdb-server,
> neutron server)
> - controller1 : 10.100.100.101 / 10.101.101.101 (ovsdb-server)
> - controller2 : 10.100.100.102 / 10.101.101.102 (ovsdb-server)
> - compute1 : 10.100.100.103 / 10.101.101.103
> - compute2 : 10.100.100.104 / 10.101.101.104
> - compute3 : 10.100.100.105 / 10.101.101.105
>
> 10.100.100.0/24 : Management Network
> 10.101.101.0/24 : Data Network
>
> I have installed OpenStack using the manual method. Below are the steps
> to create the ovsdb-server and neutron services.
> - step 1: bootstrapping ovsdb-server cluster :
> http://paste.openstack.org/show/766463/
> - step 2: creating neutron service in controller :
> http://paste.openstack.org/show/766464/
> - step 3: creating neutron service in compute :
> http://paste.openstack.org/show/766465/
>
> But when I try to create a neutron resource, it always hangs (only neutron
> resources).
>
> This is the full logs of all nodes, contain:
> http://paste.openstack.org/show/766461/
> - all openvswitch logs
> - port (via netstat)
> - step: bootstrapping ovsdb-server
>
> Neutron logs in controller0: paste.openstack.org/show/766462/
>
> Does anyone know why it gets stuck? Are my steps wrong?
>
>
Hi Zufar,

Have you tried connecting to the clustered dbs using ovn-nbctl?
Does it work fine when you pass the remotes, i.e.
$ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:
10.101.101.102:6641 show

If so, then it seems to me you have configured the clustered db properly.

Also please take a look at - "man ovn-ctl" and grep for "Creating a
clustered db on 3 nodes".

Maybe the python IDL code or the ovsdbapp library that networking-ovn uses
doesn't support connecting with multiple remotes?

I remember adding support in the python IDL to accept multiple remotes
[1], but I am not sure what the status is now.

@Daniel - Do you have any comments/pointers ?

[1] -
https://github.com/openvswitch/ovs/commit/31e434fc985c682708f5d92bde2ceae452bdaa4f

https://github.com/openvswitch/ovs/commit/257edb1ae07c150023575dfb38ea8e539ad713de

Thanks
Numan

Thank you.
>
> Best Regards,
> Zufar Dhiyaulhaq
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
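The checks discussed in this thread can be sketched as follows; these are hypothetical invocations that assume a running 3-node cluster started via ovn-ctl with the default control-socket paths, so treat them as a starting point rather than a recipe:

```shell
# Ask each ovsdb-server for its view of the Raft cluster (role, term,
# which servers are connected); run on any of the three controllers.
ovs-appctl -t /var/run/openvswitch/ovnnb_db.ctl cluster/status OVN_Northbound
ovs-appctl -t /var/run/openvswitch/ovnsb_db.ctl cluster/status OVN_Southbound

# Then exercise the cluster through a client given all three remotes;
# the client-side IDL is expected to fail over between them.
ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641 ls-add sw-test
ovn-nbctl --db=tcp:10.101.101.100:6641,tcp:10.101.101.101:6641,tcp:10.101.101.102:6641 show
```

If cluster/status shows a leader and all three servers connected but ls-add still hangs, the problem is likely on the client side rather than in the cluster.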


[ovs-discuss] [ovn][clustered] Confusing to create ovsdb-server clustered databases

2019-08-28 Thread Zufar Dhiyaulhaq
[ovn][clustered] Confusing to create ovsdb-server clustered databases

Hi Everyone, I have successfully created OpenStack with OVN enabled. But
the problem comes when I try to cluster the ovsdb-server. My scenario is
to cluster the ovsdb-server databases while using only a single
ovn-northd.

My cluster:
- controller0 : 10.100.100.100 / 10.101.101.100 (ovn-northd, ovsdb-server,
neutron server)
- controller1 : 10.100.100.101 / 10.101.101.101 (ovsdb-server)
- controller2 : 10.100.100.102 / 10.101.101.102 (ovsdb-server)
- compute1 : 10.100.100.103 / 10.101.101.103
- compute2 : 10.100.100.104 / 10.101.101.104
- compute3 : 10.100.100.105 / 10.101.101.105

10.100.100.0/24 : Management Network
10.101.101.0/24 : Data Network

I have installed OpenStack using the manual method. Below are the steps
to create the ovsdb-server and neutron services.
- step 1: bootstrapping ovsdb-server cluster :
http://paste.openstack.org/show/766463/
- step 2: creating neutron service in controller :
http://paste.openstack.org/show/766464/
- step 3: creating neutron service in compute :
http://paste.openstack.org/show/766465/

But when I try to create a neutron resource, it always hangs (only neutron
resources).

This is the full logs of all nodes, contain:
http://paste.openstack.org/show/766461/
- all openvswitch logs
- port (via netstat)
- step: bootstrapping ovsdb-server

Neutron logs in controller0: paste.openstack.org/show/766462/

Does anyone know why it gets stuck? Are my steps wrong?

Thank you.

Best Regards,
Zufar Dhiyaulhaq
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] limit packet in message number by port, table and bridge

2019-08-28 Thread kai xi fan
There is currently no limit on the number of packet-in messages sent to the
controller.
We have faced a lot of problems with this when a user is doing IP scanning:
OVS sends a packet-in message to the controller after every table-lookup miss.
I think it would be better to adopt the design of hardware switch ASICs: OVS
could limit the number of packet-in messages by input port, table, and bridge.
If this is acceptable, I will try to implement it.

Thanks. Regards
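For context, the mechanism OVS already ships (the "Controller Rate Limiting" section of ovs-vswitchd.conf.db(5), configured via the Controller table's controller-rate-limit and controller-burst-limit columns) is token-bucket policing of packet-in messages. Below is a toy, self-contained sketch of that policing idea; it is not OVS code, and the arrival times and limits are made up for illustration:

```shell
#!/bin/sh
# Token-bucket policer sketch: `rate` tokens/second refill, capacity `burst`.
rate=10          # allowed packet-in messages per second
burst=5          # bucket capacity (packets)
tokens=$burst    # start with a full bucket
last=0 accepted=0 dropped=0

# Arrival times (ms) of packet-in candidates: a burst of 7, a pause, then 3.
for t_ms in 0 50 100 150 200 250 300 1300 1350 1400; do
  elapsed=$((t_ms - last)); last=$t_ms
  tokens=$((tokens + rate * elapsed / 1000))     # refill for elapsed time
  if [ "$tokens" -gt "$burst" ]; then tokens=$burst; fi  # cap at bucket size
  if [ "$tokens" -ge 1 ]; then
    tokens=$((tokens - 1)); accepted=$((accepted + 1))   # forward to controller
  else
    dropped=$((dropped + 1))                             # policed: dropped
  fi
done
echo "accepted=$accepted dropped=$dropped"   # prints: accepted=8 dropped=2
```

The integer division discards fractional refills between closely spaced arrivals; the real implementation is more careful, but the burst-then-starve behaviour is the same idea.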
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] Query on DEC_ttl action implementation in datapath

2019-08-28 Thread bindiya Kurle
Hi Justin,

Thanks for the clarification. I agree with your point, but consider a
use case where the routing decision is based only on the destination IP. As
per the current implementation, OVS will add 2 flows (in the datapath) for
packets coming from different sources whose TTLs happen to differ.
This reduces the number of flows that OVS can support. If the TTL
decrement were done in the kernel, it would have ended up adding one flow
only.

Regards,
Bindiya

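The flow-count argument above can be made concrete with a toy tally; this is a hypothetical sketch (shell arithmetic, not OVS code) of why a datapath key that matches the TTL fragments the megaflow cache, while a relative decrement would not:

```shell
#!/bin/sh
# TTLs of packets arriving on in_port(2); 64 and 9 repeat.
ttls="64 63 9 64 32 9"

# set(ipv4(ttl=N)) case: the datapath flow key matches the TTL, so the
# cache ends up with one megaflow per distinct TTL value.
set_flows=$(printf '%s\n' $ttls | sort -u | wc -l | tr -d ' ')

# dec_ttl case: the TTL could stay wildcarded, so one flow covers them all.
dec_flows=1

echo "set(ipv4(ttl=N)) flows: $set_flows"   # one per distinct TTL (here: 4)
echo "dec_ttl flows: $dec_flows"
```

With the six sample packets, the TTL-matching cache holds four flows (TTLs 64, 63, 32, 9) against a single wildcarded flow, which is exactly the scaling concern raised above.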

On Tue, Aug 27, 2019 at 9:48 PM Justin Pettit  wrote:

> I think it was considered cleaner from an ABI perspective: it
> doesn't require a new action, since "set" was already supported.  In
> practice, I don't think it's a problem, because usually a TTL decrement is
> associated with a routing decision, and TTLs tend to be fairly static
> between two hosts.
>
> --Justin
>
>
> > On Aug 27, 2019, at 1:11 AM, bindiya Kurle 
> wrote:
> >
> > Hi,
> > I have a question related to the dec_ttl action as implemented in the datapath.
> > When the dec_ttl action is configured in OVS, the following flow gets added in the
> datapath.
> >
> > recirc_id(0),in_port(2),eth(),eth_type(0x0800),ipv4(ttl=64,frag=no),
> packets:3, bytes:294, used:0.068s, actions:set(ipv4(ttl=63)),3,
> >
> > If a packet comes with a different TTL on the same port, then one more flow gets
> added in the datapath.
> > For example:
> > recirc_id(0),in_port(2),eth(),eth_type(0x0800),ipv4(ttl=9,frag=no),
> packets:3, bytes:294, used:0.068s, actions:set(ipv4(ttl=8)),3,
> >
> > Could someone please explain why dec_ttl is implemented as a set action
> rather than a dec_ttl action.
> >
> >
> > I mean, why does each different TTL get one more rule added, rather than just
> adding it as follows, as done in userspace:
> >
> > recirc_id(0),in_port(3),eth(),eth_type(0x0800),ipv4(frag=no), packets:3,
> bytes:294, used:0.737s, actions:dec_ttl,2
> >
> > Regards,
> > Bindiya
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss