Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-07 Thread Brendan Doyle


OK, so an update, and thanks for the help to date. These steps get my OVN
cluster up and running:


kubectl create -f ovn-setup.yaml
kubectl apply -f ovnkube-db-raft.yaml

However, I see warning messages like:

2020-07-07T15:15:12Z|1|ovsdb_idl|WARN|OVN_Northbound database lacks 
Forwarding_Group table (database needs upgrade?)


I suspect this is because the OVN docker image is not the same as the OVN bits
I'm running on my chassis, or is not the latest.
I guess now it's a question of figuring out how to use my own OVN bits
and not those that are in

http://docker.io/ovnkube/ovn-daemonset:latest
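If it helps, a quick way to confirm such a mismatch is to compare the schema version the running NB ovsdb-server serves against the one the chassis-side tools were built with. A minimal sketch (the endpoint and version numbers are hypothetical; on a live setup the values would come from `ovsdb-client get-schema-version` and the `DB Schema` line of `ovn-nbctl --version`):

```shell
# Pure comparison helper so the check can be scripted.
schema_mismatch() {
    # $1 = server schema version, $2 = client tools schema version
    if [ "$1" != "$2" ]; then
        echo "schema mismatch: server $1 vs tools $2"
    fi
}

# On a live cluster, feed it real values (hypothetical endpoint):
#   server=$(ovsdb-client get-schema-version tcp:172.16.1.10:6641 OVN_Northbound)
#   client=$(ovn-nbctl --version | sed -n 's/^DB Schema //p')
schema_mismatch 5.16.0 5.24.0
```

If the two versions differ, the "database needs upgrade?" warning above is expected.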

Thanks

Brendan

On 07/07/2020 11:33, Brendan Doyle wrote:



On 06/07/2020 21:29, Girish Moodalbail wrote:

Hello Brendan,

After you run the `./daemonset.sh` script, there will be two DB
related yaml files in the `dist/yaml` folder. The ovnkube-db.yaml brings
up standalone OVN DBs, whilst the ovnkube-db-raft.yaml brings up the OVN
Clustered DBs. Please do `kubectl apply -f
$HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db-raft.yaml`.


Humm, "kubectl apply -f" or "kubectl create -f" as per the
https://github.com/ovn-org/ovn-kubernetes/ instructions? And what
needs to be run before that?

kubectl create -f ovn-setup.yaml
kubectl apply -f ovnkube-db-raft.yaml

What about ovnkube-master.yaml, I think not?



Furthermore, if you read that YAML file, the node selector is set to
nodes with the label `k8s.ovn.org/ovnkube-db=true`. So, you will need to label
at least 3 nodes with that label.
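For example, the labeling could be sketched like this (node names are hypothetical; the helper just prints the commands so they can be reviewed before piping to `sh`):

```shell
# Print the kubectl commands that label nodes to match the ovnkube-db
# StatefulSet's node selector (k8s.ovn.org/ovnkube-db=true).
label_db_nodes() {
    for node in "$@"; do
        echo "kubectl label node $node k8s.ovn.org/ovnkube-db=true"
    done
}

# Review the output, then apply with:  label_db_nodes node1 node2 node3 | sh
label_db_nodes node1 node2 node3
```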


Yes, it would be good to have that in a README.

Thanks I will try again with these.




HTH

Regards,
~Girish

On Mon, Jul 6, 2020 at 8:37 AM Brendan Doyle wrote:


So I've tried the steps in

https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d:

cd $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/images
./daemonset.sh --image=docker.io/ovnkube/ovn-daemonset-u:latest \
    --net-cidr=192.168.0.0/16 --svc-cidr=172.16.1.0/24 \
    --gateway-mode="local" \
    --k8s-apiserver=https://$MASTER_IP:6443


# Create OVN namespace, service accounts, ovnkube-db headless
# service, configmap, and policies
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovn-setup.yaml

# Run ovnkube-db deployment.
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db.yaml

# Run ovnkube-master deployment.
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-master.yaml

# Run ovnkube daemonset for nodes
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-node.yaml

And I see nothing in my k8s cluster; it seems like it does nothing:

[root@ca-rain01 yaml]# kubectl create -f ovnkube-master.yaml
deployment "ovnkube-master" created
[root@ca-rain01 yaml]# kubectl delete deployment ovnkube-master
Error from server (NotFound): deployments.extensions "ovnkube-master" not found

Has anybody got this working, or used any other means to deploy an
OVN cluster as a K8s StatefulSet?

Brendan
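A hunch worth checking here (an assumption on my part: ovn-setup.yaml creates an `ovn-kubernetes` namespace and the other manifests deploy into it, not into `default`): the `delete` above may report NotFound simply because no `-n` flag was given. A sketch of the namespaced commands, printed for review:

```shell
# Namespace created by ovn-setup.yaml (assumption); the other manifests
# deploy into it, so the default namespace looks empty.
ns=ovn-kubernetes

# Printed for review; run them as-is against your cluster.
echo "kubectl get deployments -n $ns"
echo "kubectl delete deployment ovnkube-master -n $ns"
```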



On 06/07/2020 12:33, Brendan Doyle wrote:

Hi,

So I'm really confused by what you have pointed me to here. As
stated, I do NOT
want to use OVN as a CNI. I have a k8s cluster that uses flannel
as the CNI. I simply
want to create an OVN "central" cluster as a StatefulSet in my
*existing* k8s
config.

This repo:

https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d

Seems to be for setting up a k8s cluster to use OVN as the CNI??
Have you tried this?
What IP do the ovn-controllers use to reach the OVN "central" cluster?
It seems to use an OVN docker image from docker.io; I want to use my own OVN src.
Do I use/modify the dist/images/Dockerfile in this repo? That
has loads of references to CNI;
like I said, I don't want to use OVN as the CNI??


The instructions here

https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst
seem more promising, if a little confusing:

In the section "Starting OVN Central services in containers"


Export following variables in .env and place it under project root:

$ OVN_BRANCH=
$ OVN_VERSION=
$ DISTRO=
   

Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-07 Thread Brendan Doyle



On 06/07/2020 21:13, aginwala wrote:

Hi:

Adding the ML too. Folks from k8s can comment on the same to see if
the ovn-k8s repo needs an update in the documentation for you to get the
setup working when using their specs as is, without any code changes, in
addition to using your own custom ovn images, etc. I am getting mail
failures when adding the ovn-k8s google group, as I think I don't have
permission to post there. Also, the yaml specs and raft scripts have
good comments which can give you a clear idea too.


I'm not sure they do; it seems like a README would be good. What I'm
inferring is that you can use different combinations of
yamls to achieve different things.

1) Configure an OVN CNI
create -f ovn-setup.yaml
create -f ovnkube-db.yaml
create -f ovnkube-master.yaml
create -f ovnkube-node.yaml

2) Create and start a k8s raft-clustered OVN central
create -f ovn-setup.yaml
create -f ovnkube-db-raft.yaml


I'm not sure about the steps for 2, and whether or not other yamls also
need to be run: ovnkube-db.yaml? ovnkube-node.yaml?





Also cc'd Girish who can comment further.


Also, things like volumes (PV) for ovn central dedicated nodes,
monitoring, backing up the ovn db, etc. need to be considered so that
when the pod is restarted or the ovn version is upgraded, cluster settings
are retained and cluster health stats are also taken into consideration.


Err, so the yamls don't create these?? So if a pod is restarted the
NB/SB databases are lost?? Really??
I would have thought that if a raft cluster was created and a pod in that
cluster is restarted, the cluster
would sync from the other pods? This is worrying.






I got the design aspect of it sorted a week ago and had internal
review too (cc Han), as we do not use ovn as a CNI either, including some
pending containerizing items for ovn global dbs and the ovn interconnect
controller, to use for ovn interconnect. However, it's pending testing
in k8s with all the specs/tweaks due to some other priorities. As the
approach taken by ovn-k8s is succinct and already tested, it shouldn't
be a bottleneck.


I'm not sure I follow, are you saying this is all work in progress?


I agree that the overall documentation needs to be consolidated on both
the ovn-k8s side and the ovn repo.


On Mon, Jul 6, 2020 at 9:49 AM Brendan Doyle wrote:


Hi,

I've been trying to follow the instructions at
https://github.com/ovn-org/ovn-kubernetes
to set up an OVN "Central/Master" high availability (HA) cluster. I want to
deploy and manage that
cluster as a Kubernetes service.

I can find lots of stuff on "ovn-kube", but this seems to be using
OVN as
a kubernetes CNI instead of
Flannel etc. But this is not what I want to do; I have a kubernetes
cluster using Flannel as the CNI, and
now I want to deploy a HA OVN "Central" as a kubernetes service. Kind
of like how you can deploy
a MySQL cluster in kubernetes using a StatefulSet deployment.

I have found this:
https://github.com/ovn-org/ovn-kubernetes#readme

But it is not clear to me if this is how to set up OVN as a kubernetes
CNI or how to set up a HA OVN central as a kubernetes service.

I did try the steps in the README above, but they did not seem to work.
Then I saw that there is an ovnkube-db-raft.yaml file. This seems more
promising as it does use a StatefulSet, but I can find no documentation
on this file.

Thanks

Brendan



___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-07 Thread Brendan Doyle



On 06/07/2020 21:10, aginwala wrote:



On Mon, Jul 6, 2020 at 4:33 AM Brendan Doyle wrote:


Hi,

So I'm really confused by what you have pointed me to here. As
stated, I do NOT
want to use OVN as a CNI. I have a k8s cluster that uses flannel as
the CNI. I simply
want to create an OVN "central" cluster as a StatefulSet in my
*existing* k8s
config.

This repo:

https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d

Seems to be for setting up a K8s cluster to use OVN as the CNI??



Still wondering about this? The above repo seems to be for creating an
OVN CNI for kubernetes.

Is this correct???

But it also seems to include yamls for creating an OVN cluster as a k8s
service via a StatefulSet.


Is it necessary to create the OVN CNI in order to use
ovnkube-db-raft.yaml???


As I have said, I have an existing k8s cluster using a flannel CNI; I
just want to deploy an OVN
central as a StatefulSet to that.


Have you tried this?
What IP do the ovn-controllers use to reach the OVN "central" cluster?
It seems to use an OVN docker image from docker.io; I want to use my own OVN src.
Do I use/modify the dist/images/Dockerfile in this repo? That has
loads of references to CNI;
like I said, I don't want to use OVN as the CNI??

A pre-req for running ovn central as a k8s app is to containerize the ovn
central components. Hence, you need to start your own containers using
docker.
Either you follow the approach from the ovn-k8s repo as to how to build
ovn images, or refer to the docker instructions in the ovn repo. Since this
app (ovn central) will run behind a k8s service, ovn-controller should
point to the service IP of the ovn central k8s app. k8s folks can comment
on how to build the image that is in the k8s pod specs, e.g.
http://docker.io/ovnkube/ovn-daemonset:latest


Yes, the Docker image claims to be built using dist/images/Dockerfile,
which installs more than just the OVN central components.




The instructions here

https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst
seem more promising, if a little confusing:


1)


Start OVN containers using below command:

$ docker run -itd --net=host --name=ovn-nb \
   : ovn-nb-tcp

$ docker run -itd --net=host --name=ovn-sb \
   : ovn-sb-tcp

$ docker run -itd --net=host --name=ovn-northd \
   : ovn-northd-tcp

followed by

2)

$ docker run -e "host_ip=" -e "nb_db_port=" -itd \
   --name=ovn-nb-raft --net=host --privileged : \
   ovn-nb-cluster-create

$ docker run -e "host_ip=" -e "sb_db_port=" -itd \
   --name=ovn-sb-raft --net=host --privileged : \
   ovn-sb-cluster-create

$ docker run -e "OVN_NB_DB=tcp::6641,tcp::6641,\
   tcp::6641" -e "OVN_SB_DB=tcp::6642,tcp::6642,\
   tcp::6642" -itd --name=ovn-northd-raft : \
   ovn-northd-cluster

Does it mean do 1), then 2)? Or does it mean do 1) for non-HA OVN
central *OR* 2)
for HA/clustered OVN Central?

The doc says "Start OVN containers in cluster mode using below command on
node2 and node3 to make them join the peer using below command:".
Hence, you can even play with just docker on 3 nodes, where you run
step 1 on node1, which creates the cluster,

Ok, is that 1) above? Surely 2) above creates the cluster
("ovn-nb-cluster-create")???


and do the join-cluster on the other two nodes, to give you a clear idea
before moving to pods in k8s. Not sure if you need more details to
update the doc. We can always improve it. Upstream ovn-k8s does the same
for pods, where e.g. the ovn-kube0 pod creates a cluster and the other
two pods join.


It's not clear


The docs are not clear; it seems to me the docs intend to say:

"
OVN containers can then be started either as a Stand Alone Database or as a
Clustered Database.


To start OVN containers as a Stand Alone Database, use the commands below:

$ docker run -itd --net=host --name=ovn-nb \
    : ovn-nb-tcp

etc.

To start OVN containers in cluster mode for a 3-node cluster, use the
below command on node1:

$ docker run -e "host_ip=" -e "nb_db_port=" -itd \
    --name=ovn-nb-raft --net=host --privileged : \
    ovn-nb-cluster-create

Then start OVN containers in cluster mode using the below command on
node2 and node3 to make them join the peer:

$ docker run -e "host_ip=" -e "remote_host=" \
    -e "nb_db_port=" -itd --name=ovn-nb-raft --net=host \
    --privileged : ovn-nb-cluster-join

etc.
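Filling in the stripped placeholders with made-up values may make the create/join split clearer (the image name, IPs, and port below are all hypothetical; only the `ovn-nb-cluster-create` / `ovn-nb-cluster-join` entrypoints come from the doc). The commands are printed rather than executed, so they can be reviewed and run on the right nodes:

```shell
image="example.org/ovn-central:20.06"   # hypothetical image built from your own OVN src
node1=10.0.0.1; node2=10.0.0.2; node3=10.0.0.3

# Bootstrap the NB raft cluster on node1:
echo "docker run -e host_ip=$node1 -e nb_db_port=6641 -itd \
--name=ovn-nb-raft --net=host --privileged $image ovn-nb-cluster-create"

# Join node2 and node3 to node1:
for peer in "$node2" "$node3"; do
    echo "docker run -e host_ip=$peer -e remote_host=$node1 -e nb_db_port=6641 -itd \
--name=ovn-nb-raft --net=host --privileged $image ovn-nb-cluster-join"
done
```

The same create/join pattern would repeat for the SB database and ovn-northd containers.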



Brendan.


Thanks






On 25/06/2020 17:36, aginwala wrote:

Hi:

There are a couple of options as I have been exploring this too:

1. Upstream ovn-k8s patches

(https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d)
uses statefulset and headless service for starting ovn central
raft cluster with 3 replicas. Cluster startup code and pod 

Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-07 Thread Brendan Doyle



On 06/07/2020 21:29, Girish Moodalbail wrote:

Hello Brendan,

After you run the `./daemonset.sh` script, there will be two DB
related yaml files in the `dist/yaml` folder. The ovnkube-db.yaml brings
up standalone OVN DBs, whilst the ovnkube-db-raft.yaml brings up the OVN
Clustered DBs. Please do `kubectl apply -f
$HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db-raft.yaml`.


Humm, "kubectl apply -f" or "kubectl create -f" as per the
https://github.com/ovn-org/ovn-kubernetes/ instructions? And what needs
to be run before that?

kubectl create -f ovn-setup.yaml
kubectl apply -f ovnkube-db-raft.yaml

What about ovnkube-master.yaml, I think not?



Furthermore, if you read that YAML file, the node selector is set to
nodes with the label `k8s.ovn.org/ovnkube-db=true`. So, you will need to label
at least 3 nodes with that label.


Yes, it would be good to have that in a README.

Thanks I will try again with these.




HTH

Regards,
~Girish

On Mon, Jul 6, 2020 at 8:37 AM Brendan Doyle wrote:


So I've tried the steps in

https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d:

cd $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/images
./daemonset.sh --image=docker.io/ovnkube/ovn-daemonset-u:latest \
    --net-cidr=192.168.0.0/16 --svc-cidr=172.16.1.0/24 \
    --gateway-mode="local" \
    --k8s-apiserver=https://$MASTER_IP:6443


# Create OVN namespace, service accounts, ovnkube-db headless
# service, configmap, and policies
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovn-setup.yaml

# Run ovnkube-db deployment.
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db.yaml

# Run ovnkube-master deployment.
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-master.yaml

# Run ovnkube daemonset for nodes
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-node.yaml

And I see nothing in my k8s cluster; it seems like it does nothing:

[root@ca-rain01 yaml]# kubectl create -f ovnkube-master.yaml
deployment "ovnkube-master" created
[root@ca-rain01 yaml]# kubectl delete deployment ovnkube-master
Error from server (NotFound): deployments.extensions "ovnkube-master" not found

Has anybody got this working, or used any other means to deploy an
OVN cluster as a K8s StatefulSet?

Brendan



On 06/07/2020 12:33, Brendan Doyle wrote:

Hi,

So I'm really confused by what you have pointed me to here. As
stated, I do NOT
want to use OVN as a CNI. I have a k8s cluster that uses flannel
as the CNI. I simply
want to create an OVN "central" cluster as a StatefulSet in my
*existing* k8s
config.

This repo:

https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d

Seems to be for setting up a k8s cluster to use OVN as the CNI??
Have you tried this?
What IP do the ovn-controllers use to reach the OVN "central" cluster?
It seems to use an OVN docker image from docker.io; I want to use my own OVN src.
Do I use/modify the dist/images/Dockerfile in this repo? That has
loads of references to CNI;
like I said, I don't want to use OVN as the CNI??


The instructions here

https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst
seem more promising, if a little confusing:

In the section "Starting OVN Central services in containers"


Export following variables in .env and place it under project root:

$ OVN_BRANCH=
$ OVN_VERSION=
$ DISTRO=
$ KERNEL_VERSION=
$ GITHUB_SRC=
$ DOCKER_REPO=


Does it mean create a file called ".env" and place it in the
top-level dir of the cloned ovn repo?
Or does it mean just add these to your shell environment (i.e. put
them in .bashrc)?
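Taking the doc literally — a file named `.env` in the top level of the cloned ovn repo — it might look like this (every value below is hypothetical; substitute your own branch, distro, kernel, and registry):

```shell
# .env at the root of the cloned ovn repo (values are examples only)
OVN_BRANCH=branch-20.06
OVN_VERSION=20.06.1
DISTRO=ubuntu
KERNEL_VERSION=5.4.0-40-generic
GITHUB_SRC=https://github.com/ovn-org/ovn.git
DOCKER_REPO=example.org/ovn
```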

Then we have:

1)


Start OVN containers using below command:

$ docker run -itd --net=host --name=ovn-nb \
   : ovn-nb-tcp

$ docker run -itd --net=host --name=ovn-sb \
   : ovn-sb-tcp

$ docker run -itd --net=host --name=ovn-northd \
   : ovn-northd-tcp

followed by

2)

$ docker run -e "host_ip=" -e "nb_db_port=" -itd \
   --name=ovn-nb-raft --net=host 

Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-06 Thread aginwala
Hi:

Adding the ML too. Folks from k8s can comment on the same to see if the ovn-k8s
repo needs an update in the documentation for you to get the setup working
when using their specs as is, without any code changes, in addition to using
your own custom ovn images, etc. I am getting mail failures when adding the
ovn-k8s google group, as I think I don't have permission to post there. Also,
the yaml specs and raft scripts have good comments which can give you a
clear idea too.

Also cc'd Girish who can comment further.


Also, things like volumes (PV) for ovn central dedicated nodes, monitoring,
backing up the ovn db, etc. need to be considered so that when the pod is
restarted or the ovn version is upgraded, cluster settings are retained and
cluster health stats are also taken into consideration.


I got the design aspect of it sorted a week ago and had internal review too
(cc Han), as we do not use ovn as a CNI either, including some pending
containerizing items for ovn global dbs and the ovn interconnect controller to
use for ovn interconnect. However, it's pending testing in k8s with all the
specs/tweaks due to some other priorities. As the approach taken by ovn-k8s
is succinct and already tested, it shouldn't be a bottleneck.

I agree that the overall documentation needs to be consolidated on both the
ovn-k8s side and the ovn repo.

On Mon, Jul 6, 2020 at 9:49 AM Brendan Doyle 
wrote:

> Hi,
>
> I've been trying to follow the instructions at
> https://github.com/ovn-org/ovn-kubernetes
> to set up an OVN "Central/Master" high availability (HA) cluster. I want to
> deploy and manage that
> cluster as a Kubernetes service.
>
> I can find lots of stuff on "ovn-kube", but this seems to be using OVN as
> a kubernetes CNI instead of
> Flannel etc. But this is not what I want to do; I have a kubernetes
> cluster using Flannel as the CNI, and
> now I want to deploy a HA OVN "Central" as a kubernetes service. Kind
> of like how you can deploy
> a MySQL cluster in kubernetes using a StatefulSet deployment.
>
> I have found this:
> https://github.com/ovn-org/ovn-kubernetes#readme
>
> But it is not clear to me if this is how to set up OVN as a kubernetes
> CNI or how to set up a HA OVN central as a kubernetes service.
>
> I did try the steps in the README above, but they did not seem to work. Then
> I saw that there is an ovnkube-db-raft.yaml file; this seems more
> promising as it does use a StatefulSet, but I can find no documentation
> on this
> file.
>
> Thanks
>
> Brendan
>
>


Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-06 Thread Girish Moodalbail
Hello Brendan,

After you run the `./daemonset.sh` script, there will be two DB related
yaml files in the `dist/yaml` folder. The ovnkube-db.yaml brings up standalone
OVN DBs, whilst the ovnkube-db-raft.yaml brings up the OVN Clustered DBs. Please
do `kubectl apply -f
$HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db-raft.yaml`.

Furthermore, if you read that YAML file, the node selector is set to nodes
with the label `k8s.ovn.org/ovnkube-db=true`. So, you will need to label at
least 3 nodes with that label.

HTH

Regards,
~Girish

On Mon, Jul 6, 2020 at 8:37 AM Brendan Doyle 
wrote:

> So I've tried the steps in
> https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d
> :
>
> cd $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/images
> ./daemonset.sh --image=docker.io/ovnkube/ovn-daemonset-u:latest \
> --net-cidr=192.168.0.0/16 --svc-cidr=172.16.1.0/24 \
> --gateway-mode="local" \
> --k8s-apiserver=https://$MASTER_IP:6443
>
>
> # Create OVN namespace, service accounts, ovnkube-db headless service, 
> configmap, and policies
> kubectl create -f 
> $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovn-setup.yaml
>
> # Run ovnkube-db deployment.
> kubectl create -f 
> $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db.yaml
>
> # Run ovnkube-master deployment.
> kubectl create -f 
> $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-master.yaml
>
> # Run ovnkube daemonset for nodes
> kubectl create -f 
> $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-node.yaml
>
>
> And I see nothing in my k8s cluster; it seems like it does nothing
>
> [root@ca-rain01 yaml]#  kubectl create -f ovnkube-master.yaml
> deployment "ovnkube-master" created
> [root@ca-rain01 yaml]# kubectl delete deployment ovnkube-master
> Error from server (NotFound): deployments.extensions "ovnkube-master" not 
> found
>
> Has anybody got this working, or used any other means to deploy an OVN 
> cluster as a K8s StatefulSet?
>
>
> Brendan
>
>
>
>
> On 06/07/2020 12:33, Brendan Doyle wrote:
>
> Hi,
>
> So I'm really confused by what you have pointed me to here. As stated I do
> NOT
> want to use OVN as a CNI. I have a k8s cluster that uses flannel as the
> CNI. I simply
> want to create an OVN "central" cluster as a StatefulSet in my *existing*
> k8s
> config.
>
> This repo:
>
> https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d
>
> Seems to be for setting up a k8s cluster to use OVN as the CNI??
> Have you tried this?
> What IP do the ovn-controllers use to reach the OVN "central" cluster?
> It seems to use an OVN docker image from docker.io; I want to use my own
> OVN src.
> Do I use/modify the dist/images/Dockerfile in this repo? That has loads of
> references to CNI;
> like I said, I don't want to use OVN as the CNI??
>
>
> The instructions here
> https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst
> seem more promising, if a little confusing:
>
> In the section "Starting OVN Central services in containers"
>
> Export following variables in .env and place it under project root:
>
> $ OVN_BRANCH=
> $ OVN_VERSION=
> $ DISTRO=
> $ KERNEL_VERSION=
> $ GITHUB_SRC=
> $ DOCKER_REPO=
>
>
> Does it mean create a file called ".env" and place it in the top-level dir
> of the cloned ovn repo?
> Or does it mean just add these to your shell environment (i.e. put them in
> .bashrc)?
>
> Then we have:
>
> 1)
>
> Start OVN containers using below command:
>
> $ docker run -itd --net=host --name=ovn-nb \
>   : ovn-nb-tcp
>
> $ docker run -itd --net=host --name=ovn-sb \
>   : ovn-sb-tcp
>
> $ docker run -itd --net=host --name=ovn-northd \
>   : ovn-northd-tcp
>
> followed by
>
> 2)
>
> $ docker run -e "host_ip=" -e "nb_db_port=" -itd \
>   --name=ovn-nb-raft --net=host --privileged : \
>   ovn-nb-cluster-create
>
> $ docker run -e "host_ip=" -e "sb_db_port=" -itd \
>   --name=ovn-sb-raft --net=host --privileged : \
>   ovn-sb-cluster-create
>
> $ docker run -e "OVN_NB_DB=tcp::6641,tcp::6641,\
>   tcp::6641" -e "OVN_SB_DB=tcp::6642,tcp::6642,\
>   tcp::6642" -itd --name=ovn-northd-raft : \
>   ovn-northd-cluster
>
> Does it mean do 1), then 2)? Or does it mean do 1) for non-HA OVN central
> *OR* 2)
> for HA/clustered OVN Central?
>
> It's not clear
>
> Thanks
>
>
>
>
>
>
> On 25/06/2020 17:36, aginwala wrote:
>
> Hi:
>
> There are a couple of options as I have been exploring this too:
>
> 1. Upstream ovn-k8s patches (
> https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d)
> uses statefulset and headless service for starting ovn central raft cluster
> with 3 replicas. Cluster startup code and pod specs are pretty neat that
> addresses most of the doubts.
>
> OVN components have been containerized too to start them in pods. You can
> also refer 

Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-06 Thread aginwala
On Mon, Jul 6, 2020 at 4:33 AM Brendan Doyle 
wrote:

> Hi,
>
> So I'm really confused by what you have pointed me to here. As stated I do
> NOT
> want to use OVN as a CNI. I have a k8s cluster that uses flannel as the
> CNI. I simply
> want to create an OVN "central" cluster as a StatefulSet in my *existing*
> k8s
> config.
>
> This repo:
>
> https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d
>
> Seems to be for setting up a k8s cluster to use OVN as the CNI??
> Have you tried this?
> What IP do the ovn-controllers use to reach the OVN "central" cluster?
> It seems to use an OVN docker image from docker.io; I want to use my own
> OVN src.
> Do I use/modify the dist/images/Dockerfile in this repo? That has loads of
> references to CNI;
> like I said, I don't want to use OVN as the CNI??
>
A pre-req for running ovn central as a k8s app is to containerize the ovn
central components. Hence, you need to start your own containers using docker.
Either you follow the approach from the ovn-k8s repo as to how to build ovn
images, or refer to the docker instructions in the ovn repo. Since this app (ovn
central) will run behind a k8s service, ovn-controller should point to the
service IP of the ovn central k8s app. k8s folks can comment on how to build
the image that is in the k8s pod specs, e.g.
http://docker.io/ovnkube/ovn-daemonset:latest

>
> The instructions here
> https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst
> seem more promising, if a little confusing:
>
> In the section "Starting OVN Central services in containers"
>
> Export following variables in .env and place it under project root:
>
> $ OVN_BRANCH=
> $ OVN_VERSION=
> $ DISTRO=
> $ KERNEL_VERSION=
> $ GITHUB_SRC=
> $ DOCKER_REPO=
>
>
> Does it mean create a file called ".env" and place it in the top-level dir
> of the cloned ovn repo?
> Or does it mean just add these to your shell environment (i.e. put them in
> .bashrc)?

You can just export OVN_BRANCH=xx in your shell for all the variables and
build your containers with the desired distro/version using make build.
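As a concrete sketch of that suggestion (all values below are hypothetical; the `make build` step is per this thread, run from the ovn source tree):

```shell
# Export the build variables in the current shell instead of a .env file.
export OVN_BRANCH=branch-20.06       # hypothetical: branch to build
export OVN_VERSION=20.06.1           # hypothetical: version tag
export DISTRO=ubuntu
export KERNEL_VERSION=$(uname -r)
export GITHUB_SRC=https://github.com/ovn-org/ovn.git
export DOCKER_REPO=example.org/ovn   # your own registry/repo

# Then, from the ovn source tree:
#   make build
echo "building $OVN_BRANCH for $DISTRO into $DOCKER_REPO"
```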
>
> Then we have:
>
> 1)
>
> Start OVN containers using below command:
>
> $ docker run -itd --net=host --name=ovn-nb \
>   : ovn-nb-tcp
>
> $ docker run -itd --net=host --name=ovn-sb \
>   : ovn-sb-tcp
>
> $ docker run -itd --net=host --name=ovn-northd \
>   : ovn-northd-tcp
>
> followed by
>
> 2)
>
> $ docker run -e "host_ip=" -e "nb_db_port=" -itd \
>   --name=ovn-nb-raft --net=host --privileged : \
>   ovn-nb-cluster-create
>
> $ docker run -e "host_ip=" -e "sb_db_port=" -itd \
>   --name=ovn-sb-raft --net=host --privileged : \
>   ovn-sb-cluster-create
>
> $ docker run -e "OVN_NB_DB=tcp::6641,tcp::6641,\
>   tcp::6641" -e "OVN_SB_DB=tcp::6642,tcp::6642,\
>   tcp::6642" -itd --name=ovn-northd-raft : \
>   ovn-northd-cluster
>
> Does it mean do 1), then 2)? Or does it mean do 1) for non-HA OVN central
> *OR* 2)
> for HA/clustered OVN Central?

The doc says "Start OVN containers in cluster mode using below command on
node2 and node3 to make them join the peer using below command:". Hence, you
can even play with just docker on 3 nodes, where you run step 1 on node1, which
creates the cluster, and do the join-cluster on the other two nodes, to give
you a clear idea before moving to pods in k8s. Not sure if you need more
details to update the doc. We can always improve it. Upstream ovn-k8s does the
same for pods, where e.g. the ovn-kube0 pod creates a cluster and the other
two pods join.

> It's not clear
>
> Thanks
>
>
>
>
>
>
> On 25/06/2020 17:36, aginwala wrote:
>
> Hi:
>
> There are a couple of options as I have been exploring this too:
>
> 1. Upstream ovn-k8s patches (
> https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d)
> uses statefulset and headless service for starting ovn central raft cluster
> with 3 replicas. Cluster startup code and pod specs are pretty neat that
> addresses most of the doubts.
>
> OVN components have been containerized too to start them in pods. You can
> also refer to
> https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst
>  for the same and use them to make it work in pod specs too.
>
>
> 2. Write a new ovn operator similar to etcd operator
> https://github.com/coreos/etcd-operator which just takes the count of
> raft replicas and does the job in the background.
>
> I also added ovn-k8s group so they can comment on any other ideas too.
> Hope it helps.
>
>
>
> On Thu, Jun 25, 2020 at 7:15 AM Brendan Doyle 
> wrote:
>
>> Hi,
>>
>> So I'm trying to find information on setting up an OVN "Central/Master"
>> high availability (HA)
>> Not as Active-Backup with Pacemaker, but as a cluster. But I want to
>> deploy and manage that
>> cluster as a Kubernetes service .
>>
>> I can find lots of stuff on "ovn-kube" but this seems to be using OVN as
>> a  kubernetes CNI instead of
>> Flannel etc.  But this is not what I want to do, I 

Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-06 Thread Brendan Doyle
So I've tried the steps in 
https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d:


cd $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/images
./daemonset.sh --image=docker.io/ovnkube/ovn-daemonset-u:latest \
    --net-cidr=192.168.0.0/16 --svc-cidr=172.16.1.0/24 \
    --gateway-mode="local" \
    --k8s-apiserver=https://$MASTER_IP:6443



# Create OVN namespace, service accounts, ovnkube-db headless
# service, configmap, and policies
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovn-setup.yaml

# Run ovnkube-db deployment.
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db.yaml

# Run ovnkube-master deployment.
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-master.yaml

# Run ovnkube daemonset for nodes
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-node.yaml

And I see nothing in my k8s cluster; it seems like it does nothing:

[root@ca-rain01 yaml]# kubectl create -f ovnkube-master.yaml
deployment "ovnkube-master" created
[root@ca-rain01 yaml]# kubectl delete deployment ovnkube-master
Error from server (NotFound): deployments.extensions "ovnkube-master" not found

Has anybody got this working, or used any other means to deploy an OVN
cluster as a K8s StatefulSet?

Brendan




On 06/07/2020 12:33, Brendan Doyle wrote:

Hi,

So I'm really confused by what you have pointed me to here. As stated
I do NOT
want to use OVN as a CNI. I have a k8s cluster that uses flannel as the
CNI. I simply
want to create an OVN "central" cluster as a StatefulSet in my
*existing* k8s

config.

This repo:
https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d

Seems to be for setting up a k8s cluster to use OVN as the CNI??
Have you tried this?
What IP do the ovn-controllers use to reach the OVN "central" cluster?
It seems to use an OVN docker image from docker.io; I want to use my
own OVN src.
Do I use/modify the dist/images/Dockerfile in this repo? That has
loads of references to CNI;

like I said, I don't want to use OVN as the CNI??


The instructions here 
https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst

seem more promising, if not a little confusing:

IN the section "Starting OVN Central services in containers"


Export following variables in .env and place it under project root:

$ OVN_BRANCH=
$ OVN_VERSION=
$ DISTRO=
$ KERNEL_VERSION=
$ GITHUB_SRC=
$ DOCKER_REPO=


Does it mean create a file called ".env" and place it in the toplevel 
dir of the cloned ovn repo?
Or does it mean just add these to you shell environment (i.e put them 
in .bashrc)?


Then we have:

1)


Start OVN containers using below command:

$ docker run -itd --net=host --name=ovn-nb \
   : ovn-nb-tcp

$ docker run -itd --net=host --name=ovn-sb \
   : ovn-sb-tcp

$ docker run -itd --net=host --name=ovn-northd \
   : ovn-northd-tcp

followed by

2)

$ docker run -e "host_ip=" -e "nb_db_port=" -itd \
   --name=ovn-nb-raft --net=host --privileged : \
   ovn-nb-cluster-create

$ docker run -e "host_ip=" -e "sb_db_port=" -itd \
   --name=ovn-sb-raft --net=host --privileged : \
   ovn-sb-cluster-create

$ docker run -e "OVN_NB_DB=tcp::6641,tcp::6641,\
   tcp::6641" -e "OVN_SB_DB=tcp::6642,tcp::6642,\
   tcp::6642" -itd --name=ovn-northd-raft : \
   ovn-northd-cluster
Does it mean do 1), then 2) or does it mean do 1) for non HA OVN 
central *OR* 2)

for HA/clustered OVN Central?

It's not clear

Thanks






On 25/06/2020 17:36, aginwala wrote:

Hi:

There are a couple of options as I have been exploring this too:

1. Upstream ovn-k8s patches 
(https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d) 
uses statefulset and headless service for starting ovn central raft 
cluster with 3 replicas. Cluster startup code and pod specs are 
pretty neat that addresses most of the doubts.


OVN components have been containerized too to start them in pods. You 
can also refer to 
https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst 
 for the same and use them to make it work in pod specs too.



2. Write a new ovn operator similar to etcd operator 
https://github.com/coreos/etcd-operator which just takes the count of 
raft replicas and does the job in the background.


I also added ovn-k8s group so they can comment on any other ideas 
too. Hope it helps.




On Thu, Jun 25, 2020 at 7:15 AM Brendan Doyle 
mailto:brendan.do...@oracle.com>> wrote:


Hi,

So I'm trying to find information on setting up an OVN
"Central/Master"
high availability (HA)
Not as Active-Backup with Pacemaker, but as a cluster. But I want to
deploy and manage that
cluster as a Kubernetes service .

I can find lots of stuff on "ovn-kube" but this seems to be using
OVN as
a  kubernetes 

Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-06 Thread Brendan Doyle

Hi,

So I'm really confused by what you have pointed me to here. As stated, I do
NOT want to use OVN as a CNI. I have a k8s cluster that uses Flannel as the
CNI. I simply want to create an OVN "central" cluster as a StatefulSet in my
*existing* k8s config.

This repo:
https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d

seems to be for setting up a k8s cluster to use OVN as the CNI?? Have you
tried this? What IP do the ovn-controllers use to reach the OVN "central"
cluster? It seems to use an OVN docker image from docker.io, but I want to
use my own OVN source. Do I use/modify the dist/images/Dockerfile in this
repo? It has loads of references to CNI.

Like I said, I don't want to use OVN as the CNI??


The instructions here 
https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst

seem more promising, if a little confusing.

In the section "Starting OVN Central services in containers":


Export following variables in .env and place it under project root:

$ OVN_BRANCH=<BRANCH>
$ OVN_VERSION=<VERSION>
$ DISTRO=<fedora/ubuntu>
$ KERNEL_VERSION=<LINUX_KERNEL_VERSION>
$ GITHUB_SRC=<GITHUB_URL>
$ DOCKER_REPO=<REPO_NAME>


Does it mean create a file called ".env" and place it in the top-level
directory of the cloned ovn repo? Or does it mean just add these to your
shell environment (i.e. put them in .bashrc)?
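[Editor's note: for illustration, a .env at the repository root would just be shell variable assignments; every value below is an example I made up, not taken from the thread or the docs.]

```shell
# Illustrative .env at the top of the cloned ovn repo -- substitute your
# own branch, version, distro, kernel and registry.
OVN_BRANCH=master
OVN_VERSION=20.06.1
DISTRO=fedora
KERNEL_VERSION=$(uname -r)
GITHUB_SRC=https://github.com/ovn-org/ovn.git
DOCKER_REPO=myregistry/ovn
```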


Then we have:

1)


Start OVN containers using below command:

$ docker run -itd --net=host --name=ovn-nb \
   <docker_repo>:<tag> ovn-nb-tcp

$ docker run -itd --net=host --name=ovn-sb \
   <docker_repo>:<tag> ovn-sb-tcp

$ docker run -itd --net=host --name=ovn-northd \
   <docker_repo>:<tag> ovn-northd-tcp

followed by

2)

$ docker run -e "host_ip=<host_ip>" -e "nb_db_port=<port>" -itd \
   --name=ovn-nb-raft --net=host --privileged <docker_repo>:<tag> \
   ovn-nb-cluster-create

$ docker run -e "host_ip=<host_ip>" -e "sb_db_port=<port>" -itd \
   --name=ovn-sb-raft --net=host --privileged <docker_repo>:<tag> \
   ovn-sb-cluster-create

$ docker run -e "OVN_NB_DB=tcp:<host_1>:6641,tcp:<host_2>:6641,\
   tcp:<host_3>:6641" -e "OVN_SB_DB=tcp:<host_1>:6642,tcp:<host_2>:6642,\
   tcp:<host_3>:6642" -itd --name=ovn-northd-raft <docker_repo>:<tag> \
   ovn-northd-cluster
Does it mean do 1) and then 2), or does it mean do 1) for a non-HA OVN
central *OR* 2) for an HA/clustered OVN central?

It's not clear.

Thanks
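[Editor's note: as far as I can tell from the referenced install guide, the two recipes are alternatives (1 starts standalone TCP DBs, 2 bootstraps a raft cluster). Either way, OVN_NB_DB/OVN_SB_DB are just comma-separated tcp:<host>:<port> remotes; a tiny helper to build them, with made-up hostnames:]

```python
def ovn_remotes(hosts, port):
    """Build the comma-separated remote string OVN tools expect,
    e.g. for OVN_NB_DB / OVN_SB_DB."""
    return ",".join("tcp:%s:%d" % (h, port) for h in hosts)

# Example addresses are invented for illustration.
hosts = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]
print(ovn_remotes(hosts, 6641))  # northbound remotes
print(ovn_remotes(hosts, 6642))  # southbound remotes
```

The first print emits `tcp:192.0.2.11:6641,tcp:192.0.2.12:6641,tcp:192.0.2.13:6641`.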






___
discuss mailing list
disc...@openvswitch.org

Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-06-26 Thread Brendan Doyle

OK, thanks. So it does seem that the repo I pointed to:

  https://github.com/ovn-org/ovn-kubernetes#readme

is the master branch of the patches you mentioned
(https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d),
which use a statefulset and headless service for starting the OVN central
raft cluster with 3 replicas, and does create an OVN central cluster as a
k8s service.

I'll give it a try.

Thanks

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-06-25 Thread aginwala
Hi:

There are a couple of options as I have been exploring this too:

1. The upstream ovn-k8s patches
(https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d)
use a statefulset and headless service for starting the ovn central raft
cluster with 3 replicas. The cluster startup code and pod specs are pretty
neat and address most of the doubts.

OVN components have been containerized too, to start them in pods. You can
also refer to
https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst
for the same and use them to make it work in pod specs too.


2. Write a new ovn operator similar to the etcd operator
(https://github.com/coreos/etcd-operator), which just takes the count of
raft replicas and does the job in the background.

I also added the ovn-k8s group so they can comment on any other ideas too.
Hope it helps.
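[Editor's note: the shape of option 1 is roughly a headless Service plus a StatefulSet. A heavily trimmed sketch follows; the names, labels and image are illustrative, not the actual ovnkube-db-raft.yaml.]

```yaml
# Illustrative sketch only -- see dist/yaml/ovnkube-db-raft.yaml in
# ovn-kubernetes for the real, much larger manifest.
apiVersion: v1
kind: Service
metadata:
  name: ovnkube-db
  namespace: ovn-kubernetes
spec:
  clusterIP: None          # headless: each replica gets a stable DNS name
  selector:
    name: ovnkube-db
  ports:
    - name: north
      port: 6641
    - name: south
      port: 6642
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ovnkube-db
  namespace: ovn-kubernetes
spec:
  serviceName: ovnkube-db
  replicas: 3              # raft wants an odd-sized cluster
  selector:
    matchLabels:
      name: ovnkube-db
  template:
    metadata:
      labels:
        name: ovnkube-db
    spec:
      nodeSelector:
        k8s.ovn.org/ovnkube-db: "true"   # label 3 nodes with this
      containers:
        - name: ovsdb
          image: docker.io/ovnkube/ovn-daemonset:latest
```

The headless Service is what gives each DB pod the stable per-replica hostname that the raft cluster members use to find each other.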



___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


[ovs-discuss] HA OVN "Central" as a kubernetes service

2020-06-25 Thread Brendan Doyle

Hi,

So I'm trying to find information on setting up an OVN "Central/Master"
with high availability (HA), not as Active-Backup with Pacemaker, but as a
cluster. And I want to deploy and manage that cluster as a Kubernetes
service.

I can find lots of stuff on "ovn-kube", but this seems to be about using OVN
as a kubernetes CNI instead of Flannel etc. That is not what I want to do: I
have a kubernetes cluster using Flannel as the CNI, and now I want to deploy
an HA OVN "Central" as a kubernetes service, kind of like how you can deploy
a MySQL cluster in kubernetes using a StatefulSet deployment.

I have found this:
 https://github.com/ovn-org/ovn-kubernetes#readme

But it is not clear to me whether this is how to set up OVN as a kubernetes
CNI or how to set up an HA OVN central as a kubernetes service.

Can anybody comment, has anyone done this?


I guess I could run an OVN central as standalone and use a kubernetes
deployment with 3 replica sets, "export" it as a NodePort service, and have
a floating IP/VIP on my kubernetes nodes, directing the ovn-controllers at
the VIP. Only the pod that holds the VIP would service requests. This would
work and give HA, but you don't get the performance of the OVN clustered
database model, where each OVN central can service requests.




Thanks


Rdgs
Brendan

___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss