[ovs-discuss] Sorry about the rude email the other day

2020-07-07 Thread Allayna Wilson
I've calmed down; sorry I was so livid. I did actually succeed in what I
was trying to accomplish (reverse engineering the Python ovs module),
and it's given me a bit of perspective:

https://github.com/0x2152/python-docker-openvswitch-plugin/blob/master/src/docker_network_plugin_service/ovs_json_rpc_native_client.py

I've also seen some of the C code, specifically that for ovs-vsctl,
which I used as a reference in places. Most of what I could gather came
from an e-mail regarding a commit that I can't seem to find anywhere in
the source tree. More importantly, I can't find it in a README.md for
the ovs Python module itself (there is no readme.md here:
https://github.com/openvswitch/ovs/tree/master/python/ovs ), which in
my mind could be for a few reasons:

- The person who wrote the Python module never documented it and isn't
around to do it. This can't be too much of a problem, given that the
ovs-vsctl client uses the same wire protocol, and a Go client library
exists (I will not use Go; I would sooner do this in C.)

- It's a scam designed to get people to pay for help (not really a
scam, but personally I couldn't justify it myself if that were the
case)

- Working with the json-rpc API means dealing with a very complex columnar
database with a lot of high availability features, varying isolation
levels, and tons of ways to shoot yourself in the foot


I would probably go with #3. I never would have gotten a handle on this
if not for the archived email regarding commits of example code, which
you will find in this log:

https://gist.github.com/0x2152/97317884ad848586eff1cdcf256dd689#gistcomment-3366276

I kept an entire log of everything I did to try to figure this out. The
reason it frustrated me so badly is that people have written
screen-scraper Python modules to accomplish what this module does,
presumably because they couldn't figure out how to use the ovs Python
module:
https://github.com/iwaseyusuke/python-ovs-vsctl

It doesn't really matter why, and it's none of my business anyway, but
I am morbidly curious: what is the story with this? I love hearing
these kinds of stories. I can only imagine the list of stale topics
around libvirt and people trying to get it to work with Open vSwitch; I
never had any luck getting it to work right unless I wrote the domain
XML by hand (virt-manager never got around to adding support for it.) I
love databases, though this one is especially peculiar.

I've made things that work similarly to this upsert logic:
https://github.com/0x2152/python-docker-openvswitch-plugin/blob/master/src/docker_network_plugin_service/ovs_json_rpc_native_client.py#L133


but I also know that when you hide logic behind object-oriented
concepts like getters/setters and operator overloading, the result is
really intuitive, but the learning curve for people unfamiliar with the
code is steep. People who aren't acutely familiar with getters/setters
and database patterns won't catch it. To be fair, I only just barely
caught it myself, though I've worked with similar things in the past
(I've seen far worse wizardry; this is acceptable IMHO, and besides,
spelling it out is a pain when you have a million other things to do.)
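For anyone else poking at the wire protocol: the upsert linked above ultimately comes down to small JSON-RPC payloads, and an ovs-vsctl-style upsert is a client-side read-then-write. A minimal sketch in Python (the bridge name and column values are hypothetical; the method and operation shapes follow RFC 7047):

```python
import json

# One JSON-RPC "transact" call against the Open_vSwitch database.
# An upsert is done client-side: first select the row, then choose
# between "insert" and "update" depending on what came back.
def make_transact(msg_id, *ops):
    # RFC 7047: params = [<db-name>, <operation>, ...]
    return {"method": "transact",
            "params": ["Open_vSwitch", *ops],
            "id": msg_id}

# Step 1: does the row exist? (hypothetical bridge name)
select = {"op": "select",
          "table": "Bridge",
          "where": [["name", "==", "br-demo"]]}

# Step 2a: no row found -> insert it.
insert = {"op": "insert",
          "table": "Bridge",
          "row": {"name": "br-demo"}}

# Step 2b: row found -> update it in place.
update = {"op": "update",
          "table": "Bridge",
          "where": [["name", "==", "br-demo"]],
          "row": {"stp_enable": True}}

wire = json.dumps(make_transact(1, select))
print(wire)
```

As far as I can tell, ovs-vsctl keeps this race-free by putting "wait" operations in the same transaction, so the server aborts it if the precondition changed between the read and the write.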

Either way, I personally like it the way it is; I just wish there were
some more documentation and examples up front. It's clear to me that
it's quite complex: it even seems to have some column-level locking
based on whatever you specify in the schema helper, which for its time
seems way ahead to me. I know that to most people this is probably the
least impressive aspect of Open vSwitch (next to VXLAN/VTEP, DPDK,
etc.), but to me, OVSDB is pretty damn cool.
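On the schema helper point: what registering tables/columns actually translates to on the wire is a per-column "monitor" request, so unregistered columns are never fetched or replicated at all (selective replication rather than locking, as far as I can tell). A sketch with hypothetical column choices, shaped per RFC 7047:

```python
import json

# The IDL only replicates the tables/columns you register with the
# schema helper, which is why it feels column-granular: anything not
# registered is simply absent from the monitor request below.
def make_monitor(msg_id, db, tables):
    # tables: {table-name: [column-name, ...]}
    reqs = {t: {"columns": cols,
                "select": {"initial": True, "insert": True,
                           "delete": True, "modify": True}}
            for t, cols in tables.items()}
    # RFC 7047: params = [<db-name>, <monitor-id>, <monitor-requests>]
    return {"method": "monitor",
            "params": [db, None, reqs],
            "id": msg_id}

req = make_monitor(2, "Open_vSwitch",
                   {"Bridge": ["name", "ports"],
                    "Interface": ["name", "type"]})
print(json.dumps(req))
```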
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-07 Thread Brendan Doyle


OK, so an update, and thanks for the help to date. These steps get my
OVN cluster up and running:


kubectl create -f ovn-setup.yaml
kubectl apply -f ovnkube-db-raft.yaml

However, I see warning messages like:

2020-07-07T15:15:12Z|1|ovsdb_idl|WARN|OVN_Northbound database lacks 
Forwarding_Group table (database needs upgrade?)


I suspect this is because the OVN docker image is not the same as the
OVN bits I'm running on my chassis, or is not the latest.
I guess now it's a question of figuring out how to use my own OVN bits
and not those that are in

http://docker.io/ovnkube/ovn-daemonset:latest

Thanks

Brendan

On 07/07/2020 11:33, Brendan Doyle wrote:



On 06/07/2020 21:29, Girish Moodalbail wrote:

Hello Brendan,

After you run the './daemonset.sh` script, there will be two DB 
related yaml files in `dist/yaml` folder. The ovnkube-db.yaml brings 
up standalone OVN DBs, whilst the ovnkube-db-raft brings up the OVN 
Clustered DBs. Please do `kubectl apply -f 
$HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db-raft.yaml 
`.


Humm, "kubectl apply -f" or "kubectl create -f", as per the
https://github.com/ovn-org/ovn-kubernetes/ instructions? And what
needs to be run before that?

create -f ovn-setup.yaml
apply -f ovnkube-db-raft.yaml

What about ovnkube-master.yaml? I think not?



Furthermore, if you read that YAML file the node selector is set to 
nodes with label `k8s.ovn.org/ovnkube-db=true` 
. So, you will need to annotate 
at least 3 nodes with that label.


Yes, would be good to have that in a README.

Thanks I will try again with these.




HTH

Regards,
~Girish

On Mon, Jul 6, 2020 at 8:37 AM Brendan Doyle
<brendan.do...@oracle.com> wrote:


So I've tried the steps in

https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d:

cd $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/images

./daemonset.sh --image=docker.io/ovnkube/ovn-daemonset-u:latest \
    --net-cidr=192.168.0.0/16 \
    --svc-cidr=172.16.1.0/24 \
    --gateway-mode="local" \
    --k8s-apiserver=https://$MASTER_IP:6443

# Create OVN namespace, service accounts, ovnkube-db headless
# service, configmap, and policies
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovn-setup.yaml

# Run ovnkube-db deployment.
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db.yaml

# Run ovnkube-master deployment.
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-master.yaml

# Run ovnkube daemonset for nodes
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-node.yaml

And I see nothing in my k8s cluster; it seems like it does nothing:

[root@ca-rain01 yaml]# kubectl create -f ovnkube-master.yaml
deployment "ovnkube-master" created
[root@ca-rain01 yaml]# kubectl delete deployment ovnkube-master
Error from server (NotFound): deployments.extensions "ovnkube-master" not found

Has anybody got this working, or used any other means to deploy an
OVN cluster as a K8s StatefulSet?

Brendan



On 06/07/2020 12:33, Brendan Doyle wrote:

Hi,

So I'm really confused by what you have pointed me to here. As stated,
I do NOT want to use OVN as a CNI. I have a k8s cluster that uses
flannel as the CNI. I simply want to create an OVN "central" cluster
as a StatefulSet in my *existing* K8s config.

This repo:

https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d

Seems to be for setting up a K8s cluster to use OVN as the CNI??
Have you tried this?
What IP do the ovn-controllers use to reach the OVN "central" cluster?
It seems to use an OVN docker image from docker.io; I want to use my
own OVN source.
Do I use/modify the dist/images/Dockerfile in this repo? It has loads
of references to CNI; like I said, I don't want to use OVN as the CNI??


The instructions here

https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst
seem more promising, if not a little confusing:

In the section "Starting OVN Central services in containers":


Export following variables in .env and place it under project root:

$ OVN_BRANCH=
$ OVN_VERSION=
$ DISTRO=
   

Re: [ovs-discuss] [ovs-dev] OVS 2.12/2.13 compilation on Ubuntu Bionic

2020-07-07 Thread James Page
On Tue, Jul 7, 2020 at 8:39 AM Maciej Jozefczyk  wrote:

> Hello,
>
> Thank you for your responses!
>
> Is there any reason not to use the in-tree openvswitch kernel module
>> provided in the Ubuntu kernels?  Ubuntu stopped shipping DKMS modules as
>> part of OVS quite a long time ago as the openvswitch module in the kernel
>> is well maintained and generally up-to-date - and also to avoid this type
>> of breaking change.
>>
>
> Yes. QoS for OVN wasn't really working until the OVN team started
> using OVS meter actions. Those types of actions do not work properly
> with the OVS kernel module shipped by Ubuntu Bionic (up to kernel
> 4.18.0 [1]), so to test this functionality in the Neutron upstream
> gates we compile the module from OVS source.
>

As an alternative you could use the HWE kernels provided for Ubuntu 18.04
LTS:

linux-image-generic-hwe-18.04-edge (5.4)
linux-image-generic-hwe-18.04 (5.3)

However, that may mean that the default images used for testing in
openinfra need to be updated to use the latest kernel versions.


> This patch is actually on branch-2.12 and branch-2.13.
>> The only thing that is missing is a new stable release (tags).
>> We're going to release new stable versions on all previous branches soon.
>
>
> That is great news. Thank You!
>
> Maciej
>
> On Mon, Jul 6, 2020 at 7:58 PM Ilya Maximets  wrote:
>
>> On 6/29/20 8:45 PM, Gregory Rose wrote:
>> >
>> >
>> > On 6/26/2020 4:57 AM, Maciej Jozefczyk wrote:
>> >> Hello!
>> >>
>> >> I would like to kindly ask You if there is a possibility to cherry-pick
>> >> patch [1] to stable branches OVS 2.12, OVS 2.13 and release new tags
>> for it?
>> >>
>> >> Without this patch we're now unable to compile OVS 2.12 in OpenStack
>> >> Neutron stable releases CI, because it recently started to fail on
>> Ubuntu
>> >> Bionic with an error:
>> >>
>> >> 2020-06-24 14:50:13.975917 | primary |
>> >> /opt/stack/new/ovs/datapath/linux/geneve.c: In function
>> >> ‘geneve_get_v6_dst’:
>> >> 2020-06-24 14:50:13.975993 | primary |
>> >> /opt/stack/new/ovs/datapath/linux/geneve.c:966:15: error: ‘const
>> >> struct ipv6_stub’ has no member named ‘ipv6_dst_lookup’
>> >> 2020-06-24 14:50:13.976026 | primary |   if
>> >> (ipv6_stub->ipv6_dst_lookup(geneve->net, gs6->sock->sk, &dst, fl6)) {
>> >> 2020-06-24 14:50:13.976049 | primary |^
>> >> 2020-06-24 14:50:14.010809 | primary | scripts/Makefile.build:285:
>> >> recipe for target '/opt/stack/new/ovs/datapath/linux/geneve.o' failed
>> >>
>> >> The same happens for OVN 2.13. For now this blocks your CI pipelines.
>> >>
>> >> Can I ask You to backport this patch?
>>
>> This patch is actually on branch-2.12 and branch-2.13.
>> The only thing that is missing is a new stable release (tags).
>> We're going to release new stable versions on all previous branches soon.
>>
>> Best regards, Ilya Maximets.
>>
>> >>
>> >> Thanks,
>> >> Maciej
>> >>
>> >> [1]
>> >>
>> https://github.com/openvswitch/ovs/commit/5519e384f6a17f564fef4c5eb39e471e16c77235
>> >>
>> >>
>> >
>> > Adding OVS Dev list where maybe the maintainers might see this sooner.
>> >
>> > - Greg
>>
>>
>
> --
> Best regards,
> Maciej Józefczyk


Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-07 Thread Brendan Doyle



On 06/07/2020 21:13, aginwala wrote:

Hi:

Adding the ML too. Folks from k8s can comment on whether the ovn-k8s
repo needs a documentation update so that you can get the setup working
using their specs as-is, without any code changes, in addition to using
your own custom OVN images, etc. I am getting a mail failure when
adding the ovn-k8s Google group, as I think I don't have permission to
post there. Also, the yaml specs and raft scripts have good comments
which can give you a clear idea too.


I'm not sure they do; it seems like a README would be good. What I'm
inferring is that you can use different combinations of yamls to
achieve different things.

1) Configure an OVN CNI
create -f ovn-setup.yaml
create -f ovnkube-db.yaml
create -f ovnkube-master.yaml
create -f ovnkube-node.yaml

2) Create and start a k8s RAFT-clustered OVN central
create -f ovn-setup.yaml
create -f ovnkube-db-raft.yaml


I'm not sure about the steps for 2, and whether or not other yamls
also need to be run: ovnkube-db.yaml? ovnkube-node.yaml?





Also cc'd Girish who can comment further.


Also, things like volumes (PVs) for OVN central dedicated nodes,
monitoring, backing up the OVN DB, etc. need to be considered so that
when a pod is restarted or the OVN version is upgraded, cluster
settings are retained and cluster health stats are also taken into
consideration.


Err, so the yamls don't create these?? So if a pod is restarted, the
NB/SB databases are lost?? Really??
I would have thought that if a RAFT cluster was created and a pod in
that cluster is restarted, the cluster would sync from the other pods?
This is worrying.






I got the design aspect of it sorted a week ago and had internal
review too (cc Han), as we do not use OVN as a CNI either, including
some pending containerizing items for the OVN global DBs and the OVN
interconnect controller, to be used for OVN interconnect. However,
it's pending testing in k8s with all the specs/tweaks due to some
other priorities. As the approach taken by ovn-k8s is succinct and
already tested, it shouldn't be a bottleneck.


I'm not sure I follow, are you saying this is all work in progress?


I agree that overall documentation needs to be consolidated, on both
the ovn-k8s side and in the ovn repo.


On Mon, Jul 6, 2020 at 9:49 AM Brendan Doyle wrote:


Hi,

I've been trying to follow the instructions at
https://github.com/ovn-org/ovn-kubernetes
to set up an OVN "Central/Master" high-availability (HA) cluster. I
want to deploy and manage that cluster as a Kubernetes service.

I can find lots of stuff on "ovn-kube", but this seems to be about
using OVN as a Kubernetes CNI instead of Flannel etc. This is not what
I want to do: I have a Kubernetes cluster using Flannel as the CNI,
and now I want to deploy an HA OVN "Central" as a Kubernetes service.
Kind of like how you can deploy a MySQL cluster in Kubernetes using a
StatefulSet deployment.

I have found this:
https://github.com/ovn-org/ovn-kubernetes#readme

But it is not clear to me whether this is how to set up OVN as a
Kubernetes CNI, or how to set up an HA OVN central as a Kubernetes
service.

I did try the steps in the README above, but they did not seem to
work. Then I saw that there is an ovnkube-db-raft.yaml file; this
seems more promising as it does use a StatefulSet, but I can find no
documentation on this file.

Thanks

Brendan





Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-07 Thread Brendan Doyle



On 06/07/2020 21:10, aginwala wrote:



On Mon, Jul 6, 2020 at 4:33 AM Brendan Doyle wrote:


Hi,

So I'm really confused by what you have pointed me to here. As stated,
I do NOT want to use OVN as a CNI. I have a k8s cluster that uses
flannel as the CNI. I simply want to create an OVN "central" cluster
as a StatefulSet in my *existing* K8s config.

This repo:

https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d

Seems to be for setting up a K8s cluster to use OVN as the CNI??



Still wondering about this? The above repo seems to be for creating an
OVN CNI for Kubernetes.

Is this correct???

But it also seems to include yamls for creating an OVN cluster as a
k8s service via a StatefulSet.

Is it necessary to create the OVN CNI in order to use
ovnkube-db-raft.yaml???

As I have said, I have an existing k8s cluster using a flannel CNI; I
just want to deploy an OVN central as a StatefulSet to that.


Have you tried this?
What IP do the ovn-controllers use to reach the OVN "central" cluster?
It seems to use an OVN docker image from docker.io; I want to use my
own OVN source.
Do I use/modify the dist/images/Dockerfile in this repo? It has loads
of references to CNI; like I said, I don't want to use OVN as the CNI??

A pre-req for running OVN central as a k8s app is to containerize the
OVN central components. Hence, you need to start your own containers
using docker. Either follow the approach from the ovn-k8s repo for
building OVN images, or refer to the docker instructions in the ovn
repo. Since this app (OVN central) will run behind a k8s service,
ovn-controller should point to the service IP of the OVN central k8s
app. The k8s folks can comment on how to build the image that is in
the k8s pod specs, e.g.
http://docker.io/ovnkube/ovn-daemonset:latest


Yes, the Docker image claims to be built using dist/images/Dockerfile,
which installs more than just the OVN central components.




The instructions here

https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst
seem more promising, if not a little confusing:


1)


Start OVN containers using below command:

$ docker run -itd --net=host --name=ovn-nb \
   : ovn-nb-tcp

$ docker run -itd --net=host --name=ovn-sb \
   : ovn-sb-tcp

$ docker run -itd --net=host --name=ovn-northd \
   : ovn-northd-tcp

followed by

2)

$ docker run -e "host_ip=" -e "nb_db_port=" -itd \
   --name=ovn-nb-raft --net=host --privileged : \
   ovn-nb-cluster-create

$ docker run -e "host_ip=" -e "sb_db_port=" -itd \
   --name=ovn-sb-raft --net=host --privileged : \
   ovn-sb-cluster-create

$ docker run -e "OVN_NB_DB=tcp::6641,tcp::6641,\
   tcp::6641" -e "OVN_SB_DB=tcp::6642,tcp::6642,\
   tcp::6642" -itd --name=ovn-northd-raft : \
   ovn-northd-cluster

Does it mean do 1) and then 2)? Or does it mean do 1) for a non-HA OVN
central *OR* 2) for an HA/clustered OVN central?

The doc says: "Start OVN containers in cluster mode using below
command on node2 and node3 to make them join the peer". Hence, you can
even play with just docker on 3 nodes, where you run step 1 on node1
to create the cluster

OK, is that the 1) above? Surely it is 2) above that creates the
cluster ("ovn-nb-cluster-create")???


and do the join-cluster on the remaining two nodes, to give you a
clear idea before moving to pods in k8s. Not sure if you need more
details in the doc; we can always improve it. Upstream ovn-k8s does
the same for pods, where e.g. the ovn-kube0 pod creates a cluster and
the other two pods join.


It's not clear


The docs are not clear. It seems to me the docs intend to say:

"
OVN containers can then be started either with a standalone database
or with a clustered database.

To start OVN containers with a standalone database, use the commands
below:

$ docker run -itd --net=host --name=ovn-nb \
   : ovn-nb-tcp

etc..

To start OVN containers in cluster mode for a 3-node cluster, use the
command below on node1:

$ docker run -e "host_ip=" -e "nb_db_port=" -itd \
   --name=ovn-nb-raft --net=host --privileged : \
   ovn-nb-cluster-create

Then start OVN containers in cluster mode using the command below on
node2 and node3 to make them join the peer:

$ docker run -e "host_ip=" -e "remote_host=" \
   -e "nb_db_port=" -itd --name=ovn-nb-raft --net=host \
   --privileged : ovn-nb-cluster-join

etc..
"



Brendan.


Thanks






On 25/06/2020 17:36, aginwala wrote:

Hi:

There are a couple of options as I have been exploring this too:

1. Upstream ovn-k8s patches

(https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d)
uses statefulset and headless service for starting ovn central
raft cluster with 3 replicas. Cluster startup code and pod sp

Re: [ovs-discuss] HA OVN "Central" as a kubernetes service

2020-07-07 Thread Brendan Doyle



On 06/07/2020 21:29, Girish Moodalbail wrote:

Hello Brendan,

After you run the './daemonset.sh` script, there will be two DB 
related yaml files in `dist/yaml` folder. The ovnkube-db.yaml brings 
up standalone OVN DBs, whilst the ovnkube-db-raft brings up the OVN 
Clustered DBs. Please do `kubectl apply -f 
$HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db-raft.yaml 
`.


Humm, "kubectl apply -f" or "kubectl create -f", as per the
https://github.com/ovn-org/ovn-kubernetes/ instructions? And what
needs to be run before that?

create -f ovn-setup.yaml
apply -f ovnkube-db-raft.yaml

What about ovnkube-master.yaml? I think not?



Furthermore, if you read that YAML file the node selector is set to 
nodes with label `k8s.ovn.org/ovnkube-db=true` 
. So, you will need to annotate 
at least 3 nodes with that label.


Yes, would be good to have that in a README.

Thanks I will try again with these.




HTH

Regards,
~Girish

On Mon, Jul 6, 2020 at 8:37 AM Brendan Doyle wrote:


So I've tried the steps in

https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d:

cd $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/images

./daemonset.sh --image=docker.io/ovnkube/ovn-daemonset-u:latest \
    --net-cidr=192.168.0.0/16 \
    --svc-cidr=172.16.1.0/24 \
    --gateway-mode="local" \
    --k8s-apiserver=https://$MASTER_IP:6443

# Create OVN namespace, service accounts, ovnkube-db headless
# service, configmap, and policies
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovn-setup.yaml

# Run ovnkube-db deployment.
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-db.yaml

# Run ovnkube-master deployment.
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-master.yaml

# Run ovnkube daemonset for nodes
kubectl create -f $HOME/work/src/github.com/ovn-org/ovn-kubernetes/dist/yaml/ovnkube-node.yaml

And I see nothing in my k8s cluster; it seems like it does nothing:

[root@ca-rain01 yaml]# kubectl create -f ovnkube-master.yaml
deployment "ovnkube-master" created
[root@ca-rain01 yaml]# kubectl delete deployment ovnkube-master
Error from server (NotFound): deployments.extensions "ovnkube-master" not found

Has anybody got this working, or used any other means to deploy an
OVN cluster as a K8s StatefulSet?

Brendan



On 06/07/2020 12:33, Brendan Doyle wrote:

Hi,

So I'm really confused by what you have pointed me to here. As stated,
I do NOT want to use OVN as a CNI. I have a k8s cluster that uses
flannel as the CNI. I simply want to create an OVN "central" cluster
as a StatefulSet in my *existing* K8s config.

This repo:

https://github.com/ovn-org/ovn-kubernetes/commit/a07b1a01af7e37b15c2e5f179ffad2b9f25a083d

Seems to be for setting up a K8s cluster to use OVN as the CNI??
Have you tried this?
What IP do the ovn-controllers use to reach the OVN "central" cluster?
It seems to use an OVN docker image from docker.io; I want to use my
own OVN source.
Do I use/modify the dist/images/Dockerfile in this repo? It has loads
of references to CNI; like I said, I don't want to use OVN as the CNI??


The instructions here

https://github.com/ovn-org/ovn/blob/d6b56b1629d5984ef91864510f918e232efb89de/Documentation/intro/install/general.rst
seem more promising, if not a little confusing:

In the section "Starting OVN Central services in containers":


Export following variables in .env and place it under project root:

$ OVN_BRANCH=
$ OVN_VERSION=
$ DISTRO=
$ KERNEL_VERSION=
$ GITHUB_SRC=
$ DOCKER_REPO=


Does it mean create a file called ".env" and place it in the top-level
dir of the cloned ovn repo?
Or does it mean just add these to your shell environment (i.e. put
them in .bashrc)?

Then we have:

1)


Start OVN containers using below command:

$ docker run -itd --net=host --name=ovn-nb \
   : ovn-nb-tcp

$ docker run -itd --net=host --name=ovn-sb \
   : ovn-sb-tcp

$ docker run -itd --net=host --name=ovn-northd \
   : ovn-northd-tcp

followed by

2)

$ docker run -e "host_ip=" -e "nb_db_port=" -itd \
   --name=ovn-nb-raft --net=host --privil

Re: [ovs-discuss] Question concerning GRE-encapsulated traffic *in* OpenStack provider VLAN network

2020-07-07 Thread Loschwitz,Martin Gerhard
Dear Ben,

> Am 06.07.2020 um 21:44 schrieb Ben Pfaff :
> 
> On Fri, Jun 26, 2020 at 10:10:47AM +, Loschwitz,Martin Gerhard wrote:
>> Folks,
>> 
>> I’m contacting you to find out if a behaviour I see is expected behaviour or 
>> actual misdemeanour in Open vSwitch and/or OVN. I have an OpenStack setup 
>> here that uses OVN. I have configured several VLAN-based provider networks. 
>> What I want to do is use a GRE-tunnel *inside* one of these VLAN networks. 
>> On the target compute node, I see that traffic enters the physical host but 
>> is not forwarded to the bridge to which the VM is connected. I can see that 
>> the flows for that are missing in the flow table.
>> 
>> I fully understand that OVN supports Geneve only, but in this case, I want 
>> my *tenants* to be able to use GRE encapsulation in a provider network. Is 
>> this supposed to work and this is either a bug or a misconfiguration? And if 
>> it is not expected to work, what are possible alternatives?
> 
> I'm surprised this doesn't work.  I'd expect it to work.  At a guess,
> I'd suspect MTU issues, especially if you see that it sometimes works
> for connection setup.


Alright, so the most important information for me is that this should be 
working in theory. I will have a close look at MTU settings, thank you for the 
pointer.
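For reference, the back-of-the-envelope MTU arithmetic for plain GRE over IPv4 (assuming no GRE key; a key adds 4 more bytes):

```python
# Headers added by GRE-over-IPv4 encapsulation inside the tenant network:
IPV4_HEADER = 20   # outer IPv4 header, no options
GRE_HEADER = 4     # base GRE header (add 4 if the key field is used)

def inner_mtu(path_mtu):
    """Largest inner packet that fits the path without fragmentation."""
    return path_mtu - IPV4_HEADER - GRE_HEADER

# With a standard 1500-byte MTU in the provider VLAN network, the
# tunnel interface inside the VM must use at most 1476.
print(inner_mtu(1500))  # -> 1476
```

If the tunnel interface inside the VM is left at 1500, small packets (e.g. connection setup) go through while full-size segments are dropped or fragmented, which matches the classic "works sometimes" symptom.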

Best regards
Martin


[ovs-discuss] please help me to confirm whether there is memleak in ovs-vswitchd

2020-07-07 Thread 王培辉
Hello,

It seems there is a memory leak in ovs-vswitchd. Everything was going
fine while DPDK was enabled in ovs-vswitchd and a netdev bridge was
created, but when I add DPDK ports to the netdev bridge, ovs-vswitchd
starts to consume memory until the system runs out of memory.

I'm using ovs-2.13.0 on CentOS 7.6. Please help me confirm whether
there is a memory leak in ovs-vswitchd. I tried to use
--ovs-vswitchd-wrapper=valgrind, but it got stuck while executing
"systemctl start ovs-vswitchd".

 

see the details below:

 

# ovs-vsctl list op

_uuid   : 800ba786-5c0c-4b67-8565-eb04c7a3f495

bridges : [732be0aa-b377-4b7a-9994-e9e9470ce918,
7cc6e09c-2cc6-4ef1-bc3e-1768476f6222, b873f99b-25b3-4864-bf60-988b2fe95dd6,
fd6d5636-1a50-45df-a9c8-3950aa919506]

cur_cfg : 3991

datapath_types  : [netdev, system]

datapaths   : {}

db_version  : "8.2.0"

dpdk_initialized: true

dpdk_version: "DPDK 19.11.1"

external_ids: {hostname=node-135, ovn-bridge=br-ovn,
ovn-bridge-mappings="default:br-provider,public:br-provider,default1:br-prov
ider,public1:br-provider", rundir="/var/run/openvswitch",
system-id="7cf7596c-ef34-42ad-9c4e-bf9736172d1b"}

iface_types : [dpdk, dpdkr, dpdkvhostuser, dpdkvhostuserclient,
erspan, geneve, gre, internal, ip6erspan, ip6gre, lisp, patch, stt, system,
tap, vxlan]

manager_options : [fd59708c-7125-443a-b0b7-ede3a945b66d]

next_cfg: 3991

other_config: {dpdk-extra="--single-file-segments", dpdk-init=try,
dpdk-socket-limit="1024,1024,1024,1024",
dpdk-socket-mem="1024,1024,1024,1024", pmd-cpu-mask="0xf00f00f01e",
stats-update-interval="1", userspace-tso-enable="true", vlan-limit="2"}

ovs_version : "2.13.1"

 

# ovs-vsctl show

800ba786-5c0c-4b67-8565-eb04c7a3f495

Manager "ptcp:6640:127.0.0.1"

Bridge sw-03

datapath_type: netdev

Port sw-03

Interface sw-03

type: internal

Port sw-03-bond

Interface enp217s0f0

type: dpdk

options: {dpdk-devargs=":d9:00.0"}

Interface enp219s0f0

type: dpdk

options: {dpdk-devargs=":db:00.0"}

 

The top command shows that the memory of ovs-vswitchd keeps growing:

 

top - 14:40:43 up 4 days,  4:48,  5 users,  load average: 3.50, 3.64, 3.51

Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie

%Cpu(s):  9.6 us,  1.0 sy,  0.0 ni, 89.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0
st

KiB Mem : 13153808+total, 39495820 free, 91428160 used,   614112 buff/cache

KiB Swap: 30719996 total, 29890564 free,   829432 used. 39331276 avail Mem 

 

   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND


198504 root  10 -10  524.1g   4.6g  26472 S 208.0  3.7   4:25.33
ovs-vswitchd 

top - 14:40:58 up 4 days,  4:49,  5 users,  load average: 3.53, 3.64, 3.51

Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie

%Cpu(s):  5.6 us,  1.1 sy,  0.0 ni, 93.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0
st

KiB Mem : 13153808+total, 38849932 free, 92051616 used,   636544 buff/cache

KiB Swap: 30719996 total, 29893380 free,   826616 used. 38685504 avail Mem 

 

   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND


198504 root  10 -10  524.7g   5.2g  26472 S 217.6  4.1   4:59.89
ovs-vswitchd  

top - 14:41:40 up 4 days,  4:49,  5 users,  load average: 4.00, 3.73, 3.55

Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie

%Cpu(s):  9.1 us,  0.9 sy,  0.0 ni, 90.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0
st

KiB Mem : 13153808+total, 37690576 free, 93230832 used,   616684 buff/cache

KiB Swap: 30719996 total, 29900036 free,   819960 used. 37526172 avail Mem 

 

   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND


198504 root  10 -10  525.9g   6.3g  26472 S 210.0  5.0   6:29.15
ovs-vswitchd 

top - 14:45:14 up 4 days,  4:53,  5 users,  load average: 2.98, 3.39, 3.45

Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie

%Cpu(s):  9.1 us,  0.8 sy,  0.0 ni, 90.1 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0
st

KiB Mem : 13153808+total, 32005748 free, 98914240 used,   618104 buff/cache

KiB Swap: 30719996 total, 29933316 free,   786680 used. 31842124 avail Mem 

 

   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND


198504 root  10 -10  531.2g  11.7g  26472 S 213.3  9.3  14:02.89
ovs-vswitchd

 

Because ovs-vswitchd consumed too much memory, the system OOM killer
was triggered, and it was finally killed.
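Since valgrind got stuck, a crude way to confirm the growth trend is to sample the daemon's RSS from /proc instead of eyeballing top (a Linux-only sketch; the PID and sampling parameters are whatever fits your setup). `ovs-appctl memory/show` may also give a hint about which internal structures are growing.

```python
import os
import time

def rss_kb(pid):
    """Resident set size of a process in kB, read from /proc."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def watch(pid, samples=3, interval=1.0):
    """Print RSS over time; a steady climb suggests a leak."""
    readings = []
    for _ in range(samples):
        readings.append(rss_kb(pid))
        print("VmRSS: %d kB" % readings[-1])
        time.sleep(interval)
    return readings

# Example: sample our own process; replace with the ovs-vswitchd PID.
watch(os.getpid(), samples=2, interval=0.1)
```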

 

If you need more information, please let me know

 

thanks


 





Re: [ovs-discuss] [ovs-dev] OVS 2.12/2.13 compilation on Ubuntu Bionic

2020-07-07 Thread Maciej Jozefczyk
Hello,

Thank you for your responses!

Is there any reason not to use the in-tree openvswitch kernel module
> provided in the Ubuntu kernels?  Ubuntu stopped shipping DKMS modules as
> part of OVS quite a long time ago as the openvswitch module in the kernel
> is well maintained and generally up-to-date - and also to avoid this type
> of breaking change.
>

Yes. QoS for OVN wasn't really working until the OVN team started
using OVS meter actions. Those types of actions do not work properly
with the OVS kernel module shipped by Ubuntu Bionic (up to kernel
4.18.0 [1]), so to test this functionality in the Neutron upstream
gates we compile the module from OVS source.

This patch is actually on branch-2.12 and branch-2.13.
> The only thing that is missing is a new stable release (tags).
> We're going to release new stable versions on all previous branches soon.


That is great news. Thank You!

Maciej

On Mon, Jul 6, 2020 at 7:58 PM Ilya Maximets  wrote:

> On 6/29/20 8:45 PM, Gregory Rose wrote:
> >
> >
> > On 6/26/2020 4:57 AM, Maciej Jozefczyk wrote:
> >> Hello!
> >>
> >> I would like to kindly ask You if there is a possibility to cherry-pick
> >> patch [1] to stable branches OVS 2.12, OVS 2.13 and release new tags
> for it?
> >>
> >> Without this patch we're now unable to compile OVS 2.12 in OpenStack
> >> Neutron stable releases CI, because it recently started to fail on
> Ubuntu
> >> Bionic with an error:
> >>
> >> 2020-06-24 14:50:13.975917 | primary |
> >> /opt/stack/new/ovs/datapath/linux/geneve.c: In function
> >> ‘geneve_get_v6_dst’:
> >> 2020-06-24 14:50:13.975993 | primary |
> >> /opt/stack/new/ovs/datapath/linux/geneve.c:966:15: error: ‘const
> >> struct ipv6_stub’ has no member named ‘ipv6_dst_lookup’
> >> 2020-06-24 14:50:13.976026 | primary |   if
> >> (ipv6_stub->ipv6_dst_lookup(geneve->net, gs6->sock->sk, &dst, fl6)) {
> >> 2020-06-24 14:50:13.976049 | primary |^
> >> 2020-06-24 14:50:14.010809 | primary | scripts/Makefile.build:285:
> >> recipe for target '/opt/stack/new/ovs/datapath/linux/geneve.o' failed
> >>
> >> The same happens for OVN 2.13. For now this blocks your CI pipelines.
> >>
> >> Can I ask You to backport this patch?
>
> This patch is actually on branch-2.12 and branch-2.13.
> The only thing that is missing is a new stable release (tags).
> We're going to release new stable versions on all previous branches soon.
>
> Best regards, Ilya Maximets.
>
> >>
> >> Thanks,
> >> Maciej
> >>
> >> [1]
> >>
> https://github.com/openvswitch/ovs/commit/5519e384f6a17f564fef4c5eb39e471e16c77235
> >>
> >>
> >
> > Adding OVS Dev list where maybe the maintainers might see this sooner.
> >
> > - Greg
>
>

-- 
Best regards,
Maciej Józefczyk


[ovs-discuss] Reg. OVS 2.13 release announcement

2020-07-07 Thread Vishal Deep Ajmera via discuss
Hi,

I was wondering if any official announcement has been made for the OVS
2.13 release in Feb 2020. I probably missed it, but I could not locate
it on the official openvswitch.org webpage either.

Can someone help with the link to the ovs-2.13 tarball?

Warm Regards,

Vishal Ajmera


