[ovs-dev] FINAL STRETCH - NOMADE REGGAE FESTIVAL 5th EDITION

2019-07-10 Thread info




The final stretch toward good vibrations at the 5th edition of the
Nomade Reggae Festival, the event of the summer in Haute-Savoie, on 2, 3 and
4 August in Frangy (74).
30 minutes from Geneva and Annecy, 1h30 from Lyon and Grenoble.
On the program:
ALBOROSIE, SINSEMILIA, ANTHONY B, ISRAEL VIBRATION,
PIERPOLJAK, BIGA*RANX, LE PEUPLE DE L'HERBE, THE CONGOS, KANKA DUB,
MC BROTHER CULTURE, KEIDA, TIWONY, I WOKS, FEUILLES DE ROOTS,
I-TAWEH, ARAT KILO, EVIDENCE MUSIC - DERRICK SOUND (Little Lion Sound)

TAKE ADVANTAGE OF OUR PROMO PRESALE OFFER:
Promo pass, 2 days + 3rd day free = 60 euros (LIMITED OFFER)

Promo pass, 1 day = 30 euros (excl. tax)

On site: 35 euros/day
NB: tickets are non-nominative and non-refundable

Buy your tickets on the event website. Print your ticket at home or
download it to your mobile:
www.nomadereggaefestival.com
And at the usual points of sale:
Fnac, Ticketnet, Digitick, Carrefour, Géant, Système U, Intermarché,
Auchan, Cora, Cultura, E.Leclerc, Le Progrès
Free for children under 12


A committed festival:
The Nomade Reggae Festival is building a socio-medical literacy centre
in Sanankoroba, a rural commune in Mali.
Learn more:
https://www.nomadereggaefestival.com/fr/projet/


PRACTICAL INFO:
rue du stade, 74270 Frangy / 41 min from Annecy / 27 min from
Saint-Julien-en-Genevois / 30 min from Geneva / 1h30 from Grenoble and Lyon.
Free camping

Doors open:
Food and entertainment from 12:00
Concerts start at 16:00

More info: www.nomadereggaefestival.com








[ovs-dev] Re: Why is ovs DPDK much worse than ovs in my test case?

2019-07-10 Thread Yi Yang (杨燚)
BTW, offload features are on in my test on client1 and server1 (the iperf server).

vagrant@client1:~$ ethtool -k enp0s8
Features for enp0s8:
rx-checksumming: on [fixed]
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp6-segmentation: on
udp-fragmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: on [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: on [fixed]
hw-tc-offload: off [fixed]
vagrant@client1:~$

vagrant@server1:~$ ifconfig enp0s8
enp0s8    Link encap:Ethernet  HWaddr 08:00:27:c0:a6:0b
  inet addr:192.168.230.101  Bcast:192.168.230.255  Mask:255.255.255.0
  inet6 addr: fe80::a00:27ff:fec0:a60b/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
  RX packets:4228443 errors:0 dropped:0 overruns:0 frame:0
  TX packets:2484988 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:34527894301 (34.5 GB)  TX bytes:528944799 (528.9 MB)

vagrant@server1:~$ ethtool -k enp0s8
Features for enp0s8:
rx-checksumming: on [fixed]
tx-checksumming: on
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: on
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp6-segmentation: on
udp-fragmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: on [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: on [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
busy-poll: on [fixed]
hw-tc-offload: off [fixed]
vagrant@server1:~$

-----Original Message-----
From: Yi Yang (杨燚) - Cloud Service Group
Sent: July 11, 2019, 8:22
To: i.maxim...@samsung.com; ovs-dev@openvswitch.org
Cc: Yi Yang (杨燚) - Cloud Service Group
Subject: Re: [ovs-dev] Why is ovs DPDK much worse than ovs in my test case?
Importance: High

Ilya, thank you so much. Using a 9K MTU for all the virtio interfaces in the
transport path does help (including the DPDK port); the data is here.

vagrant@client1:~$ iperf -t 60 -i 10 -c 192.168.230.101

Client connecting to 192.168.230.101, TCP port 5001
TCP window size:  325 KByte (default)

[  3] local 192.168.200.101 port 53956 connected with 192.168.230.101 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   315 MBytes   264 Mbits/sec
[  3] 10.0-20.0 sec   333 MBytes   280 Mbits/sec
[  3] 20.0-30.0 sec   300 MBytes   252 Mbits/sec
[  3] 30.0-40.0 sec   307 MBytes   258 Mbits/sec
[  3] 40.0-50.0 sec   322 MBytes   270 Mbits/sec
[  3] 50.0-60.0 sec   316 MBytes   265 Mbits/sec
[  3]  0.0-60.0 sec  1.85 GBytes   265 Mbits/sec
vagrant@client1:~$

But it is still much worse than the OVS kernel datapath. In my test case I used
VirtualBox networking; the whole transport path traverses several different VMs,
and every VM has offload features turned on except the OVS-DPDK VM. I understand
TSO offload should be done on the send side, so when the packet is sent out from
the send side or the receive side it has already been segmented by TSO to fit
the path MTU; so in the OVS kernel VM / OVS-DPDK VM, the packet size has been MTU of ovs port/DPDK

[ovs-dev] Re: Why is ovs DPDK much worse than ovs in my test case?

2019-07-10 Thread Yi Yang (杨燚)
Ilya, thank you so much. Using a 9K MTU for all the virtio interfaces in the
transport path does help (including the DPDK port); the data is here.

vagrant@client1:~$ iperf -t 60 -i 10 -c 192.168.230.101

Client connecting to 192.168.230.101, TCP port 5001
TCP window size:  325 KByte (default)

[  3] local 192.168.200.101 port 53956 connected with 192.168.230.101 port 5001
[ ID] Interval   Transfer Bandwidth
[  3]  0.0-10.0 sec   315 MBytes   264 Mbits/sec
[  3] 10.0-20.0 sec   333 MBytes   280 Mbits/sec
[  3] 20.0-30.0 sec   300 MBytes   252 Mbits/sec
[  3] 30.0-40.0 sec   307 MBytes   258 Mbits/sec
[  3] 40.0-50.0 sec   322 MBytes   270 Mbits/sec
[  3] 50.0-60.0 sec   316 MBytes   265 Mbits/sec
[  3]  0.0-60.0 sec  1.85 GBytes   265 Mbits/sec
vagrant@client1:~$

But it is still much worse than the OVS kernel datapath. In my test case I used
VirtualBox networking; the whole transport path traverses several different VMs,
and every VM has offload features turned on except the OVS-DPDK VM. I understand
TSO offload should be done on the send side, so when the packet is sent out from
the send side or the receive side it has already been segmented by TSO to fit
the path MTU; so in the OVS kernel VM / OVS-DPDK VM the packet size is already
the MTU of the OVS port / DPDK port, so it needn't do TSO work, right?

-----Original Message-----
From: Ilya Maximets [mailto:i.maxim...@samsung.com]
Sent: July 10, 2019, 18:11
To: ovs-dev@openvswitch.org; Yi Yang (杨燚) - Cloud Service Group
Subject: Re: [ovs-dev] Why is ovs DPDK much worse than ovs in my test case?

> Hi, all
> 
> I just use ovs as a static router in my test case. ovs runs in a
> Vagrant VM, the ethernet interfaces use the virtio driver, and I create
> two ovs bridges, each with one ethernet interface added; the two bridges
> are connected by a patch port, and only the default openflow rule is there.
> 
> table=0, priority=0 actions=NORMAL
> Bridge br-int
> Port patch-br-ex
> Interface patch-br-ex
> type: patch
> options: {peer=patch-br-int}
> Port br-int
> Interface br-int
> type: internal
> Port "dpdk0"
> Interface "dpdk0"
> type: dpdk
> options: {dpdk-devargs="0000:00:08.0"}
> Bridge br-ex
> Port "dpdk1"
> Interface "dpdk1"
> type: dpdk
> options: {dpdk-devargs="0000:00:09.0"}
> Port patch-br-int
> Interface patch-br-int
> type: patch
> options: {peer=patch-br-ex}
> Port br-ex
> Interface br-ex
> type: internal
> 
> But when I run iperf to do performance benchmark, the result shocked me.
> 
> For ovs nondpdk, the result is
> 
> vagrant at client1:~$ iperf -t 60 -i 10 -c 192.168.230.101
> 
> 
> Client connecting to 192.168.230.101, TCP port 5001 TCP window size: 
> 85.0 KByte (default)
> 
> [  3] local 192.168.200.101 port 53900 connected with 192.168.230.101 
> port
> 5001
> [ ID] Interval   Transfer Bandwidth
> [  3]  0.0-10.0 sec  1.05 GBytes   905 Mbits/sec
> [  3] 10.0-20.0 sec  1.02 GBytes   877 Mbits/sec
> [  3] 20.0-30.0 sec  1.07 GBytes   922 Mbits/sec
> [  3] 30.0-40.0 sec  1.08 GBytes   927 Mbits/sec
> [  3] 40.0-50.0 sec  1.06 GBytes   914 Mbits/sec
> [  3] 50.0-60.0 sec  1.07 GBytes   922 Mbits/sec
> [  3]  0.0-60.0 sec  6.37 GBytes   911 Mbits/sec
> 
> vagrant at client1:~$
> 
> For ovs dpdk, the bandwidth is just about 45 Mbits/sec. Why? I really
> don't understand what happened.
> 
> vagrant at client1:~$ iperf -t 60 -i 10 -c 192.168.230.101
> 
> 
> Client connecting to 192.168.230.101, TCP port 5001 TCP window size: 
> 85.0 KByte (default)
> 
> [  3] local 192.168.200.101 port 53908 connected with 192.168.230.101 
> port
> 5001
> [ ID] Interval   Transfer Bandwidth
> [  3]  0.0-10.0 sec  54.6 MBytes  45.8 Mbits/sec [  3] 10.0-20.0 sec  
> 55.5 MBytes  46.6 Mbits/sec [  3] 20.0-30.0 sec  52.5 MBytes  44.0 
> Mbits/sec [  3] 30.0-40.0 sec  53.6 MBytes  45.0 Mbits/sec [  3] 
> 40.0-50.0 sec  54.0 MBytes  45.3 Mbits/sec [  3] 50.0-60.0 sec  53.9 
> MBytes  45.2 Mbits/sec
> [  3]  0.0-60.0 sec   324 MBytes  45.3 Mbits/sec
> 
> vagrant at client1:~$
> 
> By the way, I tried to pin physical cores to the qemu processes that
> correspond to ovs pmd threads, but it hardly affects performance.
> 
>   PID USER  PR  NIVIRTRESSHR S %CPU %MEM TIME+ COMMAND
> P
> 16303 yangyi 20   0   9207120 209700 107500 R 99.9  0.1   63:02.37
> EMT-1  1
> 16304 yangyi 20   0   9207120 209700 107500 R 99.9  0.1   69:16.16
> EMT-2  2


Hi.
There might be a lot of reasons 
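
As a first-pass checklist for a gap like this (the mask value is only an
example; these are standard OVS-DPDK commands, not specific to this setup):

# Pin PMD threads to dedicated cores (example: cores 1 and 2) rather
# than pinning the QEMU processes.
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

# Check which Rx queues each PMD thread polls.
ovs-appctl dpif-netdev/pmd-rxq-show

# Check PMD cycle and packet statistics for an overloaded core.
ovs-appctl dpif-netdev/pmd-stats-show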

Re: [ovs-dev] [PATCH 2/2] datapath: fix csum updates for MPLS actions

2019-07-10 Thread Gregory Rose

Thanks Ben!

On 7/10/2019 1:04 PM, Ben Pfaff wrote:

Thanks for the backports (William, thanks for the reviews).  I applied
these to master.




[ovs-dev] [PATCH v3] tnl-neigh-cache: Purge learnt neighbors when port/bridge is deleted

2019-07-10 Thread Vasu Dasari
Say an ARP entry is learnt on an OVS port; when such a port is deleted, the
learnt entry should be removed as well. Otherwise it would only be aged out
after the ARP ageout time. This code cleans it up immediately.

Added a test case (tunnel - neighbor entry add and deletion) in tunnel.at to
verify that neighbors are added and removed on deletion of ports and bridges.

Discussion for this addition is at:
https://mail.openvswitch.org/pipermail/ovs-discuss/2019-June/048754.html

Signed-off-by: Vasu Dasari 
Acked-by: Flavio Fernandes 
Acked-by: Ben Pfaff 
---
v1 -> v2:
  Incorporated robot comments. Verified the commit with
  utilities/checkpatch.py -1
v2 -> v3:
  Incorporate review comments from Flavio and Ben. Discussions at:
  https://mail.openvswitch.org/pipermail/ovs-dev/2019-July/360318.html
  https://mail.openvswitch.org/pipermail/ovs-dev/2019-July/360424.html

---
 lib/tnl-neigh-cache.c| 20 +++
 lib/tnl-neigh-cache.h|  1 +
 ofproto/ofproto-dpif-xlate.c |  3 +++
 tests/tunnel.at  | 47 
 4 files changed, 71 insertions(+)

diff --git a/lib/tnl-neigh-cache.c b/lib/tnl-neigh-cache.c
index b28f9f1bb..5bda4af7e 100644
--- a/lib/tnl-neigh-cache.c
+++ b/lib/tnl-neigh-cache.c
@@ -220,6 +220,26 @@ tnl_neigh_cache_run(void)
 }
 }
 
+void
+tnl_neigh_flush(const char br_name[IFNAMSIZ])
+{
+    struct tnl_neigh_entry *neigh;
+    bool changed = false;
+
+    ovs_mutex_lock(&table_mutex);
+    CMAP_FOR_EACH (neigh, cmap_node, &table) {
+        if (!strcmp(neigh->br_name, br_name)) {
+            tnl_neigh_delete(neigh);
+            changed = true;
+        }
+    }
+    ovs_mutex_unlock(&table_mutex);
+
+    if (changed) {
+        seq_change(tnl_conf_seq);
+    }
+}
+
+
 static void
 tnl_neigh_cache_flush(struct unixctl_conn *conn, int argc OVS_UNUSED,
 const char *argv[] OVS_UNUSED, void *aux OVS_UNUSED)
diff --git a/lib/tnl-neigh-cache.h b/lib/tnl-neigh-cache.h
index fee8e6a6f..ded9c2f86 100644
--- a/lib/tnl-neigh-cache.h
+++ b/lib/tnl-neigh-cache.h
@@ -37,5 +37,6 @@
 int tnl_neigh_lookup(const char dev_name[], const struct in6_addr *dst,
                      struct eth_addr *mac);
 void tnl_neigh_cache_init(void);
 void tnl_neigh_cache_run(void);
+void tnl_neigh_flush(const char dev_name[]);
 
 #endif
diff --git a/ofproto/ofproto-dpif-xlate.c b/ofproto/ofproto-dpif-xlate.c
index 73966a4e8..28a7fdd84 100644
--- a/ofproto/ofproto-dpif-xlate.c
+++ b/ofproto/ofproto-dpif-xlate.c
@@ -1481,6 +1481,9 @@ xlate_ofport_remove(struct ofport_dpif *ofport)
 ovs_assert(new_xcfg);
 
 xport = xport_lookup(new_xcfg, ofport);
+    if (xport) {
+        tnl_neigh_flush(netdev_get_name(xport->netdev));
+    }
 xlate_xport_remove(new_xcfg, xport);
 }
 
diff --git a/tests/tunnel.at b/tests/tunnel.at
index 035c54f67..fc6f87936 100644
--- a/tests/tunnel.at
+++ b/tests/tunnel.at
@@ -920,3 +920,50 @@ dnl which is not correct
 
 OVS_VSWITCHD_STOP
 AT_CLEANUP
+
+AT_SETUP([tunnel - neighbor entry add and deletion])
+OVS_VSWITCHD_START([add-port br0 p1 -- set Interface p1 type=gre \
+options:remote_ip=1.1.1.1 options:local_ip=2.2.2.2 \
+options:key=5 ofport_request=1 \
+-- add-port br0 p2 -- set Interface p2 type=gre \
+options:local_ip=3.3.3.3 options:remote_ip=4.4.4.4 \
+ofport_request=2])
+AT_CHECK([ovs-vsctl add-br br1 -- set bridge br1 datapath_type=dummy], [0])
+
+dnl Populate tunnel neighbor cache table
+AT_CHECK([
+ovs-appctl tnl/arp/set p1 10.0.0.1 00:00:10:00:00:01
+ovs-appctl tnl/arp/set p1 10.0.0.2 00:00:10:00:00:02
+ovs-appctl tnl/arp/set p2 10.0.1.1 00:00:10:00:01:01
+ovs-appctl tnl/arp/set br0 10.0.2.1 00:00:10:00:02:01
+ovs-appctl tnl/arp/set br0 10.0.2.2 00:00:10:00:02:02
+ovs-appctl tnl/arp/set br1 20.0.0.1 00:00:20:00:00:01
+], [0], [stdout])
+
+AT_CHECK([ovs-appctl tnl/neigh/show | tail -n+3 | sort], [0], [dnl
+10.0.0.1  00:00:10:00:00:01   p1
+10.0.0.2  00:00:10:00:00:02   p1
+10.0.1.1  00:00:10:00:01:01   p2
+10.0.2.1  00:00:10:00:02:01   br0
+10.0.2.2  00:00:10:00:02:02   br0
+20.0.0.1  00:00:20:00:00:01   br1
+])
+
+dnl neighbor table after deleting port p1
+AT_CHECK([ovs-vsctl del-port br0 p1],[0], [stdout])
+AT_CHECK([ovs-appctl tnl/neigh/show | tail -n+3 | grep -w p1 | sort], [0], [dnl
+])
+
+dnl neighbor table after deleting bridge br0
+AT_CHECK([ovs-vsctl del-br br0],[0], [stdout])
+AT_CHECK([ovs-appctl tnl/neigh/show | tail -n+3 | sort], [0], [dnl
+20.0.0.1  00:00:20:00:00:01   br1
+])
+
+dnl neighbor table after deleting bridge br1
+AT_CHECK([ovs-vsctl del-br br1],[0], [stdout])
+AT_CHECK([ovs-appctl tnl/neigh/show | tail -n+3 | sort], [0], [dnl
+])
+
+OVS_VSWITCHD_STOP

Re: [ovs-dev] [PATCH v2 2/3] OVN: Add IGMP SB definitions and ovn-controller support

2019-07-10 Thread Ben Pfaff
On Tue, Jul 09, 2019 at 02:49:18PM +0200, Dumitru Ceara wrote:
> A new action ("igmp") is added to punt IGMP packets on a specific logical
> switch datapath to ovn-controller if IGMP snooping is enabled.

I do not fully understand the new action.  It is defined to take a set
of nested actions inside {}, but I don't see what it actually does with
those actions.  I do not think that it executes them, and the
documentation does not mention them.  Maybe this means that it is a
mistake to have it take them at all?


Re: [ovs-dev] [PATCH] ovsdb-server: drop all connections on read/write status change

2019-07-10 Thread Ben Pfaff
On Tue, Jul 09, 2019 at 12:16:30PM +0200, Daniel Alvarez wrote:
> Prior to this patch, only db change aware connections were dropped
> on a read/write status change. However, current schema in OVN does
> not allow clients to monitor whether a particular DB changes this
> status. In order to accomplish this, we'd need to change the schema
> and adapt ovsdb-server and existing clients.
> 
> Before tackling that, this patch is changing ovsdb-server to drop
> *all* the existing connections upon a read/write status change. This
> will force clients to reconnect and honor the change.
> 
> Reported-at: 
> https://mail.openvswitch.org/pipermail/ovs-discuss/2019-July/048981.html
> Signed-off-by: Daniel Alvarez 

Thanks, applied to master.


Re: [ovs-dev] [PATCH 2/2] datapath: fix csum updates for MPLS actions

2019-07-10 Thread Ben Pfaff
Thanks for the backports (William, thanks for the reviews).  I applied
these to master.


Re: [ovs-dev] [PATCH] Shutdown SSL connection before closing socket

2019-07-10 Thread Ben Pfaff
On Wed, Jul 10, 2019 at 11:07:16AM -0500, Terry Wilson wrote:
> Without shutting down the SSL connection, log messages like:
> 
> stream_ssl|WARN|SSL_read: unexpected SSL connection close
> jsonrpc|WARN|ssl:127.0.0.1:47052: receive error: Protocol error
> reconnect|WARN|ssl:127.0.0.1:47052: connection dropped (Protocol error)
> 
> would occur whenever the socket is closed. This just adds an
> SSLStream.close() that calls shutdown() and ignores read/write
> errors.
> 
> Signed-off-by: Terry Wilson 

Thanks for the patch.

With this applied, I get two test failures, details below.

##  ##
## Summary of the failures. ##
##  ##
Failed tests:
openvswitch 2.11.90 test suite test groups:

 NUM: FILE-NAME:LINE TEST-GROUP-NAME
  KEYWORDS

 2108: ovsdb-idl.at:351   simple idl, initially empty, various ops - Python2 - 
SSL
  ovsdb server idl positive python with ssl socket
 2439: ovsdb-idl.at:1452  simple idl verify notify - Python2 - SSL
  ovsdb server idl positive python with ssl socket notify

## -- ##
## Detailed failed tests. ##
## -- ##

# -*- compilation -*-
2108. ovsdb-idl.at:351: testing simple idl, initially empty, various ops - 
Python2 - SSL ...
../../tests/ovsdb-idl.at:351: ovsdb-tool create db $abs_srcdir/idltest.ovsschema
stderr:
stdout:
../../tests/ovsdb-idl.at:351: ovsdb-server -vconsole:warn --log-file --detach 
--no-chdir \
 --pidfile \
 --private-key=$PKIDIR/testpki-privkey2.pem \
 --certificate=$PKIDIR/testpki-cert2.pem \
 --ca-cert=$PKIDIR/testpki-cacert.pem \
 --remote=pssl:0:127.0.0.1 db
ovsdb-idl.at:351: waiting until TCP_PORT=`sed -n 's/.*0:.*: listening on port 
\([0-9]*\)$/\1/p' "ovsdb-server.log"` && test X != X"$TCP_PORT"...
ovsdb-idl.at:351: wait succeeded immediately
../../tests/ovsdb-idl.at:351: $PYTHON $srcdir/test-ovsdb.py  -t10 idl 
$srcdir/idltest.ovsschema \
 ssl:127.0.0.1:$TCP_PORT $PKIDIR/testpki-privkey.pem \
 $PKIDIR/testpki-cert.pem $PKIDIR/testpki-cacert.pem '["idltest",
  {"op": "insert",
   "table": "simple",
   "row": {"i": 1,
   "r": 2.0,
   "b": true,
   "s": "mystring",
   "u": ["uuid", "84f5c8f5-ac76-4dbc-a24f-8860eb407fc1"],
   "ia": ["set", [1, 2, 3]],
   "ra": ["set", [-0.5]],
   "ba": ["set", [true]],
   "sa": ["set", ["abc", "def"]],
   "ua": ["set", [["uuid", "69443985-7806-45e2-b35f-574a04e720f9"],
  ["uuid", 
"aad11ef0-816a-4b01-93e6-03b8b4256b98"]]]}},
  {"op": "insert",
   "table": "simple",
   "row": {}}]' \
'["idltest",
  {"op": "update",
   "table": "simple",
   "where": [],
   "row": {"b": true}}]' \
'["idltest",
  {"op": "update",
   "table": "simple",
   "where": [],
   "row": {"r": 123.5}}]' \
'["idltest",
  {"op": "insert",
   "table": "simple",
   "row": {"i": -1,
   "r": 125,
   "b": false,
   "s": "",
   "ia": ["set", [1]],
   "ra": ["set", [1.5]],
   "ba": ["set", [false]],
   "sa": ["set", []],
   "ua": ["set", []]}}]' \
'["idltest",
  {"op": "update",
   "table": "simple",
   "where": [["i", "<", 1]],
   "row": {"s": "newstring"}}]' \
'["idltest",
  {"op": "delete",
   "table": "simple",
   "where": [["i", "==", 0]]}]' \
'reconnect'
stderr:
2019-07-10T19:57:50Z |  0  | reconnect | DBG | ssl:127.0.0.1:38627: entering 
BACKOFF
2019-07-10T19:57:50Z |  1  | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  2  | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  3  | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  4  | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  5  | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  6  | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  7  | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  8  | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  9  | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  10 | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  11 | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  12 | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  13 | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  14 | poller | DBG | [POLLOUT] on fd 3
2019-07-10T19:57:50Z |  15 | reconnect | INFO | ssl:127.0.0.1:38627: 
connecting...
2019-07-10T19:57:50Z |  16 | reconnect | DBG | ssl:127.0.0.1:38627: entering 
CONNECTING
2019-07-10T19:57:50Z |  17 | poller | DBG | [POLLOUT] on fd 4
2019-07-10T19:57:50Z |  18 | poller | DBG | [POLLOUT] on fd 4
2019-07-10T19:57:50Z |  19 | poller | DBG | [POLLOUT] on fd 4
2019-07-10T19:57:50Z |  20 | poller | DBG | [POLLOUT] on 

Re: [ovs-dev] [PATCH v2 0/3] OVN: Add IGMP support

2019-07-10 Thread Mark Michelson

For the series:

Acked-by: Mark Michelson 

On 7/9/19 8:48 AM, Dumitru Ceara wrote:

This series introduces support for IGMP Snooping and IGMP Querier. IGMP
versions v1-v3 are supported for snooping and IGMP queries originated by
ovn-controller are general IGMPv3 queries. The rationale behind using v3 for
querier is that it's backward compatible with v1-v2.

The majority of the code is IP version independent with the thought in mind
that support for MLD snooping for IPv6 will be added next.
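
As an illustration of the intended usage (the option names below are an
assumption based on this series, not quoted from it): snooping and querier
would be enabled per logical switch through northbound options, e.g.:

# Hypothetical CMS-side configuration for a logical switch sw0.
ovn-nbctl set Logical_Switch sw0 other_config:mcast_snoop=true
ovn-nbctl set Logical_Switch sw0 other_config:mcast_querier=true

# Learned groups would then show up in the southbound IGMP_Group table.
ovn-sbctl list IGMP_Group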

Dumitru Ceara (3):
   packets: Add IGMPv3 query packet definitions
   OVN: Add IGMP SB definitions and ovn-controller support
   OVN: Add ovn-northd IGMP support


  include/ovn/actions.h   |6
  lib/packets.c   |   44 ++
  lib/packets.h   |   19 +
  ovn/controller/automake.mk  |2
  ovn/controller/ip-mcast.c   |  164 
  ovn/controller/ip-mcast.h   |   52 +++
  ovn/controller/ovn-controller.c |   23 +
  ovn/controller/pinctrl.c|  787 +++
  ovn/controller/pinctrl.h|2
  ovn/lib/actions.c   |   22 +
  ovn/lib/automake.mk |2
  ovn/lib/ip-mcast-index.c|   40 ++
  ovn/lib/ip-mcast-index.h|   38 ++
  ovn/lib/logical-fields.c|2
  ovn/northd/ovn-northd.c |  454 +-
  ovn/ovn-nb.xml  |   54 +++
  ovn/ovn-sb.ovsschema|   43 ++
  ovn/ovn-sb.xml  |   80 
  ovn/utilities/ovn-sbctl.c   |   53 +++
  ovn/utilities/ovn-trace.c   |   14 +
  tests/ovn.at|  276 ++
  tests/system-ovn.at |  119 ++
  22 files changed, 2260 insertions(+), 36 deletions(-)
  create mode 100644 ovn/controller/ip-mcast.c
  create mode 100644 ovn/controller/ip-mcast.h
  create mode 100644 ovn/lib/ip-mcast-index.c
  create mode 100644 ovn/lib/ip-mcast-index.h


---
v2:
- address reviewer comments.
- fix a memory corruption when reallocating multicast ports in
   ovn_multicast_add_ports.
- add missing NULL checks in pinctrl.c for the case when a logical switch
   configuration gets deleted by ovn-northd and controllers are updating
   IGMP_Group entries.
- Fix allocation of multicast group IDs in ovn-northd. The multicast group
   IDs were allocated globally instead of per-datapath  which was limiting
   the total number of IGMP Groups. At most 32K IGMP groups can be learned
   per datapath regardless of how many datapaths are configured.
- add system-ovn.at test.


Re: [ovs-dev] [PATCH] Shutdown SSL connection before closing socket

2019-07-10 Thread Terry Wilson
from __future__ import print_function
import sys

from ovs import jsonrpc
from ovs import stream
from ovs.unixctl import client


URI='ssl:127.0.0.1:6641'
PRIV='sandbox/ovnnb-privkey.pem'
CERT='sandbox/ovnnb-cert.pem'
CACERT='sandbox/pki/switchca/cacert.pem'
stream.Stream.ssl_set_private_key_file(PRIV)
stream.Stream.ssl_set_certificate_file(CERT)
stream.Stream.ssl_set_ca_cert_file(CACERT)


class SSLClient(client.UnixctlClient):
    @classmethod
    def create(cls, uri):
        error, _stream = stream.Stream.open_block(
            stream.Stream.open(uri))
        if error:
            client.vlog.warn("failed to connect to %s" % uri)
            return error, None
        return 0, cls(jsonrpc.Connection(_stream))


_, c = SSLClient.create(URI)
print(c.transact("echo", ["hello world"]))
c.close()

On Wed, Jul 10, 2019 at 12:17 PM Mark Michelson  wrote:

> On 7/10/19 12:11 PM, Terry Wilson wrote:
> > An example of a reproducer script attached. If you enable SSL and OVN w/
> > the sandbox and run it, looking in the sandbox/nb1.log you'll see the
> > disconnect errors that the patch makes go away.
> >
>
> Hi Terry. It looks like the mailing list has eaten your attachment. If
> possible, can you include it in-line?
>
> > On Wed, Jul 10, 2019 at 11:07 AM Terry Wilson 
> wrote:
> >
> >> Without shutting down the SSL connection, log messages like:
> >>
> >> stream_ssl|WARN|SSL_read: unexpected SSL connection close
> >> jsonrpc|WARN|ssl:127.0.0.1:47052: receive error: Protocol error
> >> reconnect|WARN|ssl:127.0.0.1:47052: connection dropped (Protocol error)
> >>
> >> would occur whenever the socket is closed. This just adds an
> >> SSLStream.close() that calls shutdown() and ignores read/write
> >> errors.
> >>
> >> Signed-off-by: Terry Wilson 
> >> ---
> >>   python/ovs/stream.py | 8 
> >>   1 file changed, 8 insertions(+)
> >>
> >> diff --git a/python/ovs/stream.py b/python/ovs/stream.py
> >> index c15be4b..fd1045e 100644
> >> --- a/python/ovs/stream.py
> >> +++ b/python/ovs/stream.py
> >> @@ -825,6 +825,14 @@ class SSLStream(Stream):
> >>   except SSL.SysCallError as e:
> >>   return -ovs.socket_util.get_exception_errno(e)
> >>
> >> +def close(self):
> >> +if self.socket:
> >> +try:
> >> +self.socket.shutdown()
> >> +except (SSL.WantReadError, SSL.WantWriteError):
> >> +pass
> >> +return super(SSLStream, self).close()
> >> +
> >>
> >>   if SSL:
> >>   # Register SSL only if the OpenSSL module is available
> >> --
> >> 1.8.3.1
> >>
> >>
> >>
> >> ___
> >> dev mailing list
> >> d...@openvswitch.org
> >> https://mail.openvswitch.org/mailman/listinfo/ovs-dev
>
>


[ovs-dev] OVS-DPDK public meeting

2019-07-10 Thread Ian Stokes

Next meeting July 24th 1700 UTC

July 10th minutes

Attendees: Ian, Aaron, Eelco, Fouad, Johann, Scott, Simon.

===
GENERAL
===

- OVS 2.12 (Ian)
-- Dates
--- Soft freeze entered on the 7th of July; it will remain until the 22nd of July.
--- Hard freeze from the 22nd of July onwards. Only bug fixes accepted
    for 2.12 beyond this point.
--- Release ~ mid August (typically 4 weeks after hard freeze).


PERFORMANCE/FEATURES


- Quicker PMD Reloads
 -- Status: v4 accepted and upstreamed to master.

- Poll enabled vHost queues.
 -- Status: Accepted and upstreamed to master.

- Vhost retry stats and configuration.
 -- Status: Accepted and upstreamed to master.

- DPDK 18.11.2 support.
 -- Status: accepted to master and branch 2.11.
 -- 18.11.2 is now the minimum DPDK supported for master.
 -- OVS 2.11 will still be able to use 18.11.2 or earlier.

- dpcls function pointer & optimizations.
 -- Status: v10 revision sent to mailing list yesterday.
 -- Corrects latest issues flagged on v9, now passes travis, appveyor etc.
 -- Testing on ARM & x86 showing performance improvements 10 - 15%.

- RFC 4115 egress policer on dpdk-next, sent RFC patch (Eelco).
-- Required policer functionality available since DPDK 19.05.
-- Looking for feedback on current RFC series so as to merge the 
feature to master as soon as DPDK 19.11 is supported in OVS.

-- https://patchwork.ozlabs.org/patch/1122062/

- HW CT offload (Aaron)
-- RFC v2 submitted.
-- https://patchwork.ozlabs.org/project/openvswitch/list/?series=117809
-- Question raised whether this would be too late for consideration in OVS 2.12.
-- Probably best for the author to raise an exception request with the
maintainers on the mailing list if enough feedback is secured before the
2.12 deadline.



Bugs

- Call to action for the 2.12 release window.
- Testing and validation across the community before the 2.12 release
date is appreciated. Please report any bugs or regressions to the mailing list.
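
For reference, a typical local validation run uses the standard make targets
(the flags here are just an example):

# Userspace unit testsuite.
make check -j4

# Userspace datapath system tests, useful for exercising OVS-DPDK code paths.
make check-system-userspace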


Regards
Ian


Re: [ovs-dev] [PATCH v10 3/5] dpif-netdev: split out generic lookup function

2019-07-10 Thread Ian Stokes

On 7/9/2019 1:34 PM, Harry van Haaren wrote:

This commit splits the generic hash-lookup-verify
function out into its own file. In doing so, we must move
some macro definitions to dpif-netdev.h.

Signed-off-by: Harry van Haaren 
Tested-by: Malvika Gupta 



Thanks Harry, some feedback below.


---

v10:
- Rebase fixups from previous patch changes
- Add copyright as suggested in review (Ian)

v6:
- Fixup some checkpatch warnings on whitespace with MACROs (Ilya)
- Other MACROs function incorrectly when checkpatch is happy,
   so using the functional version without space after the ( character.
   This prints a checkpatch warning, but I see no way to fix it.
---
  lib/automake.mk  |  1 +
  lib/dpif-netdev-lookup-generic.c | 95 
  lib/dpif-netdev.c| 82 +--
  lib/dpif-netdev.h| 16 ++
  4 files changed, 113 insertions(+), 81 deletions(-)
  create mode 100644 lib/dpif-netdev-lookup-generic.c






+
+uint32_t
+dpcls_subtable_lookup_generic(struct dpcls_subtable *subtable,
+                              uint32_t keys_map,
+                              const struct netdev_flow_key *keys[],
+                              struct dpcls_rule **rules)
+{
+    int i;
+    /* Compute hashes for the remaining keys.  Each search-key is
+     * masked with the subtable's mask to avoid hashing the wildcarded
+     * bits. */
+    uint32_t hashes[NETDEV_MAX_BURST];
+    ULLONG_FOR_EACH_1(i, keys_map) {
+        hashes[i] = netdev_flow_key_hash_in_mask(keys[i],
+                                                 &subtable->mask);
+    }
+
+    /* Lookup. */
+    const struct cmap_node *nodes[NETDEV_MAX_BURST];
+    uint32_t found_map =
+        cmap_find_batch(&subtable->rules, keys_map, hashes, nodes);


I would prefer to see

uint32_t found_map = cmap_find_batch(&subtable->rules,
                                     keys_map, hashes, nodes);

or alternatively you could declare uint32_t found_map at the beginning 
of the function so that way cmap_find_batch and arguments should be able 
to stay on the same line.



+    /* Check results.  When the i-th bit of found_map is set, it means
+     * that a set of nodes with a matching hash value was found for the
+     * i-th search-key.  Due to possible hash collisions we need to check
+     * which of the found rules, if any, really matches our masked
+     * search-key. */
+    ULLONG_FOR_EACH_1(i, found_map) {
+        struct dpcls_rule *rule;
+
+        CMAP_NODE_FOR_EACH (rule, cmap_node, nodes[i]) {
+            if (OVS_LIKELY(dpcls_rule_matches_key(rule, keys[i]))) {
+                rules[i] = rule;
+                /* Even at 20 Mpps the 32-bit hit_cnt cannot wrap
+                 * within one second optimization interval. */
+                subtable->hit_cnt++;
+                goto next;
+            }
+        }
+        /* None of the found rules was a match.  Reset the i-th bit to
+         * keep searching this key in the next subtable. */
+        ULLONG_SET0(found_map, i);  /* Did not match. */
+    next:
+        ;                           /* Keep Sparse happy. */
+    }
+
+    return found_map;
+}
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 2414103d8..190cc8918 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -235,14 +235,6 @@ struct dpcls {
  struct pvector subtables;
  };
  





  /* Data structure to keep packet order till fastpath processing. */
  struct dp_packet_flow_map {
  struct dp_packet *packet;
@@ -260,8 +252,6 @@ static bool dpcls_lookup(struct dpcls *cls,
   const struct netdev_flow_key *keys[],
   struct dpcls_rule **rules, size_t cnt,
   int *num_lookups_p);
-static bool dpcls_rule_matches_key(const struct dpcls_rule *rule,
-                                   const struct netdev_flow_key *target);


I know you are only removing the function prototype above, but a new 
line here added would look better as well just to split the prototypes 
from the comments for the define section below for the meter flags.



  /* Set of supported meter flags */
  #define DP_SUPPORTED_METER_FLAGS_MASK \
  (OFPMF13_STATS | OFPMF13_PKTPS | OFPMF13_KBPS | OFPMF13_BURST)
@@ -2741,27 +2731,6 @@ netdev_flow_key_init_masked(struct netdev_flow_key *dst,
     (dst_u64 - miniflow_get_values(&dst->mf)) * 8);
  }
  





  /* For each miniflow in 'keys' performs a classifier lookup writing the result
   * into the corresponding slot in 'rules'.  If a particular entry in 'keys' is
   * NULL it is skipped.
diff --git a/lib/dpif-netdev.h b/lib/dpif-netdev.h
index 3d23743bd..3c3cc65ef 100644
--- a/lib/dpif-netdev.h
+++ b/lib/dpif-netdev.h
@@ -52,6 +52,14 @@ struct netdev_flow_key {
  uint64_t buf[FLOW_MAX_PACKET_U64S];
  };
  
+/* A rule to be 

Re: [ovs-dev] [PATCH] Shutdown SSL connection before closing socket

2019-07-10 Thread Mark Michelson

On 7/10/19 12:11 PM, Terry Wilson wrote:

An example of a reproducer script is attached. If you enable SSL and OVN with
the sandbox and run it, then look in sandbox/nb1.log, you'll see the
disconnect errors that the patch makes go away.



Hi Terry. It looks like the mailing list has eaten your attachment. If 
possible, can you include it in-line?



On Wed, Jul 10, 2019 at 11:07 AM Terry Wilson  wrote:


Without shutting down the SSL connection, log messages like:

stream_ssl|WARN|SSL_read: unexpected SSL connection close
jsonrpc|WARN|ssl:127.0.0.1:47052: receive error: Protocol error
reconnect|WARN|ssl:127.0.0.1:47052: connection dropped (Protocol error)

would occur whenever the socket is closed. This just adds an
SSLStream.close() that calls shutdown() and ignores read/write
errors.

Signed-off-by: Terry Wilson 
---
  python/ovs/stream.py | 8 
  1 file changed, 8 insertions(+)

diff --git a/python/ovs/stream.py b/python/ovs/stream.py
index c15be4b..fd1045e 100644
--- a/python/ovs/stream.py
+++ b/python/ovs/stream.py
@@ -825,6 +825,14 @@ class SSLStream(Stream):
  except SSL.SysCallError as e:
  return -ovs.socket_util.get_exception_errno(e)

+def close(self):
+if self.socket:
+try:
+self.socket.shutdown()
+except (SSL.WantReadError, SSL.WantWriteError):
+pass
+return super(SSLStream, self).close()
+

  if SSL:
  # Register SSL only if the OpenSSL module is available
--
1.8.3.1





Re: [ovs-dev] [PATCH 0/3] OVN: add Controller Events

2019-07-10 Thread Lorenzo Bianconi
> On Fri, Jun 14, 2019 at 05:53:20PM +0200, Lorenzo Bianconi wrote:
> > The series here introduces a new table, Controller_Event, for this
> > purpose. Traffic into OVS can raise a controller() event that results in
> > a Controller_Event being written to the southbound database. The
> > intention is for a CMS to see the events and take some sort of action.
> > The "handled" column of the event is initially set to "false" by OVN.
> > When the CMS has seen the event and taken appropriate action, then the
> > CMS sets this field to "true'. When ovn-controller sees that the event
> > has been handled, it deletes the event from the database.
> > 
> > Controller events are only added to the southbound database if they have
> > been enabled in the nb_global table via the options:controller_event
> > setting.
> 
> The above paragraphs of high-level information are helpful, but they are
> only in email and do not appear in the documentation.  I suggest adding
> them to the documentation so that the reader can see the whole picture.
> 
> Some other high-level design comments:
> 
> - I don't see an explanation of the sequence number and how it's used in
>   practice.
> 
> - I don't yet understand why it's better to have the CMS set 'handled'
>   and then have ovn-controller delete the row, than to have the CMS
>   delete the row directly.

Hi Ben,

Thanks for the review.
I think controlling the 'handled' field from a single point makes the system
more robust against races, but we can just move the control to the CMS.
I will post a v2 soon.

Regards,
Lorenzo

> 
> - The passive voice in the second paragraph above makes it ambiguous who
>   sets options:controller_event.  I think that the CMS sets this; it
>   would be best for the documentation to say so.
> 
> Thanks,
> 
> Ben.
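
(A sketch of the CMS side of this workflow, assuming the Controller_Event
table from this series and ovn-sbctl's generic database commands; not taken
from the patches:)

# CMS: inspect pending events raised by ovn-controller.
ovn-sbctl list Controller_Event

# CMS: mark an event as handled once acted upon; ovn-controller then
# deletes the row.  $UUID is the event row's UUID.
ovn-sbctl set Controller_Event $UUID handled=true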


Re: [ovs-dev] [PATCH] Shutdown SSL connection before closing socket

2019-07-10 Thread Terry Wilson
An example of a reproducer script is attached. If you enable SSL and OVN with
the sandbox and run it, then look in sandbox/nb1.log, you'll see the
disconnect errors that the patch makes go away.

On Wed, Jul 10, 2019 at 11:07 AM Terry Wilson  wrote:

> Without shutting down the SSL connection, log messages like:
>
> stream_ssl|WARN|SSL_read: unexpected SSL connection close
> jsonrpc|WARN|ssl:127.0.0.1:47052: receive error: Protocol error
> reconnect|WARN|ssl:127.0.0.1:47052: connection dropped (Protocol error)
>
> would occur whenever the socket is closed. This just adds an
> SSLStream.close() that calls shutdown() and ignores read/write
> errors.
>
> Signed-off-by: Terry Wilson 
> ---
>  python/ovs/stream.py | 8 
>  1 file changed, 8 insertions(+)
>
> diff --git a/python/ovs/stream.py b/python/ovs/stream.py
> index c15be4b..fd1045e 100644
> --- a/python/ovs/stream.py
> +++ b/python/ovs/stream.py
> @@ -825,6 +825,14 @@ class SSLStream(Stream):
>  except SSL.SysCallError as e:
>  return -ovs.socket_util.get_exception_errno(e)
>
> +def close(self):
> +if self.socket:
> +try:
> +self.socket.shutdown()
> +except (SSL.WantReadError, SSL.WantWriteError):
> +pass
> +return super(SSLStream, self).close()
> +
>
>  if SSL:
>  # Register SSL only if the OpenSSL module is available
> --
> 1.8.3.1
>
>


[ovs-dev] [PATCH] Shutdown SSL connection before closing socket

2019-07-10 Thread Terry Wilson
Without shutting down the SSL connection, log messages like:

stream_ssl|WARN|SSL_read: unexpected SSL connection close
jsonrpc|WARN|ssl:127.0.0.1:47052: receive error: Protocol error
reconnect|WARN|ssl:127.0.0.1:47052: connection dropped (Protocol error)

would occur whenever the socket is closed. This just adds an
SSLStream.close() that calls shutdown() and ignores read/write
errors.

Signed-off-by: Terry Wilson 
---
 python/ovs/stream.py | 8 
 1 file changed, 8 insertions(+)

diff --git a/python/ovs/stream.py b/python/ovs/stream.py
index c15be4b..fd1045e 100644
--- a/python/ovs/stream.py
+++ b/python/ovs/stream.py
@@ -825,6 +825,14 @@ class SSLStream(Stream):
 except SSL.SysCallError as e:
 return -ovs.socket_util.get_exception_errno(e)
 
+def close(self):
+if self.socket:
+try:
+self.socket.shutdown()
+except (SSL.WantReadError, SSL.WantWriteError):
+pass
+return super(SSLStream, self).close()
+
 
 if SSL:
 # Register SSL only if the OpenSSL module is available
-- 
1.8.3.1



Re: [ovs-dev] [PATCH v10 1/5] dpif-netdev: implement function pointers/subtable

2019-07-10 Thread Ian Stokes

On 7/10/2019 4:40 PM, Van Haaren, Harry wrote:

-Original Message-
From: Stokes, Ian
Sent: Wednesday, July 10, 2019 4:30 PM
To: Van Haaren, Harry ; d...@openvswitch.org
Cc: i.maxim...@samsung.com; malvika.gu...@arm.com
Subject: Re: [PATCH v10 1/5] dpif-netdev: implement function pointers/subtable

On 7/9/2019 1:34 PM, Harry van Haaren wrote:

This allows plugging-in of different subtable hash-lookup-verify
routines, and allows special casing of those functions based on
known context (eg: # of bits set) of the specific subtable.

Signed-off-by: Harry van Haaren 
Tested-by: Malvika Gupta 



Thanks for this harry, few minor comments below.


Hey, cool I'll snip away irrelevant code sections for readability.









I've thought about whether an item should be added to the NEWS doc
(possibly not in this patch but perhaps later in the series), it's hard
to say as the changes are under the hood and not configurable by a user.
Thoughts?


Correct, the user shouldn't notice the actual changes, except for small
gains in performance if the specialized subtables are in use.

I think it would be good to add a NEWS item, as people profiling OVS's
functions will see different function names, and having called out the DPCLS
function pointer changes, and why they are there, would be beneficial.

I will draft a separate patch for the NEWS item so we can keep it parallel
to this patch set.
Shout if that is not a good approach :)


I would think it could be added as part of patch 5 of the series, as
that's when the performance increase is introduced. Anyway, it's minor
enough that, if there is not another revision of the series, it can be
done upon commit.


Regards
Ian



Re: [ovs-dev] [PATCH v10 2/5] dpif-netdev: move dpcls lookup structures to .h

2019-07-10 Thread Ian Stokes

On 7/9/2019 1:34 PM, Harry van Haaren wrote:

This commit moves some data structures into the dpif-netdev.h header.
This allows specific implementations of the subtable lookup function
to include just that header file, without requiring that the code live
in dpif-netdev.c.

Signed-off-by: Harry van Haaren 
Tested-by: Malvika Gupta 

---

v10:
- Rebase updates from previous patch in code that moved.
- Move cmap.h include into alphabetical order (Ian)
- Fix comment and typo (Ian)
- Restructure function typedef to fit in 80 chars


Thanks for addressing the issues raised in v9,

Although previously I said this could be combined with patches 1 and 3,
now I'm thinking it might be better to leave them split as is. Logically
they accomplish separate goals in each patch.


Seeing the division of the patches in this series was definitely helpful
to me during reviews, so I'm thinking it may be helpful to others in the
future if they are moving from the current dpcls approach to this generic
approach. Unless there are objections, I suggest we leave it as is.


Ian




Re: [ovs-dev] [PATCH v10 1/5] dpif-netdev: implement function pointers/subtable

2019-07-10 Thread Van Haaren, Harry
> -Original Message-
> From: Stokes, Ian
> Sent: Wednesday, July 10, 2019 4:30 PM
> To: Van Haaren, Harry ; d...@openvswitch.org
> Cc: i.maxim...@samsung.com; malvika.gu...@arm.com
> Subject: Re: [PATCH v10 1/5] dpif-netdev: implement function pointers/subtable
> 
> On 7/9/2019 1:34 PM, Harry van Haaren wrote:
> > This allows plugging-in of different subtable hash-lookup-verify
> > routines, and allows special casing of those functions based on
> > known context (eg: # of bits set) of the specific subtable.
> >
> > Signed-off-by: Harry van Haaren 
> > Tested-by: Malvika Gupta 
> >
> 
> Thanks for this harry, few minor comments below.

Hey, cool I'll snip away irrelevant code sections for readability.




> > +uint32_t
> > +dpcls_subtable_lookup_generic(struct dpcls_subtable *subtable,
> > +  uint32_t keys_map,
> > +  const struct netdev_flow_key *keys[],
> > +  struct dpcls_rule **rules)
> > +{
> > +int i;
> > +uint32_t found_map;
> 
> I was thinking should this be initialized to 0, but I don't think there
> is a need. The call to cmap_find_batch() below will set it to 0 in the
> case of no match, otherwise set the i-th bit for a match, so I think it's ok.

Correct - the value is always written before being used.
Initializing it to zero is pointless and only wastes an instruction.



> > @@ -7849,16 +7931,12 @@ dpcls_lookup(struct dpcls *cls, const struct
> >   if (cnt != MAP_BITS) {
> >   keys_map >>= MAP_BITS - cnt; /* Clear extra bits. */
> > @@ -7866,6 +7944,7 @@ dpcls_lookup(struct dpcls *cls, const struct
> netdev_flow_key *keys[],
> >   memset(rules, 0, cnt * sizeof *rules);
> >
> >   int lookups_match = 0, subtable_pos = 1;
> > +uint32_t found_map;
> 
> Same as above, should be OK as it won't be used until after the call to
> the generic lookup function, which will set it to 0 in the event of no
> matches.

Correct - written before use - no use in initializing to any value.


> > -/* None of the found rules was a match.  Reset the i-th bit to
> > - * keep searching this key in the next subtable. */
> > -ULLONG_SET0(found_map, i);  /* Did not match. */
> > -next:
> > -; /* Keep Sparse happy. */
> > -}
> > -keys_map &= ~found_map; /* Clear the found rules. */
> > +/* Clear the found rules, and return early if all packets are
> found. */
> > +keys_map &= ~found_map;
> >   if (!keys_map) {
> >   if (num_lookups_p) {
> >   *num_lookups_p = lookups_match;
> >   }
> > -return true;  /* All found. */
> > +return true;
> >   }
> >   subtable_pos++;
> >   }
> > +
> >   if (num_lookups_p) {
> >   *num_lookups_p = lookups_match;
> >   }
> > -return false; /* Some misses. */
> > +return false;
> >   }
> >
> 
> I've thought about whether an item should be added to the NEWS doc
> (possibly not in this patch but perhaps later in the series), it's hard
> to say as the changes are under the hood and not configurable by a user.
> Thoughts?

Correct, the user shouldn't notice the actual changes, except for small
gains in performance if the specialized subtables are in use.

I think it would be good to add a NEWS item, as people profiling OVS's
functions will see different function names, and having called out the DPCLS
function pointer changes, and why they are there, would be beneficial.

I will draft a separate patch for the NEWS item so we can keep it parallel
to this patch set.
Shout if that is not a good approach :)

> Regards
> Ian

Thanks for review!



Re: [ovs-dev] [PATCH v10 1/5] dpif-netdev: implement function pointers/subtable

2019-07-10 Thread Ian Stokes

On 7/9/2019 1:34 PM, Harry van Haaren wrote:

This allows plugging-in of different subtable hash-lookup-verify
routines, and allows special casing of those functions based on
known context (eg: # of bits set) of the specific subtable.

Signed-off-by: Harry van Haaren 
Tested-by: Malvika Gupta 



Thanks for this harry, few minor comments below.


---

v10:
- Fix capitalization of comments, and punctuation. (Ian)
- Variable declarations up top before use (Ian)
- Fix alignment of function parameters, had to newline after typedef (Ian)
- Some mailing-list questions relpied to on-list (Ian)

v9:
- Use count_1bits in favour of __builtin_popcount (Ilya)

v6:
- Implement subtable effort per packet "lookups_match" counter  (Ilya)
- Remove double newline (Eelco)
- Remove double * before comments (Eelco)
- Reword comments in dpcls_lookup() for clarity (Harry)
---
  lib/dpif-netdev.c | 138 --
  1 file changed, 96 insertions(+), 42 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index f4b59e41b..f02bcd552 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -7603,6 +7603,28 @@ dpif_dummy_register(enum dummy_level level)
  
  /* Datapath Classifier. */
  
+/* Forward declaration for lookup_func typedef. */
+struct dpcls_subtable;
+
+/* Lookup function for a subtable in the dpcls. This function is called
+ * by each subtable with an array of packets, and a bitmask of packets to
+ * perform the lookup on. Using a function pointer gives flexibility to
+ * optimize the lookup function based on subtable properties and the
+ * CPU instruction set available at runtime.
+ */
+typedef
+uint32_t (*dpcls_subtable_lookup_func)(struct dpcls_subtable *subtable,
+   uint32_t keys_map,
+   const struct netdev_flow_key *keys[],
+   struct dpcls_rule **rules);
+
+/* Prototype for generic lookup func, using same code path as before. */
+uint32_t
+dpcls_subtable_lookup_generic(struct dpcls_subtable *subtable,
+  uint32_t keys_map,
+  const struct netdev_flow_key *keys[],
+  struct dpcls_rule **rules);
+
  /* A set of rules that all have the same fields wildcarded. */
  struct dpcls_subtable {
  /* The fields are only used by writers. */
@@ -7612,6 +7634,13 @@ struct dpcls_subtable {
  struct cmap rules;   /* Contains "struct dpcls_rule"s. */
  uint32_t hit_cnt;/* Number of match hits in subtable in 
current
  optimization interval. */
+
+/* The lookup function to use for this subtable. If there is a known
+ * property of the subtable (eg: only 3 bits of miniflow metadata is
+ * used for the lookup) then this can point at an optimized version of
+ * the lookup function for this particular subtable. */
+dpcls_subtable_lookup_func lookup_func;
+
  struct netdev_flow_key mask; /* Wildcards for fields (const). */
  /* 'mask' must be the last field, additional space is allocated here. */
  };
@@ -7671,6 +7700,10 @@ dpcls_create_subtable(struct dpcls *cls, const struct netdev_flow_key *mask)
     cmap_init(&subtable->rules);
     subtable->hit_cnt = 0;
     netdev_flow_key_clone(&subtable->mask, mask);
+
+    /* Decide which hash/lookup/verify function to use. */
+    subtable->lookup_func = dpcls_subtable_lookup_generic;
+
     cmap_insert(&cls->subtables_map, &subtable->cmap_node, mask->hash);
     /* Add the new subtable at the end of the pvector (with no hits yet) */
     pvector_insert(&cls->subtables, subtable, 0);
@@ -7831,6 +7864,55 @@ dpcls_rule_matches_key(const struct dpcls_rule *rule,
  return true;
  }
  
+uint32_t
+dpcls_subtable_lookup_generic(struct dpcls_subtable *subtable,
+                              uint32_t keys_map,
+                              const struct netdev_flow_key *keys[],
+                              struct dpcls_rule **rules)
+{
+    int i;
+    uint32_t found_map;


I was thinking should this be initialized to 0, but I don't think there
is a need. The call to cmap_find_batch() below will set it to 0 in the
case of no match, otherwise set the i-th bit for a match, so I think it's ok.



+
+    /* Compute hashes for the remaining keys.  Each search-key is
+     * masked with the subtable's mask to avoid hashing the wildcarded
+     * bits. */
+    uint32_t hashes[NETDEV_MAX_BURST];
+    ULLONG_FOR_EACH_1(i, keys_map) {
+        hashes[i] = netdev_flow_key_hash_in_mask(keys[i],
+                                                 &subtable->mask);
+    }
+
+    /* Lookup. */
+    const struct cmap_node *nodes[NETDEV_MAX_BURST];
+    found_map = cmap_find_batch(&subtable->rules, keys_map, hashes, nodes);
+
+/* Check results.  When the i-th bit of found_map is set, it means
+ * that a set of nodes with a matching hash value was found for the

Re: [ovs-dev] [PATCH] dpif-netdev: Clarify PMD reloading scheme.

2019-07-10 Thread David Marchand
On Wed, Jul 10, 2019 at 1:51 PM Ilya Maximets  wrote:
>
> It became more complicated, hence needs to be documented.
>
> Signed-off-by: Ilya Maximets 
> ---
>
> Applicable on top of "Quicker pmd threads reloads" patch-set:
https://patchwork.ozlabs.org/project/openvswitch/list/?series=118588&state=*
>
>  lib/dpif-netdev.c | 39 +++
>  1 file changed, 39 insertions(+)
>
> diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
> index 647a8ee4b..c5ffc72f5 100644
> --- a/lib/dpif-netdev.c
> +++ b/lib/dpif-netdev.c
> @@ -682,10 +682,49 @@ struct dp_netdev_pmd_thread {
>
>  struct seq *reload_seq;
>  uint64_t last_reload_seq;
> +
> +/* These are atomic variables used as a synchronization and configuration
> + * points for thread reload/exit.
> + *
> + * 'reload' atomic is the main one and it's used as a memory
> + * synchronization point for all other knobs and data.
> + *
> + * For a thread that requests PMD reload:
> + *
> + *   * All changes that should be visible to the PMD thread must be made
> + * before setting the 'reload'.  These changes could use any memory
> + * ordering model including 'relaxed'.
> + *   * Setting the 'reload' atomic should occur in the same thread where
> + * all other PMD configuration options updated.
> + *   * Setting the 'reload' atomic should be done with 'release' memory
> + * ordering model or stricter.  This will guarantee that all previous
> + * changes (including non-atomic and 'relaxed') will be visible to
> + * the PMD thread.
> + *   * To check that reload is done, thread should poll the 'reload' 
> atomic
> + * to become 'false'.  Polling should be done with 'acquire' memory
> + * ordering model or stricter.  This ensures that PMD thread 
> completed
> + * the reload process.
> + *
> + * For the PMD thread:
> + *
> + *   * PMD thread should read 'reload' atomic with 'acquire' memory
> + * ordering model or stricter.  This will guarantee that all changes
> + * made before setting the 'reload' in the requesting thread will be
> + * visible to the PMD thread.
> + *   * All other configuration data could be read with any memory
> + * ordering model (including non-atomic and 'relaxed') but *only 
> after*
> + * reading the 'reload' atomic set to 'true'.
> + *   * When the PMD reload done, PMD should (optionally) set all the 
> below

is done?

> + * knobs except the 'reload' to their default ('false') values and
> + * (mandatory), as the last step, set the 'reload' to 'false' using
> + * 'release' memory ordering model or stricter.  This will inform the
> + * requesting thread that PMD has completed a reload cycle.
> + */
>  atomic_bool reload; /* Do we need to reload ports? */
>  atomic_bool wait_for_reload;/* Can we busy wait for the next reload? 
> */
>  atomic_bool reload_tx_qid;  /* Do we need to reload static_tx_qid? */
>  atomic_bool exit;   /* For terminating the pmd thread. */
> +
>  pthread_t thread;
>  unsigned core_id;   /* CPU core id of this pmd thread. */
>  int numa_id;/* numa node id of this pmd thread. */
> --
> 2.17.1
>

Thanks, it will help others, I agree.

Even if already pushed,
Reviewed-by: David Marchand 


-- 
David Marchand


Re: [ovs-dev] [PATCH v4 0/5] Quicker pmd threads reloads

2019-07-10 Thread Ian Stokes

On 7/10/2019 10:03 AM, Ilya Maximets wrote:

On 09.07.2019 19:19, David Marchand wrote:

We have been testing the rebalance code in different situations while
having traffic going through OVS.
Those tests have shown that part of the observed packets losses is due to
some time wasted in signaling/waiting for the pmd threads to reload their
polling configurations.

This series is an attempt at getting pmd threads reloads quicker and
more deterministic.

Example of number of cycles spent by a pmd between two polling
configurations (in cycles minimum/average/maximum of 1000 changes):
- cfc06fb13d9c: 141059/332512/6230137
- patch1:       146114/279722/ 721557
- patch2:        46118/176561/ 459963
- patch3:        13878/124914/ 509629
- patch4:        12980/157706/ 509447
- patch5:        12945/ 17715/  45592

Changelog since v3:
- explicitly do not wait for non pmd reload in patch 2
- added Eelco acks

Changelog since v2:
- remove unneeded synchronisation on pmd thread join in patch 2

Changelog since v1:
- incorporated Ilya suggestions in patch 2 and 3
- dropped previous acks on patch 2 and 3 but kept them on patch 4 and 5 since
   there is no major change in them

Changelog since RFC v2:
- added ack from Eelco

Changelog since RFC v1:
- added numbers per patch in cover letter
- added memory ordering for explicit synchronisations between threads
   in patch 1 and patch 2



For the series:
Acked-by: Ilya Maximets 



Thanks all for the work on this series. I've validated the v4 and added 
the documentation clarification patch Ilya had provided. Pushed to master.


Regards
Ian


Re: [ovs-dev] [PATCH] dpif-netdev: Clarify PMD reloading scheme.

2019-07-10 Thread Ian Stokes

On 7/10/2019 12:50 PM, Ilya Maximets wrote:

It became more complicated, hence needs to be documented.

Signed-off-by: Ilya Maximets 


Thanks for taking care of this Ilya, very valuable info.

Looks fine to me.

I was just getting ready to push the quicker PMD reload series, I'll add 
this as part of the push to master.


Regards
Ian

---

Applicable on top of "Quicker pmd threads reloads" patch-set:
https://patchwork.ozlabs.org/project/openvswitch/list/?series=118588&state=*

  lib/dpif-netdev.c | 39 +++
  1 file changed, 39 insertions(+)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 647a8ee4b..c5ffc72f5 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -682,10 +682,49 @@ struct dp_netdev_pmd_thread {
  
  struct seq *reload_seq;
  uint64_t last_reload_seq;
+
+/* These are atomic variables used as synchronization and configuration
+ * points for thread reload/exit.
+ *
+ * The 'reload' atomic is the main one and it's used as a memory
+ * synchronization point for all other knobs and data.
+ *
+ * For a thread that requests PMD reload:
+ *
+ *   * All changes that should be visible to the PMD thread must be made
+ *     before setting the 'reload'.  These changes could use any memory
+ *     ordering model including 'relaxed'.
+ *   * Setting the 'reload' atomic should occur in the same thread where
+ *     all other PMD configuration options are updated.
+ *   * Setting the 'reload' atomic should be done with 'release' memory
+ *     ordering model or stricter.  This will guarantee that all previous
+ *     changes (including non-atomic and 'relaxed') will be visible to
+ *     the PMD thread.
+ *   * To check that the reload is done, the requesting thread should poll
+ *     the 'reload' atomic until it becomes 'false'.  Polling should be done
+ *     with 'acquire' memory ordering model or stricter.  This ensures that
+ *     the PMD thread has completed the reload process.
+ *
+ * For the PMD thread:
+ *
+ *   * The PMD thread should read the 'reload' atomic with 'acquire' memory
+ *     ordering model or stricter.  This will guarantee that all changes
+ *     made before setting the 'reload' in the requesting thread will be
+ *     visible to the PMD thread.
+ *   * All other configuration data could be read with any memory
+ *     ordering model (including non-atomic and 'relaxed') but *only after*
+ *     reading the 'reload' atomic set to 'true'.
+ *   * When the PMD reload is done, the PMD should (optionally) set all the
+ *     below knobs except the 'reload' to their default ('false') values
+ *     and (mandatory), as the last step, set the 'reload' to 'false' using
+ *     'release' memory ordering model or stricter.  This will inform the
+ *     requesting thread that the PMD has completed a reload cycle.
+ */
  atomic_bool reload; /* Do we need to reload ports? */
  atomic_bool wait_for_reload;    /* Can we busy wait for the next reload? */
  atomic_bool reload_tx_qid;  /* Do we need to reload static_tx_qid? */
  atomic_bool exit;   /* For terminating the pmd thread. */
+
  pthread_t thread;
  unsigned core_id;   /* CPU core id of this pmd thread. */
  int numa_id;/* numa node id of this pmd thread. */





[ovs-dev] [PATCH] dpif-netdev: Clarify PMD reloading scheme.

2019-07-10 Thread Ilya Maximets
It became more complicated, hence needs to be documented.

Signed-off-by: Ilya Maximets 
---

Applicable on top of "Quicker pmd threads reloads" patch-set:
https://patchwork.ozlabs.org/project/openvswitch/list/?series=118588&state=*

 lib/dpif-netdev.c | 39 +++
 1 file changed, 39 insertions(+)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 647a8ee4b..c5ffc72f5 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -682,10 +682,49 @@ struct dp_netdev_pmd_thread {
 
 struct seq *reload_seq;
 uint64_t last_reload_seq;
+
+/* These are atomic variables used as synchronization and configuration
+ * points for thread reload/exit.
+ *
+ * The 'reload' atomic is the main one and it's used as a memory
+ * synchronization point for all other knobs and data.
+ *
+ * For a thread that requests PMD reload:
+ *
+ *   * All changes that should be visible to the PMD thread must be made
+ *     before setting the 'reload'.  These changes could use any memory
+ *     ordering model including 'relaxed'.
+ *   * Setting the 'reload' atomic should occur in the same thread where
+ *     all other PMD configuration options are updated.
+ *   * Setting the 'reload' atomic should be done with 'release' memory
+ *     ordering model or stricter.  This will guarantee that all previous
+ *     changes (including non-atomic and 'relaxed') will be visible to
+ *     the PMD thread.
+ *   * To check that the reload is done, the requesting thread should poll
+ *     the 'reload' atomic until it becomes 'false'.  Polling should be done
+ *     with 'acquire' memory ordering model or stricter.  This ensures that
+ *     the PMD thread has completed the reload process.
+ *
+ * For the PMD thread:
+ *
+ *   * The PMD thread should read the 'reload' atomic with 'acquire' memory
+ *     ordering model or stricter.  This will guarantee that all changes
+ *     made before setting the 'reload' in the requesting thread will be
+ *     visible to the PMD thread.
+ *   * All other configuration data could be read with any memory
+ *     ordering model (including non-atomic and 'relaxed') but *only after*
+ *     reading the 'reload' atomic set to 'true'.
+ *   * When the PMD reload is done, the PMD should (optionally) set all the
+ *     below knobs except the 'reload' to their default ('false') values
+ *     and (mandatory), as the last step, set the 'reload' to 'false' using
+ *     'release' memory ordering model or stricter.  This will inform the
+ *     requesting thread that the PMD has completed a reload cycle.
+ */
 atomic_bool reload; /* Do we need to reload ports? */
  atomic_bool wait_for_reload;    /* Can we busy wait for the next reload? */
 atomic_bool reload_tx_qid;  /* Do we need to reload static_tx_qid? */
 atomic_bool exit;   /* For terminating the pmd thread. */
+
 pthread_t thread;
 unsigned core_id;   /* CPU core id of this pmd thread. */
 int numa_id;/* numa node id of this pmd thread. */
-- 
2.17.1
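
The contract above maps directly onto C11 acquire/release atomics. A minimal,
self-contained sketch of the same handshake follows; the struct, field names,
and the printf are illustrative stand-ins, not the actual OVS code:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct pmd {
    int new_config;    /* Plain data, published via the 'reload' atomic. */
    atomic_bool reload;
    atomic_bool exit;
};

static void *pmd_main(void *arg)
{
    struct pmd *pmd = arg;

    while (!atomic_load_explicit(&pmd->exit, memory_order_acquire)) {
        /* Acquire-read 'reload'; only after it reads 'true' is it safe
         * to look at the other configuration fields. */
        if (atomic_load_explicit(&pmd->reload, memory_order_acquire)) {
            printf("reloading with config %d\n", pmd->new_config);
            /* Mandatory last step: release-store 'false' to signal that
             * the reload cycle is complete. */
            atomic_store_explicit(&pmd->reload, false, memory_order_release);
        }
    }
    return NULL;
}

int main(void)
{
    struct pmd pmd = { .new_config = 0, .reload = false, .exit = false };
    pthread_t tid;

    pthread_create(&tid, NULL, pmd_main, &pmd);

    /* Requesting thread: write the configuration first (any ordering),
     * then release-store 'reload' to make those writes visible. */
    pmd.new_config = 42;
    atomic_store_explicit(&pmd.reload, true, memory_order_release);

    /* Poll 'reload' with acquire until the PMD clears it. */
    while (atomic_load_explicit(&pmd.reload, memory_order_acquire)) {
        /* Busy wait, as the control thread does in reload_affected_pmds(). */
    }

    atomic_store_explicit(&pmd.exit, true, memory_order_release);
    pthread_join(tid, NULL);
    return 0;
}

Compile with 'gcc -std=c11 -pthread'. The release store in the requesting
thread pairs with the acquire load in the PMD (publishing the configuration),
and the PMD's release store of 'false' pairs with the requester's acquire
polling (publishing the completed reload).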



Re: [ovs-dev] Why is ovs DPDK much worse than ovs in my test case?

2019-07-10 Thread Ilya Maximets
> Hi, all
> 
> I just use ovs as a static router in my test case. ovs runs in a vagrant VM,
> the ethernet interfaces use the virtio driver, and I create two ovs bridges,
> each adding one ethernet interface. The two bridges are connected by a patch
> port, and only the default openflow rule is there.
> 
> table=0, priority=0 actions=NORMAL
> Bridge br-int
> Port patch-br-ex
> Interface patch-br-ex
> type: patch
> options: {peer=patch-br-int}
> Port br-int
> Interface br-int
> type: internal
> Port "dpdk0"
> Interface "dpdk0"
> type: dpdk
> options: {dpdk-devargs="0000:00:08.0"}
> Bridge br-ex
> Port "dpdk1"
> Interface "dpdk1"
> type: dpdk
> options: {dpdk-devargs="0000:00:09.0"}
> Port patch-br-int
> Interface patch-br-int
> type: patch
> options: {peer=patch-br-ex}
> Port br-ex
> Interface br-ex
> type: internal
> 
> But when I run iperf to benchmark performance, the result shocked me.
> 
> For ovs nondpdk, the result is
> 
> vagrant@client1:~$ iperf -t 60 -i 10 -c 192.168.230.101
> 
> 
> Client connecting to 192.168.230.101, TCP port 5001
> TCP window size: 85.0 KByte (default)
> 
> [  3] local 192.168.200.101 port 53900 connected with 192.168.230.101 port
> 5001
> [ ID] Interval   Transfer Bandwidth
> [  3]  0.0-10.0 sec  1.05 GBytes   905 Mbits/sec
> [  3] 10.0-20.0 sec  1.02 GBytes   877 Mbits/sec
> [  3] 20.0-30.0 sec  1.07 GBytes   922 Mbits/sec
> [  3] 30.0-40.0 sec  1.08 GBytes   927 Mbits/sec
> [  3] 40.0-50.0 sec  1.06 GBytes   914 Mbits/sec
> [  3] 50.0-60.0 sec  1.07 GBytes   922 Mbits/sec
> [  3]  0.0-60.0 sec  6.37 GBytes   911 Mbits/sec
> 
> vagrant@client1:~$
> 
> For ovs dpdk, the bandwidth is just about 45 Mbits/sec. Why? I really don’t
> understand what happened.
> 
> vagrant@client1:~$ iperf -t 60 -i 10 -c 192.168.230.101
> 
> 
> Client connecting to 192.168.230.101, TCP port 5001
> TCP window size: 85.0 KByte (default)
> 
> [  3] local 192.168.200.101 port 53908 connected with 192.168.230.101 port
> 5001
> [ ID] Interval   Transfer Bandwidth
> [  3]  0.0-10.0 sec  54.6 MBytes  45.8 Mbits/sec
> [  3] 10.0-20.0 sec  55.5 MBytes  46.6 Mbits/sec
> [  3] 20.0-30.0 sec  52.5 MBytes  44.0 Mbits/sec
> [  3] 30.0-40.0 sec  53.6 MBytes  45.0 Mbits/sec
> [  3] 40.0-50.0 sec  54.0 MBytes  45.3 Mbits/sec
> [  3] 50.0-60.0 sec  53.9 MBytes  45.2 Mbits/sec
> [  3]  0.0-60.0 sec   324 MBytes  45.3 Mbits/sec
> 
> vagrant@client1:~$
> 
> By the way, I tried to pin physical cores to the qemu processes which
> correspond to ovs pmd threads, but it hardly affects performance.
> 
>   PID USER    PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+ COMMAND P
> 16303 yangyi  20   0 9207120 209700 107500 R  99.9  0.1 63:02.37 EMT-1   1
> 16304 yangyi  20   0 9207120 209700 107500 R  99.9  0.1 69:16.16 EMT-2   2


Hi.
There might be a lot of reasons for bad performance, but most likely this
is just because of disabled offloading capabilities on the VM interface
(mostly TSO).

Try using a UDP flow for testing. You should see almost the same results for
the kernel and DPDK in the UDP case. Try:
# iperf -t 60 -i 10 -c 192.168.230.101 -u -b 10G -l 1460
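The matching UDP listener on the server side (assuming the same iperf 2 as
above) would be:
# iperf -s -u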

The bottleneck in your setup is the tap interface that connects VM with the
host network, that is why I'm expecting similar results in both cases.

In the case of kernel OVS, the tap interface on the host will have TSO and
checksum offloading enabled, so iperf will use huge 64K packets that will
never be segmented, since everything happens on the same host and all kernels
(guest and host) have TSO and checksum offloading support by default.
While using OVS with DPDK in the guest, the tap interface on the host will
have no TSO/checksum offloading, and the host kernel will have to split each
64K TCP packet into MTU-sized frames and re-calculate checksums for all the
chunks. This significantly slows everything down.

To partially mitigate the issue with TCP you could increase the MTU to 8K on
all your interfaces (all the host and guest interfaces). Use 'mtu_request' to
set the MTU for DPDK ports. This should give a good performance boost in the
DPDK case.
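
For example (a sketch only; it assumes the dpdk0/dpdk1 port names from the
configuration above, and the 8000-byte MTU value is illustrative):

# Request a larger MTU on the DPDK ports; OVS applies it if the NIC allows it.
ovs-vsctl set Interface dpdk0 mtu_request=8000
ovs-vsctl set Interface dpdk1 mtu_request=8000
# Raise the MTU of the guest (and other host) interfaces to match, e.g.:
ip link set dev enp0s8 mtu 8000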

Best regards, Ilya Maximets.


[ovs-dev] Hi. Just wanted to get in touch with you again.

2019-07-10 Thread papienguyen

Would you have thought it? https://laumemismi1987.blogspot.nl/



___
Looking forward to your message
papiengu...@yahoo.com


Re: [ovs-dev] [PATCH v4 0/5] Quicker pmd threads reloads

2019-07-10 Thread Ilya Maximets
On 09.07.2019 19:19, David Marchand wrote:
> We have been testing the rebalance code in different situations while
> having traffic going through OVS.
> Those tests have shown that part of the observed packet losses is due to
> some time wasted in signaling/waiting for the pmd threads to reload their
> polling configurations.
> 
> This series is an attempt at making pmd thread reloads quicker and
> more deterministic.
> 
> Example: number of cycles spent by a pmd between two polling
> configurations (minimum/average/maximum over 1000 changes):
> - cfc06fb13d9c: 141059/332512/6230137
> - patch1:   146114/279722/ 721557
> - patch2:46118/176561/ 459963
> - patch3:13878/124914/ 509629
> - patch4:12980/157706/ 509447
> - patch5:12945/ 17715/  45592
> 
> Changelog since v3:
> - explicitly do not wait for non pmd reload in patch 2
> - added Eelco acks
> 
> Changelog since v2:
> - remove unneeded synchronisation on pmd thread join in patch 2
> 
> Changelog since v1:
> - incorporated Ilya suggestions in patch 2 and 3
> - dropped previous acks on patch 2 and 3 but kept them on patch 4 and 5 since
>   there is no major change in them
> 
> Changelog since RFC v2:
> - added ack from Eelco
> 
> Changelog since RFC v1:
> - added numbers per patch in cover letter
> - added memory ordering for explicit synchronisations between threads
>   in patch 1 and patch 2
> 

For the series:
Acked-by: Ilya Maximets 


[ovs-dev] stay one step ahead

2019-07-10 Thread inscrivez-vous
Conference E-Marketing Live
Big Data, SEO, Blockchain, Social Media, UX… What marketing outlook for 2020?

Identify what is at stake in the new digital trends and stay one step ahead
of your competitors!

SIGN ME UP
Conference broadcast live and available as a replay on our live-streaming
platform. Registration required to access this service.

Thursday 3 October 2019
2:30 pm - 4:30 pm

_

7 RUE HENRI BOCQUILLON
75015 PARIS

To stop receiving messages about the Conference E-Marketing Live offer sent
by ASCPM, click here
You have rights of access, rectification, objection and consent, which you
can exercise from this web page: Personal Data Charter.
You are receiving this message because you are in the ASCPM database of
executives and professionals.
ASCPM - 5 Avenue du General de Gaulle - 94160 Saint Mande - France - R.C.S.
814 073 060 - CRETEIL


Re: [ovs-dev] [PATCH v4 2/5] dpif-netdev: Trigger parallel pmd reloads.

2019-07-10 Thread Eelco Chaudron

Ack the v4 also…

Acked-by: Eelco Chaudron 

On 9 Jul 2019, at 18:19, David Marchand wrote:


pmd reloads are currently serialised in each step calling
reload_affected_pmds.
Any pmd processing packets, waiting on a mutex, etc. will make other
pmd threads wait for a delay that can be non-deterministic when syscalls
add up.

Switch to a little busy loop on the control thread using the existing
per-pmd reload boolean.

The memory order on this atomic is rel-acq to have an explicit
synchronisation between the pmd threads and the control thread.

Signed-off-by: David Marchand 
Acked-by: Eelco Chaudron 
---
Changelog since v3:
- explicitly do not wait for non pmd reload in patch 2 (Ilya)
- added Eelco ack

Changelog since v2:
- remove unneeded synchronisation on pmd thread join (Ilya)
- "inlined" synchronisation loop in reload_affected_pmds

Changelog since v1:
- removed the introduced reloading_pmds atomic and reuse the existing
  pmd->reload boolean as a single synchronisation point (Ilya)

Changelog since RFC v1:
- added memory ordering on 'reloading_pmds' atomic to serve as a
  synchronisation point between pmd threads and control thread

---
 lib/dpif-netdev.c | 28 ++--
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index 8eeec63..e0ffa4a 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -649,9 +649,6 @@ struct dp_netdev_pmd_thread {
     struct ovs_refcount ref_cnt;    /* Every reference must be refcount'ed. */

     struct cmap_node node;          /* In 'dp->poll_threads'. */

-    pthread_cond_t cond;            /* For synchronizing pmd thread reload. */
-    struct ovs_mutex cond_mutex;    /* Mutex for condition variable. */
-
     /* Per thread exact-match cache.  Note, the instance for cpu core
      * NON_PMD_CORE_ID can be accessed by multiple threads, and thusly
      * need to be protected by 'non_pmd_mutex'.  Every other instance
@@ -1758,11 +1755,8 @@ dp_netdev_reload_pmd__(struct dp_netdev_pmd_thread *pmd)
         return;
     }

-    ovs_mutex_lock(&pmd->cond_mutex);
     seq_change(pmd->reload_seq);
     atomic_store_explicit(&pmd->reload, true, memory_order_release);
-    ovs_mutex_cond_wait(&pmd->cond, &pmd->cond_mutex);
-    ovs_mutex_unlock(&pmd->cond_mutex);
 }

 static uint32_t
@@ -4655,6 +4649,19 @@ reload_affected_pmds(struct dp_netdev *dp)
         if (pmd->need_reload) {
             flow_mark_flush(pmd);
             dp_netdev_reload_pmd__(pmd);
+        }
+    }
+
+    CMAP_FOR_EACH (pmd, node, &dp->poll_threads) {
+        if (pmd->need_reload) {
+            if (pmd->core_id != NON_PMD_CORE_ID) {
+                bool reload;
+
+                do {
+                    atomic_read_explicit(&pmd->reload, &reload,
+                                         memory_order_acquire);
+                } while (reload);
+            }
             pmd->need_reload = false;
         }
     }
@@ -5842,11 +5849,8 @@ dpif_netdev_enable_upcall(struct dpif *dpif)
 static void
 dp_netdev_pmd_reload_done(struct dp_netdev_pmd_thread *pmd)
 {
-    ovs_mutex_lock(&pmd->cond_mutex);
-    atomic_store_relaxed(&pmd->reload, false);
     pmd->last_reload_seq = seq_read(pmd->reload_seq);
-    xpthread_cond_signal(&pmd->cond);
-    ovs_mutex_unlock(&pmd->cond_mutex);
+    atomic_store_explicit(&pmd->reload, false, memory_order_release);
 }

 /* Finds and refs the dp_netdev_pmd_thread on core 'core_id'.  Returns
@@ -5931,8 +5935,6 @@ dp_netdev_configure_pmd(struct dp_netdev_pmd_thread *pmd, struct dp_netdev *dp,
     pmd->reload_seq = seq_create();
     pmd->last_reload_seq = seq_read(pmd->reload_seq);
     atomic_init(&pmd->reload, false);
-    xpthread_cond_init(&pmd->cond, NULL);
-    ovs_mutex_init(&pmd->cond_mutex);
     ovs_mutex_init(&pmd->flow_mutex);
     ovs_mutex_init(&pmd->port_mutex);
     cmap_init(&pmd->flow_table);
@@ -5975,8 +5977,6 @@ dp_netdev_destroy_pmd(struct dp_netdev_pmd_thread *pmd)
     cmap_destroy(&pmd->flow_table);
     ovs_mutex_destroy(&pmd->flow_mutex);
     seq_destroy(pmd->reload_seq);
-    xpthread_cond_destroy(&pmd->cond);
-    ovs_mutex_destroy(&pmd->cond_mutex);
     ovs_mutex_destroy(&pmd->port_mutex);
     free(pmd);
 }
--
1.8.3.1



Re: [ovs-dev] [PATCH v4 2/5] dpif-netdev: Trigger parallel pmd reloads.

2019-07-10 Thread Ilya Maximets
On 09.07.2019 19:19, David Marchand wrote:
> pmd reloads are currently serialised in each step calling
> reload_affected_pmds.
> Any pmd processing packets, waiting on a mutex, etc. will make other
> pmd threads wait for a delay that can be non-deterministic when syscalls
> add up.
> 
> Switch to a little busy loop on the control thread using the existing
> per-pmd reload boolean.
> 
> The memory order on this atomic is rel-acq to have an explicit
> synchronisation between the pmd threads and the control thread.
> 
> Signed-off-by: David Marchand 
> Acked-by: Eelco Chaudron 
> ---
> Changelog since v3:
> - explicitly do not wait for non pmd reload in patch 2 (Ilya)
> - added Eelco ack
> 
> Changelog since v2:
> - remove unneeded synchronisation on pmd thread join (Ilya)
> - "inlined" synchronisation loop in reload_affected_pmds
> 
> Changelog since v1:
> - removed the introduced reloading_pmds atomic and reuse the existing
>   pmd->reload boolean as a single synchronisation point (Ilya)
> 
> Changelog since RFC v1:
> - added memory ordering on 'reloading_pmds' atomic to serve as a
>   synchronisation point between pmd threads and control thread
> 
> ---
>  lib/dpif-netdev.c | 28 ++--
>  1 file changed, 14 insertions(+), 14 deletions(-)

LGTM,
Acked-by: Ilya Maximets 