The conntrack 5-tuple consists of src address, dst address, src port,
dst port and protocol, which together are unique to a ct session.
Use this information along with the zone to compute the hash.
Also refactor the conntrack code related to parsing netlink attributes.
Testing:
Verified loading/unloading the driver w
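The idea in this patch note, hashing the 5-tuple together with the zone, can be sketched in plain C. The struct layout, helper names, and the FNV-style mixing below are illustrative assumptions, not the actual OVS datapath code:

```c
#include <stdint.h>

/* Hypothetical 5-tuple key; the field names are illustrative, not the
 * actual OVS conntrack key layout. */
struct ct_tuple {
    uint32_t src_addr;
    uint32_t dst_addr;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  protocol;
};

/* FNV-1a style mixing step: multiplication by an odd constant is
 * invertible mod 2^32, so distinct inputs stay distinct per step. */
static uint32_t mix(uint32_t h, uint32_t v)
{
    return (h ^ v) * 16777619u;
}

/* Hash the 5-tuple, then fold in the zone last, so identical tuples
 * in different zones land in different hash buckets. */
static uint32_t ct_hash(const struct ct_tuple *t, uint16_t zone)
{
    uint32_t h = 2166136261u;
    h = mix(h, t->src_addr);
    h = mix(h, t->dst_addr);
    h = mix(h, ((uint32_t)t->src_port << 16) | t->dst_port);
    h = mix(h, t->protocol);
    return mix(h, zone);
}
```

Folding the zone into the hash is what lets two sessions with identical 5-tuples coexist in different zones without colliding on the same bucket.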
This patch primarily replaces the existing NDIS RW-lock based implementation
for NAT in conntrack with a spinlock based implementation inside the NAT
module, along with some conntrack optimization.
- The 'ovsNatTable' and 'ovsUnNatTable' tables are shared
between cleanup threads and packet processing thre
This patch series primarily refactors the conntrack code for
better throughput.
With this patch series TCP throughput with conntrack increased
by ~50%.
Anand Kumar (3):
datapath-windows: Use spinlock instead of RW lock for ct entry
datapath-windows: Implement locking in conntr
This patch mainly changes an NDIS RW lock for conntrack entries to a
spinlock, along with some minor refactoring in conntrack. A spinlock
is used instead of an RW lock because RW locks cause performance hits
when acquired/released multiple times.
- Use NdisInterlockedXX wrapper APIs instead of InterlockedXX.
- Upd
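Since NDIS kernel locks can't run outside a driver, here is the analogous user-space pattern with a C11 atomic-flag spinlock; `ct_entry_lock` and `ct_packets` are invented names for illustration. The point the patch makes is the cheap acquire/release on a short critical section, versus the heavier cost it attributes to repeated RW-lock acquire/release:

```c
#include <stdatomic.h>

/* Invented names for illustration; not the datapath-windows code. */
static atomic_flag ct_entry_lock = ATOMIC_FLAG_INIT;
static unsigned long ct_packets;

static void ct_entry_update(void)
{
    /* Busy-wait until the flag was previously clear: a minimal spinlock. */
    while (atomic_flag_test_and_set_explicit(&ct_entry_lock,
                                             memory_order_acquire)) {
        /* spin */
    }
    ct_packets++;                       /* short critical section */
    atomic_flag_clear_explicit(&ct_entry_lock, memory_order_release);
}
```

A spinlock like this pays only one atomic operation to acquire and one to release, which is why it wins over a reader-writer lock when the critical section is a few instructions long.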
Hi Ilya,
Thanx for taking a look. Please see inline.
Thanx
Manu
On 18/06/18, 4:04 PM, "Ilya Maximets" wrote:
> Hi
Hi,
I just wanted to clarify a few things about the RSS hash. See inline.
One more thing:
Unlike usual OVS bonding, this implementation doesn't support
Hi,
I am using the above configuration on my testbed, and when I try to add a port
which is bound to igb_uio, I see the following errors due to Tx queue
configuration. I just want to use a single Tx and Rx queue for my testing. I
looked at Documentation/intro/install/dpdk.rst, and I see only "DPDK Physical
Port
On Sun, Jun 17, 2018 at 04:24:34AM -0700, Toms Atteka wrote:
> Created sample Python scripts, which help to learn how to use the OVSDB library
>
> Signed-off-by: Toms Atteka
Thanks for working on making it easier to understand how to use Open
vSwitch.
This doesn't build, because:
The followin
Build Fedora RPM from 2.9.90.
On "yum remove", I get the following errors:
Erasing: openvswitch-kmod-2.9.90-1.el7.x86_64
3/6
depmod: ERROR: fstatat(4, vport-gre.ko): No such file
We use ovs 2.9.0.
Ben Pfaff 于2018年6月16日周六 上午2:24写道:
> [Dropping invalid email address findtheonly...@example.com.]
>
> Thanks.
>
> I believe that we've found and fixed related memory errors before. What
> version of OVS are you using?
>
> On Fri, Jun 15, 2018 at 02:48:11PM +0800, 孙文杰 wrote:
> >
Hi Shashank,
I will address this in next version of the patch.
Thanks,
Anand Kumar
On 6/18/18, 2:36 PM, "Shashank Ram" wrote:
This patch should be combined with the patch where the NAT lock is removed
from CT. Keeping this separate will cause the previous patches in this
series to break NAT functionality.
On Fri, Jun 8, 2018 at 12:32 PM, aginwala wrote:
>
> so that we can adjust inactivity_probe for master node, while still not
> listening on TCP on slave node using use_remote_in_db in ovn-ctl.
>
Minor comment: the commit message should have a short summary as the
subject, and then the detailed
On Fri, Jun 8, 2018 at 12:32 PM, aginwala wrote:
>
> using remote connections from DB tables.
>
> e.g --remote=db:OVN_Southbound,SB_Global,connections and
> --remote=db:OVN_Northbound,NB_Global,connections
>
> can be skipped for cases where slaves do not need to listen on nb and sb
db
> connec
Thanks for the review, Han. I have addressed the comments and split the
change into 2 patches: https://patchwork.ozlabs.org/patch/931228/ and
https://patchwork.ozlabs.org/patch/931229/ . Please review them and let
me know if you have any further questions.
Regards,
On Mon, Jun 18, 2018 at 12:08 PM, Han Zhou
so that we can adjust inactivity_probe for master node, while still not
listening on TCP on slave node using use_remote_in_db in ovn-ctl.
Signed-off-by: aginwala
---
ovn/utilities/ovndb-servers.ocf | 39 +++
1 file changed, 23 insertions(+), 16 deletions(-)
d
using remote connections from DB tables.
e.g --remote=db:OVN_Southbound,SB_Global,connections and
--remote=db:OVN_Northbound,NB_Global,connections
can be skipped for cases where slaves do not need to listen on nb and sb db
connection tables while using pacemaker with load balancer for ovndb c
On Mon, Jun 18, 2018 at 01:55:57PM +0200, Lorenzo Bianconi wrote:
> Add TCP reset/ICMP port unreachable messages in reply to IP packets directed
> to
> the logical router's IP addresses
>
> Changes since v1:
> - use string literals for actions since they are fixed strings
>
> Lorenzo Bianconi (3
On Fri, Jun 15, 2018 at 01:38:02PM -0700, Han Zhou wrote:
> On Fri, Jun 15, 2018 at 1:03 PM, Ben Pfaff wrote:
> >
> > On Wed, May 30, 2018 at 10:08:26AM -0700, Han Zhou wrote:
> > > In ovsdb_idl_db_track_clear(), it needs to free the deleted row.
> > > However, it is unnecessary to call ovsdb_idl_row
On Sun, Jun 17, 2018 at 10:16:41AM +0200, Justin Pettit wrote:
>
> > On May 17, 2018, at 8:22 AM, Ben Pfaff wrote:
> >
> > Signed-off-by: Ben Pfaff
>
> I assume this is mostly just moving code around, so I didn't look too
> closely. Let me know if you want a closer look.
That's correct.
>
On Sun, Jun 17, 2018 at 11:43:16AM +0300, Roi Dayan wrote:
>
>
> On 14/06/2018 23:43, Ben Pfaff wrote:
> > ovs-sim is a funny utility since it only works from a build tree, not from
> > an installed OVS. That means that we shouldn't install its manpage when
> > we run "make install". But we do
On 06/17/2018 10:37 PM, Anand Kumar wrote:
Conntrack 5-tuple consists of src address, dst address, src port,
dst port and protocol which will be unique to a ct session.
Use this information along with zone to compute hash.
Also re-factor conntrack code related to parsing netlink attributes.
On 06/18/2018 09:56 PM, Ben Pfaff wrote:
> Hi Thomas. Thanks for the latest upload of the Open vSwitch packages to
> Debian.
>
> I'm no longer interested in being a maintainer for the downstream Debian
> packages of Open vSwitch. I just don't have sufficient time to do it
> properly (and honestl
This patch should be combined with the patch where NAT lock is removed
from CT. Keeping this separate will cause the previous patches in this
series to break NAT functionality.
Thanks,
Shashank
On 06/17/2018 10:37 PM, Anand Kumar wrote:
The 'ovsNatTable' and 'ovsUnNatTable' tables are shared
Hi, Harry,
This is a good observation and there is definitely some "duplicated
calculation" there.
Actually when we were designing our "CD" distributor structure, we found this
overhead and we tried to cache the first hash value in "CD" thus to totally
bypass the first hash calculation. But
On 6/18/2018 11:44 AM, Ben Pfaff wrote:
On Mon, Jun 18, 2018 at 11:14:53AM -0700, Qiuyu Xiao wrote:
This patch adds transport ports information for route lookup so that IPsec
can select tunnel traffic (geneve, stt, vxlan) to do encryption.
The patch was tested for geneve, stt, and vxlan tunnel
Hi,
I looked into the code and the logic seems good to me.
But reordering in the dataplane also has performance implications, especially
considering non-TCP traffic. Also, the packets that come to OvS may already be
out of order. Could you provide some performance data points showing when
all EMC
Thanks, applied to master.
On Mon, Jun 18, 2018 at 03:29:42PM -0400, Mark Michelson wrote:
> Acked-by: Mark Michelson
>
> On 06/18/2018 02:45 PM, Ben Pfaff wrote:
> >Until now, the ipam_info struct for a datapath has been allocated on
> >demand. This leads to slight complications in the code i
Hi Thomas. Thanks for the latest upload of the Open vSwitch packages to
Debian.
I'm no longer interested in being a maintainer for the downstream Debian
packages of Open vSwitch. I just don't have sufficient time to do it
properly (and honestly have not had that time for at least a few years).
T
On Sun, Jun 17, 2018 at 05:20:30AM +0530, Vishal Deep Ajmera wrote:
> During bundle commit, flows which are added in the bundle are applied
> to ofproto in order. In case a flow cannot be added (e.g. the flow
> action is a go-to group id which does not exist), OVS tries to
> revert all previous flows
Acked-by: Mark Michelson
On 06/18/2018 02:45 PM, Ben Pfaff wrote:
Until now, the ipam_info struct for a datapath has been allocated on
demand. This leads to slight complications in the code in places, and
there is hardly any benefit since ipam_info is only about 48 bytes anyway.
This commit just inlines it into struct ovn_datapath.
On Mon, Jun 18, 2018 at 09:31:43AM -0700, Han Zhou wrote:
> On Mon, Jun 18, 2018 at 8:34 AM, Ilya Maximets
> wrote:
> >
> > On 18.06.2018 18:07, Ben Pfaff wrote:
> > > On Mon, Jun 18, 2018 at 05:18:49PM +0300, Ilya Maximets wrote:
> > >>> On Wed, May 23, 2018 at 09:28:59PM -0700, Ben Pfaff wrote:
On Mon, Jun 18, 2018 at 06:34:13PM +0300, Ilya Maximets wrote:
> On 18.06.2018 18:07, Ben Pfaff wrote:
> > On Mon, Jun 18, 2018 at 05:18:49PM +0300, Ilya Maximets wrote:
> >>> On Wed, May 23, 2018 at 09:28:59PM -0700, Ben Pfaff wrote:
> On Wed, May 23, 2018 at 06:06:44PM -0700, Han Zhou wrote:
Hi Ali, thanks for the fix. I reviewed and have some minor improvements
suggested.
On Sun, Jun 3, 2018 at 7:50 PM, aginwala wrote:
>
> only for master node with remote option when using load balancer to manage
> ovndb clusters via pacemaker.
>
> This is will allow setting inactivity probe on the
Hi Shashank,
Thanks for the review. Please find my response inline.
Thanks,
Anand Kumar
On 6/18/18, 11:54 AM, "Shashank Ram" wrote:
On 06/17/2018 10:37 PM, Anand Kumar wrote:
> This patch primarily gets rid of NdisRWLock in conntrack for NAT
> functionality along with som
Hi Shashank,
Thanks for the review.
Please find my response inline.
Thanks,
Anand Kumar
From: Shashank Ram
Date: Monday, June 18, 2018 at 11:27 AM
To: Anand Kumar , "d...@openvswitch.org"
Subject: Re: [ovs-dev] [PATCH v4 1/4] datapath-windows: Use spinlock instead of
RW lock for ct entry
On 06/17/2018 10:37 PM, Anand Kumar wrote:
This patch primarily gets rid of NdisRWLock in conntrack for NAT
functionality along with some conntrack optimization. The subsequent
patch will have a lock implementation inside NAT module.
- Introduce a new function OvsGetTcpHeader() to retrieve TC
On Mon, Jun 18, 2018 at 09:06:22AM -0400, Mark Michelson wrote:
> Hi Ben,
>
> The first two patches in this series aren't necessary. ovn_datapaths are
> allocated from scratch and then all destroyed during each loop of
> ovn-northd. They never survive multiple loops. When entering
> init_ipam_info
Until now, the ipam_info struct for a datapath has been allocated on
demand. This leads to slight complications in the code in places, and
there is hardly any benefit since ipam_info is only about 48 bytes anyway.
This commit just inlines it into struct ovn_datapath.
Signed-off-by: Ben Pfaff
--
On Mon, Jun 18, 2018 at 11:14:53AM -0700, Qiuyu Xiao wrote:
> This patch adds transport ports information for route lookup so that IPsec
> can select tunnel traffic (geneve, stt, vxlan) to do encryption.
>
> The patch was tested for geneve, stt, and vxlan tunnel and the results
> show that IPsec p
At startup time, ovn-controller connects to the OVS database and retrieves
a pointer to the southbound database, then connects to the southbound
database and retrieves a snapshot. Until now, however, it didn't pay
attention to changes in the OVS database while trying to retrieve the
southbound dat
This new function makes it possible to create an instance of the IDL
without connecting it to a remote OVSDB server. The caller can then
connect and disconnect using ovsdb_idl_set_remote(); the ability to
disconnect is a new feature.
With this patch, the ovsdb_idl 'session' member can be null whe
On 06/17/2018 10:37 PM, Anand Kumar wrote:
This patch mainly changes an NDIS RW lock for conntrack entries to a
spinlock, along with some minor refactoring in conntrack. A spinlock
is used instead of an RW lock because RW locks cause performance hits
when acquired/released multiple times.
- Use NdisInterlocked
This patch adds transport ports information for route lookup so that IPsec
can select tunnel traffic (geneve, stt, vxlan) to do encryption.
The patch was tested for geneve, stt, and vxlan tunnel and the results
show that IPsec policy can be set to only match the corresponding tunnel
traffic.
Sign
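As a rough illustration of "selecting tunnel traffic", the well-known destination ports for the three tunnel types named above could be matched as below (geneve 6081/udp, vxlan 4789/udp, stt 7471/tcp); the helper is hypothetical, not from the patch, which works at the route-lookup layer:

```c
#include <stdbool.h>
#include <stdint.h>

/* Well-known tunnel destination ports: geneve 6081, vxlan 4789,
 * stt 7471. Hypothetical classifier, not the patch's code. */
static bool is_tunnel_port(uint16_t dst_port)
{
    return dst_port == 6081 || dst_port == 4789 || dst_port == 7471;
}
```

Matching on these ports is what lets an IPsec policy cover only encapsulated tunnel traffic rather than all IP traffic between the hosts.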
On Mon, Jun 18, 2018 at 8:34 AM, Ilya Maximets
wrote:
>
> On 18.06.2018 18:07, Ben Pfaff wrote:
> > On Mon, Jun 18, 2018 at 05:18:49PM +0300, Ilya Maximets wrote:
> >>> On Wed, May 23, 2018 at 09:28:59PM -0700, Ben Pfaff wrote:
> On Wed, May 23, 2018 at 06:06:44PM -0700, Han Zhou wrote:
> >>>
Hi,
Does anyone see any issue with the patch? The intent of the patch is to fix
*reordering* of packets belonging to the same megaflow. The issue can be frequently
reproduced by running an iperf test (as an example) between two VMs connected by
an OVS netdev bridge with a NORMAL flow. Analyzing packet
Hi Ben,
If the patch looks OK, can we get this patch merged to master? Steps to reproduce
the issue were shared on the ovs-discuss list.
Warm Regards,
Vishal Ajmera
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
On 18.06.2018 18:07, Ben Pfaff wrote:
> On Mon, Jun 18, 2018 at 05:18:49PM +0300, Ilya Maximets wrote:
>>> On Wed, May 23, 2018 at 09:28:59PM -0700, Ben Pfaff wrote:
On Wed, May 23, 2018 at 06:06:44PM -0700, Han Zhou wrote:
> On Wed, May 23, 2018 at 5:14 PM, Ben Pfaff wrote:
>>
>>
On Mon, Jun 18, 2018 at 05:18:49PM +0300, Ilya Maximets wrote:
> > On Wed, May 23, 2018 at 09:28:59PM -0700, Ben Pfaff wrote:
> >> On Wed, May 23, 2018 at 06:06:44PM -0700, Han Zhou wrote:
> >> > On Wed, May 23, 2018 at 5:14 PM, Ben Pfaff wrote:
> >> > >
> >> > > Until now, rconn_get_version() has
>-Original Message-
>From: Shahaf Shuler
>Sent: 18. juni 2018 16:29
>To: Andrew Rybchenko ; Finn Christensen
>; ian.sto...@intel.com
>Cc: d...@openvswitch.org; simon.hor...@netronome.com; f...@redhat.com
>Subject: RE: [ovs-dev] [PATCH v10 4/7] netdev-dpdk: implement flow offload
>with rte
Monday, June 18, 2018 1:17 PM, Andrew Rybchenko:
> Subject: Re: [ovs-dev] [PATCH v10 4/7] netdev-dpdk: implement flow offload
> with rte flow
>
> On 05/18/2018 12:14 PM, Shahaf Shuler wrote:
> > From: Finn Christensen
> >
> > The basic yet major part of this patch is to translate the "match"
Thanks Ilya, I will look at the commit, but I'm not sure now how to tell how
much real work is being done. I would have liked polling cycles to be
treated as before and not counted towards packet processing. That does explain
why, as long as there are packets on the wire, we are always at 100%; basically we
cannot tell h
> On Wed, May 23, 2018 at 09:28:59PM -0700, Ben Pfaff wrote:
>> On Wed, May 23, 2018 at 06:06:44PM -0700, Han Zhou wrote:
>> > On Wed, May 23, 2018 at 5:14 PM, Ben Pfaff wrote:
>> > >
>> > > Until now, rconn_get_version() has only reported the OpenFlow version in
>> > > use when the rconn is actua
Thanks for the data.
I have to note additionally that the meaning of "processing cycles"
significantly changed since the following commit:
commit a2ac666d5265c01661e189caac321d962f54649f
Author: Ciara Loftus
Date: Mon Feb 20 12:53:00 2017 +
dpif-netdev: Change definiti
I was running out of time, so I did not review this patch. I will try to do
it on the next iteration of this patch set.
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
In order to create a minimal environment that allows the tests to get
mbufs from an existing mempool, the following approach is taken:
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
From: Mark Kavanagh
Currently, jumbo frame support for OvS-DPDK is implemented by
increasing the size of mbufs within a mempool, such that each mbuf
within the pool is large enough to contain an entire jumbo frame of
a user-defined size. Typically,
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
From: Mark Kavanagh
Currently, packets are only copied to a single segment in the function
dpdk_do_tx_copy(). This could be an issue in the case of jumbo frames,
particularly when multi-segment mbufs are involved.
This patch calculates the number
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
From: Michael Qiu
When doing a packet clone, if the packet source is a DPDK driver,
multi-segment mbufs must be considered, and each segment's data copied
one by one.
Also, a lot of the DPDK mbuf's info is missed during a copy, like packet
type, ol_flags, etc. Th
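The segment-by-segment copy this commit message describes can be sketched with a minimal stand-in for a chained mbuf; `struct seg` and `flatten()` below are illustrative, not the real `rte_mbuf` API:

```c
#include <stddef.h>
#include <string.h>

/* Minimal stand-in for a chained mbuf: each segment holds part of the
 * packet's data. Illustrative only; the real struct rte_mbuf differs. */
struct seg {
    const char *data;
    size_t      len;
    struct seg *next;
};

/* Copy a multi-segment packet into one contiguous buffer, walking the
 * chain segment by segment. Returns the total bytes copied, or 0 if
 * the chain would overflow 'cap'. */
static size_t flatten(const struct seg *s, char *dst, size_t cap)
{
    size_t off = 0;
    for (; s != NULL; s = s->next) {
        if (off + s->len > cap) {
            return 0;
        }
        memcpy(dst + off, s->data, s->len);
        off += s->len;
    }
    return off;
}
```

The walk over `next` is the essential difference from a single-segment copy, which would read only the first buffer and silently truncate a jumbo frame.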
Hi Ben,
Just a couple of findings in-line below.
On 06/15/2018 07:11 PM, Ben Pfaff wrote:
Until now, the ipam_info struct for a datapath has been allocated on
demand. This leads to slight complications in the code in places, and
there is hardly any benefit since ipam_info is only about 48 byt
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
When enabled with DPDK, OvS relies on mbufs allocated from mempools to
receive and output data on DPDK ports. Until now, each OvS dp_packet
has had only one mbuf associated, which is allocated with the maximum
possible size, taking the MTU into accoun
Hi Ben,
The first two patches in this series aren't necessary. ovn_datapaths are
allocated from scratch and then all destroyed during each loop of
ovn-northd. They never survive multiple loops. When entering
init_ipam_info_for_datapath(), you can assert that od->ipam_info == NULL
[1].
For p
On 06/16/2018 12:53 AM, Ben Pfaff wrote:
On Fri, Jun 15, 2018 at 10:11:41AM -0400, Mark Michelson wrote:
On 06/13/2018 11:29 PM, Han Zhou wrote:
On Wed, Jun 13, 2018 at 3:37 PM, Ben Pfaff wrote:
To make ovn-controller recompute incrementally, we need accurate
dependencies for each function t
Add priority-70 flows to generate ICMP protocol unreachable messages
in reply to packets directed to the router's IP address on IP protocols
other than UDP, TCP, and ICMP
Signed-off-by: Lorenzo Bianconi
---
ovn/northd/ovn-northd.8.xml | 4
ovn/northd/ovn-northd.c | 14 ++
t
Add priority-80 flows to generate TCP reset messages in reply to
TCP datagrams directed to the router's IP address since the
logical router doesn't accept any TCP traffic
Signed-off-by: Lorenzo Bianconi
---
ovn/northd/ovn-northd.8.xml | 4
ovn/northd/ovn-northd.c | 29 +
Add priority-80 flows to generate ICMP port unreachable messages in
reply to UDP datagrams directed to the router's IP address since the
logical router doesn't accept any UDP traffic
Signed-off-by: Lorenzo Bianconi
---
ovn/northd/ovn-northd.8.xml | 4 --
ovn/northd/ovn-northd.c | 19 +++
Add TCP reset/ICMP port unreachable messages in reply to IP packets directed to
the logical router's IP addresses
Changes since v1:
- use string literals for actions since they are fixed strings
Lorenzo Bianconi (3):
OVN: add UDP port unreachable support to OVN logical router
OVN: add TCP por
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
In its current implementation, dp_packet_shift() is also unaware of
multi-seg mbufs (which hold data in memory non-contiguously) and
assumes that data exists contiguously in memory, memmove'ing data to
perform the shift.
To add support for multi-s
To add support for multi-s
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
The dp_packet_put_uninit() function is, in its current implementation,
operating on the data buffer of a dp_packet as if it were contiguous,
which in the case of multi-segment mbufs means it operates on the
first mbuf in the chain. However, when m
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
Most helper functions in dp-packet assume that the data held by a
dp_packet is contiguous, and perform operations such as pointer
arithmetic under that assumption. However, with the introduction of
multi-segment mbufs, where data is non-contiguous, su
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
When a dp_packet is from a DPDK source, and it contains multi-segment
mbufs, the data_len is not equal to the packet size, pkt_len. Instead,
the data_len of each mbuf in the chain should be considered while
distributing the new (provided) size.
To a
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
A new mutex, 'nonpmd_mp_mutex', has been introduced to serialise
allocation and free operations by non-pmd threads on a given mempool.
Can you explain why we need the mutex here? I can't see any reason why
rte_pktmbuf_free() needs to be protected f
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
When enabled with DPDK, OvS deals with two types of packets: the ones
coming from the mempool and the ones locally created by OvS, which
are copied to mempool mbufs before output. In the latter case, the space is
allocated from the system, while in the f
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
From: Mark Kavanagh
dp_packets are created using xmalloc(); in the case of OvS-DPDK, it's
possible that the resultant mbuf portion of the dp_packet contains
random data. For some mbuf fields, specifically those related to
multi-segment mbufs and/or
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
From: Mark Kavanagh
There are numerous factors that must be considered when calculating
the size of an mbuf:
- the data portion of the mbuf must be sized in accordance with Rx
buffer alignment (typically 1024B). So, for example, in order to
suc
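The Rx-alignment constraint mentioned above amounts to rounding the requested size up to the next alignment boundary. A hedged sketch follows: the 1024 B figure comes from the mail, the helper name is made up, and real DPDK sizing also accounts for headroom and private area, which this ignores:

```c
#include <stdint.h>

/* Typical Rx buffer alignment per the mail above; illustrative only. */
#define RX_BUF_ALIGN 1024u

/* Round the MTU up to the next Rx-buffer-alignment boundary so a full
 * frame can be received into one properly aligned segment. */
static uint32_t mbuf_data_size(uint32_t mtu)
{
    return (mtu + RX_BUF_ALIGN - 1) / RX_BUF_ALIGN * RX_BUF_ALIGN;
}
```

So a standard 1500 B MTU rounds up to 2048 B of data room, and a 9000 B jumbo MTU rounds up to 9216 B.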
On 11 Jun 2018, at 18:21, Tiago Lam wrote:
Overview
This patchset introduces support for multi-segment mbufs to OvS-DPDK.
Multi-segment mbufs are typically used when the size of an mbuf is
insufficient to contain the entirety of a packet's data. Instead, the
data is split across numer
Mallesh,
I was finally able to set up the vxlan testing with OVS. Instead of using OVS on
both sides, I used a vxlan i/f to inject the traffic on one host and ran ovs with
the vxlan tunnel configuration you specified on the other.
I was not able to reproduce your case. Actually I see the rules are create
> Hi
Hi,
I just wanted to clarify a few things about the RSS hash. See inline.
One more thing:
Unlike usual OVS bonding, this implementation doesn't support
shifting the load between ports. Am I right?
This could be an issue, because a few heavy flows could be mapped to
a single port, while other por
On 05/18/2018 12:14 PM, Shahaf Shuler wrote:
From: Finn Christensen
The basic yet major part of this patch is to translate the "match"
to rte flow patterns. And then we create an rte flow with MARK + RSS
actions. Afterwards, all packets matching the flow will have the mark id in
the mbuf.
The
Hi
Problem:
In OVS-DPDK, flows with output over a bond interface of type “balance-tcp”
(using a hash on the TCP/UDP 5-tuple) get translated by the ofproto layer into
"HASH" and "RECIRC" datapath actions. After recirculation, the packet is
forwarded to the bond member port based on 8 bits of t
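The "8 bits of the hash" lookup this problem statement describes can be sketched as a 256-entry member table indexed by the low byte of the flow hash; the names below are illustrative, not the OVS implementation:

```c
#include <stdint.h>

/* 8 bits of hash -> 256 buckets, each mapped to a bond member port.
 * Illustrative names, not the OVS bond code. */
#define BOND_BUCKETS 256u

static int bond_member_for_hash(const int member_of[BOND_BUCKETS],
                                uint32_t flow_hash)
{
    return member_of[flow_hash & (BOND_BUCKETS - 1)];
}
```

One consequence of masking down to 8 bits is that all flows sharing a low byte land on the same member, which is why rebalancing means rewriting bucket entries rather than rehashing flows.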
On Sun, Jun 10, 2018 at 03:05:39PM +, Gavi Teitz wrote:
> From: Simon Horman, Sent: Thursday, June 7, 2018 3:13 PM
> >On Thu, Jun 07, 2018 at 09:36:59AM +0300, Gavi Teitz wrote:
> >> Previously, any rule that is offloaded via a netdev, not necessarily
> >> to the HW, would be reported as "offl
Hey Ben,
Answer broken into points 1) and 2) as discussion before.
I've added labels 3) 4) and 5) to aid future discussion.
1) Miniflow bit setting:
Reworded: the miniflow is a summary of the packet.
If a particular bit is set in a miniflow, it indicates
the packet has that property. Eg: ipv6_sr
Ben,
Here are the v2 diffs. Hope this applies without any issue.
Thanx
Manu
Signed-off-by: Manohar K C
CC: Jan Scheurich
CC: Nitin Katiyar
---
v1 1/2: https://patchwork.ozlabs.org/patch/915285/
v2 1/2: Rebased to master
lib/lacp.c | 14 --
lib/lacp.h
Thanx Ben. Will rebase and send v2 diffs.
Thanx
Manu
On 15/06/18, 2:24 AM, "Ben Pfaff" wrote:
Thanks for the patch.
The patch cannot be applied. It appears to be white space damaged.
Please resubmit using "git send-email".
___
Hi,
We also experienced degradation from OVS 2.6/2.7 to OVS 2.8.2 (with DPDK 17.05.02).
The drop is larger for 64-byte packet size (~8-10%) even with a higher number of
flows. I tried OVS 2.8 with DPDK 17.11 and it improved for larger packet sizes,
but 64-byte size is still the concern.
Regards,
Nitin
CC: Shahaji Bhosle
Sorry, missed you in CC list.
Best regards, Ilya Maximets.
On 15.06.2018 10:44, Ilya Maximets wrote:
>> Hi,
>> I just upgraded from OvS 2.7 + DPDK 16.11 to OvS 2.9 + DPDK 17.11 and
>> am running into a performance issue with the 64-byte packet rate. One interesting
>> thing that I notic