Hi guys,
I'm afraid that commit is too long ago for me to remember any details of what
caused us to change the code in beb75a40fdc2 ("userspace: Switching of L3
packets in L2 pipeline"). What I vaguely remember is that I couldn't
comprehend the original code and it was not working correctly in
Hi,
True, the cost of polling a packet from a physical port on a remote NUMA node
is slightly higher than from a local NUMA node, so cross-NUMA polling of
rx queues has some overhead. However, the packet processing cost is much more
influenced by the location of the target vhostuser ports.
> I found this one though:
> https://patchwork.ozlabs.org/project/openvswitch/patch/20220403222617.31688-1-jan.scheurich@web.de/
>
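The NUMA placement of rx queues and PMD threads discussed above can be inspected at runtime. A minimal sketch using standard OVS commands (the exact output format varies between OVS versions):

```shell
# Show which PMD thread (and thus which NUMA node/core) polls each rx queue
ovs-appctl dpif-netdev/pmd-rxq-show

# Show per-PMD processing-cycle statistics to spot load imbalance
ovs-appctl dpif-netdev/pmd-stats-show
```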
Hi Ilya,
Sorry for spamming, but I again have problems sending correctly formatted
patches to the ovs-dev list. My previous SMTP server for git send-email no
longer works, and patches I send through my private SMTP provider do not reach
the mailing list. Resending from Outlook obviously still scr
From: Jan Scheurich
A packet received from a tunnel port with legacy_l3 packet-type (e.g.
lisp, L3 gre, gtpu) is conceptually wrapped in a dummy Ethernet header
for processing in an OF pipeline that is not packet-type-aware. Before
transmission of the packet to another legacy_l3 tunnel port, the
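As an illustration of the legacy_l3 tunnel ports mentioned above, here is a hedged sketch of creating one (the bridge name and remote IP are hypothetical; `options:packet_type=legacy_l3` is the setting that makes the port send and receive L3 packets without an Ethernet header):

```shell
# Create an L3 GRE tunnel port that carries legacy_l3 (Ethernet-less) packets
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
    options:remote_ip=192.0.2.1 options:packet_type=legacy_l3
```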
Hi Kevin,
This was a bit of a misunderstanding. We didn't check your RFC patch carefully
enough to realize that you had meant to encompass our cross-numa-polling
function in that RFC patch. Sorry for the confusion.
I wouldn't say we are particularly keen on upstreaming exactly our
implementati
> Thanks for sharing your experience with it. My fear with the proposal is that
> someone turns this on and then tells us performance is worse and/or OVS
> assignments/ALB are broken, because it has an impact on their case.
>
> In terms of limiting possible negative effects,
> - it can be opt-in a
Hi Kevin,
> > We have done extensive benchmarking and found that we get better overall
> PMD load balance and resulting OVS performance when we do not statically
> pin any rx queues and instead let the auto-load-balancing find the optimal
> distribution of phy rx queues over both NUMA nodes to bal
> > We do acknowledge the benefit of non-pinned polling of phy rx queues by
> PMD threads on all NUMA nodes. It gives the auto-load balancer much better
> options to utilize spare capacity on PMDs on all NUMA nodes.
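The auto-load-balancing referred to above is the PMD auto-load-balancer (ALB). A sketch of enabling it, with an illustrative (not recommended) threshold value:

```shell
# Enable automatic rebalancing of rxq-to-PMD assignments
ovs-vsctl set Open_vSwitch . other_config:pmd-auto-lb="true"

# Optional: only trigger a rebalance above this average PMD load (percent)
ovs-vsctl set Open_vSwitch . other_config:pmd-auto-lb-load-threshold="70"
```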
> >
> > Our patch proposed in
> > https://protect2.fireeye.com/v1/url?k=31323334-50
> >
> > Btw, this patch is similar in functionality to the one posted by
> > Anurag [0] and there was also some discussion about this approach here [1].
> >
>
> Thanks for pointing this out.
> IMO, setting interface cross-numa would be good for phy port but not good for
> vhu. Since vhu can be de
> > In our patch series we decided to skip the check on cross-numa polling
> > during
> auto-load balancing. The rationale is as follows:
> >
> > If the estimated PMD-rxq distribution includes cross-NUMA rxq assignments,
> the same must apply for the current distribution, as none of the scheduling
> >> +If using ``pmd-rxq-assign=group`` PMD threads with *pinned* Rxqs can
> >> +be
> >> +*non-isolated* by setting::
> >> +
> >> + $ ovs-vsctl set Open_vSwitch . other_config:pmd-rxq-isolate=false
> >
> > Is there any specific reason why the new pmd-rxq-isolate option should be
> limited to the "
> -Original Message-
> From: dev On Behalf Of Kevin Traynor
> Sent: Thursday, 8 July, 2021 15:54
> To: d...@openvswitch.org
> Cc: david.march...@redhat.com
> Subject: [ovs-dev] [PATCH v4 2/7] dpif-netdev: Make PMD auto load balance
> use common rxq scheduling.
>
> PMD auto load balance ha
> -Original Message-
> From: dev On Behalf Of Kevin Traynor
> Sent: Thursday, 8 July, 2021 15:54
> To: d...@openvswitch.org
> Cc: david.march...@redhat.com
> Subject: [ovs-dev] [PATCH v4 6/7] dpif-netdev: Allow pin rxq and non-isolate
> PMD.
>
> Pinning an rxq to a PMD with pmd-rxq-affi
LGTM.
Acked-by: Jan Scheurich
> -Original Message-
> From: Martin Varghese
> Sent: Wednesday, 9 June, 2021 15:36
> To: d...@openvswitch.org; i.maxim...@ovn.org; Jan Scheurich
>
> Cc: Martin Varghese
> Subject: [PATCH] tests: Fixed L3 over patch port tests
>
> From: Martin Varghese
>
> -Original Message-
> From: Martin Varghese
> Sent: Monday, 7 June, 2021 16:47
> To: Ilya Maximets
> Cc: d...@openvswitch.org; echau...@redhat.com; Jan Scheurich
> ; Martin Varghese
>
> Subject: Re: [ovs-dev] [PATCH v2] Fix redundant datapath set ethernet action
> with NSH Decap
>
> On
Hi Martin,
Somehow I didn't receive this patch email via the ovs-dev mailing list; perhaps
one of the many spam filters on the way interfered. I don't know if this response
email will be recognized by OVS patchwork.
The nsh.at lines are wrapped incorrectly in this email, but they look OK in
patchwork
> -Original Message-
> From: Martin Varghese
> Sent: Tuesday, 13 April, 2021 16:20
> To: Jan Scheurich
> Cc: Eelco Chaudron ; d...@openvswitch.org;
> pshe...@ovn.org; martin.vargh...@nokia.com
> Subject: Re: [PATCH v4 1/2] Encap & Decap actions for MPLS packet type.
>
> On Wed, Apr 07,
Hi Martin,
I guess you are aware of the original design document we wrote for generic
encap/decap and NSH support:
https://docs.google.com/document/d/1oWMYUH8sjZJzWa72o2q9kU0N6pNE-rwZcLH3-kbbDR8/edit#
It is no longer 100% aligned with the final implementation in OVS but still a
good reference f
Hi,
Thanks for the heads up. The interaction with MPLS push/pop is a use case that
was likely not tested during the NSH and generic encap/decap design. It's
complex code and a long time ago. I'm willing to help, but I will need some
time to go back and have a look.
It would definitely help, if
LGTM. Please back-port to stable branches.
Acked-by: Jan Scheurich
/Jan
> -Original Message-
> From: Ilya Maximets
> Sent: Wednesday, 14 October, 2020 18:14
> To: ovs-dev@openvswitch.org; Jan Scheurich
> Cc: Ben Pfaff ; Ilya Maximets
> Subject: [PATCH v2] ofp-ed-props: Fix using unin
> >> Fix that by clearing padding bytes while encoding, and checking that
> >> these bytes are all zeros on decoding.
> >
> > Is the latter strictly necessary? It may break existing controllers that do
> > not
> initialize the padding bytes to zero.
> > Wouldn't it be sufficient to just zero the p
Hi Ilya,
Good catch. One comment below.
/Jan
> -Original Message-
> From: Ilya Maximets
> Sent: Tuesday, 13 October, 2020 21:02
> To: ovs-dev@openvswitch.org; Jan Scheurich
> Cc: Ben Pfaff ; Yi Yang ; Ilya Maximets
>
> Subject: [PATCH] ofp-ed-props: Fix using uninitialized padding for
> -Original Message-
> From: Flavio Leitner
> On Tue, Sep 22, 2020 at 01:22:58PM +0200, Ilya Maximets wrote:
> > On 9/19/20 3:07 PM, Flavio Leitner wrote:
> > > The EMC is not large enough for current production cases and they
> > > are scaling up, so this change switches over from EMC to
Even simpler solution to the problem.
Acked-by: Jan Scheurich
BR, Jan
> -Original Message-
> From: Ilya Maximets
> Sent: Thursday, 24 October, 2019 14:32
> To: ovs-dev@openvswitch.org
> Cc: Ian Stokes ; Kevin Traynor ;
> Jan Scheurich ; ychen103...@163.com; Ilya
> Maximets
> Subject: [
Hi,
You have pointed out an interesting issue in the netdev datapath implementation
(I'm not sure to what extent the same applies to the kernel datapath).
Conceptually, the dp_hash of a packet should be based on the current packet's
flow. It should not change if the headers remain unchanged.
For
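A minimal sketch of the flow-based hashing property described above. This is a hypothetical helper, not the actual OVS dp_hash computation: it simply shows that a hash derived only from the packet's flow fields (a 5-tuple here) stays stable as long as the headers are unchanged.

```python
import hashlib

def flow_hash(src_ip, dst_ip, proto, src_port, dst_port):
    """Toy dp_hash-like value derived only from the packet's flow fields."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    # Stable digest: identical headers -> identical hash, regardless of payload
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

h1 = flow_hash("10.0.0.1", "10.0.0.2", 6, 12345, 80)
h2 = flow_hash("10.0.0.1", "10.0.0.2", 6, 12345, 80)
assert h1 == h2  # unchanged headers give an unchanged hash
```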