Hi,
I am just starting out learning about OVS. My mentor suggested implementing
support for the SNMP protocol inside OVS as a possible work area. I was
wondering whether someone has attempted it before.
If yes, how did they go about it? Is there a thread where it
was discussed?
If not, what wo
Hi Gao
I had applied the patch to dpdk_merge here
https://github.com/darball/ovs/commits/dpdk_merge
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 10:49 PM
To: Kevin Traynor; Jan Scheurich; 王志克; Darrell Ball;
ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org
Subject: RE: [ovs-dev] OVS DPDK NUMA pmd assignment question
On Tue, 18 Jul 2017 15:42:08 +0200
Maxime Coquelin wrote:
> This is a revival of a thread I initiated earlier this year [0], which
> I had to postpone due to other priorities.
>
> First, I'd like to thank the reviewers of my first proposal; this new
> version tries to address the comments made:
>
Hi Billy,
Please see my reply in line.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 9:01 PM
To: 王志克; Darrell Ball; ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org;
Kevin Traynor
Subject: RE: [ovs-dev]
Hello,
I'm trying to find out how to configure the PCP tag along with the VLAN id.
A VLAN interface created this way produces packets that have vlan=42 and
pcp=0:
ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44
tag=44 -- set Interface vlan44 type=internal
I would like to kno
On 9/6/17, 5:12 AM, "ovs-dev-boun...@openvswitch.org on behalf of Zoltán
Balogh" wrote:
DPDK uses a dp-packet pool for storing received packets. The pool is
reused by the rxq_recv functions of the DPDK netdevs. The datapath is
capable of modifying the packet_type property of packets. For ins
The bug can cause ovs-vswitchd to crash (due to assert) when it is
set up with a passive controller connection. Since only active
connections are kept, the passive connection status update should be
ignored and not trigger asserts.
Reported-by: Josh Bailey
Signed-off-by: Andy Zhou
---
AUTHORS.r
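A minimal sketch of the guard implied by that description (not the actual
patch; the helper below is hypothetical, though "ptcp:", "pssl:" and "punix:"
are the usual OVS passive connection methods): status updates are simply
skipped for passive listener targets instead of hitting an assertion.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: passive OVS connection methods use "p"-prefixed
 * names such as "ptcp:", "pssl:" and "punix:". */
static bool
target_is_passive(const char *target)
{
    static const char *prefixes[] = { "ptcp:", "pssl:", "punix:" };

    for (size_t i = 0; i < sizeof prefixes / sizeof prefixes[0]; i++) {
        if (!strncmp(target, prefixes[i], strlen(prefixes[i]))) {
            return true;
        }
    }
    return false;
}

int
main(void)
{
    const char *targets[] = { "tcp:192.168.1.1:6653", "ptcp:6653" };

    for (size_t i = 0; i < 2; i++) {
        /* Instead of asserting, a passive target simply skips the
         * active-connection status update. */
        printf("%s -> %s\n", targets[i],
               target_is_passive(targets[i])
               ? "passive listener, ignore status update"
               : "active connection, update status");
    }
    return 0;
}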
On 09/03/2017 10:19 PM, Roi Dayan wrote:
On 29/08/2017 07:30, Roi Dayan wrote:
Hi,
The first commit is a fix for parsing the set masked action,
and the second commit adds a test.
Before the fix, the addition of the tests fails with the following:
# make check TESTSUITEFLAGS=436
# ./tests/ovste
In most situations, we don't expect that a flow we've successfully
dumped, which we intend to delete, cannot be deleted. However, to make
this code more resilient and to ensure that ukeys *will* transition in all
cases (including an error at this stage), grab the lock and transition
this ukey forward t
On 09/06/2017 02:53 AM, Weglicki, MichalX wrote:
Hey Greg,
Do you have any schedule for checking this patch?
Thank you in advance!
Br,
Michal.
Michal,
The good news is that I have my ntopng/nprobe IPFIX collector properly
configured now and can see
flows, hosts, etc. when I enable IPFIX.
"Bodireddy, Bhanuprakash" writes:
> Hi Aaron,
>
>>Quick comment before I do an in-depth review.
>>
>>One thing that is missing in this series is some form of documentation added
>>to explain why this feature should exist (for instance, why can't the standard
>>posix process accounting information
On Wed, Sep 6, 2017 at 10:18 AM, Mark Michelson wrote:
> On Wed, Sep 6, 2017 at 8:51 AM Jakub Sitnicki wrote:
>
>> ovn-trace example refers to a non-existent output port. Correct it.
>>
>> Fixes: 46a2dc58781a ("Document OVN support in ovs-sandbox.")
>> Signed-off-by: Jakub Sitnicki
>>
>
> Acked-
On Tue, Sep 5, 2017 at 3:45 PM, Lance Richardson wrote:
>> From: "Russell Bryant"
>> To: d...@openvswitch.org
>> Cc: lrich...@redhat.com, "Russell Bryant"
>> Sent: Friday, September 1, 2017 9:14:10 PM
>> Subject: [PATCH 2/2] ovn: Support chassis hostname in requested-chassis.
>>
>> Previously, O
Bhanuprakash Bodireddy writes:
> This commit registers the packet-processing PMD cores with the keepalive
> framework. Only PMDs that have rxqs mapped will be registered and
> actively monitored by the KA framework.
>
> This commit spawns a keepalive thread that will dispatch heartbeats to
> PMD cores. The
ovn-nbctl will now accept IPv6 addresses for load balancer VIPs and
destination addresses.
In addition, the ovn-nbctl lb-list, lr-lb-list, and ls-lb-list commands have
been modified to fit IPv6 addresses on screen.
Signed-off-by: Mark Michelson
---
ovn/utilities/ovn-nbctl.c | 175 +++
With this commit, ovn-northd will now accept both IPv4 and IPv6 addresses
in the northbound database for load balancer VIPs or destination
addresses. For IPv4, the behavior remains the same. For IPv6, the
following logical flows will be added to the southbound database:
* An ND_NA response for inc
The ct_lb action previously assumed that any address arguments were
IPv4. This patch expands the parsing, formatting, and encoding of ct_lb
to be amenable to IPv6 addresses as well.
Signed-off-by: Mark Michelson
---
include/ovn/actions.h | 4 ++-
ovn/lib/actions.c | 99 +
OVS has functions for parsing IPv4 addresses, parsing IPv4 addresses
with a port, and parsing IPv6 addresses. What is lacking, though, is a
function that can take an IPv4 or IPv6 address, with or without a port.
This commit adds ipv46_parse(), which breaks the given input string into
its component p
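A rough idea of what such a combined parser looks like, as a standalone
sketch built on inet_pton() (this is not the ipv46_parse() added by the
patch; the struct and function names here are invented):

#include <arpa/inet.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ip_port {
    bool is_ipv6;
    struct in_addr v4;
    struct in6_addr v6;
    uint16_t port;                  /* 0 if no port was given. */
};

/* Accepts "a.b.c.d", "a.b.c.d:port", "v6", or "[v6]:port". */
static bool
parse_ip_maybe_port(const char *s, struct ip_port *out)
{
    char buf[INET6_ADDRSTRLEN + 1];
    const char *addr = s;
    const char *port = NULL;

    memset(out, 0, sizeof *out);
    if (s[0] == '[') {                          /* "[v6]" or "[v6]:port". */
        const char *close = strchr(s, ']');

        if (!close || (close[1] != ':' && close[1] != '\0')) {
            return false;
        }
        snprintf(buf, sizeof buf, "%.*s", (int) (close - s - 1), s + 1);
        addr = buf;
        port = close[1] == ':' ? close + 2 : NULL;
    } else {
        const char *colon = strchr(s, ':');

        if (colon && !strchr(colon + 1, ':')) { /* One ':' => "v4:port". */
            snprintf(buf, sizeof buf, "%.*s", (int) (colon - s), s);
            addr = buf;
            port = colon + 1;
        }
    }

    if (inet_pton(AF_INET, addr, &out->v4) == 1) {
        out->is_ipv6 = false;
    } else if (inet_pton(AF_INET6, addr, &out->v6) == 1) {
        out->is_ipv6 = true;
    } else {
        return false;
    }
    if (port) {
        out->port = (uint16_t) atoi(port);
    }
    return true;
}

int
main(void)
{
    const char *inputs[] = { "10.0.0.1", "10.0.0.1:80", "fd00::1", "[fd00::1]:80" };
    struct ip_port ip;

    for (size_t i = 0; i < 4; i++) {
        if (parse_ip_maybe_port(inputs[i], &ip)) {
            printf("%-16s ipv6=%d port=%u\n", inputs[i], ip.is_ipv6,
                   (unsigned) ip.port);
        }
    }
    return 0;
}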
This patchset adds the necessary items in order to support IPv6 load
balancers in OVN. No syntax has changed in ovn-nbctl or in the
northbound database to support this. Appropriate tests have been
added to the testsuite as well.
Mark Michelson (4):
Add general-purpose IP/port parsing function.
Thanks for the quick review.
I applied it on master.
Alin.
> -Original Message-
> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-
> boun...@openvswitch.org] On Behalf Of Ben Pfaff
> Sent: Wednesday, September 6, 2017 7:06 PM
> To: Alin Gabriel Serdean
> Cc: d...@openvswitch.org
>
On 9/6/17, 8:57 AM, "ovs-dev-boun...@openvswitch.org on behalf of Ben Pfaff"
wrote:
Some of the implementations of atomic_store_relaxed() evaluate their
first argument more than once, so arguments with side effects cause
strange behavior. This fixes a problem observed on 64-bit Win
Hi Bhanu,
Bhanuprakash Bodireddy writes:
> This commit introduces the initial keepalive support by adding a
> 'keepalive' module along with helper and initialization functions
> that will be invoked by later commits.
>
> This commit adds a new ovsdb column "keepalive" that shows the status
> of the da
Thanks!
I applied this on master.
Alin.
> -Original Message-
> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-
> boun...@openvswitch.org] On Behalf Of Ben Pfaff
> Sent: Wednesday, September 6, 2017 7:06 PM
> To: Alin Gabriel Serdean
> Cc: d...@openvswitch.org
> Subject: Re: [ovs-
Hi Aaron,
>Quick comment before I do an in-depth review.
>
>One thing that is missing in this series is some form of documentation added
>to explain why this feature should exist (for instance, why can't the standard
>posix process accounting information suffice?) and what the high-level
>concepts
Thanks a lot for fixing this!
Acked-by: Alin Serdean
> -Original Message-
> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-
> boun...@openvswitch.org] On Behalf Of Ben Pfaff
> Sent: Wednesday, September 6, 2017 6:58 PM
> To: d...@openvswitch.org
> Cc: Ben Pfaff ; Alin Serdean
> S
I guess the Conntrack code is intentionally styled similarly to the userspace
code with the idea of sharing it. I am not sure if this is realistic or even
possible now, given that the entire Conntrack code is rewritten for the
Windows data path. If there is not going to be any code sharing, it d
On 09/06/2017 02:53 AM, Weglicki, MichalX wrote:
Hey Greg,
Do you have any schedule for checking this patch?
Thank you in advance!
Br,
Michal.
Hi Michal,
I'll work on it this week and see if I can resolve the connection problem to
the collector.
Thanks,
- Greg
-Original Message--
Hi Bhanu,
Bhanuprakash Bodireddy writes:
> The keepalive feature is aimed at achieving Fastpath Service Assurance
> in OVS-DPDK deployments. It adds support for monitoring the packet
> processing cores (PMD thread cores) by dispatching heartbeats at regular
> intervals. In case of heartbeat misses, add
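A minimal, self-contained sketch of that heartbeat scheme (not the OVS
keepalive module; the thread names, counters and intervals below are
invented): each worker bumps a per-thread counter, and a monitor thread
flags any worker whose counter has not advanced within the check interval.

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define N_WORKERS 2
#define CHECK_INTERVAL_US 500000    /* Monitor wakes up twice a second. */

static atomic_uint_fast64_t heartbeats[N_WORKERS];

/* Stand-in for a PMD loop: process packets, then dispatch a heartbeat. */
static void *
worker_main(void *arg)
{
    int id = *(int *) arg;

    for (;;) {
        /* ... poll rx queues and process a burst of packets ... */
        atomic_fetch_add(&heartbeats[id], 1);
        usleep(1000);
    }
    return NULL;
}

/* Stand-in for the keepalive thread: compare counters between intervals. */
static void *
monitor_main(void *arg)
{
    uint64_t last[N_WORKERS] = { 0 };

    (void) arg;
    for (;;) {
        usleep(CHECK_INTERVAL_US);
        for (int i = 0; i < N_WORKERS; i++) {
            uint64_t now = atomic_load(&heartbeats[i]);

            if (now == last[i]) {
                fprintf(stderr, "worker %d missed its heartbeat\n", i);
            }
            last[i] = now;
        }
    }
    return NULL;
}

int
main(void)
{
    pthread_t tids[N_WORKERS + 1];
    int ids[N_WORKERS] = { 0, 1 };

    for (int i = 0; i < N_WORKERS; i++) {
        pthread_create(&tids[i], NULL, worker_main, &ids[i]);
    }
    pthread_create(&tids[N_WORKERS], NULL, monitor_main, NULL);
    pthread_join(tids[N_WORKERS], NULL);   /* Runs forever in this sketch. */
    return 0;
}

In the actual series the status is reported through an ovsdb column rather
than stderr, as the other patches quoted in this thread describe.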
On Fri, Aug 25, 2017 at 10:47:12PM +0300, Alin Gabriel Serdean wrote:
> Just a small nit to see current build status of appveyor.
> Also add a link so one could easily reach the history of the builds.
>
> Signed-off-by: Alin Gabriel Serdean
> ---
> README.rst | 2 ++
> 1 file changed, 2 insertio
On Wed, Sep 06, 2017 at 01:39:45AM +0300, Alin Gabriel Serdean wrote:
> This patch enables atomics on x64 builds.
>
> Reuse the atomics defined for x86 and add atomics for 64 bit reads/writes.
>
> Before this patch the cmap test gives us:
> $ ./tests/ovstest.exe test-cmap benchmark 1000 3 1
>
Some of the implementations of atomic_store_relaxed() evaluate their
first argument more than once, so arguments with side effects cause
strange behavior. This fixes a problem observed on 64-bit Windows.
Reported-by: Alin Serdean
Signed-off-by: Ben Pfaff
---
lib/dpif-netdev.c | 4 ++--
1 file
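Note that the diffstat touches lib/dpif-netdev.c, i.e. the call sites. As a
standalone illustration of why a side-effecting argument to such a macro is
dangerous (the macro names below are invented; this is not the OVS atomics
code):

#include <stddef.h>
#include <stdio.h>

/* Buggy pattern: DST is expanded, and therefore evaluated, twice. */
#define STORE_TWICE_EVAL(DST, SRC) (*(DST) = (SRC), (void) *(DST))

/* Safer pattern: DST is captured once in a local (typeof is a GCC/Clang
 * extension, used here only to keep the sketch generic). */
#define STORE_ONCE_EVAL(DST, SRC)               \
    do {                                        \
        typeof(DST) dst__ = (DST);              \
        *dst__ = (SRC);                         \
    } while (0)

int
main(void)
{
    int slots[2] = { 0, 0 };
    int *p;

    p = slots;
    STORE_TWICE_EVAL(p++, 42);          /* "p++" runs twice here... */
    printf("buggy macro: p advanced by %td\n", p - slots);   /* prints 2 */

    p = slots;
    STORE_ONCE_EVAL(p++, 42);           /* ...but only once here. */
    printf("safer macro: p advanced by %td\n", p - slots);   /* prints 1 */
    return 0;
}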
Hi All,
Regarding the "RSS hash threshold method" for EMC load shedding, I hope to
have time to do an RFC in the next week or so to illustrate it and give a
better idea of what I mean.
Thanks,
Billy.
> -Original Message-
> From: ovs-dev-boun...@openvswitch.org [mailto:ovs-dev-
> boun...@openvswitch.
> -Original Message-
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Wednesday, September 6, 2017 3:02 PM
> To: Jan Scheurich ; O Mahony, Billy
> ; wangzh...@jd.com; Darrell Ball
> ; ovs-disc...@openvswitch.org; ovs-
> d...@openvswitch.org
> Subject: Re: [ovs-dev] OVS DPDK NUMA
> -Original Message-
> From: Kevin Traynor [mailto:ktray...@redhat.com]
> Sent: Wednesday, September 6, 2017 2:50 PM
> To: Jan Scheurich ; O Mahony, Billy
> ; wangzh...@jd.com; Darrell Ball
> ; ovs-disc...@openvswitch.org; ovs-
> d...@openvswitch.org
> Subject: Re: [ovs-dev] OVS DPDK NUMA
> From: "Mark Michelson"
> To: d...@openvswitch.org
> Sent: Wednesday, September 6, 2017 9:38:40 AM
> Subject: [ovs-dev] [PATCH v4] ovn: Check for known logical switch port types.
>
> OVN is lenient with the types of logical switch ports. Maybe too
> lenient. This patch attempts to solve this pro
On Wed, Sep 6, 2017 at 8:51 AM Jakub Sitnicki wrote:
> ovn-trace example refers to a non-existent output port. Correct it.
>
> Fixes: 46a2dc58781a ("Document OVN support in ovs-sandbox.")
> Signed-off-by: Jakub Sitnicki
>
Acked-By: Mark Michelson
> ---
> Documentation/tutorials/ovn-sandbox.
On 09/06/2017 02:43 PM, Jan Scheurich wrote:
>>
>> I think the mention of pinning was confusing me a little. Let me see if I
>> fully understand your use case: You don't 'want' to pin
>> anything but you are using it as a way to force the distribution of rxq from
>> a single nic across to PMDs o
ovn-trace example refers to a non-existent output port. Correct it.
Fixes: 46a2dc58781a ("Document OVN support in ovs-sandbox.")
Signed-off-by: Jakub Sitnicki
---
Documentation/tutorials/ovn-sandbox.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/tutorials/o
On 09/06/2017 02:33 PM, Jan Scheurich wrote:
> Hi Billy,
>
>> You are going to have to take the hit crossing the NUMA boundary at some
>> point if your NIC and VM are on different NUMAs.
>>
>> So are you saying that it is more expensive to cross the NUMA boundary from
>> the pmd to the VM that t
>
> I think the mention of pinning was confusing me a little. Let me see if I
> fully understand your use case: You don't 'want' to pin
> anything but you are using it as a way to force the distribution of rxq from
> a single nic across to PMDs on different NUMAs. As without
> pinning all rxqs
OVN is lenient with the types of logical switch ports. Maybe too
lenient. This patch attempts to solve this problem on two fronts:
1) In ovn-nbctl, if you attempt to set the port type to an unknown
type, the command will not end up setting the type.
2) In northd, when copying the port type from th
Hi Billy,
> You are going to have to take the hit crossing the NUMA boundary at some
> point if your NIC and VM are on different NUMAs.
>
> So are you saying that it is more expensive to cross the NUMA boundary from
> the pmd to the VM than to cross it from the NIC to the
> PMD?
Indeed, that i
Hi Wang,
I think the mention of pinning was confusing me a little. Let me see if I fully
understand your use case: you don't 'want' to pin anything, but you are using
it as a way to force the distribution of rxqs from a single NIC across PMDs
on different NUMAs. As without pinning all rxqs ar
DPDK uses a dp-packet pool for storing received packets. The pool is
reused by the rxq_recv functions of the DPDK netdevs. The datapath is
capable of modifying the packet_type property of packets. For instance,
when encapsulated L3 packets are received on a ptap GRE port.
In this case the packet_type property
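A toy illustration of why recycled buffers need their metadata reset in the
receive path (deliberately not the OVS/DPDK code; the struct and field names
are simplified stand-ins):

#include <stdint.h>
#include <stdio.h>

#define POOL_SIZE 4

enum packet_type { PT_ETH = 0, PT_IPV4 = 1 };

struct pkt {
    enum packet_type packet_type;   /* Metadata the datapath may rewrite. */
    uint32_t len;
};

static struct pkt pool[POOL_SIZE];  /* Recycled buffers, never zeroed. */

/* Stand-in for an rxq_recv path: reuse a pooled buffer for a new frame and
 * reset its metadata so a value left by the previous user cannot leak. */
static struct pkt *
recv_one(int slot, uint32_t len)
{
    struct pkt *p = &pool[slot % POOL_SIZE];

    p->len = len;
    p->packet_type = PT_ETH;        /* Without this reset, stale PT_IPV4
                                     * from a ptap-style packet would stick. */
    return p;
}

int
main(void)
{
    struct pkt *p = recv_one(0, 64);

    p->packet_type = PT_IPV4;       /* Datapath rewrites the metadata. */
    p = recv_one(0, 128);           /* Same pooled buffer comes back... */
    printf("packet_type=%d\n", p->packet_type);  /* ...reset to 0 (PT_ETH). */
    return 0;
}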
Hi Billy,
See my reply in line.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy [mailto:billy.o.mah...@intel.com]
Sent: Wednesday, September 06, 2017 7:26 PM
To: 王志克; Darrell Ball; ovs-disc...@openvswitch.org; ovs-dev@openvswitch.org;
Kevin Traynor
Subject: RE: [ovs-dev] OVS DP
Thanks for your testing; I reproduced it on my own machine.
I did the testing:
With "ethtool -K eth0 tx on", about 10% of the runs get roughly 8.5 Gb/s
throughput and 90% get 3.5 Gb/s.
With "ethtool -K eth0 tx off", about 10% of the runs get 3.5 Gb/s and 90%
get 8.5 Gb/s.
And this weird t
Hi Wang,
You are going to have to take the hit crossing the NUMA boundary at some point
if your NIC and VM are on different NUMAs.
So are you saying that it is more expensive to cross the NUMA boundary from the
pmd to the VM than to cross it from the NIC to the PMD?
If so then in that case you
On Wed, Sep 06, 2017 at 04:03:29PM +0800, Hannes Frederic Sowa wrote:
> "Yang, Yi" writes:
> >>
> >> > If you check GENEVE implementation, tun_metadata* can be set or matched
> >> > as any other match field.
> >>
> >> Yes, I wrote that in my previous mail. I wonder why NSH context metadata
> >>
Hi Billy,
It depends on the destination of the traffic.
I observed that if the traffic destination is on the other NUMA socket, the "avg
processing cycles per packet" would increase by 60% compared with traffic to the
same NUMA socket.
Br,
Wang Zhike
-Original Message-
From: O Mahony, Billy [mailto:bil
Hi Wang,
If you create several PMDs on the NUMA of the physical port does that have the
same performance characteristic?
/Billy
> -Original Message-
> From: 王志克 [mailto:wangzh...@jd.com]
> Sent: Wednesday, September 6, 2017 10:20 AM
> To: O Mahony, Billy ; Darrell Ball
> ; ovs-disc..
Hi Kevin,
Consider the scenario:
One host with 1 physical NIC, and the NIC is located on NUMA socket0. There are
lots of VMs on this host.
I can see several methods to improve the performance:
1) Try to make sure the VM memory used for networking is always located on
socket0. E.g., if a VM uses 4G m
Jan Scheurich writes:
>> > There is no way we can re-use the existing TLV tunnel metadata
>> > infrastructure in OVS for matching and setting NSH MD2 TLV headers. We
>> > will need to introduce a new (perhaps similar) scheme for modelling
>> > generic TLV match registers in OVS that are assigned
> > There is no way we can re-use the existing TLV tunnel metadata
> > infrastructure in OVS for matching and setting NSH MD2 TLV headers. We
> > will need to introduce a new (perhaps similar) scheme for modelling
> > generic TLV match registers in OVS that are assigned to protocol TLVs
> > by the
Hey Greg,
Do you have any schedule for checking this patch?
Thank you in advance!
Br,
Michal.
> -Original Message-
> From: Greg Rose [mailto:gvrose8...@gmail.com]
> Sent: Tuesday, August 29, 2017 5:15 PM
> To: Weglicki, MichalX
> Cc: d...@openvswitch.org; Szczerbik, PrzemyslawX
>
Jan Scheurich writes:
>> >> Yes, I wrote that in my previous mail. I wonder why NSH context metadata
>> >> is not in tun_metadata as well?
>> >
>> > tun_metadata is tunnel metadata, GENEVE needs tunnel port, but NSH is
>> > not so, NSH can't directly use tun_metadata, for MD type 2, we need to a
On 09/06/2017 08:03 AM, 王志克 wrote:
> Hi Darrell,
>
> pmd-rxq-affinity has the limitation below (so an isolated PMD cannot be used
> for others, which is not what I expect; lots of VMs come and go on the fly,
> and manual assignment is not feasible):
> >>After that PMD threads on cores whe
Hi Billy,
Yes, I want to achieve better performance.
The commit "dpif-netdev: Assign ports to pmds on non-local numa node" can NOT
meet my needs.
I do have a PMD on socket 0 to poll the physical NIC, which is also on socket 0.
However, this is not enough since I also have other PMDs on socket 1. I
Hi Wang,
A change was committed to the head of master on 2017-08-02, "dpif-netdev: Assign
ports to pmds on non-local numa node", which, if I understand your request
correctly, will do what you require.
However, it is not clear to me why you are pinning rxqs to PMDs in the first
instance. Currently if you
> >> Yes, I wrote that in my previous mail. I wonder why NSH context metadata
> >> is not in tun_metadata as well?
> >
> > tun_metadata is tunnel metadata, GENEVE needs tunnel port, but NSH is
> > not so, NSH can't directly use tun_metadata, for MD type 2, we need to a
> > lot of rework on tun_meta
"Yang, Yi" writes:
> On Tue, Sep 05, 2017 at 09:12:09PM +0800, Hannes Frederic Sowa wrote:
>> "Yang, Yi" writes:
>>
>> > We can change this later if we really find a better way to handle this
>> > because it isn't defined in include/uapi/linux/openvswitch.h, so I still
>> > have backdoor to do
I had applied the patch to dpdk_merge here
https://github.com/darball/ovs/commits/dpdk_merge
O
Adding Billy and Kevin
On 9/6/17, 12:22 AM, "Darrell Ball" wrote:
On 9/6/17, 12:03 AM, "王志克" wrote:
Hi Darrell,
pmd-rxq-affinity has below limitation: (so isolated pmd can not be used
for others, which is not my expectation. Lots of VMs come and go
On 9/6/17, 12:03 AM, "王志克" wrote:
Hi Darrell,
pmd-rxq-affinity has below limitation: (so isolated pmd can not be used for
others, which is not my expectation. Lots of VMs come and go on the fly, and
manully assignment is not feasible.)
>>After that PMD threads on co
Hi Darrell,
pmd-rxq-affinity has the limitation below (so an isolated PMD cannot be used
for others, which is not what I expect; lots of VMs come and go on the fly,
and manual assignment is not feasible):
>>After that PMD threads on cores where RX queues was pinned will
become isolated.