Hi,
Any suggestions/pointers on the following?
Regards,
Nitin
From: Nitin Katiyar
Sent: Friday, June 29, 2018 3:00 PM
To: ovs-dev@openvswitch.org
Subject: Conntrack performance drop in OVS 2.8
Hi,
The performance of OVS 2.8 (with DPDK 17.05.02) with a conntrack configuration
has dropped
On Tue, Jul 3, 2018 at 10:56 AM, Alexei Starovoitov
wrote:
> On Thu, Jun 28, 2018 at 07:19:35AM -0700, William Tu wrote:
>> Hi Alexei,
>>
>> Thanks a lot for the feedback!
>>
>> On Wed, Jun 27, 2018 at 8:00 PM, Alexei Starovoitov
>> wrote:
>> > On Sat, Jun 23, 2018 at 05:16:32AM -0700, William
As it stands, OVS_DPDK_START() won't allow other configs to be set
before starting the ovs-vswitchd daemon. This is a problem since some
configs, such as "dpdk-multi-seg-mbufs=true" for enabling
multi-segment mbufs, need to be set before OvS is started.
To support other options,
The following tests are added to the DPDK testsuite to add some
coverage for the multi-segment mbufs:
- Check that multi-segment mbufs are disabled by default;
- Check that providing `other_config:dpdk-multi-seg-mbufs=true` indeed
enables multi-segment mbufs;
- Using a DPDK port, send a random packet out and
In order to create a minimal environment that allows the tests to get
mbufs from an existing mempool, the following approach is taken:
- EAL is initialised (by using the main dpdk_init()) and a (very) small
mempool is instantiated (mimicking the logic in dpdk_mp_create()).
This mempool
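A minimal sketch of that setup, with illustrative names and sizes (the
testsuite's actual values may differ):

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Create a deliberately tiny mempool, mimicking dpdk_mp_create();
 * "test_mp" and the mbuf count here are illustrative only. */
static struct rte_mempool *
test_mp_create(void)
{
    return rte_pktmbuf_pool_create("test_mp", 16 /* n_mbufs */,
                                   0 /* cache_size */, 0 /* priv_size */,
                                   RTE_MBUF_DEFAULT_BUF_SIZE,
                                   rte_socket_id());
}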
From: Mark Kavanagh
Currently, jumbo frame support for OvS-DPDK is implemented by
increasing the size of mbufs within a mempool, such that each mbuf
within the pool is large enough to contain an entire jumbo frame of
a user-defined size. Typically, for each user-defined MTU,
'requested_mtu', a
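A hedged sketch of that sizing scheme, assuming a 1024B Rx alignment and
DPDK 17.x macro names (not netdev-dpdk's exact code):

#include <stdint.h>
#include <rte_ether.h>  /* ETHER_HDR_LEN, ETHER_CRC_LEN (DPDK 17.x names) */
#include <rte_mbuf.h>   /* RTE_PKTMBUF_HEADROOM */

#define MBUF_ALIGN 1024u  /* assumed Rx buffer alignment */

/* One mbuf must hold the entire frame for 'requested_mtu'. */
static uint32_t
mtu_to_mbuf_size(uint32_t requested_mtu)
{
    uint32_t size = requested_mtu + ETHER_HDR_LEN + ETHER_CRC_LEN
                    + RTE_PKTMBUF_HEADROOM;

    /* Round up to the Rx buffer alignment. */
    return (size + MBUF_ALIGN - 1) & ~(MBUF_ALIGN - 1);
}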
From: Mark Kavanagh
Currently, packets are only copied to a single segment in the function
dpdk_do_tx_copy(). This could be an issue in the case of jumbo frames,
particularly when multi-segment mbufs are involved.
This patch calculates the number of segments needed by a packet and
copies the
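In outline, and with an assumed helper name, the segment count is a
ceiling division over the per-segment data room:

#include <stdint.h>

/* Number of mbuf segments needed to hold 'size' bytes when each
 * segment provides 'seg_room' bytes of data room. Illustrative only;
 * assumes seg_room > 0. */
static uint16_t
calc_nb_segs(uint32_t size, uint16_t seg_room)
{
    return (size + seg_room - 1) / seg_room;
}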
Most helper functions in dp-packet assume that the data held by a
dp_packet is contiguous, and perform operations such as pointer
arithmetic under that assumption. However, with the introduction of
multi-segment mbufs, where data is non-contiguous, such assumptions are
no longer possible. Some
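The kind of assumption in question looks like this (a hypothetical
helper, for illustration only):

#include <stdint.h>

/* Valid only while all 'size' bytes live in one buffer; with
 * multi-segment mbufs the tail may be in a different segment. */
static inline void *
packet_tail_contiguous(void *data, uint32_t size)
{
    return (char *) data + size;
}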
When enabled with DPDK, OvS relies on mbufs allocated from mempools to
receive and output data on DPDK ports. Until now, each OvS dp_packet has
had only one mbuf associated with it, which is allocated with the maximum
possible size, taking the MTU into account. This approach, however,
doesn't allow us to
From: Michael Qiu
When cloning a packet whose source is a DPDK driver, multi-segment
mbufs must be considered, and each segment's data must be copied one by
one.
Also, much of the DPDK mbuf's metadata is lost during a copy, such as the
packet type, ol_flags, etc. That information is very important for DPDK to do
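A hedged sketch of the metadata side of such a copy (the helper name is
assumed and the field list is illustrative, not exhaustive):

#include <rte_mbuf.h>

/* Preserve the offload metadata DPDK relies on when cloning. */
static void
mbuf_copy_metadata(struct rte_mbuf *dst, const struct rte_mbuf *src)
{
    dst->packet_type = src->packet_type;
    dst->ol_flags = src->ol_flags;     /* offload flags */
    dst->tx_offload = src->tx_offload; /* l2/l3/l4 header lengths */
    dst->vlan_tci = src->vlan_tci;
    dst->hash = src->hash;             /* RSS hash */
}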
In its current implementation, dp_packet_shift() is also unaware of
multi-seg mbufs (which hold data in memory non-contiguously) and assumes
that data exists contiguously in memory, memmove'ing data to perform the
shift.
To add support for multi-seg mbufs, a new set of functions was
introduced,
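For reference, the contiguous-only behaviour being replaced boils down
to a single memmove() (an illustrative helper, not the dp-packet code):

#include <stddef.h>
#include <string.h>

/* Shift 'size' bytes of packet data by 'delta' bytes; correct only
 * when the data is contiguous and the buffer has room for the move. */
static void
shift_contiguous(char *data, size_t size, int delta)
{
    memmove(data + delta, data, size);
}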
When a dp_packet is from a DPDK source, and it contains multi-segment
mbufs, the data_len is not equal to the packet size, pkt_len. Instead,
the data_len of each mbuf in the chain should be considered while
distributing the new (provided) size.
To account for the above, dp_packet_set_size() has
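A sketch of the idea, with an assumed helper name (error handling and
freeing of leftover segments omitted):

#include <rte_mbuf.h>

/* Distribute a new packet size across the mbuf chain, capping each
 * segment's data_len at the room that segment actually provides. */
static void
mbuf_chain_set_size(struct rte_mbuf *head, uint32_t size)
{
    struct rte_mbuf *seg;

    head->pkt_len = size;
    for (seg = head; seg != NULL; seg = seg->next) {
        uint16_t room = seg->data_len + rte_pktmbuf_tailroom(seg);

        seg->data_len = size < room ? size : room;
        size -= seg->data_len;
    }
}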
A new mutex, 'nonpmd_mp_mutex', has been introduced to serialise
allocation and free operations by non-pmd threads on a given mempool.
free_dpdk_buf() has been modified to make use of the introduced mutex.
Signed-off-by: Tiago Lam
---
lib/netdev-dpdk.c | 30 +++---
1
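A sketch of the serialisation (the mutex and function names follow the
patch, but the body is illustrative; the real patch takes the lock only
on non-pmd threads):

#include <rte_mbuf.h>
#include "dp-packet.h"
#include "ovs-thread.h"

static struct ovs_mutex nonpmd_mp_mutex = OVS_MUTEX_INITIALIZER;

/* Serialise mempool free operations from non-pmd threads. */
void
free_dpdk_buf(struct dp_packet *p)
{
    struct rte_mbuf *pkt = (struct rte_mbuf *) p;

    ovs_mutex_lock(&nonpmd_mp_mutex);
    rte_pktmbuf_free(pkt);
    ovs_mutex_unlock(&nonpmd_mp_mutex);
}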
When enabled with DPDK, OvS deals with two types of packets: the ones
coming from the mempool and the ones locally created by OvS, which are
copied to mempool mbufs before output. In the latter, the space is
allocated from the system, while in the former the mbufs are allocated
from a mempool,
From: Mark Kavanagh
There are numerous factors that must be considered when calculating
the size of an mbuf:
- the data portion of the mbuf must be sized in accordance with Rx
buffer alignment (typically 1024B). So, for example, in order to
successfully receive and capture a 1500B packet,
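For instance, with the typical 1024B alignment above, the data room for
a 1500B frame rounds up to 2048B (2 x 1024B).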
From: Mark Kavanagh
dp_packets are created using xmalloc(); in the case of OvS-DPDK, it's
possible that the resultant mbuf portion of the dp_packet contains
random data. For some mbuf fields, specifically those related to
multi-segment mbufs and/or offload features, random values may cause
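A minimal sketch of the kind of initialisation implied, with an assumed
helper name:

#include <rte_mbuf.h>

/* Reset the mbuf fields that would otherwise hold whatever random
 * bytes xmalloc() left behind. */
static void
dp_packet_mbuf_init(struct rte_mbuf *mbuf)
{
    mbuf->ol_flags = 0;   /* no stale offload flags */
    mbuf->nb_segs = 1;    /* single segment until chained */
    mbuf->next = NULL;
    mbuf->data_len = 0;
    mbuf->pkt_len = 0;
}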
Overview
This patchset introduces support for multi-segment mbufs to OvS-DPDK.
Multi-segment mbufs are typically used when the size of an mbuf is
insufficient to contain the entirety of a packet's data. Instead, the
data is split across numerous mbufs, each carrying a portion, or
Hi,
I've checked the latest patch and the performance results I get are similar to
the ones given in the previous patches. Also, enabling/disabling the DFC on the
fly works as expected.
The main query I have regards the slow sweep for SMC
[[BO'M]] The slow sweep removes EMC entries that are no
On 03/07/2018 11:22, Eelco Chaudron wrote:
>
>
> On 28 Jun 2018, at 17:41, Tiago Lam wrote:
>
>> When a dp_packet is from a DPDK source, and it contains multi-segment
>> mbufs, the data_len is not equal to the packet size, pkt_len. Instead,
>> the data_len of each mbuf in the chain should be
On 03/07/2018 10:49, Eelco Chaudron wrote:
>
>
> On 28 Jun 2018, at 17:41, Tiago Lam wrote:
>
>> A new mutex, 'nonpmd_mp_mutex', has been introduced to serialise
>> allocation and free operations by non-pmd threads on a given mempool.
>>
>> free_dpdk_buf() has been modified to make use of the
OVS reads packets in batches from a given port, and packets in a
batch are subjected to up to three levels of lookups to identify
the datapath megaflow entry (or flow) associated with each packet.
Each megaflow entry has a dedicated buffer in which packets that match
the flow classification
>
> Hi Vishal, thanks for the patch, I've some minor comments below. I'm
> still testing but so far it seems good.
>
> Unless there are objections (or any new issues come to light) I will put
> this as part of the pull request this week and back port to the
> appropriate branches.
>
> Thanks
>
Thanks a lot Han!
On Tue, Jun 26, 2018 at 9:41 AM Jakub Sitnicki wrote:
> On Mon, 25 Jun 2018 10:03:02 -0700
> Han Zhou wrote:
>
> > A bug was reported on the feature of applying ACLs on port groups [1].
> > This bug was not detected by the original test case, because it didn't
> > test the
This commit fixes uninitialized variable warnings in functions
cmd_create() and cmd_get() when compiling with gcc 6.3.1 and -Werror
by initializing variables 'symbol' and 'new' to NULL.
Cc: Alex Wang
Fixes: 07ff77ccb82a ("db-ctl-base: Make common database command code
into
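The warning pattern being fixed, in miniature (a standalone
illustration, not db-ctl-base's code):

#include <stddef.h>
#include <stdio.h>

int
main(int argc, char *argv[])
{
    char *symbol = NULL;  /* the fix: initialise to NULL */

    if (argc > 1) {
        symbol = argv[1]; /* gcc cannot prove this branch is taken */
    }
    printf("%s\n", symbol ? symbol : "(none)");
    return 0;
}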
From: Stefano Brivio
Open vSwitch sends to userspace all received packets that have
no associated flow (thus doing an "upcall"). Then the userspace
program creates a new flow and determines the actions to apply
based on its configuration.
When a single port generates a high rate of upcalls, it
On Tue, 3 Jul 2018 11:32:18 -0700
Ben Pfaff wrote:
> check_logs ignores some log messages, but it wasn't smart enough to
> ignore the messages that said that the ignored messages had been
> rate-limited. This fixes the problem.
>
> It's OK to ignore all rate-limiting messages because they only
Hi
For the pipeline below I see different behavior depending on whether the bridge
is of type netdev or system. I see in OVS 2.6.2 (and OVS 2.8.2) that when a
netdev bridge is used, conntrack fields are sent to the controller, while in the
case of a kernel bridge they are not. So I want to know what the correct behavior is.
Currently in OvS, if we hit a "table-miss" rule (associated with a controller
action), then we send a PACKET_IN message to the controller with the reason
OFPR_NO_MATCH.
A "table-miss" rule is one whose priority is 0 and which acts as a catch-all rule.
But if we hit the same "table-miss" rule after executing a group entry, we
Thanks a lot, Ben, for promptly reviewing the patch.
I agree with you that there are now no real users of in_group. I will delete
this and post the next version of the patch.
Thanks
Keshav
-----Original Message-----
From: Ben Pfaff [mailto:b...@ovn.org]
Sent: Wednesday, July 04, 2018 12:10 AM