Re: [ovs-dev] Conntrack performance drop in OVS 2.8

2018-07-04 Thread Nitin Katiyar
Hi, Any suggestions/pointers on the following? Regards, Nitin From: Nitin Katiyar Sent: Friday, June 29, 2018 3:00 PM To: ovs-dev@openvswitch.org Subject: Conntrack performance drop in OVS 2.8 Hi, The performance of OVS 2.8 (with DPDK 17.05.02) with conntrack configuration has dropped

Re: [ovs-dev] [iovisor-dev] [RFC PATCH 00/11] OVS eBPF datapath.

2018-07-04 Thread William Tu
On Tue, Jul 3, 2018 at 10:56 AM, Alexei Starovoitov wrote: > On Thu, Jun 28, 2018 at 07:19:35AM -0700, William Tu wrote: >> Hi Alexei, >> >> Thanks a lot for the feedback! >> >> On Wed, Jun 27, 2018 at 8:00 PM, Alexei Starovoitov >> wrote: >> > On Sat, Jun 23, 2018 at 05:16:32AM -0700, William

[ovs-dev] [PATCH v2 13/14] dpdk-tests: Accept other configs in OVS_DPDK_START

2018-07-04 Thread Tiago Lam
As it stands, OVS_DPDK_START() won't allow other configs to be set before starting the ovs-vswitchd daemon. This is a problem since some configs, such as "dpdk-multi-seg-mbufs=true" for enabling multi-segment mbufs, need to be set prior to starting OvS. To support other options,

[ovs-dev] [PATCH v2 14/14] dpdk-tests: End-to-end tests for multi-seg mbufs.

2018-07-04 Thread Tiago Lam
The following tests are added to the DPDK testsuite to add some coverage for multi-segment mbufs: - Check that multi-segment mbufs are disabled by default; - Check that providing `other_config:dpdk-multi-seg-mbufs=true` indeed enables multi-segment mbufs; - Using a DPDK port, send a random packet out and

[ovs-dev] [PATCH v2 12/14] dpdk-tests: Add uni-tests for multi-seg mbufs.

2018-07-04 Thread Tiago Lam
In order to create a minimal environment that allows the tests to get mbufs from an existing mempool, the following approach is taken: - EAL is initialised (by using the main dpdk_init()) and a (very) small mempool is instantiated (mimicking the logic in dpdk_mp_create()). This mempool

[ovs-dev] [PATCH v2 11/14] netdev-dpdk: support multi-segment jumbo frames

2018-07-04 Thread Tiago Lam
From: Mark Kavanagh Currently, jumbo frame support for OvS-DPDK is implemented by increasing the size of mbufs within a mempool, such that each mbuf within the pool is large enough to contain an entire jumbo frame of a user-defined size. Typically, for each user-defined MTU, 'requested_mtu', a

[ovs-dev] [PATCH v2 10/14] netdev-dpdk: copy large packet to multi-seg. mbufs

2018-07-04 Thread Tiago Lam
From: Mark Kavanagh Currently, packets are only copied to a single segment in the function dpdk_do_tx_copy(). This could be an issue in the case of jumbo frames, particularly when multi-segment mbufs are involved. This patch calculates the number of segments needed by a packet and copies the

[ovs-dev] [PATCH v2 06/14] dp-packet: Handle multi-seg mbufs in helper funcs.

2018-07-04 Thread Tiago Lam
Most helper functions in dp-packet assume that the data held by a dp_packet is contiguous, and perform operations such as pointer arithmetic under that assumption. However, with the introduction of multi-segment mbufs, where data is non-contiguous, such assumptions are no longer possible. Some

[ovs-dev] [PATCH v2 08/14] dp-packet: Handle multi-seg mbufs in resize__().

2018-07-04 Thread Tiago Lam
When enabled with DPDK, OvS relies on mbufs allocated by mempools to receive and output data on DPDK ports. Until now, each OvS dp_packet has had only one mbuf associated, which is allocated with the maximum possible size, taking the MTU into account. This approach, however, doesn't allow us to

[ovs-dev] [PATCH v2 09/14] dp-packet: copy data from multi-seg. DPDK mbuf

2018-07-04 Thread Tiago Lam
From: Michael Qiu When doing a packet clone, if the packet source is a DPDK driver, multi-segment mbufs must be considered, and each segment's data copied one by one. Also, a lot of the DPDK mbuf's info is missed during a copy, like packet type, ol_flags, etc. That information is very important for DPDK to do

[ovs-dev] [PATCH v2 07/14] dp-packet: Handle multi-seg mbufs in shift() func.

2018-07-04 Thread Tiago Lam
In its current implementation dp_packet_shift() is also unaware of multi-seg mbufs (which hold data in memory non-contiguously) and assumes that data exists contiguously in memory, memmove'ing data to perform the shift. To add support for multi-seg mbufs a new set of functions was introduced,

[ovs-dev] [PATCH v2 05/14] dp-packet: Fix data_len handling multi-seg mbufs.

2018-07-04 Thread Tiago Lam
When a dp_packet is from a DPDK source, and it contains multi-segment mbufs, the data_len is not equal to the packet size, pkt_len. Instead, the data_len of each mbuf in the chain should be considered while distributing the new (provided) size. To account for the above dp_packet_set_size() has

[ovs-dev] [PATCH v2 04/14] netdev-dpdk: Serialise non-pmds mbufs' alloc/free.

2018-07-04 Thread Tiago Lam
A new mutex, 'nonpmd_mp_mutex', has been introduced to serialise allocation and free operations by non-pmd threads on a given mempool. free_dpdk_buf() has been modified to make use of the introduced mutex. Signed-off-by: Tiago Lam --- lib/netdev-dpdk.c | 30 +++--- 1

[ovs-dev] [PATCH v2 03/14] dp-packet: Fix allocated size on DPDK init.

2018-07-04 Thread Tiago Lam
When enabled with DPDK, OvS deals with two types of packets: the ones coming from the mempool and the ones locally created by OvS - which are copied to mempool mbufs before output. In the latter, the space is allocated from the system, while in the former the mbufs are allocated from a mempool,

[ovs-dev] [PATCH v2 01/14] netdev-dpdk: fix mbuf sizing

2018-07-04 Thread Tiago Lam
From: Mark Kavanagh There are numerous factors that must be considered when calculating the size of an mbuf: - the data portion of the mbuf must be sized in accordance with Rx buffer alignment (typically 1024B). So, for example, in order to successfully receive and capture a 1500B packet,

[ovs-dev] [PATCH v2 02/14] dp-packet: Init specific mbuf fields.

2018-07-04 Thread Tiago Lam
From: Mark Kavanagh dp_packets are created using xmalloc(); in the case of OvS-DPDK, it's possible that the resultant mbuf portion of the dp_packet contains random data. For some mbuf fields, specifically those related to multi-segment mbufs and/or offload features, random values may cause

[ovs-dev] [PATCH v2 00/14] Support multi-segment mbufs

2018-07-04 Thread Tiago Lam
Overview: This patchset introduces support for multi-segment mbufs to OvS-DPDK. Multi-segment mbufs are typically used when the size of an mbuf is insufficient to contain the entirety of a packet's data. Instead, the data is split across numerous mbufs, each carrying a portion, or

Re: [ovs-dev] [PATCH v4 1/2] dpif-netdev: Add SMC cache after EMC cache

2018-07-04 Thread O Mahony, Billy
Hi, I've checked the latest patch and the performance results I get are similar to the ones given in the previous patches. Also enabling/disabling the DFC on the fly works as expected. The main query I have regards the slow sweep for SMC [[BO'M]] The slow sweep removes EMC entries that are no

Re: [ovs-dev] [PATCH v1 05/14] dp-packet: Fix data_len handling multi-seg mbufs.

2018-07-04 Thread Lam, Tiago
On 03/07/2018 11:22, Eelco Chaudron wrote: > > > On 28 Jun 2018, at 17:41, Tiago Lam wrote: > >> When a dp_packet is from a DPDK source, and it contains multi-segment >> mbufs, the data_len is not equal to the packet size, pkt_len. Instead, >> the data_len of each mbuf in the chain should be

Re: [ovs-dev] [PATCH v1 04/14] netdev-dpdk: Serialise non-pmds mbufs' alloc/free.

2018-07-04 Thread Lam, Tiago
On 03/07/2018 10:49, Eelco Chaudron wrote: > > > On 28 Jun 2018, at 17:41, Tiago Lam wrote: > >> A new mutex, 'nonpmd_mp_mutex', has been introduced to serialise >> allocation and free operations by non-pmd threads on a given mempool. >> >> free_dpdk_buf() has been modified to make use of the

[ovs-dev] [PATCH v2] dpif-netdev: Avoid reordering of packets in a batch with same megaflow

2018-07-04 Thread Vishal Deep Ajmera
OVS reads packets in batches from a given port and packets in the batch are subjected to potentially 3 levels of lookups to identify the datapath megaflow entry (or flow) associated with the packet. Each megaflow entry has a dedicated buffer in which packets that match the flow classification

Re: [ovs-dev] [PATCH] dpif-netdev: Avoid reordering of packets in a batch with same megaflow

2018-07-04 Thread Vishal Deep Ajmera
> > Hi Vishal, thanks for the patch, I've some minor comments below. I'm > still testing but so far it seems good. > > Unless there are objections (or any new issues come to light) I will put > this as part of the pull request this week and back port to the > appropriate branches. > > Thanks >

Re: [ovs-dev] [PATCH v3] ovn.at: Add stateful test for ACL on port groups.

2018-07-04 Thread Daniel Alvarez Sanchez
Thanks a lot Han! On Tue, Jun 26, 2018 at 9:41 AM Jakub Sitnicki wrote: > On Mon, 25 Jun 2018 10:03:02 -0700 > Han Zhou wrote: > > > A bug was reported on the feature of applying ACLs on port groups [1]. > > This bug was not detected by the original test case, because it didn't > > test the

[ovs-dev] [PATCH] db-ctl-base: Fix compilation warnings.

2018-07-04 Thread Ian Stokes
This commit fixes uninitialized variable warnings in functions cmd_create() and cmd_get() when compiling with gcc 6.3.1 and -Werror by initializing variables 'symbol' and 'new' to NULL. Cc: Alex Wang Fixes: 07ff77ccb82a ("db-ctl-base: Make common database command code into

[ovs-dev] [PATCH RFC net-next] openvswitch: Queue upcalls to userspace in per-port round-robin order

2018-07-04 Thread Matteo Croce
From: Stefano Brivio Open vSwitch sends to userspace all received packets that have no associated flow (thus doing an "upcall"). Then the userspace program creates a new flow and determines the actions to apply based on its configuration. When a single port generates a high rate of upcalls, it

Re: [ovs-dev] [PATCH] ofproto-macros: Ignore "Dropped # log messages" in check_logs.

2018-07-04 Thread Timothy Redaelli
On Tue, 3 Jul 2018 11:32:18 -0700 Ben Pfaff wrote: > check_logs ignores some log messages, but it wasn't smart enough to > ignore the messages that said that the ignored messages had been > rate-limited. This fixes the problem. > > It's OK to ignore all rate-limiting messages because they only

[ovs-dev] Query regarding connection tracking info sent to controller as part of PACKET_IN

2018-07-04 Thread Keshav Gupta
Hi, For the below pipeline I see different behavior depending on whether the bridge is of type netdev or system. I see in OVS 2.6.2 (and OVS 2.8.2) that when a netdev bridge is used, conntrack fields are sent to the controller, while in the case of a kernel bridge they are not sent. So I want to know what the correct behavior is.

[ovs-dev] [PATCH v2] ofproto-dpif-xlate: Fix packet_in reason for Table-miss rule

2018-07-04 Thread Keshav Gupta
Currently in OvS, if we hit a "Table-miss" rule (associated with a Controller action), then we send a PACKET_IN message to the controller with reason OFPR_NO_MATCH. A "Table-miss" rule is one whose priority is 0 and which is a catch-all rule. But if we hit the same "Table-miss" rule after executing a group entry we

Re: [ovs-dev] [PATCH v1] ofproto-dpif-xlate: Fix packet_in reason for Table-miss rule

2018-07-04 Thread Keshav Gupta
Thanks a lot Ben for promptly reviewing the patch. I agree with you that there are now no real users of in_group. I will delete this and post the next version of the patch. Thanks Keshav -Original Message- From: Ben Pfaff [mailto:b...@ovn.org] Sent: Wednesday, July 04, 2018 12:10 AM