Thanks for the kind reply!!
Have a nice day!!
2017-06-09 14:37 GMT+09:00 Ben Pfaff :
> With microflow cache only, there's one cache miss per microflow.
> With an effective megaflow cache, the miss rate is much lower.
>
> The paper gives examples for your other question.
>
> On Fri, Jun 09, 201
With microflow cache only, there's one cache miss per microflow.
With an effective megaflow cache, the miss rate is much lower.
The paper gives examples for your other question.
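For illustration, below is a minimal, self-contained sketch of what that means.
It is not OVS code; the cache layout, hash function, mask, and traffic pattern
are all invented for the example. A microflow cache keys on the exact header
fields, so every new connection misses at least once, while a megaflow cache
keys on a masked subset of fields (here the hypothetical rule only cares about
the destination IP), so many connections can share one entry.

/*
 * Toy illustration (not OVS code): count cache misses for a microflow
 * cache (exact-match key) versus a megaflow cache (masked key) when
 * many short-lived connections differ only in their TCP source port
 * and the forwarding decision depends only on the destination IP.
 */
#include <stdio.h>

#define N_CONNS     10000
#define CACHE_SLOTS 65536

struct key {
    unsigned src_ip, dst_ip;
    unsigned short src_port, dst_port;
};

static unsigned hash_key(const struct key *k)
{
    /* Simplistic hash; good enough for the illustration. */
    return (k->src_ip * 31u + k->dst_ip * 17u
            + k->src_port * 7u + k->dst_port) % CACHE_SLOTS;
}

static int key_equal(const struct key *a, const struct key *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip
        && a->src_port == b->src_port && a->dst_port == b->dst_port;
}

int main(void)
{
    static struct key micro_cache[CACHE_SLOTS];  /* exact-match entries */
    static struct key mega_cache[CACHE_SLOTS];   /* masked entries */
    /* Megaflow mask: only dst_ip matters for this hypothetical rule. */
    const struct key mask = { 0, 0xffffffffu, 0, 0 };
    int micro_misses = 0, mega_misses = 0;

    for (int i = 0; i < N_CONNS; i++) {
        struct key pkt = { 0x0a000001u, 0x0a000002u,
                           (unsigned short) (1024 + i), 80 };

        /* Microflow lookup: exact key, so each new connection misses once. */
        unsigned h = hash_key(&pkt);
        if (!key_equal(&micro_cache[h], &pkt)) {
            micro_misses++;              /* an upcall would happen here */
            micro_cache[h] = pkt;
        }

        /* Megaflow lookup: apply the mask first, then compare. */
        struct key masked = { pkt.src_ip & mask.src_ip,
                              pkt.dst_ip & mask.dst_ip,
                              pkt.src_port & mask.src_port,
                              pkt.dst_port & mask.dst_port };
        h = hash_key(&masked);
        if (!key_equal(&mega_cache[h], &masked)) {
            mega_misses++;
            mega_cache[h] = masked;
        }
    }

    printf("microflow cache misses: %d (one per connection)\n", micro_misses);
    printf("megaflow cache misses:  %d (one per wildcarded entry)\n", mega_misses);
    return 0;
}

Compiled as plain C, this prints 10000 microflow misses versus 1 megaflow miss
for 10000 connections that differ only in their source port.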
On Fri, Jun 09, 2017 at 01:47:14PM +0900, Heung Sik Choi wrote:
> Thanks for the reply.
>
> Regarding your second question,
Thanks for the reply.
Regarding your second question, which flow rules are you asking about?
I just want to know, when OVS had only an in-kernel microflow cache,
how many in-kernel cache misses there were, and also how much improvement
there is when using megaflows.
And I have one last question. short
When OVS had only an in-kernel microflow cache, there were at least two
reasons for performance problems with many short-lived flows. The first
was the cost of sending packets to userspace. The second was the cost
of translating the packets through the entire OpenFlow pipeline. The
megaflow cach
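A rough way to see why those two per-miss costs matter for short-lived flows
is to amortize them over the packets of a flow. The sketch below uses made-up
costs (they are not OVS measurements) and a simplified one-miss-per-flow
model, purely to show the shape of the effect.

/*
 * Back-of-envelope illustration of why short-lived flows hurt a
 * microflow-only cache: every flow pays one expensive miss (upcall to
 * userspace plus full OpenFlow pipeline translation), and a short flow
 * has few cheap hits to amortize that miss over.  The costs are invented.
 */
#include <stdio.h>

int main(void)
{
    const double hit_cost_us  = 0.5;   /* assumed: in-kernel cache hit        */
    const double miss_cost_us = 50.0;  /* assumed: upcall + full translation  */
    const int flow_lengths[] = { 1, 2, 5, 10, 100, 1000, 10000 };

    printf("%10s %24s\n", "pkts/flow", "avg cost per packet (us)");
    for (size_t i = 0; i < sizeof flow_lengths / sizeof flow_lengths[0]; i++) {
        int n = flow_lengths[i];
        /* The first packet of each flow misses; the rest hit the cache. */
        double avg = (miss_cost_us + (n - 1) * hit_cost_us) / n;
        printf("%10d %24.2f\n", n, avg);
    }
    return 0;
}

With one packet per flow, every packet pays the full miss cost; with thousands
of packets per flow, the average cost approaches the hit cost.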
What does "if only the microflow cache works" mean?
Sorry to confuse you. I'm not good at English.
In the paper, the authors say that at the start of the OVS implementation,
there was a microflow cache (EMC), but megaflows weren't implemented.
At that time, they say there was a problem caused by short
On 06/07/2017 02:29 PM, Simone Aglianò wrote:
Yes, you have caught my question.
http://lmgtfy.com/?q=cisco+catalyst+openflow
--
Ian Pilcher arequip...@gmail.com
"I grew up be
On 06/06/2017 05:18 AM, Felix Konstantin Maurer wrote:
I accidentally only responded to you, so here again is what I wrote.
Thanks for the quick response! I tested it now, but so far I see no
difference. All IPFIX packets have the same flowStartDelta and flowEndDelta.
Furthermore, about the first l
On 06/08/2017 11:44 AM, Mark McConnaughay wrote:
Thanks, sorry, needed to reply all. Yes, that would work, but unfortunately I am not able to deploy OvS into the VMs. One of
the VMs is provided by a third party and expects VxLAN, and the other VM runs a proprietary OS which
Thanks, sorry, needed to reply all. Yes, that would work, but
unfortunately I am not able to deploy OvS into the VMs. One of the VMs
is provided by a third party and expects VxLAN, and the other
VM runs a proprietary OS which works with qemu/KVM, with no hope of
getting OvS up
On 06/08/2017 06:49 AM, Mark McConnaughay wrote:
Thanks, Greg. I am working on an 'NFV' project which has a requirement to maintain the entire packet (L2 and up) on reception from
the NIC, forward it to an 'on-box' VM, and then forward it to either another on-box VM or a VM on another physical box.
>From: ovs-discuss-boun...@openvswitch.org
>[mailto:ovs-discuss-boun...@openvswitch.org] On
>Behalf Of ??
>Sent: Thursday, June 8, 2017 12:22 PM
>To: b...@openvswitch.org
>Subject: [ovs-discuss] dpdk17.05 missing the rte_virtio_net.h
>
>Hi,
> I'm using DPDK 17.05 and OVS 2.7 for OVS-DPDK, and fou
On Thu, Jun 08, 2017 at 04:33:54PM +0900, Heung Sik Choi wrote:
> 1. If only the microflow cache works and there are many short-lived
> connections, does it make many tuples in the table, and does it suffer serious
> performance degradation from the many tuples (very many tuples cause context
> switching to U
Hi,
I'm using DPDK 17.05 and OVS 2.7 for OVS-DPDK, and I found a mistake.
In the OVS lib/netdev-dpdk.c, the header file
"#include <rte_virtio_net.h>" is used; however, DPDK 17.05 has deleted that file.
So, is it a bug?
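Not an authoritative answer, but for reference, one common way to keep such
code building against both older and newer DPDK is a compile-time version
check; rte_version.h provides RTE_VERSION and RTE_VERSION_NUM for this. That
DPDK 17.05 replaced rte_virtio_net.h with rte_vhost.h, and that OVS would fix
the include this way, are assumptions in this sketch, not a description of
the actual OVS patch.

/*
 * Sketch only (not the actual OVS change): pick the vhost header based on
 * the DPDK version at compile time.  DPDK 17.05 removed rte_virtio_net.h;
 * its replacement being rte_vhost.h is an assumption stated above.
 */
#include <rte_version.h>

#if RTE_VERSION >= RTE_VERSION_NUM(17, 5, 0, 0)
#include <rte_vhost.h>
#else
#include <rte_virtio_net.h>
#endif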
Sugesh,
I've captured the received packets at br-phy (below) and it is VXLAN with a VLAN
in the outer Ethernet header - where did you see QinQ?
08:24:20.598878 02:d7:d1:26:84:e5 (oui Unknown) > 02:fb:9e:ce:f2:0d (oui
Unknown), ethertype 802.1Q (0x8100), length 96: vlan 2047, p 0, ethertype IPv4,
ip-172-3
Regards
_Sugesh
> -Original Message-
> From: Avi Cohen (A) [mailto:avi.co...@huawei.com]
> Sent: Wednesday, June 7, 2017 3:51 PM
> To: Chandran, Sugesh ; ovs-
> disc...@openvswitch.org
> Subject: RE: [ovs-discuss] OVS-DPDK - packets received at dpdk interface but
> not forwarded to VXLAN
Hi,
I have studied OVS from a performance perspective, and I found out that flow
masking and table lookup have quite a lot of overhead.
I read the paper '"The Design and Implementation of Open vSwitch," USENIX
NSDI 2015', and it explains why the masking process is used.
The paper said
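For anyone who wants to see the masking idea from the paper concretely, here
is a minimal sketch of a tuple-space-search style lookup: one hash table per
distinct mask, the packet key is ANDed with a mask and then hashed into that
mask's table, and a lookup may have to try every mask before it finds a match
or gives up. This is illustrative code only, not OVS's classifier; the
flow-key layout, sizes, and names are invented.

/*
 * Minimal sketch of tuple-space-search lookup as described in the NSDI'15
 * paper: one hash table per distinct mask, and a lookup may try every
 * mask.  Illustrative data layout only, not OVS's actual classifier.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define N_FIELDS    4            /* pretend a flow key is just 4 words */
#define TABLE_SLOTS 1024

struct flow_key { uint32_t f[N_FIELDS]; };

struct subtable {                /* one hash table per distinct mask */
    struct flow_key mask;
    struct flow_key keys[TABLE_SLOTS];
    bool used[TABLE_SLOTS];
};

static uint32_t key_hash(const struct flow_key *k)
{
    uint32_t h = 2166136261u;    /* FNV-style hash, for illustration */
    for (int i = 0; i < N_FIELDS; i++) {
        h = (h ^ k->f[i]) * 16777619u;
    }
    return h % TABLE_SLOTS;
}

static struct flow_key apply_mask(const struct flow_key *k,
                                  const struct flow_key *mask)
{
    struct flow_key out;
    for (int i = 0; i < N_FIELDS; i++) {
        out.f[i] = k->f[i] & mask->f[i];
    }
    return out;
}

static bool key_equal(const struct flow_key *a, const struct flow_key *b)
{
    for (int i = 0; i < N_FIELDS; i++) {
        if (a->f[i] != b->f[i]) {
            return false;
        }
    }
    return true;
}

/* Lookup tries each subtable (mask) in turn; the cost grows with the
 * number of distinct masks in use. */
static bool lookup(struct subtable *tables, int n_tables,
                   const struct flow_key *pkt, int *masks_tried)
{
    for (int t = 0; t < n_tables; t++) {
        struct flow_key masked = apply_mask(pkt, &tables[t].mask);
        uint32_t h = key_hash(&masked);
        ++*masks_tried;
        if (tables[t].used[h] && key_equal(&tables[t].keys[h], &masked)) {
            return true;
        }
    }
    return false;
}

int main(void)
{
    static struct subtable tables[2];
    /* Subtable 0 masks on field 0 only; subtable 1 on fields 0 and 1. */
    tables[0].mask = (struct flow_key) { .f = { 0xffffffff, 0, 0, 0 } };
    tables[1].mask = (struct flow_key) { .f = { 0xffffffff, 0xffffffff, 0, 0 } };

    /* Install one entry in subtable 1 matching f0=10, f1=20, rest wildcarded. */
    struct flow_key entry = { .f = { 10, 20, 0, 0 } };
    uint32_t h = key_hash(&entry);
    tables[1].keys[h] = entry;
    tables[1].used[h] = true;

    struct flow_key pkt = { .f = { 10, 20, 333, 444 } };
    int masks_tried = 0;
    bool hit = lookup(tables, 2, &pkt, &masks_tried);
    printf("hit=%d, masks tried=%d\n", hit, masks_tried);
    return 0;
}

The point of the sketch is that lookup cost grows with the number of distinct
masks in use, which is the masking and lookup overhead being discussed.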