Re: [ovs-discuss] max mega flow 64k per pmd or per dpcls?

2017-06-29 Thread Hui Xiang
I guess the answer is that a typical LLC nowadays is 2.5 MB per core, so there
are 64k flows per thread.

On Fri, Jun 23, 2017 at 11:15 AM, Hui Xiang  wrote:

> Thanks Darrell,
>
> More questions:
> Why not allocate 64k for each dpcls? Does the 64k just fit in the L3 cache,
> or somewhere else? How was such an exact number calculated? If more ports
> are added for polling, can I increase the 64k size to something bigger to
> avoid contention? Thanks.
>
> Hui.
>
>
>


Re: [ovs-discuss] max mega flow 64k per pmd or per dpcls?

2017-06-29 Thread Bodireddy, Bhanuprakash
>I guess the answer is that a typical LLC nowadays is 2.5 MB per core, so
>there are 64k flows per thread.

AFAIK, the number of flows here may not have anything to do with the LLC.
There is also the EMC (8k entries), roughly 4 MB per PMD thread.
Performance will be good in simple test cases (P2P with a single PMD thread),
since most of this fits in the LLC, but in real scenarios OvS-DPDK can be
memory bound.
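
To show where the ~4 MB figure comes from, here is a back-of-envelope sketch.
The constant names follow dpif-netdev.c if memory serves, but the per-entry
size (a flow key of roughly half a kilobyte plus a flow pointer) is my
assumption, so treat the arithmetic as an estimate, not the real layout:

    #include <stdio.h>
    #include <stddef.h>

    #define EM_FLOW_HASH_SHIFT   13                          /* as in dpif-netdev.c */
    #define EM_FLOW_HASH_ENTRIES (1u << EM_FLOW_HASH_SHIFT)  /* 8192 slots */

    int main(void)
    {
        /* Assumed entry layout: hash + len + miniflow map + packet-field
         * buffer, plus the pointer to the megaflow.  Sizes are estimates. */
        size_t key_bytes   = 4 + 4 + 8 + 472;
        size_t entry_bytes = key_bytes + sizeof(void *);
        size_t total       = (size_t)EM_FLOW_HASH_ENTRIES * entry_bytes;

        printf("EMC: %u entries x %zu B = ~%.1f MB per PMD thread\n",
               EM_FLOW_HASH_ENTRIES, entry_bytes,
               total / (1024.0 * 1024.0));
        return 0;
    }

This prints roughly 3.9 MB per PMD thread, i.e. a single PMD's EMC alone
consumes a good share of a typical LLC.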

BTW, my DUT has a 35 MB LLC and 28 cores, so the assumption of 2.5 MB per core
isn't right; that works out to 1.25 MB per core.

- Bhanuprakash.

>
>On Fri, Jun 23, 2017 at 11:15 AM, Hui Xiang  wrote:
>Thanks Darrell,
>
>More questions:
>Why not allocate 64k for each dpcls? Does the 64k just fit in the L3 cache,
>or somewhere else? How was such an exact number calculated? If more ports
>are added for polling, can I increase the 64k size to something bigger to
>avoid contention? Thanks.
>
>Hui.



Re: [ovs-discuss] max mega flow 64k per pmd or per dpcls?

2017-06-29 Thread Darrell Ball
Q: “How was such an exact number calculated?”

A: It is a reasonable number that accommodates many cases.

Q: “If more ports are added for polling, can I increase the 64k size to
something bigger to avoid contention?”

A: If a larger number is needed, that may be a sign that adding another PMD
and dividing the forwarding work across threads would serve you better.  Even
a smaller number of flows may be better served by more PMDs.
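
For reference, the number of PMD threads is controlled by the pmd-cpu-mask
setting; the mask value below is only an example, and runs PMD threads on
cores 1 and 2 so the rx queues can be divided between them:

    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6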





On 6/29/17, 7:23 AM, "ovs-discuss-boun...@openvswitch.org on behalf of
Bodireddy, Bhanuprakash"  wrote:

    >I guess the answer is that a typical LLC nowadays is 2.5 MB per core,
    >so there are 64k flows per thread.

    AFAIK, the number of flows here may not have anything to do with the
    LLC. There is also the EMC (8k entries), roughly 4 MB per PMD thread.

    Performance will be good in simple test cases (P2P with a single PMD
    thread), since most of this fits in the LLC, but in real scenarios
    OvS-DPDK can be memory bound.

    BTW, my DUT has a 35 MB LLC and 28 cores, so the assumption of 2.5 MB
    per core isn't right.

    - Bhanuprakash.

    >On Fri, Jun 23, 2017 at 11:15 AM, Hui Xiang  wrote:
    >Thanks Darrell,
    >
    >More questions:
    >Why not allocate 64k for each dpcls? Does the 64k just fit in the L3
    >cache, or somewhere else? How was such an exact number calculated? If
    >more ports are added for polling, can I increase the 64k size to
    >something bigger to avoid contention? Thanks.
    >
    >Hui.




Re: [ovs-discuss] max mega flow 64k per pmd or per dpcls?

2017-06-29 Thread Hui Xiang
Thanks Bodireddy.

Sorry, I am a bit confused about the size the EMC occupies per PMD; [1] tells
a different story.

Do you mean that in real scenarios OVS-DPDK can be memory bound even on the
EMC? I thought the EMC should fit entirely in the LLC.

If the megaflows only partly fit in the LLC, then the cost of copying between
memory and the LLC should be large; doesn't that undercut the 'fast path'
label the userspace datapath carries compared with the kernel datapath? And
if most of the megaflows live in memory, the reason every PMD has its own
dpcls instance is to follow the rule that a PMD thread should keep as much of
its data local as possible, yet not every PMD can keep that data in its local
cache. If that is true, I can't see why 64k is the limit, unless it is an
empirical best value derived from vtune/perf results.

You probably have hyper-threading enabled: a 35 MB LLC with 28 logical cores
is 14 physical cores, which is again 2.5 MB per physical core.

[1] https://mail.openvswitch.org/pipermail/ovs-dev/2015-May/298999.html
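
To make the layout I am asking about concrete, here is my understanding as a
compile-able sketch; the stub types and sizes are my assumptions, not the real
definitions from lib/dpif-netdev.c:

    #define EM_FLOW_HASH_ENTRIES 8192    /* exact-match cache slots */
    #define MAX_FLOWS            65536   /* megaflow cap, per PMD thread */

    /* Stub stand-ins so this compiles; the real definitions live in OVS. */
    struct emc_entry { unsigned char key[488]; void *flow; }; /* ~0.5 KB   */
    struct dpcls     { int n_subtables; }; /* one subtable per flow mask   */
    struct cmap      { void *impl; };      /* concurrent hash map          */

    /* Each PMD thread carries its own copy of both cache levels, so the
     * EMC and the 64k megaflow budget are per PMD, not per switch. */
    struct pmd_thread_sketch {
        struct emc_entry emc[EM_FLOW_HASH_ENTRIES]; /* hit: 1 hash probe  */
        struct dpcls cls;                           /* miss: tuple search */
        struct cmap flow_table;                     /* <= MAX_FLOWS flows */
    };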



On Thu, Jun 29, 2017 at 10:23 PM, Bodireddy, Bhanuprakash
<bhanuprakash.bodire...@intel.com> wrote:

> >I guess the answer is that a typical LLC nowadays is 2.5 MB per core, so
> >there are 64k flows per thread.
>
> AFAIK, the number of flows here may not have anything to do with the LLC.
> There is also the EMC (8k entries), roughly 4 MB per PMD thread.
> Performance will be good in simple test cases (P2P with a single PMD
> thread), since most of this fits in the LLC, but in real scenarios
> OvS-DPDK can be memory bound.
>
> BTW, my DUT has a 35 MB LLC and 28 cores, so the assumption of 2.5 MB per
> core isn't right.
>
> - Bhanuprakash.
>
> >On Fri, Jun 23, 2017 at 11:15 AM, Hui Xiang  wrote:
> >Thanks Darrell,
> >
> >More questions:
> >Why not allocate 64k for each dpcls? Does the 64k just fit in the L3
> >cache, or somewhere else? How was such an exact number calculated? If
> >more ports are added for polling, can I increase the 64k size to
> >something bigger to avoid contention? Thanks.
> >
> >Hui.


Re: [ovs-discuss] max mega flow 64k per pmd or per dpcls?

2017-06-29 Thread Hui Xiang
I am interested in how 'reasonable' is defined here: how was the number
arrived at, and what are the 'many cases'? Is there any document or link with
this information? Please shed some light.

On Thu, Jun 29, 2017 at 10:47 PM, Darrell Ball  wrote:

> Q: “How was such an exact number calculated?”
>
> A: It is a reasonable number that accommodates many cases.
>
> Q: “If more ports are added for polling, can I increase the 64k size to
> something bigger to avoid contention?”
>
> A: If a larger number is needed, that may be a sign that adding another PMD
> and dividing the forwarding work across threads would serve you better.
> Even a smaller number of flows may be better served by more PMDs.
>
>
>
>
>
> On 6/29/17, 7:23 AM, "ovs-discuss-boun...@openvswitch.org on behalf of
> Bodireddy, Bhanuprakash" <bhanuprakash.bodire...@intel.com> wrote:
>
>     >I guess the answer is that a typical LLC nowadays is 2.5 MB per core,
>     >so there are 64k flows per thread.
>
>     AFAIK, the number of flows here may not have anything to do with the
>     LLC. There is also the EMC (8k entries), roughly 4 MB per PMD thread.
>
>     Performance will be good in simple test cases (P2P with a single PMD
>     thread), since most of this fits in the LLC, but in real scenarios
>     OvS-DPDK can be memory bound.
>
>     BTW, my DUT has a 35 MB LLC and 28 cores, so the assumption of 2.5 MB
>     per core isn't right.
>
>     - Bhanuprakash.
>
>     >On Fri, Jun 23, 2017 at 11:15 AM, Hui Xiang  wrote:
>     >Thanks Darrell,
>     >
>     >More questions:
>     >Why not allocate 64k for each dpcls? Does the 64k just fit in the L3
>     >cache, or somewhere else? How was such an exact number calculated? If
>     >more ports are added for polling, can I increase the 64k size to
>     >something bigger to avoid contention? Thanks.
>     >
>     >Hui.
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss