On 04/22/15 22:18, Jesse Gross wrote:
On Wed, Apr 22, 2015 at 1:15 AM, mark.d.gray<mark.d.g...@intel.com>  wrote:
Pravin Shelar wrote:

On Tue, Apr 21, 2015 at 7:30 PM, Jesse Gross<je...@nicira.com>  wrote:

On Tue, Apr 21, 2015 at 7:23 PM, Pravin Shelar<pshe...@nicira.com>
wrote:

On Tue, Apr 21, 2015 at 6:33 PM, Ethan Jackson<et...@nicira.com>  wrote:

I really don't think this should be configurable, I can't imagine a
situation in which a user would have enough knowledge to tweak the
knob correctly.  I think the only reason we'd want to configure it is
the current EMC size is too small at the moment.  My preference would
be to figure out how to make it bigger, i.e. by doing prefetching and
what not.


It does depend on the amount of memory available in the system, which can vary
a lot in practice. If you have a lot of memory and few devices, the user can
configure a bigger EMC accordingly. That's why it could be useful to make
it configurable at vswitchd startup time.


Can't these be detected and data structures sized automatically? OVS
already has a ton of configuration knobs so I agree with Ethan that it
is not helpful to further push this onto the user who is not going to
understand this.



I am fine with automatically configuring it at boot up. I missed that that
is what Ethan was referring to.


A user may not have all the information required to make a decision like
that, but a cloud orchestrator would. It could make a decision along the
lines of: "I have 10 virtual machines on this compute node and I want to
allocate about 27 MB of L3 cache to the VMs and the rest to Open vSwitch,
which will allow me to hit my SLA". In that case, having a runtime
configurable option would be desirable.

I can't say that I agree with this statement for the vast majority of
use cases.

The discussion is good to have though :)

Expecting to have that tight control over workloads and
hardware configurations (not to mention understanding by the
administrator) isn't really realistic outside of a benchmark or
hermetically sealed box.

The vSwitch can't possibly know how hardware resources are being used across the system as a whole, as it is not an operating system, so we can't let it make a decision like that.

I agree that for many use cases and workloads you could probably tune it so that 90% of them get the performance they want, but there will always be the 10% who want to unleash the best performance possible and are willing to sacrifice some flexibility by hermetically sealing the box. Therefore the user (and I am using "user" in the sense of a user of an interface rather than a system administrator) needs some flexibility to tune parameters for the best possible performance, if required.

For example, in NFV deployments in the cloud, some of the highest-performing VNFs will need their vCPUs pinned to the underlying pCPUs, sacrificing some of the cloudiness for very high performance. However, OpenStack can make that decision as it has a more global view of the resources on the compute node.

If you want to make data structures sized based on various factors
that you think might affect performance, that is fine with me.
However, please do it automatically so that all users can benefit from
it rather just a small minority. In general, I think this is the
direction that the DPDK extensions to OVS need to move before it can
become more widely deployed.

Either way, the table size has always been tunable via a #define; I guess the discussion here is at what level the parameter should be exposed: #define, configure parameter, ovsdb entry, or command-line option? As a #define, it is a lot harder for the 10% who want blistering performance to make that change, and this patch is just a proposal to expose it in a manner that is a little easier to take advantage of. Ciara can correct me if I am wrong, but by increasing the table size we can double the performance in some test cases!
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev