Hi community,

I'm trying to add a new scheduling feature for ContainerPool and LoadBalancer.
The description is laid out below:




Currently, the container scheduling strategy is based on CPU shares, using docker run
--cpu-shares.
We also use poolConfig.userMemory to specify each Invoker node's memory pool
size. (Did I miss something? Please correct me if I did.)
This is easy to understand, and robust enough to fit most deployment cases.
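For anyone less familiar with this part of the code, here is a minimal sketch of how I understand the current behaviour: the pool admits containers only while their memory limits fit into userMemory, and each container's --cpu-shares value is roughly proportional to its memory limit. The names below are simplified placeholders, not the actual ContainerPool internals:

    // Simplified sketch, not the real OpenWhisk code; names are made up for illustration.
    object PoolSketch {
      // e.g. poolConfig.userMemory = 4096 MB on this Invoker
      val userMemoryMB: Long = 4096
      // Docker's base unit for --cpu-shares is 1024; treat the whole pool as 1024 shares.
      val totalCpuShares: Int = 1024

      // A container is admitted only while the reserved memory still fits into the pool.
      def canAdmit(inFlightMB: Long, actionLimitMB: Long): Boolean =
        inFlightMB + actionLimitMB <= userMemoryMB

      // CPU shares handed to `docker run --cpu-shares`, proportional to the memory limit
      // (Docker's minimum accepted value is 2).
      def cpuShares(actionLimitMB: Long): Int =
        math.max(2, (totalCpuShares * actionLimitMB / userMemoryMB).toInt)
    }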


But what if we want to deploy the Invoker on a resource-limited machine (e.g. a
4-core CPU and 8 GB of memory, of which only 4 cores and 4 GB may be used), or on
K8s nodes with different CPU and memory resources?
And how do we figure out the best userMemory for a cluster without boiling the
CPUs or burning testing time, when most of the actions are CPU-intensive, such as
AI functions?
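As a concrete, made-up example of why this is hard to tune by hand: with userMemory = 4 GB and 256 MB actions, the pool can run 16 containers at once on a 4-core box, which is fine for I/O-bound actions but a 4x CPU oversubscription for CPU-bound ones. Today the only knob is to guess a smaller userMemory from the core count, e.g. with a rough heuristic like the one below (purely hypothetical helper, the kind of manual tuning the PR tries to make unnecessary):

    // Hypothetical back-of-the-envelope helper, not part of OpenWhisk:
    // cap the pool so CPU-bound actions get roughly one core each.
    object UserMemoryGuess {
      def forCpuBoundActions(cores: Int, actionLimitMB: Long, containersPerCore: Int = 1): Long =
        cores.toLong * containersPerCore * actionLimitMB // userMemory in MB

      def main(args: Array[String]): Unit = {
        // 4 cores, 256 MB actions, 1 container per core => userMemory = 1024 MB
        println(forCpuBoundActions(cores = 4, actionLimitMB = 256))
      }
    }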


Recently, I opened a PR that tries to handle these deployment cases, see
https://github.com/apache/openwhisk/pull/4648
The corresponding issue is https://github.com/apache/openwhisk/issues/4650


The feature would be turned on via a configuration option, which handles these
cases better, especially when a K8s cluster is made up of nodes with different
system resources.
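Just to make the shape of the change concrete (the field names below are hypothetical placeholders; the real configuration keys are in the PR), the idea is an opt-in addition to the existing pool config, so deployments that don't set it keep today's memory-only behaviour:

    // Hypothetical placeholder only; see the PR for the real configuration keys.
    object PoolConfigSketch {
      final case class Config(
        userMemoryMB: Long,               // existing per-Invoker memory pool size
        userCpus: Option[Double] = None)  // new, opt-in CPU budget for this Invoker

      val memoryOnly = Config(userMemoryMB = 4096)                       // feature off (today's behaviour)
      val cpuAware   = Config(userMemoryMB = 4096, userCpus = Some(4.0)) // feature on
    }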


Any comments are welcome.


Apologies in advance for slow replies, as I'll be on a 9-day vacation.




曾先生
[email protected]
