Hi All - 
I've been considering how we can leverage Mesos resource scheduling within 
OpenWhisk, and wanted to share some ideas and solicit feedback. 

In general, I've been experimenting with an approach that replaces the 
DockerContainer implementation with a MesosTask. The MesosTask will have 
effectively the same lifecycle (ignoring pause/resume) and the same 
interface (an HTTP API to the container); the main difference is that the 
container will be deployed to some arbitrary host in the cluster. 
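
To make that concrete, here is a rough sketch of what a MesosTask could look 
like. These are simplified stand-ins, not the actual OpenWhisk Container 
types, and the HttpClient trait and killTask callback are hypothetical:

    import scala.concurrent.{ExecutionContext, Future}

    // Hypothetical minimal HTTP client, standing in for whatever
    // library the implementation ends up using.
    trait HttpClient {
      def post(url: String, body: String): Future[String]
    }

    // Simplified container interface; the real OpenWhisk trait has
    // more methods (logs, suspend/resume, etc.).
    trait ContainerLike {
      def init(payload: String): Future[Unit]   // POST /init
      def run(payload: String): Future[String]  // POST /run
      def destroy(): Future[Unit]
    }

    // Same lifecycle and HTTP interface as the Docker-based container,
    // but the action container lives on whatever agent Mesos chose, so
    // it is addressed by (host, port) rather than a local bridge IP.
    class MesosTask(taskId: String,
                    host: String,
                    port: Int,
                    http: HttpClient,
                    killTask: String => Future[Unit])
                   (implicit ec: ExecutionContext) extends ContainerLike {

      private val base = s"http://$host:$port"

      def init(payload: String): Future[Unit] =
        http.post(s"$base/init", payload).map(_ => ())

      def run(payload: String): Future[String] =
        http.post(s"$base/run", payload)

      // destroy == kill the Mesos task (via the framework);
      // pause/resume are intentionally omitted here.
      def destroy(): Future[Unit] = killTask(taskId)
    }

The point is that the invoker-side lifecycle stays the same; only addressing 
and teardown change.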

There are a few broad topics around this, including:
- Docker client usage needs to be better isolated (e.g. cleaning up existing 
containers, reconciling containers already running at invoker startup, 
etc.). I think this will be mostly straightforward.
- log collection - I'm not sure of the best approach here, but one option is 
to completely decouple log collection from activation execution, and then 
provide a Mesos-specific implementation (see the LogCollector sketch after 
this list).
- container state tracking + load balancing - there is obvious potential for 
conflict if two invokers schedule activations on the same container in the 
cluster (since container state would still be tracked per invoker). This 
implies some extension to the ContainerPool as well.
- Mesos framework - we've discussed this internally a bit, and some 
preferences I have are: leverage the Mesos HTTP API (avoiding the older JNI 
libs if possible), and provide an independent framework application that 
exposes a simpler HTTP API consumed by the invoker (if Mesos integration is 
enabled in the deployment). This way the framework deployment can be 
isolated from the controller/invoker deployment, and interaction with 
Docker containers stays mostly the same (except for logging). A strawman of 
this API follows the list.
- There are a few options for Mesos HTTP API clients, such as the rxjava 
client and the nodejs client, but I've not seen a good Scala client to 
date, so we may either provide a Scala app that uses the rxjava client, or 
create a new client in Scala.
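
As a strawman for that simpler HTTP API (and the decoupled log collection 
mentioned above), here is the invoker-facing surface the framework app could 
expose; all names, fields, and endpoints are made up for illustration:

    import scala.concurrent.Future

    case class TaskSpec(image: String, memoryMb: Int, cpus: Double)
    case class RunningTask(taskId: String, host: String, port: Int)

    // The invoker consumes this instead of talking to Mesos directly;
    // each method maps to one simple HTTP endpoint on the framework.
    trait MesosFrameworkApi {
      def launch(spec: TaskSpec): Future[RunningTask] // POST   /tasks
      def kill(taskId: String): Future[Unit]          // DELETE /tasks/{taskId}
    }

    // Log collection fully decoupled from activation execution; a
    // Mesos-specific impl could read stdout/stderr from the agent
    // sandbox instead of the local Docker log files.
    trait LogCollector {
      def collect(task: RunningTask): Future[Vector[String]]
    }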

Let me know if you have any thoughts around any of these, and I will share more 
details as they come.

Thanks
Tyson

