Hi Michael,

Losing/adding a shard is essentially reconciled by the ContainerManager.
As it keeps track of all the ContainerRouters in the system, it can also
observe one going down/crashing or one coming up and joining the "cluster".

If one Router leaves the cluster, the ContainerManager knows which
containers were "managed" by that router and redistributes them across the
Routers left in the system.
If one Router joins the cluster, we can try to rebalance containers to take
load off existing ones. The precise algorithm is yet to be defined, but the
primitives should be in place to be able to do that.
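
To make this a bit more concrete, here is a rough sketch of how the
ContainerManager could react to Router membership changes (all names and
messages are made up for illustration, this is not actual OpenWhisk code):

import akka.actor.{Actor, Address}
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

class ContainerManager extends Actor {
  val cluster = Cluster(context.system)

  // Which containers are currently "managed" by which ContainerRouter.
  var assignments =
    Map.empty[Address, Set[String]].withDefaultValue(Set.empty[String])

  override def preStart(): Unit =
    cluster.subscribe(self, InitialStateAsEvents,
      classOf[MemberUp], classOf[MemberRemoved])

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case MemberRemoved(member, _) =>
      // A Router left or crashed: hand its containers to the remaining ones.
      val orphaned = assignments(member.address)
      assignments -= member.address
      redistribute(orphaned)

    case MemberUp(member) if member.hasRole("containerrouter") =>
      // A Router joined: optionally rebalance to take load off existing ones.
      assignments += member.address -> Set.empty[String]
      rebalance()
  }

  // Precise algorithms still to be defined.
  def redistribute(containers: Set[String]): Unit = ()
  def rebalance(): Unit = ()
}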

Does that answer the question?

Cheers,
Markus

On Wed, Aug 15, 2018 at 4:18 PM Michael Marth
<mma...@adobe.com.invalid> wrote:

> Markus,
>
> I agree with your preference of making the state sharded instead of
> distributed (not only for the scalability reasons you quote but also for
> operational concerns).
> What are your thoughts about losing a shard (planned or crashed) or adding
> a shard?
>
> Michael
>
>
> On 15.08.18, 09:58, "Markus Thömmes" <markusthoem...@apache.org> wrote:
>
>     Hi Dragos,
>
>     thanks for your questions, good discussion :)
>
>     On Tue, Aug 14, 2018 at 11:42 PM Dragos Dascalita Haut
>     <ddas...@adobe.com.invalid> wrote:
>
>     > Markus, I appreciate the enhancements you mentioned in the wiki, and
>     > I'm very much in line with the ideas you brought in there.
>     >
>     >
>     >
>     > "...having the ContainerManager be a cluster singleton..."
>     >
>     > I was just in the process of replying with the same idea :)
>     >
>     > In addition, I was thinking we can leverage Akka Distributed Data [1]
>     > to keep all ContainerRouter actors eventually consistent. When
>     > creating a new container, the ContainerManager can write with
>     > "WriteAll" consistency; it would be a little slower but it would
>     > improve consistency.
>     >
>
>     I think we need to quantify "a little slower". Note that "WriteAll"
>     becomes slower and slower the more actors you add to the cluster.
>     Scalability is then in question.
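>
>     Just for reference, a "WriteAll" update in Akka Distributed Data would
>     look roughly like the following (untested sketch against the 2.5 API,
>     the key name and container id are made up):
>
>     import akka.actor.ActorSystem
>     import akka.cluster.Cluster
>     import akka.cluster.ddata._
>     import akka.cluster.ddata.Replicator._
>     import scala.concurrent.duration._
>
>     val system = ActorSystem("whisk")
>     val replicator = DistributedData(system).replicator
>     implicit val cluster = Cluster(system)
>
>     // Replicated set of known containers, shared by all ContainerRouters.
>     val ContainersKey = ORSetKey[String]("containers")
>     val containerId = "container-123" // hypothetical id
>
>     // WriteAll waits for an ack from *every* replica, so the latency of
>     // this write grows with the number of nodes in the cluster.
>     replicator ! Update(ContainersKey, ORSet.empty[String],
>                         WriteAll(timeout = 5.seconds))(_ + containerId)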
>
>     Of course scalability is also in question if we make the
>     ContainerManager a singleton. The ContainerManager has a 1:1
>     relationship to the Kubernetes/Mesos scheduler. Do we know how those
>     are distributed? I think the Kubernetes scheduler is a singleton, but
>     I'll need to double-check on that.
>
>     I can see the possibility of moving the ContainerManager into each
>     Router and having them communicate with each other to shard in the same
>     way I'm proposing. As Dave is hitting on the very same points, I get the
>     feeling we should/could break out that specific discussion if we can
>     agree on some basic premises of the design (see my answers on the
>     thread with Dave). WDYT?
>
>
>     >
>     >
>     > The "edge-case" isn't clear to me b/c I'm coming from the assumption
> that
>     > it doesn't matter which ContainerRouter handles the next request,
> given
>     > that all actors have the same data. Maybe you can help me understand
> better
>     > the edge-case ?
>     >
>
>     ContainerRouters specifically do not have the same state. The live
>     concurrency on a container is potentially very fast-changing data.
>     Sharing that across a potentially unbounded number of routers is not
>     viable performance-wise.
>
>     Hence the premise is to manage that state locally and essentially shard
>     the list of available containers between all routers, so each of them
>     can keep its respective state local.
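>
>     To illustrate what "local" means here, a ContainerRouter would keep
>     nothing but the containers it has been handed plus their in-flight
>     counts, along these lines (again just a sketch, all names made up):
>
>     import akka.actor.Actor
>
>     case class AssignContainer(id: String)
>     case class RevokeContainer(id: String)
>     case class Activation(payload: String)
>
>     class ContainerRouter extends Actor {
>       // Purely local, never replicated: only the containers this Router
>       // owns and the fast-changing number of requests in flight on each.
>       var inFlight = Map.empty[String, Int]
>
>       def receive = {
>         case AssignContainer(id) => inFlight += id -> 0
>         case RevokeContainer(id) => inFlight -= id
>         case Activation(_) =>
>           inFlight.find(_._2 == 0) match {
>             case Some((id, _)) =>
>               inFlight += id -> 1 // proxy the request to this container
>             case None =>
>               // queue locally and/or ask the ContainerManager for capacity
>           }
>       }
>     }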
>
>
>     >
>     >
>     > Re the Knative approach, can you expand on why the execution
>     > layer/data plane would be replaced entirely by Knative Serving? I
>     > think Knative Serving handles some cases like API requests very well,
>     > but it's not designed to guarantee concurrency restrictions like
>     > "1 request at a time per container" - something that AI Actions need.
>     >
>
>     You are right... today! I'm not saying Knative is necessarily a
>     superior backend for OpenWhisk as it stands today. All I'm saying is
>     that from an architecture point of view, Knative Serving covers all of
>     the concerns that the execution layer has.
>
>
>     >
>     >
>     > Thanks,
>     >
>     > dragos
>     >
>     >
>     > [1] - https://doc.akka.io/docs/akka/2.5/distributed-data.html
>     >
>     >
>     > ________________________________
>     > From: David P Grove <gro...@us.ibm.com>
>     > Sent: Tuesday, August 14, 2018 2:15:13 PM
>     > To: dev@openwhisk.apache.org
>     > Subject: Re: Proposal on a future architecture of OpenWhisk
>     >
>     >
>     >
>     >
>     > "Markus Thömmes" <markusthoem...@apache.org> wrote on 08/14/2018
> 10:06:49
>     > AM:
>     > >
>     > > I just published a revision of the initial proposal I made. I still
>     > > owe a lot of sequence diagrams for the container distribution; sorry
>     > > for taking so long on that, I'm working on it.
>     > >
>     > > I did include a clear separation of concerns in the proposal, where
>     > > user-facing abstractions and the execution (load balancing, scaling)
>     > > of functions are loosely coupled. That enables us to exchange the
>     > > execution system while not changing anything in the Controllers at
>     > > all (to an extent). The interface to talk to the execution layer is
>     > > HTTP.
>     > >
>     >
>     > Nice writeup!
>     >
>     > For me, the part of the design I'm wondering about is the separation
>     > of the ContainerManager and the ContainerRouter and having the
>     > ContainerManager be a cluster singleton. With Kubernetes blinders on,
>     > it seems more natural to me to fuse the ContainerManager into each of
>     > the ContainerRouter instances (since there is very little to the
>     > ContainerManager except (a) talking to Kubernetes and (b) keeping
>     > track of which Containers it has handed out to which ContainerRouters
>     > -- a task which is eliminated if we fuse them).
>     >
>     > The main challenge is dealing with your "edge case" where the optimal
>     > number of containers to create to execute a function is less than the
>     > number of ContainerRouters.  I suspect this is actually an important
>     > case to handle well for large-scale deployments of OpenWhisk.  Having
>     > 20ish ContainerRouters on a large cluster seems plausible, and then
>     > we'd expect a long tail of functions where the optimal number of
>     > container instances is less than 20.
>     >
>     > I wonder if we can partially mitigate this problem by doing some
>     > amount of smart routing in the Controller.  For example, the first
>     > level of routing could be based on the kind of the action (nodejs:6,
>     > python, etc).  That could then vector to per-runtime ContainerRouters
>     > which dynamically auto-scale based on load.  Since there doesn't have
>     > to be a fixed division of actual execution resources to each
>     > ContainerRouter this could work.  It also lets us easily keep
>     > stemcells for multiple runtimes without worrying about wasting too
>     > many resources.
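>     >
>     > Purely as a sketch (made-up names, not a worked-out design), that
>     > first level could be as simple as:
>     >
>     > import akka.actor.ActorRef
>     > import scala.util.Random
>     >
>     > // Pick the per-runtime ContainerRouter pool by the action's kind,
>     > // then choose one Router within that pool.
>     > class KindRouter(routersByKind: Map[String, Vector[ActorRef]]) {
>     >   def route(kind: String, activation: Any): Unit =
>     >     routersByKind.get(kind) match {
>     >       case Some(pool) if pool.nonEmpty =>
>     >         pool(Random.nextInt(pool.size)) ! activation
>     >       case _ =>
>     >         // no pool for this kind yet: create one / fall back
>     >     }
>     > }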
>     >
>     > How do you want to deal with design alternatives?  Should I be adding
>     > to the wiki page?  Doing something else?
>     >
>     > --dave
>     >
>
>
>
