This is a general problem with finite capacity. Increasing parallelism inside 
the container is attractive from your point of view because it's a way to 
increase compute density, but it has its downsides. In any case: lacking the 
ability to elastically scale and add new capacity, the system will always be 
subject to queuing.
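
To make the queuing effect concrete, a back-of-the-envelope sketch 
(hypothetical numbers, nothing OpenWhisk-specific):

// A pool of N containers, each running one activation at a time,
// drains a burst of B activations in ceil(B / N) sequential waves.
const poolSize = 10;        // N warm containers (assumed)
const activationMs = 200;   // per-activation service time (assumed)
const burst = 1000;         // activations arriving at once

const waves = Math.ceil(burst / poolSize);                  // 100 waves
console.log(`burst drains in ${waves * activationMs} ms`);  // 20000 ms

// If each container instead admitted, say, 100 concurrent activations,
// pool capacity would be 10 * 100 = 1000 and the same burst would
// drain in a single wave: roughly 200 ms.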

-r

> On Jul 1, 2017, at 3:17 PM, Tyson Norris <tnor...@adobe.com.INVALID> wrote:
> 
> Sure - what I mean is that once N containers are launched and servicing N 
> activations, the N+1th activation is queued and processed sequentially, 
> after some particular one of the previous activations completes. And N is 
> directly related to the number of concurrent users (and actions), so a burst 
> of users will quickly exhaust the system - which is fine for event-handling 
> cases, but not at all for UI use cases. 
> 
> So “sequential” is not quite accurate, but once concurrent activations max 
> out the container pool, the system behaves as a queue, compared to a system 
> that processes activations concurrently in a single container. That approach 
> will have its own point of exhaustion, admittedly, but it is quite common, 
> for example, to run nodejs applications that happily serve hundreds or 
> thousands of concurrent users - so we are talking about adding orders of 
> magnitude to the number of concurrent users that can be handled with the 
> same pool of resources. 
> 
> Thanks
> Tyson
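
To ground the nodejs comparison above: a single node process multiplexes 
concurrent requests on one event loop, so many in-flight requests can share 
one container. A minimal sketch (a hypothetical standalone server, not an 
OpenWhisk action):

import * as http from "http";

// One process, one event loop: requests that spend their time in
// non-blocking I/O overlap instead of queuing behind each other.
http.createServer((req, res) => {
  setTimeout(() => res.end("done\n"), 200); // simulate 200 ms of backend I/O
}).listen(8080);

// A thousand concurrent requests all finish in roughly 200 ms total,
// because the timers run concurrently while the event loop stays free.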
