RE: Add support for microkernels instead of containers

2018-07-19 Thread David P Grove

The invoker needs to be given some class that implements the
ContainerFactoryProvider trait (single method getContainerFactory that when
invoked will return an instance of some class that implements the
ContainerFactory trait).

The class can be called anything you want, but it does have to implement
this trait, which right now means the word "Container" will appear in lots
of places.  You'll need to overlook the fact that the "Container" in your
implementation isn't actually a container, and just implement the expected
types.

Once you have it (UkernelFactoryProvider extends ContainerFactoryProvider),
then you can get the system to use your implementation by giving the
invoker the command line argument

-Dwhisk.spi.ContainerFactoryProvider=whisk.core.containerpool.ukernel.UKernelFactoryProvider
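Putting that together, a minimal sketch of such a provider (the single
getContainerFactory method comes from the trait as described above; the
parameter list is elided here, and UKernelFactory is a hypothetical class
name):

```scala
package whisk.core.containerpool.ukernel

import whisk.core.containerpool.{ContainerFactory, ContainerFactoryProvider}

// Sketch only. The provider must be a Scala `object` (SpiLoader resolves the
// companion object) and must extend ContainerFactoryProvider, even though
// what it hands out are microkernels rather than containers.
object UKernelFactoryProvider extends ContainerFactoryProvider {
  // Parameter list elided; match the trait's actual getContainerFactory signature.
  override def getContainerFactory( /* ... */ ): ContainerFactory =
    new UKernelFactory( /* ... */ ) // hypothetical ContainerFactory backed by microkernels
}
```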

--dave




From:   "Farwell, James C" 
To: "dev@openwhisk.apache.org" 
Date:   07/19/2018 04:14 PM
Subject: RE: Add support for microkernels instead of containers



Dave-

At one point I had done this, but the host of compiler issues that came up
made me want to rethink my approach. (Maybe I'm overly concerned about the
word 'Container', but it's EVERYWHERE.)

How can I modify SpiLoader to load a class that I determine?

--James

From: David P Grove [mailto:gro...@us.ibm.com]
Sent: Thursday, July 19, 2018 11:41 AM
To: dev@openwhisk.apache.org
Subject: Re: Add support for microkernels instead of containers


Did you remember to have your UkernelFactoryProvider extend
ContainerFactoryProvider?

For example, see KubernetesContainerFactory.scala.

--dave


From: "Farwell, James C" <james.c.farw...@intel.com>
To: "dev@openwhisk.apache.org" <dev@openwhisk.apache.org>
Date: 07/19/2018 01:24 PM
Subject: Re: Add support for microkernels instead of containers





Rodric-
Okay, I've created a new Ukernel class to encapsulate my microkernel, a
UkernelFactory to produce Ukernels, and a UkernelFactoryProvider to
instantiate a Factory.  I've updated the whisk.spi object (in
reference.conf) to name the desired class
(whisk.core.ukernel.UkernelFactoryProvider).  Everything compiles, but when
I try to deploy I get the exception
'whisk.core.ukernel.UkernelFactoryProvider$ cannot be cast to
whisk.core.containerpool.ContainerFactoryProvider'.
I have searched, but cannot find any reason that SpiLoader should object to
loading a class other than
whisk.core.containerpool.ContainerFactoryProvider.
I'm obviously missing something, but I can't figure out what it is.  Does
anyone have any ideas?
Thanks,
--James
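[As a side note on the mechanics, a simplified model of SPI loading, not
OpenWhisk's actual SpiLoader code: the configured class name is resolved to
its Scala companion object and cast to the SPI trait. The `$` suffix in the
error message is the companion object's JVM name, and the cast is exactly
where the reported ClassCastException arises:

```scala
trait ContainerFactoryProvider // stand-in for whisk.core.containerpool.ContainerFactoryProvider

// Simplified model; if the loaded object does not extend the SPI trait, the
// final cast throws, e.g. "whisk.core.ukernel.UkernelFactoryProvider$ cannot
// be cast to whisk.core.containerpool.ContainerFactoryProvider".
def loadProvider(fqn: String): ContainerFactoryProvider = {
  val clazz  = Class.forName(fqn + "$")             // companion object's JVM class
  val module = clazz.getField("MODULE$").get(null)  // the singleton instance
  module.asInstanceOf[ContainerFactoryProvider]     // fails unless the trait is extended
}
```
]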
>---

Hi James

There's an abstract interface to the execution unit in the invoker:
Start/Pause/Resume/Stop/Logs. You can select the implementation through a
configuration deployment (SPI).
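[A sketch of what that abstraction amounts to; the names below are
illustrative only, not OpenWhisk's actual signatures:

```scala
// The execution unit the invoker drives; today it is a Docker container,
// but nothing below is container-specific.
trait ExecutionUnit {
  def start(): Unit                   // boot the container / microkernel / process
  def pause(): Unit                   // freeze between activations
  def resume(): Unit                  // unfreeze for the next activation
  def stop(): Unit                    // tear down
  def logs(limit: Int): List[String]  // collect activation logs
}
```
]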

There was some work on using the interface I alluded to for unikernels. I'd
imagine it can be adapted to work with a process, microkernel, ...

I'm not suggesting it's a drop-in replacement, but at face value I don't see
that it's necessary to be too invasive. The OpenWhisk core is really about
starting/pausing/resuming/stopping an execution unit (which happens to be a
container today).

I think containers are too coarse-grained an execution unit for functions
and expect the technology to change in the future. But when and how long it
will take... we'll see. I'm curious to see how your work unfolds with
OpenWhisk and we're happy to help.

-r










[ANNOUNCE] Apache OpenWhisk, main module (incubating) 0.9.0 released

2018-07-19 Thread Vincent S Hou
Hi everyone,

This is the announcement of the 0.9.0 release of OpenWhisk, main module.
 
Best wishes.
Vincent Hou (侯胜博)

Advisory Software Engineer, OpenWhisk Contributor, Open Technology, IBM Cloud

Notes ID: Vincent S Hou/Raleigh/IBM, E-mail: s...@us.ibm.com,
Phone: +1(919)254-7182
Address: 4205 S Miami Blvd (Cornwallis Drive), Durham, NC 27703, United States

-Forwarded by Vincent S Hou/Raleigh/IBM on 07/19/2018 02:40PM -
To: gene...@incubator.apache.org
From: "Vincent S Hou" 
Date: 07/19/2018 02:32PM
Subject: [ANNOUNCE] Apache OpenWhisk, main module (incubating) 0.9.0 released

Hi everyone,

We are pleased to announce that Apache OpenWhisk, main module (incubating) 
0.9.0 is released.

OpenWhisk is a cloud-first distributed event-based programming service. It 
provides a programming model to upload event handlers to a cloud service, and 
register the handlers to respond to various events. This is the first time that 
OpenWhisk is released under Apache as an incubator project.

The release is available at:
https://www.apache.org/dist/incubator/openwhisk/apache-openwhisk-0.9.0-incubating/

The official website of OpenWhisk, https://openwhisk.apache.org/, is currently 
working on the webpage for the download links.

Vincent Hou
On behalf of the OpenWhisk team


-
To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org
For additional commands, e-mail: general-h...@incubator.apache.org









Add support for microkernels instead of containers

2018-07-19 Thread Farwell, James C
Markus Thoemmes found my question on StackOverflow, and suggested I post it 
here:

I am trying to customize OpenWhisk to call a microkernel from the Invoker, 
rather than Docker. Is there an effort underway currently to add this support, 
or a development guide covering the changes I would need to make? My current 
understanding of the code is that this will be a substantial project.

Is there guidance available on how to move away from the concept of containers? 
Or will I be better off treating a microkernel as an abstracted type of 
container?
Thanks,

--James


Re: Proposing Lean OpenWhisk

2018-07-19 Thread david . breitgand
Hi Dominic, 

Lean OpenWhisk is not supposed to run on the IoT devices such as sensors and 
actuators directly. It's supposed to run on a Gateway node that controls the 
sensors and actuators connected to it. Think AWS GreenGrass, Azure Functions on 
IoT Edge. This is the same use case. The data from a sensor, say a temperature 
reading, will be sent to the Gateway via MQTT or HTTP, and a provider at the 
Gateway (say, an MQTT feed, which is outside of the OW core and this proposal) 
can then fire a trigger previously created via a feed action for this type of 
feed, which in turn runs an action.

This proposal is strictly about making OW a better fit for small Gateway form 
factors. 

It's true that there are some other tools we need to provide to make OW@Edge a 
feasible option for developers, but they are outside of the core and this 
specific proposal and merit a separate discussion.

Cheers.

-- david

On 2018/07/16 11:40:35, Dominic Kim  wrote: 
> Dear David.
> 
> This is an awesome idea!!
> 
> Is this to control IoT devices programmatically?
> If yes, there would be many different types of IoT devices especially in
> terms of their capabilities such as lighting sensors, thermometer, robot
> cleaner, and so on.
> 
> Then do you have anything in mind to take care of heterogeneous sets of
> edge nodes?
> There is a possibility that some actions should only run on thermometers,
> while the others should run on lighting sensors.
> 
> If you are trying to install "one-for-all" OpenWhisk cluster rather than
> having separate OpenWhisk clusters for each device types, how will you
> manage heterogeneous container pools and properly assign relevant actions to
> them?
> 
> 
> Best regards,
> Dominic
> 
> 
> 2018-07-16 20:24 GMT+09:00 Markus Thoemmes :
> 
> > Hi David,
> >
> > please send your PR for sure! IIRC we made the Loadbalancer pluggable
> > specifically for this use-case. Sounds like a great addition to our
> > possible deployment topologies.
> >
> > Shameless plug: Would you review the architecture I proposed here:
> > https://lists.apache.org/thread.html/29289006d190b2c68451f7625c13bb
> > 8020cc8e9928db66f1b0def18e@%3Cdev.openwhisk.apache.org%3E
> >
> > In theory, this could make your proposal even leaner in the future. Don't
> > hear me say though we should hold this back, we can absolutely go forward
> > with your implementation first. Just wanted to quickly verify this use-case
> > will also work with what we might plan for in the future.
> >
> > Cheers,
> > Markus
> >
> >
> 


Re: Proposal on a future architecture of OpenWhisk

2018-07-19 Thread Markus Thoemmes
Hi Chetan,

>Currently one aspect which is not clear is: does the Controller have
>access to
>
>1. a pool of prewarm containers (containers of the base image where
>/init has not yet been done, so they can be initialized within the
>Controller), or
>2. a pool of warm containers bound to a specific user+action, which
>would have been initialized by the ContainerManager before it
>allocates them to a Controller?

The latter case is what I had in mind. The controller only knows containers 
that are already ready to call /run on.

Pre-Warm containers are an implementation detail to the Controller. The 
ContainerManager can keep them around to be able to answer demand for specific 
resources more quickly, but the Controller doesn't care. It only knows warm 
containers.

>Can you elaborate on this a bit more, i.e. how would the scale-up
>logic work, and how is it asynchronous?
>
>I think the above aspect (type of pool) has a bearing on the scale-up
>logic. If an action was not in use so far, then when the first request
>comes (i.e. the 0-to-1 scale-up case), would the Controller ask the
>ContainerManager for a specific action container and then wait for its
>setup before executing? Or, if it has a generic pool, does it take
>one, initialize it, and use it? And if it's not done synchronously,
>would such an action be put on the overflow queue?

In this specific example, the Controller will request a container from the 
ContainerManager and buffer the request until it finally has capacity to 
execute it. All subsequent requests will be put on the same buffer and a 
Container will be requested for each of them. 

Whether we put this buffer in an overflow queue (aka persist it) remains to be 
decided. If we keep it in memory, we have roughly the same guarantees as today. 
As Rodric mentioned though, we can improve certain failure scenarios (like 
waiting for a container in this case) by making this buffer more persistent. 
I'm not mentioning Kafka here for a reason, because in this case any persistent 
buffer is just fine.

Also note that this is not necessarily the case of the overflow queue. The 
overflow queue is used for arbitrary requests once the ContainerManager cannot 
create more resources and thus requests need to wait.

The buffer I described above is a per action "invoke me once resources are 
available" buffer, that could potentially be designed to be per Controller to 
not have the challenge of scaling it out. That of course has its downsides in 
itself, for instance: A buffer that spans all controllers would enable 
work-stealing between controllers with missing capacity and could mitigate some 
of load-imbalances that Dominic mentioned. We are entering then the same area 
that his proposal enters: The need of a queue per action.
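[A toy model of that buffer, with assumed names rather than a real
implementation: invocations wait in a queue per action, a container is
requested for each buffered invocation, and the oldest waiting invocation
runs once a container arrives:

```scala
import scala.collection.mutable

// Per-action "invoke me once resources are available" buffer (sketch).
class ActionBuffer(requestContainer: String => Unit) {
  private val waiting = mutable.Map.empty[String, mutable.Queue[() => Unit]]

  def invoke(action: String, run: () => Unit): Unit = {
    waiting.getOrElseUpdate(action, mutable.Queue.empty[() => Unit]).enqueue(run)
    requestContainer(action) // ask the ContainerManager for capacity
  }

  // Called when the ContainerManager has produced a warm container.
  def containerAvailable(action: String): Unit =
    waiting.get(action).filter(_.nonEmpty).foreach(q => q.dequeue().apply())
}
```
]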

Conclusion is, we have 2 perspectives to look at this:

1. Do we need to persist an in-memory queue that waits for resources to be 
created by the ContainerManager?
2. Do we need a shared queue between the Controllers to enable work-stealing in 
cases where multiple Controllers wait for resources?
 
An important thing to note here: Since all of this is no longer happening on 
the critical path (stuff gets put on the queue only if it needs to wait for 
resources anyway), we can afford a solution that isn't as performant as Kafka 
might be. That could potentially open up the possibility to use a technology 
more geared towards Pub/Sub, where subscribers per action are cheaper to 
implement than on Kafka?

Does that make sense? Hope that helps :). Thanks for the questions!

Cheers,
Markus



Re: Proposal on a future architecture of OpenWhisk

2018-07-19 Thread Chetan Mehrotra
Hi Markus,

Currently one aspect which is not clear is: does the Controller have
access to

1. a pool of prewarm containers (containers of the base image where /init
has not yet been done, so they can be initialized within the
Controller), or
2. a pool of warm containers bound to a specific user+action, which
would have been initialized by the ContainerManager before it
allocates them to a Controller?

> The scaleup model stays exactly the same as today! If you have 200 
> simultaneous invocations (assuming a per-container concurrency limit of 1) we 
> will create 200 containers to handle that load (given the requests are truly 
> simultaneous --> arrive at the same time). Containers are NOT created in a 
> synchronous way and there's no need to sequentialize their creation. Does 
> something in the proposal hint to that? If so, we should fix that immediately.

Can you elaborate on this a bit more, i.e. how would the scale-up logic
work, and how is it asynchronous?

I think the above aspect (type of pool) has a bearing on the scale-up
logic. If an action was not in use so far, then when the first request
comes (i.e. the 0-to-1 scale-up case), would the Controller ask the
ContainerManager for a specific action container and then wait for its
setup before executing? Or, if it has a generic pool, does it take one,
initialize it, and use it? And if it's not done synchronously, would
such an action be put on the overflow queue?

Chetan Mehrotra

On Thu, Jul 19, 2018 at 2:39 PM Markus Thoemmes
 wrote:
>
> Hi Dominic,
>
> >Ah yes. Now I remember I wondered why OS doesn't support
> >"at-least-once"
> >semantic.
> >This is the question apart from the new architecture, but is this
> >because
> >of the case that user can execute the non-idempotent action?
> >So though an invoker is failed, still action could be executed and it
> >could
> >cause some side effects such as repeating the action which requires
> >"at-most-once" semantic more than once?
>
> Exactly. Once we pass the HTTP request into the container, we cannot know 
> whether the action has already caused a side-effect. At that point it's not 
> safe to retry (hence /run doesn't allow for retries vs. /init does) and in 
> doubt we need to abort.
> We could imagine the user to state idempotency of an action so it's safe for 
> us to retry, but that's a different can of worms and imho unrelated to the 
> architecture as you say.
>
> >BTW, how would long warmed containers be kept in the new
> >architecture? Is
> >it a 1 or 2 order of magnitude in seconds?
>
> I don't see a reason to change this behavior from what we have today. Could 
> be configurable and potentially be hours. The only concerns are:
> - Scale-down of worker nodes is inhibited if we keep containers around a long 
> time --> costs the vendor money
> - If the system is full with warm containers and we want to evict one to make 
> space for a different container, removing and recreating a container is more 
> expensive than just creating.
>
> >In the new architecture, concurrency limit is controlled by users in
> >a
> >per-action based way?
>
> That's not necessarily architecture related, but Tyson is implementing this, 
> yes. Note that this is "concurrency per container" not "concurrency per 
> action" (which could be a second knob to turn).
>
> In a nutshell:
> - concurrency per container: The amount of parallel HTTP requests allowed for 
> a single container (this is what Tyson is implementing)
> - concurrency per action: You could potentially limit the maximum amount of 
> concurrent invocations running for each action (which is distinct from the 
> above, because this could mean to limit the amount of containers created vs. 
> limiting the amount of parallel HTTP requests to a SINGLE container)
>
> >So in case a user wants to execute the long-running action, does he
> >configure the concurreny limit for the action?
>
> Long running isn't related to concurrency I think.
>
> >
> >And if concurrency limit is 1, in case action container is possessed,
> >wouldn't controllers request a container again and again?
> >And if it only allows container creation in a synchronous
> >way(creating one
> >by one), couldn't it be a burden in case a user wants a huge number
> >of(100~200) simultaneous invocations?
>
> The scaleup model stays exactly the same as today! If you have 200 
> simultaneous invocations (assuming a per-container concurrency limit of 1) we 
> will create 200 containers to handle that load (given the requests are truly 
> simultaneous --> arrive at the same time). Containers are NOT created in a 
> synchronous way and there's no need to sequentialize their creation. Does 
> something in the proposal hint to that? If so, we should fix that immediately.
>
> No need to apologize, this is great engagement, exactly what we need here. 
> Keep it up!
>
> Cheers,
> Markus
>


Re: Proposal on a future architecture of OpenWhisk

2018-07-19 Thread Rodric Rabbah
Hi Mark

This is precisely captured by the serverless contract article I published 
recently:

https://medium.com/openwhisk/the-serverless-contract-44329fab10fb

Queue, reject, or add capacity as three potential resolutions under load. 

-r

> On Jul 18, 2018, at 8:16 AM, Martin Gencur  wrote:
> 
> Hi Markus,
> thinking about scalability and the edge case. When there are not enough 
> containers and new controllers are being created, and all of them redirect 
> traffic to the controllers with containers, doesn't it mean overloading the 
> available containers a lot? I'm curious how we throttle the traffic in this 
> case.
> 
> I guess the other approach would be to block creating new controllers when 
> there are no containers available as long as we don't want to overload the 
> existing containers. And keep the overflowing workload in Kafka as well.
> 
> Thanks,
> Martin Gencur
> QE, Red Hat


Re: Proposal on a future architecture of OpenWhisk

2018-07-19 Thread Rodric Rabbah


> On Jul 17, 2018, at 4:49 AM, Markus Thoemmes  
> wrote:
> 
> The design proposed does not intent to change the way we provide 
> oberservibility via persisting activation records.

It is worth considering how we can provide observability for activations in 
flight. As it stands today, as a user you get to see when the action has 
finished (if we persist the activation record successfully). But given an 
activation id you cannot otherwise query the status: either the record exists, 
or it is not found.

-r

Re: Proposal on a future architecture of OpenWhisk

2018-07-19 Thread Rodric Rabbah
Regarding at least or at most once: 

Functions should be stateless and the burden for external side effects is on 
the action anyway... so with these in mind it's plausible that we contemplate 
shifting modes (a la Lambda). There are cases, though, where retrying is 
safer: in-flight requests which are lost before they reach the container's 
HTTP endpoint, and failures to assign a container.

-r

Re: Proposal on a future architecture of OpenWhisk

2018-07-19 Thread Markus Thoemmes
Hi Dominic,

>Ah yes. Now I remember I wondered why OS doesn't support
>"at-least-once"
>semantic.
>This is the question apart from the new architecture, but is this
>because
>of the case that user can execute the non-idempotent action?
>So though an invoker is failed, still action could be executed and it
>could
>cause some side effects such as repeating the action which requires
>"at-most-once" semantic more than once?

Exactly. Once we pass the HTTP request into the container, we cannot know 
whether the action has already caused a side-effect. At that point it's not 
safe to retry (hence /run doesn't allow for retries vs. /init does) and in 
doubt we need to abort.
We could imagine the user to state idempotency of an action so it's safe for us 
to retry, but that's a different can of worms and imho unrelated to the 
architecture as you say.
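[A hedged sketch of that retry rule: /init may be retried (no user code side
effects yet), /run must not be, because the request may already have caused a
side effect inside the action. The Boolean success result is a
simplification:

```scala
// Retry /init on failure, never retry /run (at-most-once).
def callContainer(endpoint: String, attempt: () => Boolean, maxRetries: Int = 3): Boolean =
  endpoint match {
    case "/init" => (1 to maxRetries).exists(_ => attempt()) // idempotent: retry
    case _       => attempt()                                // "/run": no retry
  }
```
]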

>BTW, how would long warmed containers be kept in the new
>architecture? Is
>it a 1 or 2 order of magnitude in seconds?

I don't see a reason to change this behavior from what we have today. Could be 
configurable and potentially be hours. The only concerns are: 
- Scale-down of worker nodes is inhibited if we keep containers around a long 
time --> costs the vendor money
- If the system is full with warm containers and we want to evict one to make 
space for a different container, removing and recreating a container is more 
expensive than just creating.

>In the new architecture, concurrency limit is controlled by users in
>a
>per-action based way?

That's not necessarily architecture related, but Tyson is implementing this, 
yes. Note that this is "concurrency per container" not "concurrency per action" 
(which could be a second knob to turn).

In a nutshell:
- concurrency per container: The amount of parallel HTTP requests allowed for a 
single container (this is what Tyson is implementing)
- concurrency per action: You could potentially limit the maximum amount of 
concurrent invocations running for each action (which is distinct from the 
above, because this could mean to limit the amount of containers created vs. 
limiting the amount of parallel HTTP requests to a SINGLE container)
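[Illustrative only, with made-up names: the two distinct knobs described
above could be modelled as:

```scala
// Two separate limits: requests into ONE container vs. containers overall.
case class ConcurrencyLimits(
  perContainer: Int,      // parallel HTTP requests allowed against a SINGLE container
  perAction: Option[Int]) // optional cap on total concurrent invocations of the
                          // action, i.e. a cap on how many containers get created
```
]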

>So in case a user wants to execute the long-running action, does he
>configure the concurreny limit for the action?

Long running isn't related to concurrency I think.

>
>And if concurrency limit is 1, in case action container is possessed,
>wouldn't controllers request a container again and again?
>And if it only allows container creation in a synchronous
>way(creating one
>by one), couldn't it be a burden in case a user wants a huge number
>of(100~200) simultaneous invocations?

The scaleup model stays exactly the same as today! If you have 200 simultaneous 
invocations (assuming a per-container concurrency limit of 1) we will create 
200 containers to handle that load (given the requests are truly simultaneous 
--> arrive at the same time). Containers are NOT created in a synchronous way 
and there's no need to sequentialize their creation. Does something in the 
proposal hint to that? If so, we should fix that immediately.

No need to apologize, this is great engagement, exactly what we need here. Keep 
it up!

Cheers,
Markus