> How could a developer understand how many requests per container to set

James, this is a good point, along with the other points in your email.

I think the developer doesn't actually need to know this. What stops
OpenWhisk from being smart about observing the response times, CPU
consumption, and memory consumption of the running containers? That way it
could automatically learn how many concurrent requests one action can
handle. It might be easier to solve this problem efficiently than the other
problem, which pushes the entire system to its limits when a couple of
actions get a lot of traffic.
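
To make that concrete, here is a rough sketch in Scala (since that's what
OpenWhisk is written in) of the kind of feedback loop I mean. To be clear,
this is not actual OpenWhisk code; the names, the latency thresholds, and
the AIMD-style policy are all my own assumptions:

    // Hypothetical sketch: learn a per-action concurrency limit from
    // observed latencies instead of asking the developer to pick one.
    // Policy is AIMD (additive increase, multiplicative decrease), the
    // same idea TCP uses for congestion control.
    object ConcurrencyController {

      final case class State(limit: Int, baselineMillis: Double)

      def update(state: State, observedMillis: Double, maxLimit: Int = 128): State = {
        // How far observed latency has drifted from the single-request baseline.
        val degradation = observedMillis / state.baselineMillis
        if (degradation > 1.5)
          // Latency blew up: back off sharply by halving the limit.
          state.copy(limit = math.max(1, state.limit / 2))
        else if (degradation < 1.1 && state.limit < maxLimit)
          // Latency is healthy: probe one step higher.
          state.copy(limit = state.limit + 1)
        else
          // In the grey zone: hold steady.
          state
      }

      def main(args: Array[String]): Unit = {
        // Simulate a container whose latency degrades once concurrency passes ~8.
        def simulatedLatency(limit: Int): Double =
          if (limit <= 8) 100.0 else 100.0 * (limit - 7)

        var s = State(limit = 1, baselineMillis = 100.0)
        for (step <- 1 to 20) {
          s = update(s, simulatedLatency(s.limit))
          println(s"step $step -> limit=${s.limit}")
        }
      }
    }

Run against that simulated container, the limit climbs additively, backs
off multiplicatively when latency degrades, and oscillates around the
container's real capacity, which is exactly the number we would otherwise
be asking the developer to guess.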



On Mon, Jul 3, 2017 at 10:08 AM James Thomas <jthomas...@gmail.com> wrote:

> +1 on Markus' points about "crash safety" and "scaling". I can understand
> the reasons behind exploring this change but from a developer experience
> point of view this introduces a large amount of complexity to the
> programming model.
>
> If I have a concurrent container serving 100 requests and one of the
> requests triggers a fatal error how does that affect the other requests?
> Tearing down the entire runtime environment will destroy all those
> requests.
>
> How could a developer understand how many requests per container to set
> without a manual trial-and-error process? It also means you have to start
> considering things like race conditions and other challenges of concurrent
> code execution, which also makes debugging and monitoring more challenging.
>
> Looking at the other serverless providers, I've not seen this feature
> requested before. Developers generally ask AWS to raise the concurrent
> invocations limit for their application. This leaves the platform doing the
> hard task of managing resources efficiently and allows developers to keep
> the same programming model.
>
> On 2 July 2017 at 11:05, Markus Thömmes <markusthoem...@me.com> wrote:
>
> > ...
> >
> > To Rodric's points I think there are two topics to speak about and
> > discuss:
> >
> > 1. The programming model: The current model encourages users to break
> > their actions apart into "functions" that take payload and return
> > payload. Having a deployment model as outlined could, as noted, encourage
> > users to use OpenWhisk as a way to rapidly deploy/undeploy their usual
> > webserver-based applications. The current model is nice in that it solves
> > a lot of problems for the customer in terms of scalability and "crash
> > safety".
> >
> > 2. Raw throughput of our deployment model: Setting the concerns aside, I
> > think it is valid to explore concurrent invocations of actions on the
> > same container. This does not necessarily mean that users start to deploy
> > monolithic apps as noted above, but it certainly could. Keeping our
> > JSON-in/JSON-out model, at least for now, could encourage users to
> > continue to think in functions. Having a per-action toggle which is
> > disabled by default might be a good way to start here, since many users
> > might need to change action code to support that notion and for some
> > applications it might not be valid at all. I think it was also already
> > noted that this imposes some of the "old-fashioned" problems on the
> > user, like: how many concurrent requests will my action be able to
> > handle? That kinda defeats the seamless-scalability point of serverless.
> >
> > Cheers,
> > Markus
> >
> >
> --
> Regards,
> James Thomas
>
