Well, Go actions can be precompiled using the same Docker image that is used
for execution, so if you can implement your action in Go, you get the
"prewarming" feature out of the box.

Also, since Go programs can embed static assets, you do not have to import
anything at runtime: it is all in memory.
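
For example, a minimal sketch (assuming the actionloop Go convention of a
Main function that takes and returns a map[string]interface{}, and a
web-action style response; everything here is illustrative):

    package main

    // The page is embedded in the binary as a string constant,
    // so serving it touches neither the filesystem nor the network.
    const indexHTML = `<html><body><h1>Hello from an action</h1></body></html>`

    // Main is the entry point the actionloop Go runtime invokes.
    func Main(args map[string]interface{}) map[string]interface{} {
        return map[string]interface{}{
            "headers": map[string]interface{}{"Content-Type": "text/html"},
            "body":    indexHTML,
        }
    }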

I am trying to put together a demo (I wish I had more time these days) of a
"web site in an action", with multiple pages, a template, and markdown
content, everything embedded in a single executable and deployed as a single
binary in OpenWhisk, all using the already available actionloop runtime...
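
For instance, the multi-page part could look roughly like this (same assumed
Main convention as above; the markdown rendering step is elided and the page
bodies are placeholders):

    package main

    import (
        "bytes"
        "html/template"
    )

    // One layout template, compiled into the binary.
    var layout = template.Must(template.New("page").Parse(
        `<html><head><title>{{.Title}}</title></head><body>{{.Body}}</body></html>`))

    // pages maps a "page" parameter to its pre-rendered HTML body.
    // In the real demo these would come from embedded markdown files.
    var pages = map[string]template.HTML{
        "index": "<h1>Home</h1><p>Served entirely from memory.</p>",
        "about": "<h1>About</h1><p>No imports, no downloads.</p>",
    }

    func Main(args map[string]interface{}) map[string]interface{} {
        name, _ := args["page"].(string)
        body, ok := pages[name]
        if !ok {
            name, body = "index", pages["index"]
        }
        var buf bytes.Buffer
        if err := layout.Execute(&buf, map[string]interface{}{
            "Title": name, "Body": body,
        }); err != nil {
            return map[string]interface{}{"error": err.Error()}
        }
        return map[string]interface{}{
            "headers": map[string]interface{}{"Content-Type": "text/html"},
            "body":    buf.String(),
        }
    }

The whole thing compiles to one static binary, so there is nothing to fetch
or initialize at cold start.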

I dare say it is hard to do the same in NodeJS, because by design it works as
a just-in-time compiled runtime. Here compiled languages have an edge, and
this is a good use case for Go (or Rust, or the other compiled languages that
the actionloop images actually support).

Unless we follow the route of GraalVM, which promises to be able to compile
NodeJS and Python as well...

-- 
  Michele Sciabarra
  openwh...@sciabarra.com

On Thu, May 31, 2018, at 4:29 PM, James Thomas wrote:
> From speaking to external developers about this, people seem happy to pay
> for this feature.
> 
> On 31 May 2018 at 12:34, Nick Mitchell <moose...@gmail.com> wrote:
> 
> > for nodejs at least: the cost of a few requires of common packages can
> > easily get you up to the 150-200ms range (e.g. request is a big hitter; and
> > this is all on top of the cost of starting a container!). perhaps, for
> > nodejs at least, there are only a few options, ultimately: user pays more
> > for idle resources; provider pays more for idle stem cells; or users take a
> > very hard line on the modules they import.
> >
> > switching to other (compiled) runtimes might help, e.g. with the recent
> > work on precompiled go and swift actions? we'd still be left with the
> > container start times, but at least this is something we can control, e.g.
> > by requiring users to pay more for access to a larger prewarmed pool?
> >
> > nick
> >
> >
> > On Thu, May 31, 2018 at 7:22 AM, James Thomas <jthomas...@gmail.com>
> > wrote:
> >
> > > One of most frequent complaints[1][2][3] I hear from developers using
> > > serverless platforms is coping with cold-start latency when dealing with
> > > sudden bursts of traffic.
> > >
> > > Developers often ask for a feature where they can set the number of warm
> > > containers kept in the cache for a function. This would allow them to keep
> > > a higher number of warm containers for applications with bursty traffic
> > > and/or upgrade the cached number prior to an anticipated burst of traffic
> > > arriving. This would be exposed by the managed platforms as a chargeable
> > > feature.
> > >
> > > Is this something we could support on OpenWhisk? Ignoring the complexity
> > > and feasibility of any solution, from a developer POV I can imagine having
> > > an action annotation `max-warm` which would set the maximum number of warm
> > > containers allowed in the cache.
> > >
> > > Tyson is currently working on concurrent activation processing, which is
> > > one approach to reducing cold-start delays[4]. However, there are some
> > > downsides to concurrent activations, like no runtime isolation for request
> > > processing, which might make this feature inappropriate for some users.
> > >
> > > [1] https://www.reddit.com/r/aws/comments/6w1hip/how_many_successive_lambda_invocations_will_use_a/
> > > [2] https://twitter.com/search?f=tweets&vertical=default&q=%20%23AWSWishlist%20warm&src=typd
> > > [3] https://theburningmonk.com/2018/01/im-afraid-youre-thinking-about-aws-lambda-cold-starts-all-wrong/
> > > [4] https://github.com/apache/incubator-openwhisk/pull/2795
> > >
> > > --
> > > Regards,
> > > James Thomas
> > >
> >
> 
> 
> 
> -- 
> Regards,
> James Thomas
