That would be very useful. If you file a feature request, I will star it as
well.

On Tue, Jul 17, 2012 at 12:10 AM, Jeff Schnitzer <j...@infohazard.org>wrote:

> Hi Takashi.  I've read the performance settings documentation a dozen
> times and yet the scheduler behavior still seems flawed to me.
>
> Once a request is taken from the pending queue and sent to an instance
> (cold or otherwise), it's dedicated to execution on that instance.  In
> the queue, it can still be routed to any instance that becomes
> available.  Why would we *ever* want to send a request to a cold
> instance, which has an unknown and unpredictable response time?  If I
> were that request, I'd want to sit in the queue until a known-good
> instance becomes available.  Depending on the queue fill rate I might
> still end up waiting for an instance to come online... but there's
> also a good chance I'll get handled by an existing instance,
> especially if traffic is bursty.
>
> "the scheduler starts a new dynamic instance because it is really
> needed at that moment."  -- this is not an accurate characterization,
> because new instances don't provide immediate value.  They only
> provide value 5+ (sometimes 50+) seconds after they start.  In the
> mean time, they have captured and locked up user-facing requests which
> might have been processed by running instances much faster.
>
> The min latency setting is actually working against us here.  What I
> really want is a high (possibly infinite) minimum latency for moving
> items from pending queue to a cold instance, but a low minimum latency
> for warming up new instances.  I don't want requests waiting in the
> pending queue, but it does me no good to have them sent to cold
> instances.  I'd rather they wait in the queue until fresh instances
> come online.
>
> Jeff
>
> On Mon, Jul 16, 2012 at 1:15 PM, Takashi Matsuo <tmat...@google.com>
> wrote:
> >
> > Richard,
> >
> >> But Tom seems to think that "1" is an appropriate number for his app.
> >> Why offer that option if it's automatically wrong?
> >
> > If his purpose is to reduce the number of user-facing loading requests,
> > and he still sees many user-facing loading requests, the current setting
> > is not enough.
> >
> > Jeff,
> >
> >> I vaguely expect something like this:
> >>
> >>  * All incoming requests go into a pending queue.
> >>  * Requests in this queue are handed off to warm instances only.
> >>  * New instances can be started up based on (adjustable) depth of the
> >> pending queue.
> >>  * If there aren't enough instances to serve load, the pending queue
> >> will back up until more instances come online.
> >>
> >> Isn't this fairly close to the way appengine works?  What puzzles me
> >> is why requests would ever be removed from the pending queue and sent
> >> to a cold instance.  Even in Pythonland, 5-10s startup times are
> >> common.  Seems like the request is almost certainly better off waiting
> >> in the queue.
> >
> > Probably reading the following section would help you understand the
> > scheduler:
> > https://developers.google.com/appengine/docs/adminconsole/performancesettings#scheduler
> >
> > When a request comes in: if there's an available dynamic instance, it
> > will be handled by that dynamic instance. Otherwise, if there's an
> > available resident instance, it will be handled by that resident
> > instance. Otherwise it goes to the pending queue. From there it can be
> > sent to any instance that becomes available, at any time (fortunate for
> > it). Finally, according to the pending latency settings, it will be sent
> > to a new cold instance.
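
[Editor's note: the routing order described above can be sketched as follows.
This is a minimal illustration only; the class and function names are
hypothetical and are not App Engine's actual API.]

```python
from dataclasses import dataclass

@dataclass
class Instance:
    """Toy stand-in for an App Engine instance (hypothetical)."""
    name: str
    available: bool = True

    def handle(self, request):
        self.available = False  # busy while serving this request
        return f"{request} -> {self.name}"

def route_request(request, dynamic_instances, resident_instances, pending_queue):
    # 1. Prefer an available dynamic instance.
    for inst in dynamic_instances:
        if inst.available:
            return inst.handle(request)
    # 2. Otherwise fall back to an available resident instance.
    for inst in resident_instances:
        if inst.available:
            return inst.handle(request)
    # 3. Otherwise the request waits in the pending queue.  Any instance
    #    that frees up may still take it; only after it has waited longer
    #    than the minimum pending latency does the scheduler start a new
    #    (cold) instance for it.
    pending_queue.append(request)
    return None
```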
> >
> > So, if you prefer the pending queue to a cold instance, you can set a
> > high minimum pending latency; however, that might not be what you really
> > want, because it will hurt performance on subsequent requests.
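
[Editor's note: at the time of this thread these knobs lived in the Admin
Console's performance settings page; later App Engine releases expose them
under automatic_scaling in app.yaml. A sketch with illustrative values:]

```yaml
automatic_scaling:
  # Minimum time a request must wait in the pending queue before the
  # scheduler may start a new (cold) instance for it.  Raising this
  # keeps requests in the queue longer, as discussed above.
  min_pending_latency: 500ms
  # Maximum time a request may wait before a new instance must be started.
  max_pending_latency: automatic
```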
> >
> > Generally speaking, just looking at a statistic for a spiky event, you
> > might have the feeling that our scheduler could do better; however, the
> > difficult part is that the requests in that statistic were not issued at
> > a flat rate. In other words, the scheduler starts a new dynamic instance
> > because it is really needed at that moment.
> >
> > Well, again: in order to reduce the number of user-facing loading
> > requests, the most effective thing is to set a sufficient number of min
> > idle instances. The second thing to consider, if you have longer backend
> > tasks, would be putting those tasks into another version, in order to
> > avoid blocking other frontend requests. If you use the python27 runtime
> > with concurrent requests enabled, you had probably better isolate
> > CPU-bound operations from the user-facing frontend version in order to
> > avoid slowdowns in the frontend version.
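
[Editor's note: the two suggestions above (reserved idle instances, and a
separate version for long backend work) could look roughly like this in
app.yaml; values and version names are illustrative, not a recommendation.]

```yaml
# app.yaml for the user-facing frontend version
version: frontend
automatic_scaling:
  # Resident instances kept warm so user-facing requests rarely land
  # on a cold instance.
  min_idle_instances: 2

# Long-running backend tasks would be deployed as their own version
# (e.g. "version: batch" in a separate app.yaml) so they never tie up
# the frontend's instances.
```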
> >
> > It would probably be great if we offered an API for dynamically
> > configuring the performance settings, especially in terms of cost
> > efficiency, and I think it's worth filing a feature request.
> >
> > --
> > Takashi Matsuo
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "Google App Engine" group.
> > To post to this group, send email to google-appengine@googlegroups.com.
> > To unsubscribe from this group, send email to
> > google-appengine+unsubscr...@googlegroups.com.
> > For more options, visit this group at
> > http://groups.google.com/group/google-appengine?hl=en.
>


-- 
Best Regards,
Rerngvit Yanggratoke
