Thanks for the explanation, Jon (sorry, I had used John before).

Hopefully you all can continue to explore how TQ tasks can be managed
separately by The Scheduler** for better instance optimization.

cheers,
stevep

**Caps pun intended: http://www.imdb.com/title/tt0113762/

On Sep 12, 10:32 am, Jon McAlister <jon...@google.com> wrote:
> Backends are one good way to do this. You can direct tasks at a
> backend, and then control the number of instances for that backend
> directly:
> http://code.google.com/appengine/docs/python/backends/overview.html
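>
> A minimal sketch of what that might look like (assuming a backend
> named "worker" defined in backends.yaml; the handler path and queue
> name below are made up):
>
>     from google.appengine.api import backends, taskqueue
>
>     def enqueue_crunch_work(payload):
>         # Route the task to the assumed "worker" backend by pointing
>         # its Host header at that backend's hostname.
>         taskqueue.add(
>             url='/tasks/crunch',
>             queue_name='crunch-queue',
>             params={'payload': payload},
>             headers={'Host': backends.get_hostname('worker')})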
>
> Once the billing rollout is complete, the treatment of tasks and
> non-task requests, regardless of their latency, will become pretty
> much the same. The scheduler will try to find an instance for them.
> The only difference for tasks is that by default the scheduler will
> accept a bit more pending latency (proportional to the average request
> latency for that queue) than it would for non-task requests. The "1s
> rule" (although in reality, it was much more nuanced) will be removed,
> the app, regardless of request latencies, will be able to get as many
> instances as it wants (and can pay for). If you want to limit the
> throughput of a task (to limit the number of instances it turns up),
> use the queue configuration to do so:
> http://code.google.com/appengine/docs/python/config/queue.html#Queue_...
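>
> To make that concrete, a rough sketch (queue name and numbers are
> illustrative only): if "slow-queue" is defined in queue.yaml with a
> low rate, say rate: 5/s and bucket_size: 5, then tasks added to it
> are drained at no more than ~5/s, which bounds how many instances
> those tasks can demand:
>
>     from google.appengine.api import taskqueue
>
>     def enqueue_throttled(work_id):
>         # "slow-queue" is assumed to be configured with a low rate in
>         # queue.yaml, so the scheduler never sees more than a few of
>         # these task requests per second.
>         taskqueue.add(
>             url='/tasks/process',
>             queue_name='slow-queue',
>             params={'work_id': work_id})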
>
> On Sat, Sep 10, 2011 at 10:41 AM, Robert Kluin <robert.kl...@gmail.com> wrote:
> > I'd very much like to know how long-running (over 1000ms) requests are
> > treated by the new scheduler as well.  Previously I believe they were
> > basically ignored, and hence would not cause new instances to be spun
> > up.
>
> > And yes, I would very much like to have control over how task queues
> > are treated with regard to the scheduler. We've currently got the
> > fail-fast header (X-AppEngine-FailFast), which helps quite a bit.
> > But I'd really love to let my queues spin up new instances once the
> > latency hits a certain point while always serving user requests with
> > high priority.
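>
> > For anyone curious, here's roughly how we set that header (sketch
> > only; the URL and queue name are made up):
> >
> >     from google.appengine.api import taskqueue
> >
> >     def enqueue_fail_fast(params):
> >         # X-AppEngine-FailFast asks the system not to wait for a busy
> >         # instance to free up; if no instance is available the task
> >         # request fails and is retried later.
> >         taskqueue.add(
> >             url='/tasks/background-work',
> >             queue_name='background',
> >             params=params,
> >             headers={'X-AppEngine-FailFast': 'true'})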
>
> > Robert
>
> > On Sat, Sep 10, 2011 at 12:04, stevep <prosse...@gmail.com> wrote:
> >> +1. However, please include sub-second tasks.
>
> >> Just today I was looking at my logs/appstats. I have a client "new
> >> record" write function that puts three separate new kinds, and it
> >> runs consistently at 250-300 ms per HR put(). These puts occur
> >> serially: the first in my on-line handler, the second in a high-rate/
> >> high-token task queue, the third in a low-rate/low-token queue. It is
> >> fine if the second and third puts occur minutes after the first, and
> >> it seems much better than a 750 ms on-line handler function.
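>
> >> Roughly what that handler does (a sketch with made-up names; the
> >> first put runs inline, the other two kinds are handed off to the
> >> two queues):
> >>
> >>     from google.appengine.api import taskqueue
> >>
> >>     def handle_new_record(first_entity, second_params, third_params):
> >>         # First put happens in the on-line handler (~250-300 ms).
> >>         first_entity.put()
> >>         # The other two puts are deferred to task queues so the
> >>         # handler returns quickly; they can land minutes later.
> >>         taskqueue.add(url='/tasks/put-second',
> >>                       queue_name='high-rate',
> >>                       params=second_params)
> >>         taskqueue.add(url='/tasks/put-third',
> >>                       queue_name='low-rate',
> >>                       params=third_params)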
>
> >> Looking at my logs, nearly every write I do for indexed kinds is in
> >> this ballpark for latency. Only one on-line handler task is up around
> >> 500 ms because I have to do two puts in it. Everything else is 300 ms
> >> or less. So I am very happy with this setup. The recent thread where
> >> Brandon/John analyzed high instance rates shows what might happen if
> >> average latency viewed by the scheduler is skewed by a few very high
> >> latency functions. (Fortunately for my read/query/write client needs,
> >> I can avoid big OLH functions, but it is a serious design challenge.)
> >> However, the downside right now is that I do not know how the Task
> >> Queue scheduler interacts with the Instance Scheduler.
>
> >> My imagined ideal would be for developers to eventually be able to
> >> specify separate TQ instances (I believe Robert K. asked for this when
> >> he suggested TQ calls could be made to a separate version). The
> >> Scheduler for these separate TQ instances would need to analyze
> >> cumulative pending queue tasks (I think the current TQ Scheduler does
> >> some of this), and only spawn new instances when the cumulative total
> >> exceeded a developer-set value -- which would allow values in minutes
> >> rather than seconds.
>
> >> thanks,
> >> stevep
>
> >> On Sep 10, 6:03 am, John <supp...@weespr.com> wrote:
> >>> I'd like to know what is the impact of tasks on the scheduler.
>
> >>> Obviously tasks can have very high latency (up to 10 minutes) while
> >>> using very little CPU (mostly I/O). What is their impact on the
> >>> scheduler, if any?
> >>> It would be nice to have some sample use cases showing how the
> >>> scheduler is supposed to react. For example, what about one task that
> >>> takes 1 minute, spawned every 1s, vs. every 10s, vs. every 1 min?
>
> >>> Since the tasks use very little CPU, technically an instance could
> >>> easily run 60 of them concurrently, so 1 qps with 1-minute tasks could
> >>> take only one instance. But I doubt the scheduler would spawn only one
> >>> instance.
>
> >>> App Engine team, any insights ?
>
> >>> Thanks
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
