Hey Dave,
  Hopefully Nick will be able to offer some insight into the cause of
your issues.  I'd guess it's something related to having very few
tasks (just one) in the queue, so that lone task doesn't get scheduled
promptly.

  In your case, you could use pull queues to fetch the next task
immediately after finishing the current one, or even to fetch multiple
tasks and do the work in parallel.  Basically you'd have a backend
running a loop (possibly initiated via a push task) that leases a
task, or several tasks, from the pull queue, does the work, deletes
those tasks, then repeats from the lease stage.  The cool thing is
that if you're, for example, using URL Fetch to pull data, this might
let you do the fetches in parallel without increasing your costs much
(if at all).
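
Here's a rough Python sketch of the kind of loop I mean; the queue
name and the fetch_url_for()/process() helpers are just placeholders,
not anything from your app:

    from google.appengine.api import taskqueue, urlfetch

    def worker_loop():
        # 'overnight-pull' stands in for a queue declared with
        # "mode: pull" in queue.yaml.
        queue = taskqueue.Queue('overnight-pull')
        while True:
            # Lease up to 10 tasks for 5 minutes; an empty list means
            # nothing is waiting right now.
            tasks = queue.lease_tasks(300, 10)
            if not tasks:
                break

            # Kick off the URL fetches asynchronously so they run in
            # parallel rather than one after another.
            rpcs = []
            for task in tasks:
                rpc = urlfetch.create_rpc(deadline=60)
                urlfetch.make_fetch_call(rpc, fetch_url_for(task.payload))
                rpcs.append(rpc)

            # Collect the results and do the per-task work.
            for task, rpc in zip(tasks, rpcs):
                result = rpc.get_result()
                process(task.payload, result.content)

            # Delete only after the work succeeds; on a failure the
            # lease simply expires and the tasks become leasable again.
            queue.delete_tasks(tasks)

On the producer side you'd enqueue with method='PULL', e.g.
taskqueue.Queue('overnight-pull').add(taskqueue.Task(payload=data,
method='PULL')), and a push task or cron entry would kick off
worker_loop() on the backend.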

Robert

On Wed, Feb 1, 2012 at 14:25, Dave Loomer <dloo...@gmail.com> wrote:
> Here are logs from three consecutive task executions over the past weekend,
> with only identifying information removed. You'll see that each task
> completes in a few milliseconds, but they are 20 seconds apart (remember: I've
> already checked my queue configurations, nothing else is running on this
> backend, and I later solved the problem by setting countdown=1 when adding
> the task).  I don't see any pending latency mentioned.
>
> 0.1.0.2 - - [27/Jan/2012:18:33:20 -0800] 200 124 ms=10 cpu_ms=47
> api_cpu_ms=0 cpm_usd=0.000060 queue_name=overnight-tasks
> task_name=15804554889304913211 instance=0
> 0.1.0.2 - - [27/Jan/2012:18:33:00 -0800] 200 124 ms=11 cpu_ms=0 api_cpu_ms=0
> cpm_usd=0.000060 queue_name=overnight-tasks task_name=15804554889304912461
> instance=0
> 0.1.0.2 - - [27/Jan/2012:18:32:41 -0800] 200 124 ms=26 cpu_ms=0 api_cpu_ms=0
> cpm_usd=0.000060 queue_name=overnight-tasks task_name=4499136807998063691
> instance=0
>
>
> The 20-second gap seems to happen regardless of task length. Even though my
> tasks mostly complete in a couple of minutes, I do have cases where they take
> several minutes, and I don't see a difference. Of course, when a task takes
> 5-10 minutes to complete, I'm going to notice and care about a 20-second
> delay much less than when I'm trying to spin through a few tasks in a minute
> (which is a real-world need for me as well).
>
> When reading up on pull queues a while back, I was a little confused about
> where I would use them with my own backends. I definitely could see an
> application for offloading work to an AWS Linux instance. But in either
> case, could you explain why it might help?
>
> I saw you mention in a separate thread how M/S can perform differently from
> HRD even in cases where one wouldn't expect to see a difference. When I get
> around to it I'm going to create a tiny HRD app and run the same tests
> through that.
>
> I also wonder if M/S could be responsible for frequent latencies in my admin
> console. Those have gotten more frequent and annoying the past couple of
> months ...
