I am getting this error too, and I am not doing any URL fetches. I have
put a lot of logging throughout my application, and the error is raised
before any of my logging is hit. Here are the log entries:

    08-25 05:30AM 43.978 / 500 10011ms 0cpu_ms 0kb Mozilla/5.0 (Windows; U;
    Windows NT 6.1; en-US) AppleWebKit/534.6 (KHTML, like Gecko)
    Chrome/6.0.495.0 Safari/534.6,gzip(gfe),gzip(gfe)

    89.21.226.10 - - [25/Aug/2010:05:30:53 -0700] "GET / HTTP/1.1" 500 0 -
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.6
    (KHTML, like Gecko) Chrome/6.0.495.0 Safari/534.6,gzip(gfe),gzip(gfe)"
    "www.theautomatedtester.co.uk" ms=10012 cpu_ms=0 api_cpu_ms=0
    cpm_usd=0.000056

    W 08-25 05:30AM 53.989
    Request was aborted after waiting too long to attempt to service your
    request. This may happen sporadically when the App Engine serving
    cluster is under unexpectedly high or uneven load. If you see this
    message frequently, please contact the App Engine team.



My app does a quick read from the Datastore to work out what it needs
to do from the URL and then carries on. My datastore is tiny (680kb
according to the datastore statistics), so I don't think the problem is
due to me hitting the datastore.
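
For illustration, the handler is roughly this shape (the model and handler
names below are placeholders, not the actual code); logging at the very top
of the handler is what shows the request never reaches application code:

    import logging
    from google.appengine.ext import db
    from google.appengine.ext import webapp

    class SiteConfig(db.Model):
        # hypothetical entity holding per-URL configuration
        handler_name = db.StringProperty()

    class MainHandler(webapp.RequestHandler):
        def get(self):
            # If this line never appears in the logs, the request was
            # aborted before any application code ran.
            logging.info("request started: %s", self.request.path)
            config = SiteConfig.get_by_key_name(self.request.path)
            logging.info("config loaded: %r", config)
            # ... dispatch based on config and carry on ...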

David

On Aug 26, 12:03 pm, "Jan Z/ Hapara" <jan.zawad...@gmail.com> wrote:
> A mix.  We have a work package that is pretty exclusively async and
> another just using gdata (with zero async).
>
> There is no discernible difference, performance-wise.
>
> J
>
> On Aug 26, 10:46 pm, Tim Hoffman <zutes...@gmail.com> wrote:
>
> > Are you using async urlfetches?
>
> > T
>
> > On Aug 26, 6:03 pm, "Jan Z/ Hapara" <jan.zawad...@gmail.com> wrote:
>
> > > Tim, your app is probably not doing much urlfetching?
>
> > > We have an app (h-script) that performs several urlfetch operations
> > > per task.  We are talking to other Google services only.  On a good
> > > day, each task takes between 3 and 10 seconds.  This goes up to 20
> > > seconds sometimes.  The CPU time for the tasks is minimal - 60-100
> > > ms.  It's all spent in urlfetch.
>
> > > This should present GAE with no problem - we time out gracefully, etc.
>
> > > Trying to run this full bore (several queues, 50/s) results in 98+%
> > > task failure rate.
>
> > > Running across 5 queues and at 6/m drops that to about 30% failure
> > > (and MUCH longer processing time obviously).  With the back-off
> > > kicking in, some tasks end up running all day.  I understand the 1sec
> > > rule, but it supposedly does NOT apply to tasks.  And with a
> > > dependence on urlfetch, there is no way to optimize to that level.
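
(For anyone trying to reproduce this kind of setup: spreading the work across
several named queues at a low rate is configured in queue.yaml, roughly as in
the sketch below. The queue names are illustrative, not taken from this
thread.)

    queue:
    - name: fetch-queue-1
      rate: 6/m
      bucket_size: 1
    - name: fetch-queue-2
      rate: 6/m
      bucket_size: 1
    # ... repeat for the remaining queues ...
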
>
> > > As it stands, there is either a bug in there somewhere triggering a
> > > clampdown erroneously, or a limiter Google has not documented...  In
> > > either case, it makes it very difficult to do certain things that GAE
> > > should be awesome at.  I would guess your use case is different?
>
> > > J
>
> > > On Aug 26, 8:24 pm, Tim Hoffman <zutes...@gmail.com> wrote:
>
> > > > Unfortunately you haven't provided a great deal of information about
> > > > what you are doing in your tasks.
>
> > > > My guess is whatever you are running in the task is just taking too
> > > > long.
>
> > > > Queues will retry unless you exit cleanly, and each failed task's time
> > > > between retries will increase.
>
> > > > You should put some logging in your task to see where it is getting
> > > > to. Then you will have some idea of where the bottleneck is.
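
(A minimal sketch of that kind of instrumentation, assuming a plain webapp
task handler; the handler name and log messages are illustrative only:)

    import logging
    import time
    from google.appengine.ext import webapp

    class FetchTask(webapp.RequestHandler):
        def post(self):
            start = time.time()
            logging.info("task started")
            # ... urlfetch / datastore work goes here ...
            logging.info("task finished after %.1fs", time.time() - start)
            # Returning a 2xx status marks the task as done; an unhandled
            # exception (or non-2xx status) makes the queue retry it with
            # increasing back-off.
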
>
> > > > The task queue subsystem is, in my experience, quite robust and
> > > > reliable if you code your tasks taking its limitations into account.
>
> > > > Regards
>
> > > > Tim
>
> > > > On Aug 26, 2:06 pm, "Jan Z/ Hapara" <jan.zawad...@gmail.com> wrote:
>
> > > > > Not relevant unfortunately.
>
> > > > > There is no try / catch here - when this happens, your code doesn't
> > > > > even start executing - GAE just fails it outright, and worse yet, the
> > > > > back-off algorithm kicks in so after a few generations the whole thing
> > > > > degenerates into uselessness (I've seen tasks that fail with this
> > > > > error 23 times in a row)
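
(One partial mitigation, assuming the task does eventually get to run: the
queue passes the current retry count in the X-AppEngine-TaskRetryCount
header, so the handler can log and give up after too many attempts. The
fragment below would sit at the top of the task handler's post() method;
the cut-off is arbitrary, purely for illustration:)

    retries = int(self.request.headers.get('X-AppEngine-TaskRetryCount', 0))
    if retries > 10:  # arbitrary cut-off
        logging.error("giving up after %d retries", retries)
        return  # a normal 200 response stops the retry loop
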
>
> > > > > Jan
>
> > > > > On Aug 17, 6:58 am, Alon Carmel <a...@aloncarmel.me> wrote:
>
> > > > > > When you fetch data from another server you use the urlfetch
> > > > > > service, and the urlfetch service has its own limits on requests.
> > > > > > When you rely on external servers, they carry a load you cannot
> > > > > > predict from your end. So if an external service suddenly has some
> > > > > > latency, and is also under high database load, your tasks will
> > > > > > eventually fail.
>
> > > > > > try
>
> > > > > > catch
>
> > > > > > :)
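
(A sketch of that idea with the App Engine urlfetch API; the URL, deadline
and log message are placeholders:)

    import logging
    from google.appengine.api import urlfetch

    try:
        result = urlfetch.fetch("http://example.com/api", deadline=10)
        # ... process result.content here ...
    except urlfetch.Error:
        # covers timeouts, connection failures and the like; handle it
        # here instead of letting the whole task die and be retried
        logging.warning("urlfetch failed")
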
>
> > > > > > -
> > > > > > Cheers,
>
> > > > > > def AlonCarmel(request):
> > > > > >      import simplejson as json
> > > > > >      contact = {}
> > > > > >      contact['email'] = '....@aloncarmel.me'
> > > > > >      contact['twitter'] = '@aloncarmel'
> > > > > >      contact['web'] = 'http://aloncarmel.me'
> > > > > >      contact['phone'] = '+972-54-4860380'
> > > > > >      return HttpResponse(json.dumps(contact))
>
> > > > > > On Sat, Aug 14, 2010 at 8:28 PM, Dmitry <dmitry.lukas...@gmail.com> 
> > > > > > wrote:
> > > > > > > Hi app team!
>
> > > > > > > "Request was aborted after waiting too long to attempt to service 
> > > > > > > your
> > > > > > > request. This may happen sporadically when the App Engine serving
> > > > > > > cluster is under unexpectedly high or uneven load. If you see this
> > > > > > > message frequently, please contact the App Engine team."
>
> > > > > > > What may cause this error? I'm getting it quite often (maybe
> > > > > > > every minute) when using the task queue. The queue rate isn't
> > > > > > > very high (5-7 simultaneous requests, ~10 sec on average). The
> > > > > > > task just fetches data from another server and processes it.
>
> > > > > > > thx
>
