(what I was getting at with the previous post is: if any of you need this
for your day jobs, you are welcome to try the same yourselves in the
meantime ;) )
On Wednesday, April 29, 2015 at 11:56:52 AM UTC-5, Dave Loomer wrote:
All: I won't have time to do this in the next several hours (my app engine
app is not my day job), but I think tonight I'll try changing the
api.twitter.com endpoint to point to a URL I own, and then examine the URLs
/ query strings and headers received from my prod vs. dev code to see if
the
I am reaching similar conclusions as David (I am @kidneybingos on that
Twitter dev thread). I have tried lots of punctuation characters, and most
succeed, but the following always cause errors:
' (apostrophe)
!
*
(
)
My conclusion is similarly that urlfetch in prod is encoding something
differently, somewhere in the urlfetch code which we can't debug.
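To illustrate what "encoding something differently" means here: OAuth 1.0a signature base strings require the RFC 3986 unreserved set, so all five of those characters must be percent-encoded. A quick check in plain Python (Python 3 here just for illustration; nothing App Engine-specific) shows the encodings the request would need to preserve:

```python
from urllib.parse import quote

def rfc3986_encode(s):
    # safe='' forces everything outside A-Za-z0-9-._~ to be percent-encoded,
    # which is what an OAuth 1.0a signature base string requires
    return quote(s, safe='')

for ch in "'!*()":
    print(ch, '->', rfc3986_encode(ch))
# ' -> %27, ! -> %21, * -> %2A, ( -> %28, ) -> %29
```

If prod urlfetch leaves any of these unencoded (or encodes them after the signature is computed), the signature Twitter recomputes won't match.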
On Wednesday, April 29, 2015 at 11:46:38 AM UTC-5, Dave Loomer wrote:
I am reaching similar conclusions as David (I am @kidneybingos on that
Twitter dev thread). I have tried lots of punctuation characters, and most
succeed, but the following always
Interesting. Perhaps the difference is that I tested only GET and not POST
(still, I get behavioral differences between dev and test with GET, even if the
request looks the same in my tests).
Or, maybe your method uncovers something that mine does not. Could you try a
GET (maybe using the
with pretty much any tweet content and
username. i reproduced it just now by attempting to tweet foo (just those
three characters) as @schnarfed. sigh.
On Wednesday, April 29, 2015 at 9:58:38 AM UTC-7, Dave Loomer wrote:
(what I was getting at with the previous post is: if any of you need
Same here.
Why is there no system status dashboard for admin infrastructure?
The dashboard seems to ignore things like deployment errors, problems
loading task queue pages in the admin console, etc. that seem to plague a
lot of us regularly. I know there's an issue for it someplace, just too
lazy
Issues occurring over the past several hours:
- Non-console-related: one of my backends wouldn't start. Tasks would
just error out in the queue, and nothing (error or otherwise) showed in the
logs for that backend. Instances page would show no instance running. This
started last
Correction on item #1 regarding backend not starting. This is still
occurring. The log shows that some tasks did run on the backend overnight,
but right now I can't start any tasks on it. Still shows no instances
running.
On Thursday, February 7, 2013 7:14:52 AM UTC-6, Dave Loomer wrote:
...
Akitoshi Abe
2013/2/7 Dave Loomer dloo...@gmail.com
Issues occurring over the past several hours:
- Non-console-related: one of my backends wouldn't start. Tasks
would just error out in the queue, and nothing (error or otherwise)
showed
in the logs for that backend. Instances page would
I'm getting the 503 error on backend frequent-tasks for app mn-live.
There's an existing issue for this; not sure if everyone who experiences
this needs to open a new issue or if we should just keep the issue open
since it hasn't been resolved.
On Wed, Feb 1, 2012 at 14:25, Dave Loomer dloo...@gmail.com wrote:
Here are logs from three consecutive task executions over the past weekend,
with only identifying information
that I have full control on the number of requests that will spin up,
err, number of instances that will spin up, rather ...
On Feb 5, 11:30 am, Dave Loomer dloo...@gmail.com wrote:
In my case, since I was getting the 20-second delay almost 100% of the
time, setting countdown=1 was the answer
tasks, then
repeat from the lease stage. The cool thing is that if you're, for
example, using URL Fetch to pull data this might let you do it in
parallel without increasing your costs much (if any).
Robert
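To sketch what Robert describes (lease a batch of tasks, fetch their data in parallel, then repeat from the lease stage), here is a minimal illustration of the parallel-fetch step; fetch_url is a hypothetical stand-in for the real URL Fetch call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_url(url):
    # hypothetical stand-in for an actual URL Fetch call
    return 'data-for-' + url

def process_leased_batch(urls, workers=4):
    # Fetch all leased tasks' URLs concurrently. Since the workers spend
    # most of their time waiting on I/O, running the fetches in parallel
    # adds little to the bill -- that's Robert's point about costs.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch_url, urls))

print(process_leased_batch(['a', 'b']))  # ['data-for-a', 'data-for-b']
```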
On Wed, Feb 1, 2012 at 14:25, Dave Loomer dloo...@gmail.com wrote:
Here are logs from three consecutive task executions over the past weekend,
with only identifying information removed. You'll see that each task
completes in a few milliseconds, but they are 20 seconds apart (remember: I've
already checked my queue configurations, nothing else is running on this
The tasks are not run transactionally, and in my testing the task is the
only one in queue. In fact, I also ran the tests *somewhat* successfully on
a separate app where this was the only code running. I say somewhat
because, as I stated in my original post, the 20-second delays didn't
happen
To do one better, here is the entirety of the Python code:
#!/usr/bin/env python
#
# Copyright 2007 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#
And here is backends.yaml:
backends:
- name: overnight-external-data
  class: B1
  options: dynamic
  instances: 1
and queue.yaml:
queue:
- name: overnight-tasks
  rate: 50/s
  bucket_size: 50
  retry_parameters:
    max_backoff_seconds: 1800
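For what it's worth, here is a rough sketch of how I understand retry backoff to behave with that max_backoff_seconds setting (simplified: pure doubling from an assumed 0.1 s minimum, ignoring max_doublings):

```python
def backoff_schedule(retries, min_backoff=0.1, max_backoff=1800.0):
    """Approximate per-retry delays: doubling, capped at max_backoff."""
    delays = []
    delay = min_backoff
    for _ in range(retries):
        delays.append(min(delay, max_backoff))
        delay *= 2
    return delays

print(backoff_schedule(4))  # [0.1, 0.2, 0.4, 0.8]
```

So with a 1800-second cap, a stuck task retries at most every half hour instead of backing off indefinitely.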
Finally, it's probably an important clue that when I explicitly set
countdown=1 when creating the task, the delay in executing the task is
always almost exactly 1.5 seconds (not sure why it's not 1.0). If I don't
set a countdown value, it's almost as if I had set countdown=20. Except
that the
My ignorant question: Why are we discussing M/S vs. HRD when the OP said he
isn't accessing any data in serving his page?
--
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.
To view this discussion on the web visit
Hi Robert,
When targeting backend thebackend, this was done merely by specifying
target=thebackend when creating the Task object. And then in backends.yaml
I modify the instances parameter for the backend in question for each of my
various tests.
I'm not really too concerned about failfast,
Interesting. I saw your thread but wasn't entirely sure if it was the
issue. I think the thing that threw me off was that your delays were
being reflected in the request ms in the logs, while in my case they
mostly aren't.
Does setting the task countdown work for you? Or is ~1 second delay
still
The abstract is that I have a hobby app (granted, I put a lot of time and
energy into it) that does tons of mapreduce-esque backend processing
through tasks that execute, then create a new task for the next step, etc.
My site will never generate revenue so I aim to someday get my daily costs
FWIW, I should note that my app is master/slave. The pressure to move to
HRD/Python 2.7 is very real, but right now I have too many concerns with
replication delays and reading others' migration headaches with data
volumes similar to mine, so I have no short-term plans to migrate.
I've been able to nearly solve the delay problem by setting countdown=1 in
the Task constructor. This reduces the delay from 20 seconds to about 1.5. Not sure
why it's not closer to 1.0 but this will be fine. The time to serve the
simple request is unaffected.
Still a strange bug (?).
Also, some
I've seen cases where the reason for the failure just plain isn't in the
log. I think this happens when cron isn't able to find an available backend
instance (kind of rare, but can happen when things are busy or if you
configure a limited number of instances). It will keep trying for a few
flush logs during long-running requests, and to
examine an application's request logs and application logs.
On Sun, Jan 15, 2012 at 3:27 AM, Dave Loomer dloo...@gmail.com wrote:
The docs for downloading logs make no specific mention of backends,
and from my attempts it seems that you can
I should add that I don't have any truly long-running backend
processes -- typically, they all complete in a few minutes, and I run
thousands per day.
On Jan 18, 12:19 pm, Dave Loomer dloo...@gmail.com wrote:
Amy, I don't think it has anything to do with flushing or delays, as
even when I run
work:
appcfg.py request_logs --version=worker --vhost=2.worker.your_appid.appspot.com project_dir outfile
On Thu, Jan 19, 2012 at 5:19 AM, Dave Loomer dloo...@gmail.com wrote:
Amy, I don't think it has anything to do with flushing or delays, as
even when I run with the --num_days=2
The docs for downloading logs make no specific mention of backends,
and from my attempts it seems that you can only download logs for your
frontend. That would be strange though, and to make matters worse a
web search for google app engine download backend logs (no quotes
obvs.) reveals nothing