I can't get into the datastore viewer: whenever the CPU quota is
exceeded, all the datastore admin interfaces return HTTP 503.
Something else to fix?

On Nov 14, 5:29 pm, Erik <erik.e.wil...@gmail.com> wrote:
> If you check the datastore viewer you might be able to find and
> delete your jobs from one of the tables.  You may also need to go into
> your task queues and purge the default queue.
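>
> Untested sketch of both steps, run from an admin-only handler or the
> remote_api shell.  The _AE_MR_* kind names are what I believe the
> mapreduce library uses for its job/shard state, so double-check them
> in the viewer before deleting anything:
>
>     from google.appengine.api import taskqueue
>     from google.appengine.ext import db
>
>     # Stop queued mapreduce tasks from firing again.
>     taskqueue.Queue('default').purge()
>
>     # Delete the stored job/shard state so the workers can't resume.
>     # Repeat until the queries come back empty.
>     for kind in ('_AE_MR_MapreduceState', '_AE_MR_ShardState'):
>         keys = db.GqlQuery('SELECT __key__ FROM ' + kind).fetch(1000)
>         db.delete(keys)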
>
> On this topic, why is there such a large difference between actual
> time spent and billed time when deleting data?
>
> For instance, I had two mapreduce shards running to delete data, which
> took a combined total of 15 minutes, but I was actually charged for
> 11(!) hours.  I know there isn't a 1:1 correlation, but a >40x
> difference is a little silly!
>
> On Nov 14, 4:25 am, Justin <justin.worr...@gmail.com> wrote:
>
> > I've been trying to bulk delete data from my application as described
> > here
>
> >http://code.google.com/appengine/docs/python/datastore/creatinggettin...
>
> > This seems to have kicked off a series of mapreduce workers whose
> > execution is killing my CPU: approximately 5 minutes later I had
> > reached 100% of my CPU quota and was locked out for the rest of the day.
>
> > I figure I'll just delete by hand: create some appropriate :delete
> > controllers (roughly like the sketch below) and wait until the next day.
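> >
> > (What I had in mind, untested, with MyModel standing in for whatever
> > kind you're clearing out:)
> >
> >     from google.appengine.ext import db, webapp
> >
> >     class DeleteHandler(webapp.RequestHandler):
> >         def get(self):
> >             # Keys-only query keeps the per-request CPU cost down.
> >             keys = db.GqlQuery('SELECT __key__ FROM MyModel').fetch(500)
> >             db.delete(keys)
> >             self.response.out.write('deleted %d entities' % len(keys))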
>
> > Unfortunately the mapreduce process still seems to be running: it's
> > ten past midnight and my CPU has reached 100% again.
>
> > Is there some way to kill these processes and get back control of my
> > app?
