Joshua is right: contention during statistics writes has to be fixed
with sharding.
http://code.google.com/appengine/articles/sharding_counters.html
It seems odd at first, but give it a try; it works like a charm.
If you wish, I have some code that I can share to shard automatically
when the
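The sharding idea from that article can be sketched without any App Engine APIs. This is a minimal in-memory stand-in (the dict plays the role of one datastore entity per shard; names are hypothetical), just to show why spreading increments across N shards cuts contention:

```python
import random

# Hypothetical stand-in for the datastore: one row per shard.
# On App Engine each shard would be its own entity, so concurrent
# increments rarely hit the same entity group.
NUM_SHARDS = 20
shards = {i: 0 for i in range(NUM_SHARDS)}

def increment():
    """Pick a random shard and bump it; writes spread across NUM_SHARDS rows."""
    shard = random.randrange(NUM_SHARDS)
    shards[shard] += 1

def get_count():
    """The total is just the sum over all shards (N small reads)."""
    return sum(shards.values())

for _ in range(100):
    increment()
print(get_count())  # 100
```

Reads get slightly more expensive (you sum N entities, or cache the total), but that trade is usually worth it for write-heavy counters.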
Thanks guys, I've been trying to catch the error and start tasks to
retry, but it seems to be taking me down a bad path. Sharding
seems like the way to go, and I'm going to give it a try.
Cyrille - it'd be great to see your code
Ben
nimbits.com
On Oct 10, 7:12 pm, Cyrille Vincey
A further option involves using pull task queues.
1. Write a task to the pull queue for each item you wish to remember.
2. Have a queue reader run periodically - reading up to 1000 tasks
at a time and writing the relevant information to the datastore.
Although this sounds simple, you
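Those two steps can be sketched in plain Python, with `queue.Queue` standing in for the pull queue and a counter variable standing in for the datastore entity (all names here are hypothetical, not App Engine APIs):

```python
import queue

# Hypothetical stand-ins for a pull queue and the datastore entity.
pull_queue = queue.Queue()
datastore_counter = 0

def enqueue_update(amount):
    """Step 1: write a task per item instead of writing the entity directly."""
    pull_queue.put(amount)

def drain(batch_size=1000):
    """Step 2: a periodic reader leases up to batch_size tasks and applies
    them to the datastore in a single write."""
    global datastore_counter
    batch = []
    while len(batch) < batch_size and not pull_queue.empty():
        batch.append(pull_queue.get())
    datastore_counter += sum(batch)  # one write instead of len(batch) writes
    return len(batch)

for _ in range(50):
    enqueue_update(2)
drain()
print(datastore_counter)  # 100
```

The batching is the point: 1000 queued updates become a single entity-group write, which stays well under the 1 write/sec advice below.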
I believe that the advised write-rate for a single entity group is 1 write
per second.
--
You received this message because you are subscribed to the Google Groups
Google App Engine group.
Huh. It must be that the task queue is attempting several writes in a
second. I could use some guidance on the best way to handle this kind
of demand. I was considering maintaining the statistics in memcache
and then having them trickle back into the datastore using a cron task
so I can control the
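That memcache-buffer idea can be sketched like this. A dict stands in for memcache and the flush function for the cron task (hypothetical names; on real GAE you'd use `memcache.incr` and a cron-triggered handler):

```python
# Hypothetical stand-in for memcache: pending deltas keyed by counter name.
memcache_buffer = {}
datastore = {"hits": 0}

def record_hit(key="hits"):
    """Fast path: bump the counter in the cache only; no datastore write."""
    memcache_buffer[key] = memcache_buffer.get(key, 0) + 1

def cron_flush():
    """Cron task: fold buffered deltas into the datastore at a rate
    we control, then reset the buffer."""
    for key, delta in memcache_buffer.items():
        datastore[key] += delta  # one datastore write per key per run
    memcache_buffer.clear()

for _ in range(30):
    record_hit()
cron_flush()
print(datastore["hits"])  # 30
```

One caveat worth flagging: memcache entries can be evicted at any time, so counts buffered between flushes can be lost. That's fine for approximate statistics, not for anything that must be exact.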
I believe you have a couple of options:
- Change your task queue configuration so that it can't run updates
concurrently, and slow the task queue down so that it doesn't exceed the
desired rate of 1 per second. Docs for this configuration can be found at
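A queue.yaml along these lines should do it (the queue name is made up; the two settings are the ones that matter):

```yaml
queue:
- name: stats-updates         # hypothetical queue name
  rate: 1/s                   # stay at the advised 1 write/sec per entity group
  max_concurrent_requests: 1  # never run two updates at once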
One thing you can try is to persist a copy of your entity with a timestamp
every time you update it. If you want the latest version, you simply query
for the latest version in the datastore. Then you can set up a cron job to
update the master copy of the entity and delete all the copies.
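A rough sketch of that pattern, with a list standing in for the timestamped copies and a dict for the master entity (all names hypothetical):

```python
import itertools

# Hypothetical in-memory datastore: each update persists a new timestamped
# copy instead of rewriting the single master entity.
_ts = itertools.count(1)  # stand-in for a real timestamp
versions = []             # list of (timestamp, value) copies
master = {"value": None}

def update(value):
    """Write a fresh copy; no contention on the master entity."""
    versions.append((next(_ts), value))

def latest():
    """Query for the newest copy (on GAE: order by timestamp desc, limit 1)."""
    return max(versions)[1] if versions else master["value"]

def cron_compact():
    """Cron job: fold the newest copy into the master, delete the rest."""
    if versions:
        master["value"] = max(versions)[1]
        versions.clear()

update("a"); update("b"); update("c")
print(latest())  # c
cron_compact()
print(master["value"], len(versions))  # c 0
```

The trade-off is extra storage and an extra query on reads between compactions, in exchange for append-only writes that never fight over one entity.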