[google-appengine] Re: Too much contention on these datastore entities. please try again.

2011-10-11 Thread Cyrille Vincey
Joshua is right: contention during statistics writes has to be fixed
with sharding.
http://code.google.com/appengine/articles/sharding_counters.html

It may seem odd at first, but give it a try; it works like a charm.
If you wish, I have some code that I can share to shard automatically
when the datastore throws a contention exception.
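The idea behind the linked article, sketched in plain Python (a dict stands in for the datastore, and all names here are illustrative, not App Engine APIs): split each logical counter into N shard rows, have every increment write to one randomly chosen shard, and have reads sum all the shards. Concurrent writers then rarely touch the same row, so contention drops.

```python
import random

NUM_SHARDS = 20  # more shards = more write throughput, slower reads

# in-memory stand-in for the datastore: one row per shard
shards = {}  # (counter_name, shard_index) -> count

def increment(counter_name):
    # each write touches one randomly chosen shard, so concurrent
    # increments rarely contend on the same "entity"
    index = random.randrange(NUM_SHARDS)
    key = (counter_name, index)
    shards[key] = shards.get(key, 0) + 1

def get_count(counter_name):
    # a read sums every shard belonging to the counter
    return sum(v for (name, _), v in shards.items()
               if name == counter_name)

for _ in range(1000):
    increment("page_views")
print(get_count("page_views"))  # -> 1000
```

On App Engine each shard would be its own entity (in its own entity group), so the 1-write-per-second limit applies per shard rather than per counter.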

-- 
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Re: Too much contention on these datastore entities. please try again.

2011-10-11 Thread Benjamin
Thanks guys. I've been trying to catch the error and start tasks to
retry, but that seems to be taking me down a bad path. Sharding
seems like the way to go, and I'm going to give it a try.

Cyrille - it'd be great to see your code

Ben
nimbits.com


On Oct 10, 7:12 pm, Cyrille Vincey cvin...@qunb.com wrote:
 Joshua is right : contention during statistics writes has to be fixed
 with sharding. http://code.google.com/appengine/articles/sharding_counters.html

 Seems odd at the beginning, but give it a try, works like a charm.
 If you wish, I have some code that I can share to shard automatically
 when the datastore throws a contention exception.




Re: [google-appengine] Re: Too much contention on these datastore entities. please try again.

2011-10-10 Thread Matthew Jaggard
A further option involves using pull task queues.

1. Write a task to the pull queue for each item you wish to remember.
2. Have a queue reader run periodically - reading up to 1000 tasks
at a time and writing the relevant information to the datastore.

Although this sounds simple, you need to make sure that the reader
doesn't run too often (and burn instance time for no benefit) or
too infrequently (and fill up the queue because it never gets through
all the tasks). Scaling is not done for you like it is with a push
queue.
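The two steps above, sketched in plain Python (a deque stands in for the pull queue and a dict for the datastore; on App Engine the reader would call the pull queue's lease API, which hands out at most 1000 tasks per call):

```python
from collections import deque

BATCH_SIZE = 1000  # the pull-queue lease API caps a lease at 1000 tasks

queue = deque()    # stand-in for the pull queue
datastore = {}     # stand-in for the datastore: key -> accumulated value

def enqueue(key, amount):
    # step 1: producers write one small task per item to remember
    queue.append((key, amount))

def drain_once():
    # step 2: a periodic reader leases up to BATCH_SIZE tasks and
    # folds them into one datastore write per key, instead of one
    # contended write per item
    batch = []
    while queue and len(batch) < BATCH_SIZE:
        batch.append(queue.popleft())
    totals = {}
    for key, amount in batch:
        totals[key] = totals.get(key, 0) + amount
    for key, amount in totals.items():
        datastore[key] = datastore.get(key, 0) + amount
    return len(batch)

for _ in range(2500):
    enqueue("hits", 1)
while drain_once():
    pass
print(datastore["hits"])  # -> 2500
```

The batching is the point: 2500 enqueued items become three datastore writes here, which is how the reader stays under the per-entity-group write rate.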

Mat.

On 5 October 2011 16:18, Simon Knott knott.si...@gmail.com wrote:
 I believe you have a couple of options:

 Change your task queue configuration, so that it can't run updates
 concurrently and slow the task queue down so that it doesn't go over the
 desired rate of 1 per second.  Docs for this configuration can be found at
 http://code.google.com/appengine/docs/python/config/queue.html - Ikai also
 drew a nice diagram at http://twitpic.com/3y5814!  This route may lead to a
 backlog of data, if your updates are frequent.
 Alternatively you can write the data to Memcache, as you suggested, and have
 the cron job periodically write the data into the datastore.  This route can
 lead to dataloss, as if MemCache is flushed you will lose any updates that
 were pending storage.






[google-appengine] Re: Too much contention on these datastore entities. please try again.

2011-10-05 Thread Simon Knott
I believe that the advised write-rate for a single entity group is 1 write 
per second.




[google-appengine] Re: Too much contention on these datastore entities. please try again.

2011-10-05 Thread Benjamin
Huh. It must be that the task queue is attempting several writes per
second. I could use some guidance on the best way to handle this kind
of demand. I was considering maintaining the statistics in Memcache
and then having them trickle back into the datastore using a cron task,
so I can control the frequency.
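That trickle-back idea, sketched in plain Python (dicts stand in for Memcache and the datastore; on App Engine the hot-path increment would be `memcache.incr`, which is atomic):

```python
cache = {}      # stand-in for Memcache
datastore = {}  # stand-in for the datastore

def record_hit(stat):
    # hot path: cheap cache increment, no datastore write, no contention
    cache[stat] = cache.get(stat, 0) + 1

def cron_flush():
    # runs on a schedule: one datastore write per stat per run,
    # which keeps the write rate under your control.
    # NOTE: counts still sitting in the cache are lost if Memcache
    # is flushed before this runs -- that is the trade-off.
    for stat, pending in list(cache.items()):
        datastore[stat] = datastore.get(stat, 0) + pending
        del cache[stat]

for _ in range(300):
    record_hit("api_calls")
cron_flush()
print(datastore["api_calls"])  # -> 300
```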



On Oct 5, 10:47 am, Simon Knott knott.si...@gmail.com wrote:
 I believe that the advised write-rate for a single entity group is 1 write
 per second.




[google-appengine] Re: Too much contention on these datastore entities. please try again.

2011-10-05 Thread Simon Knott
I believe you have a couple of options:

   - Change your task queue configuration so that it can't run updates 
   concurrently, and slow the task queue down so that it doesn't exceed the 
   desired rate of 1 per second.  Docs for this configuration can be found at 
   http://code.google.com/appengine/docs/python/config/queue.html - Ikai also 
   drew a nice diagram at http://twitpic.com/3y5814!  This route may lead to 
   a backlog of data if your updates are frequent.
   - Alternatively, you can write the data to Memcache, as you suggested, and 
   have a cron job periodically write the data into the datastore.  This 
   route can lead to data loss: if Memcache is flushed, you will lose any 
   updates that were pending storage.
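For the first option, a minimal queue.yaml along the lines the linked docs describe might look like this (the queue name is illustrative; `rate` and `max_concurrent_requests` are the two knobs that serialize and throttle the updates):

```yaml
queue:
- name: stats-updates            # illustrative name
  rate: 1/s                      # start at most one task per second
  max_concurrent_requests: 1     # never run two updates concurrently
```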
   




[google-appengine] Re: Too much contention on these datastore entities. please try again.

2011-10-05 Thread Gerald Tan
One thing you can try is to persist a copy of your entity with a timestamp 
every time you update it. If you want the latest version, you simply query 
for the most recent copy in the datastore. Then you can set up a cron job to 
update the master copy of the entity and delete all the copies.
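That scheme, sketched in plain Python (a list stands in for the datastore kind holding the copies, a counter for the timestamp; all names are illustrative). Each update inserts a new row instead of rewriting one entity, so concurrent updates never contend on the same key:

```python
import itertools

clock = itertools.count()     # stand-in for a monotonic timestamp
copies = []                   # stand-in kind: (timestamp, value) rows
master = {"value": None}      # the "master" entity

def update(value):
    # every update is an insert of a fresh row -- no contended rewrite
    copies.append((next(clock), value))

def latest():
    # readers take the newest copy if any exist, else the master
    if copies:
        return max(copies)[1]
    return master["value"]

def cron_compact():
    # cron job: fold the newest copy into the master, drop the rest
    if copies:
        master["value"] = max(copies)[1]
        del copies[:]

for v in [10, 20, 30]:
    update(v)
print(latest())               # -> 30
cron_compact()
print(latest(), len(copies))  # -> 30 0
```

The cost is an extra query on reads and the periodic compaction; the benefit is that the write path is insert-only and contention-free.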
