I created a simple page that instantiates and stores a model with
about 9 primitive properties. The model is filled with the same data
each time, and that data is less than 100 bytes. None of the
properties are indexed.
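
Roughly, the setup looks like this (a minimal sketch using the Python
db API; the model and property names are made up, but the shape
matches: about 9 primitive, unindexed properties, well under 100 bytes
per entity):

    # Hypothetical model and store page -- names are placeholders.
    from google.appengine.ext import db, webapp

    class Sample(db.Model):
        # ~9 primitive properties, none indexed.
        a = db.StringProperty(indexed=False)
        b = db.StringProperty(indexed=False)
        c = db.IntegerProperty(indexed=False)
        d = db.IntegerProperty(indexed=False)
        e = db.FloatProperty(indexed=False)
        f = db.BooleanProperty(indexed=False)
        g = db.IntegerProperty(indexed=False)
        h = db.StringProperty(indexed=False)
        i = db.IntegerProperty(indexed=False)

    class StorePage(webapp.RequestHandler):
        def get(self):
            # Same small payload every time, under 100 bytes total.
            Sample(a='x', b='y', c=1, d=2, e=3.0,
                   f=True, g=4, h='z', i=5).put()
            self.response.out.write('stored')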

I did some load tests, storing at most approximately half a million of
the objects. Some of the tests were done during the recent latency
problems; some were done after latency had stabilized. All tests with
significant concurrency (50) generated fairly high timeout and
quota-exceeded error rates, so I got significant log spew.
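
The load driver was along these lines (a rough sketch with a threaded
client; the URL and per-thread request count are placeholders):

    import threading
    import urllib2

    URL = 'http://my-app.appspot.com/store'   # placeholder store-page URL
    CONCURRENCY = 50
    REQUESTS_PER_THREAD = 10000               # 50 x 10000 = ~500k entities

    def worker():
        for _ in xrange(REQUESTS_PER_THREAD):
            try:
                urllib2.urlopen(URL).read()
            except urllib2.URLError:
                # Timeouts and quota-exceeded responses end up here.
                pass

    threads = [threading.Thread(target=worker) for _ in range(CONCURRENCY)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()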

My total data size should be around 50 megs of real user data (roughly
half a million entities at under 100 bytes each).

Today, my data usage jumped from about 0.5% of my data storage limit
to 67% of it without any intervening tests. That was surprising, since
it's about 10x what I should actually be using. But I figured the data
limit might count all replicas or something odd, and perhaps there's
some lag between the datastore and the data-size calculation. So I
wrote a delete-all page that deletes 50 items, then redirects to
itself to delete the next 50. I just delete the first 50 items
returned; I'm not paging or anything. The script works locally.
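
The delete page is essentially this (a minimal sketch; "Sample" is the
hypothetical model from above and the "/delete" path is a
placeholder):

    from google.appengine.ext import db, webapp

    class DeletePage(webapp.RequestHandler):
        def get(self):
            # Grab the first 50 entities returned -- no cursor, no paging.
            batch = Sample.all().fetch(50)
            if batch:
                db.delete(batch)
                self.redirect('/delete')   # come back for the next 50
            else:
                self.response.out.write('done')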

I ran that delete page for a while, periodically hitting annoying
timeouts and quota-exceeded errors. I did not run the page in parallel.

Now I'm using 99% of my data storage limit, and I'm afraid I may have
to buy more data storage before I can delete any more items.

Any ideas?
