Yep, I know about that.  The internal limit is 1000msec, but you should aim 
for 750msec just to 'be safe'.
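
FWIW, something like this is enough to spot handlers that blow that budget
(just a sketch -- the webapp2 handler and the 750 msec threshold are my own
placeholders, not anything official):

import logging
import time

import webapp2

BUDGET_MS = 750  # self-imposed margin under the ~1000 msec average target

class TimedHandler(webapp2.RequestHandler):
    def get(self):
        start = time.time()
        self.response.write('ok')  # real work goes here
        elapsed_ms = (time.time() - start) * 1000
        if elapsed_ms > BUDGET_MS:
            logging.warning('handler took %.0f ms (budget %d ms)',
                            elapsed_ms, BUDGET_MS)

app = webapp2.WSGIApplication([('/timed', TimedHandler)])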

Here are my timings:

ms=15376 cpu_ms=37 api_cpu_ms=17 cpm_usd=0.003428 pending_ms=8265

37 msec of CPU .... after waiting 8.2 seconds to execute.
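
Or, plugging those counters straight in (numbers copied from the log line
above, nothing else assumed):

ms, cpu_ms, api_cpu_ms, pending_ms = 15376, 37, 17, 8265
print 'pending: %.0f%% of wall time' % (100.0 * pending_ms / ms)             # ~54%
print 'cpu+api: %.1f%% of wall time' % (100.0 * (cpu_ms + api_cpu_ms) / ms)  # ~0.4%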

Some math:
 - 500 incoming requests
 - 40 servers
 = ~ 12.5 requests/server
 ... but each request takes ~55 msec of CPU + API .... 

That's under 700 msec for THE WHOLE LOT !  .... So why on earth was there 8 
seconds of waiting before that request was serviced ?
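
Same back-of-envelope in code, in case I'm fat-fingering it (the request and
server counts are the assumptions above, the per-request cost is from the
log line):

requests = 500
servers = 40
cpu_plus_api_ms = 37 + 17            # ~55 msec of CPU + API per request

per_server = float(requests) / servers        # 12.5 requests per server
total_ms = per_server * cpu_plus_api_ms       # ~675 msec, even run serially
print '%.1f req/server, ~%.0f msec for the whole lot' % (per_server, total_ms)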


-R

On Sunday, July 8, 2012 10:13:28 PM UTC-4, Per wrote:
>
>
> Not sure where I saw it, but I believe an application must respond within 
> a second at most *on average*, or GAE reserves the right to throttle it. 
> So, if you have some kind of slowdown inside your app, then requests start 
> piling up. I'm not a Python developer, but if you had the equivalent of a 
> synchronised map in there, with lots of concurrent access, then you might 
> end up in a situation like this. Not sure what's really to blame, of 
> course, but I'd strongly recommend setting up a controlled load-testing 
> environment (just copy the app, and start firing requests at it). Add 
> plenty of logging to your app, enable appstats, and then slowly increase 
> the load. Maybe even strip the application down, starting only with read 
> access. I'm sure you will find something, and I would love to hear what it 
> was! I wish you luck!
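>
> (From the docs, I believe enabling appstats on the Python side is just the 
> WSGI middleware hook in appengine_config.py -- roughly this, plus the 
> appstats builtin in app.yaml for the /_ah/stats viewer:
>
> # appengine_config.py -- records RPC traces for every request
> def webapp_add_wsgi_middleware(app):
>     from google.appengine.ext.appstats import recording
>     return recording.appstats_wsgi_middleware(app)
>
> but treat that as a sketch, since I'm not a Python developer.)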
>
> On Monday, July 9, 2012 3:19:04 AM UTC+2, Richard wrote:
>>
>> I could if memcache actually worked.  But it does not.  I originally 
>> tried to use it and found that I could not push the game state to memcache 
>> and then have the other instances pull it.  They would get versions of it 
>> that were up to 5 minutes old.  My timings are 5 second windows.
>> 5 secs to submit all scores
>> 5 secs to reap scores and calc leaderboards
>> 5 secs to fan out results to clients
>>
>> Experience shows that memcache is just broken for that sort of timing.
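>>
>> Roughly what I was doing, give or take the actual key name and 
>> serialization (sketch only):
>>
>> import time
>> from google.appengine.api import memcache
>>
>> # writer: push the game state plus the time it was written
>> def publish_state(state):
>>     memcache.set('game_state', {'written': time.time(), 'state': state})
>>
>> # readers on other instances: pull it and check how stale it really is
>> def fetch_state(max_age_secs=5):
>>     entry = memcache.get('game_state')
>>     if entry is None or time.time() - entry['written'] > max_age_secs:
>>         return None  # missing, or older than one 5-second window
>>     return entry['state']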
>>
>> As for using Go instead of Python: I am not sure I follow why Go should 
>> be better.  The lag is not coming from CPU or queries.
>>
>> Right now I am running 50 instances to serve 500 game clients. $48 for 
>> the last 18 hours.  11% of my requests result in "Request was aborted".  
>> Yeah, that is 12 THOUSAND fails.
>>
>> Back in the year 1995, ftp.cdrom.com could serve 2000 clients 
>> simultaneously on a Pentium Pro 200 MHz .... and I cannot serve 20 clients 
>> on a 500 MHz virtual box?  
>>
>> I still contend there is some internal throttling going on somewhere.
>>
>> -R
>>
>>
>>
>> On Sunday, July 8, 2012 6:23:02 PM UTC-4, Kyle Finley wrote:
>>>
>>> Richard,
>>>
>>> Another option would be to move the Game State request to a Go 
>>> <https://developers.google.com/appengine/docs/go/overview> instance, 
>>> either as a backend 
>>> <https://developers.google.com/appengine/docs/go/config/backends> or 
>>> as a separate version. I believe a single Go instance should be able to 
>>> handle 500 requests/second. You could then share the Game State between 
>>> the Python version and the Go version through Memcache, caching to 
>>> instance memory every 5 sec. 
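>>>
>>> Something like this on the instance-memory side (a sketch only -- the 
>>> key name and the 5 sec window are just for illustration):
>>>
>>> import time
>>> from google.appengine.api import memcache
>>>
>>> _state = None
>>> _fetched_at = 0
>>>
>>> def get_game_state():
>>>     # serve from instance memory, refreshing from memcache every ~5 sec
>>>     global _state, _fetched_at
>>>     if _state is None or time.time() - _fetched_at > 5:
>>>         _state = memcache.get('game_state')
>>>         _fetched_at = time.time()
>>>     return _state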
>>>
>>> - Kyle 
>>>
>>
