On Thursday, May 29, 2014 3:43:14 PM UTC-7, alex wrote:
>
> Could this be related to this thread? 
> https://groups.google.com/forum/#!topic/google-appengine/D1b_ZC4pKww
>

Hi Alex,

Compared to ProtoRPC, Endpoints does involve an extra hop. That thread is 
from quite a while ago, though. Could you share your current numbers and 
describe how you measure them (end to end?)?

On Sunday, 25 May 2014 09:53:15 UTC+1, Robert King wrote:
>>
>> Don't get me wrong - I absolutely love cloud endpoints - they speed up my 
>> development time and simplify my code significantly.
>>
>
Hi Robert,

Glad to hear that!
 

>> Having said that, I'd really like to see some clarification from Google. 
>> Are endpoints intended to be high performance? I haven't once seen it 
>> mentioned in any Google documentation that endpoints are low latency.
>>
>  

>> I've often been waiting 5-20 seconds for calls such as /_ah/api/discovery/
>> v1/apis/archivedash/v1/rpc?fields=methods%2F*%2Fid&pp=0, 
>> even on apps that have little traffic, 
>>
>
Our infrastructure handles a very large number of APIs, so some parts are 
loaded only on demand. Currently we don't load an API everywhere when we see 
the first request (we should, and we are working on it). If you warm up your 
API with ~50 requests, you should see fast responses from then on.
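
For example, here is a minimal warm-up sketch in Python; the URL below is a 
placeholder, so substitute a method from your own API:

    # Send ~50 requests so the API configuration gets loaded on enough
    # backends before real traffic arrives. Replace the placeholder URL
    # with one of your own API methods.
    import urllib2

    WARMUP_URL = ('https://your-app-id.appspot.com/'
                  '_ah/api/yourapi/v1/yourmethod')

    for i in range(50):
        try:
            urllib2.urlopen(WARMUP_URL, timeout=10).read()
        except urllib2.URLError as e:
            print('request %d failed: %s' % (i, e))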
 

>> tiny payloads, and no RPC calls. One of the new systems I'm building is 
>> using endpoints, but I'll have to switch away from endpoints ASAP if I can't 
>> get some reassurance. Also, I don't have time to wait "a couple of months" 
>> to see if they get faster. I'd also be interested to know how efficient 
>> Python / Go / Java / PHP endpoints are at encoding & decoding 
>> different-sized payloads with JSON or protobuf protocols. (I will probably 
>> have to generate these statistics myself & present some graphs etc., 
>> although I'm assuming Google would have already performance-tested their 
>> own product?)
>> Cheers
>>
>> On Sunday, 25 May 2014 08:29:48 UTC+12, Diego Duclos wrote:
>>>
>>> I've done some (non-extensive) tests on Google App Engine,
>>> and my response times vary anywhere between 100ms and 5000ms when 
>>> sending HTTP requests directly to a Cloud Endpoints API.
>>>
>>> Regardless of the actual response time, the Google Cloud console always 
>>> shows a processing time of around 50ms, which, while also somewhat 
>>> longish, is much more reasonable.
>>>
>>> For the 100ms requests, I can safely assume that the other 50ms is just 
>>> regular network latency, but I have no idea where the endpoint could be 
>>> spending 4.5 seconds, and the logs show nothing useful at all.
>>>
>>> Does anyone have some guidance for me regarding this? 5 seconds is 
>>> unacceptably slow and makes the endpoints completely unusable.
>>>
>>
Hi Diego,

Do you have the URL of your request? I'd like to see if you are seeing the 
same issue as Robert.
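
In the meantime, here is a minimal timing sketch in Python showing the kind 
of numbers that help us debug; the URL is a placeholder for the request you 
are seeing hang. It records end-to-end latency so we can line it up with the 
~50ms processing time the console reports:

    # Measure wall-clock latency for a handful of requests and compare it
    # with the processing time shown in the Cloud Console. Replace the
    # placeholder URL with the slow request.
    import time
    import urllib2

    URL = ('https://your-app-id.appspot.com/'
           '_ah/api/yourapi/v1/yourmethod')

    for i in range(10):
        start = time.time()
        urllib2.urlopen(URL, timeout=30).read()
        print('request %d: %.0f ms' % (i, (time.time() - start) * 1000))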

Thanks all for your feedback!

Jun
