Hi again,
We captured the traffic with tcpdump.
We see that the first 16KB frame is sent from the client to the server, but 
the second one is not sent until an ack is returned from the server.
In our case, the message size is 30KB, and round-trip latency is ~220ms.
So because of this, the call takes ~440ms (two round trips) instead of ~220ms.

We set InitialWindowSize and InitialConnWindowSize to 1MB, on both the 
server and the client - no change.
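For reference, here is roughly how the window-size options are applied on both sides (a minimal sketch, not our real setup - the address is a placeholder and the insecure credentials are just for brevity):

```go
package main

import "google.golang.org/grpc"

func main() {
	// Client: raise both HTTP/2 flow-control windows to 1MB.
	conn, err := grpc.Dial("server.example.com:443", // placeholder address
		grpc.WithInsecure(),                   // real credentials omitted for brevity
		grpc.WithInitialWindowSize(1<<20),     // per-stream window
		grpc.WithInitialConnWindowSize(1<<20), // per-connection window
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Server: the matching options.
	srv := grpc.NewServer(
		grpc.InitialWindowSize(1<<20),
		grpc.InitialConnWindowSize(1<<20),
	)
	_ = srv
}
```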
We checked both unary RPCs and streaming RPCs - same behaviour.
We set WriteBufferSize and ReadBufferSize to zero (write/read directly 
to/from the wire) - p95 latency remained the same, but average latency 
dropped by ~100ms - we're not sure why it had this effect.
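For completeness, the zero-buffer configuration looked roughly like this (again a sketch with a placeholder address; a size of 0 tells grpc-go to skip its internal batching buffers, which default to 32KB each):

```go
package main

import "google.golang.org/grpc"

func main() {
	// Client: 0 disables the internal write/read batching buffers,
	// so bytes go straight to/from the socket.
	conn, err := grpc.Dial("server.example.com:443", // placeholder address
		grpc.WithInsecure(), // real credentials omitted for brevity
		grpc.WithWriteBufferSize(0),
		grpc.WithReadBufferSize(0),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Server-side equivalents.
	srv := grpc.NewServer(
		grpc.WriteBufferSize(0),
		grpc.ReadBufferSize(0),
	)
	_ = srv
}
```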

Again, under all of the conditions above, if we increase the rate of 
messages to more than 2 or 3 per second, the latency drops to 450ms.
Looking at http2debug=2 logs, it seems that at the higher rate, when 
latency is low, grpc somehow reuses a previously opened stream from 
another RPC to send the new RPC...

Has anyone encountered similar behaviour?
Can anyone who understands the gRPC implementation (in Go) explain why it 
behaves this way, and whether there is a way to make it work better?
Any help would be very much appreciated.
Thanks!


On Wednesday, August 17, 2022 at 4:44:28 PM UTC+3 Alon Kaftan wrote:

> Also, if we run a few dummy (small-payload) calls per second, in parallel 
> to the big-payload calls, on the same connection => the latency of the big-payload 
> calls is reduced to 250ms as well.
> Thoughts?
>
> On Wednesday, August 17, 2022 at 1:46:20 PM UTC+3 Alon Kaftan wrote:
>
>> Hi,
>> we have a Go gRPC client running in the US and a Go gRPC server in APAC.
>> Both are on AWS.
>> On the same connection we make unary calls:
>> when the message size is lower than 16KB, round-trip latency is 250ms;
>> when the message size crosses 16KB, round-trip latency jumps to 450ms;
>> when the message size crosses 100KB, round-trip latency jumps to 650ms.
>>
>> Also, if we increase the rate of the 16KB+ messages from 1/sec to 2/sec 
>> or more, the latency drops to 250ms
>>
>> we ruled out load balancers, as we connected the client & server pods 
>> directly and observed that the behaviour remained the same
>>
>> Any ideas where to start with this kind of behaviour ?
>>
>> Thanks!
>
>
