It's hard to tell; there are a few variables here. Are you running ghz on 
the same machine as the gRPC server? How many threads are being spawned in 
each scenario? It would be worth running something like perf on both 
processes and analyzing the results to see where they spend their time.
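
For example (hypothetical PIDs and output file names; adjust to your setup):

  # sample the gRPC server with call stacks for ~30 seconds
  perf record -g -p <server-pid> -o server.perf.data -- sleep 30
  perf report -i server.perf.data

  # and the ghz process, if it runs on the same machine
  perf record -g -p <ghz-pid> -o ghz.perf.data -- sleep 30
  perf report -i ghz.perf.data

Comparing the profiles for approach 1 and approach 2 should show whether the 
extra time goes into gRPC internals, your completion-queue handling, or the 
allocation/teardown of per-call state.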

On Tuesday, May 31, 2022 at 11:44:28 PM UTC-7 Roshan Chaudhari wrote:

> I was expecting approach 2 to perform better, but this is not the case. Any 
> idea what I could be doing wrong here?
>
> On Wednesday, June 1, 2022 at 12:12:25 PM UTC+5:30 Roshan Chaudhari wrote:
>
>> I am using the async implementation of the C++ server. I tried two 
>> approaches:
>>
>> 1. While starting up the server, I start only one outstanding RPC. When I 
>> receive a client connection, each of my Bidi RPCs schedules one outstanding 
>> RPC for the next client. Once an RPC finishes, I destroy my 
>> BidiState/BidiContext using "delete this". 
>>
>> 2. I know the maximum number (n) of clients that could try to connect to my 
>> server, so I start n outstanding RPCs at the beginning. Once I get a client 
>> request, I do not fire up a new outstanding RPC as in 1. Once an RPC 
>> finishes, I refresh the BidiState/BidiContext instead of calling "delete 
>> this". This ensures the number of outstanding RPCs always equals the number 
>> of clients that could connect.
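>>
>> Roughly, the per-call state looks like this (a minimal sketch, not my exact 
>> code; MyService, Reply, Request and the method name Chat stand in for the 
>> generated proto types):
>>
>> #include <grpcpp/grpcpp.h>
>>
>> class BidiState {
>>  public:
>>   BidiState(MyService::AsyncService* service, grpc::ServerCompletionQueue* cq)
>>       : service_(service), cq_(cq), stream_(&ctx_) {
>>     // Register this object as one outstanding RPC.
>>     service_->RequestChat(&ctx_, &stream_, cq_, cq_, this);
>>   }
>>
>>   void OnClientConnected() {
>>     // Approach 1: only now schedule an outstanding RPC for the next client.
>>     new BidiState(service_, cq_);
>>   }
>>
>>   void OnFinished() {
>>     // Approach 1: tear down the per-call state.
>>     // Approach 2 instead resets ctx_/stream_ and calls RequestChat() again,
>>     // keeping the number of outstanding RPCs constant at n.
>>     delete this;
>>   }
>>
>>  private:
>>   MyService::AsyncService* service_;
>>   grpc::ServerCompletionQueue* cq_;
>>   grpc::ServerContext ctx_;
>>   grpc::ServerAsyncReaderWriter<Reply, Request> stream_;
>> };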
>>
>> Now, I am running the ghz benchmarking tool with the command:
>>
>> ghz -c 100 -n 1000000 --insecure --proto <>  --call <> 
>>
>> Approach 2:
>> Summary:
>>   Count:        1000000
>>   Total:        38.53 s
>>   Slowest:      12.01 ms
>>   Fastest:      0.33 ms
>>   Average:      3.08 ms
>>   Requests/sec: 25954.63
>>
>>
>> Latency distribution:
>>   10 % in 1.88 ms 
>>   25 % in 2.12 ms 
>>   50 % in 2.46 ms 
>>   75 % in 3.65 ms 
>>   90 % in 5.27 ms 
>>   95 % in 6.28 ms 
>>   99 % in 7.96 ms 
>>
>> Status code distribution:
>>   [OK]   1000000 responses 
>>
>> Approach 1:
>> Summary:
>>   Count:        1000000
>>   Total:        31.12 s
>>   Slowest:      10.21 ms
>>   Fastest:      0.88 ms
>>   Average:      2.68 ms
>>   Requests/sec: 32138.66
>>
>>
>> Latency distribution:
>>   10 % in 1.65 ms 
>>   25 % in 1.78 ms 
>>   50 % in 2.03 ms 
>>   75 % in 3.27 ms 
>>   90 % in 4.79 ms 
>>   95 % in 5.56 ms 
>>   99 % in 6.91 ms 
>>
>> Status code distribution:
>>   [OK]   1000000 responses   
>>
>>
>>
