[grpc-io] Re: limiting grpc memory usage

2022-06-13 Thread 'AJ Heller' via grpc.io
Which gRPC library are you using, and which language? C++, Java, Python, etc.?
On Monday, June 6, 2022 at 11:42:26 AM UTC-7 amandee...@gmail.com wrote:

> So, we identified that it might be because of 
> CodedInputStream::ReadStringFallback in protocol buffers. For some 
> reason it does not reserve the buffer upfront and instead grows the 
> string with repeated appends, which leads to a string capacity of 8MB 
> for a 4MB string.
>
> Any pointers would be helpful.
> On Thursday, May 26, 2022 at 2:52:14 PM UTC-4 amandee...@gmail.com wrote:
>
>> We have a mechanism to limit the memory used by a process. To make sure 
>> that there are no violators, we rely on the maxrss of the process: we 
>> check maxrss every few minutes to see whether there was a spike in 
>> memory beyond the permitted value.
>>
>> We have a gRPC server, and what we are seeing is that for a request 
>> with a 4MB payload, the maxrss of the process becomes slightly greater 
>> than 8MB. This limits our effective memory utilization to just half in 
>> most scenarios without violating the memory limit. My guess is that 
>> this is because gRPC is not zero copy. Is there a way to make gRPC 
>> zero copy? If not, is there a way to limit the spike in memory when 
>> multiple requests come in? 
>>
>
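
For illustration, a minimal standalone sketch of the growth pattern described above: appending a large payload in chunks lets std::string's capacity overshoot the final size (the growth factor is implementation-defined; libstdc++ typically doubles), while a reserve() upfront pins capacity at the payload size. The 8KB chunk size below is hypothetical.

    #include <cstddef>
    #include <cstdio>
    #include <string>

    int main() {
      constexpr size_t kPayload = 4u * 1024 * 1024;  // 4MB message body
      constexpr size_t kChunk = 8192;                // hypothetical read-chunk size

      std::string grown;  // grown by repeated append, as in ReadStringFallback
      for (size_t n = 0; n < kPayload; n += kChunk) grown.append(kChunk, 'x');

      std::string reserved;  // capacity reserved upfront instead
      reserved.reserve(kPayload);
      for (size_t n = 0; n < kPayload; n += kChunk) reserved.append(kChunk, 'x');

      // With a doubling growth policy, "grown" can end up with ~8MB of
      // capacity for a 4MB string, while "reserved" stays at 4MB.
      std::printf("appended: size=%zu capacity=%zu\n", grown.size(), grown.capacity());
      std::printf("reserved: size=%zu capacity=%zu\n", reserved.size(), reserved.capacity());
      return 0;
    }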

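For reference, the maxrss value described above can be read from getrusage(2); a minimal sketch (on Linux, ru_maxrss is reported in kilobytes):

    #include <sys/resource.h>
    #include <cstdio>

    int main() {
      struct rusage ru;
      if (getrusage(RUSAGE_SELF, &ru) == 0) {
        // Peak resident set size of the calling process, in kilobytes on Linux.
        std::printf("maxrss = %ld KB\n", ru.ru_maxrss);
      }
      return 0;
    }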
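
On the question of limiting the spike: gRPC C++ does expose grpc::ResourceQuota for bounding the library's own buffer allocations. A sketch of wiring it into a server follows; the 64MB figure and address are hypothetical, and note that this bounds gRPC's internal buffers, not the protobuf string growth discussed above.

    #include <memory>
    #include <grpcpp/grpcpp.h>
    #include <grpcpp/resource_quota.h>

    void RunServer() {
      grpc::ResourceQuota quota("server-memory-quota");
      quota.Resize(64 * 1024 * 1024);  // hypothetical 64MB cap on gRPC buffers

      grpc::ServerBuilder builder;
      builder.SetResourceQuota(quota);
      builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
      // builder.RegisterService(&service);  // service registration elided
      std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
      server->Wait();
    }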


[grpc-io] Re: C++ Async Server Performance Issue

2022-06-13 Thread 'AJ Heller' via grpc.io
It's hard to tell, given there are a few variables here. Are you running 
ghz on the same machine as the gRPC server? How many threads are being 
spawned in both scenarios? It might be valuable for you to run something 
like perf and analyze the results to see where both processes are spending 
their time.
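
For example, a sketch of a perf session, assuming a Linux host (the binary name is hypothetical):

    perf record -F 99 -g -p $(pidof my_grpc_server) -- sleep 30
    perf report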

On Tuesday, May 31, 2022 at 11:44:28 PM UTC-7 Roshan Chaudhari wrote:

> I am expecting approach 2 to perform better, but this is not the case. 
> Any idea what I could be doing wrong here?
>
> On Wednesday, June 1, 2022 at 12:12:25 PM UTC+5:30 Roshan Chaudhari wrote:
>
>> I am using the async implementation of the C++ server. I tried 2 approaches:
>>
>> 1. While starting up the server, start only 1 outstanding RPC. When I 
>> receive a client connection, each of my bidi RPCs schedules one 
>> outstanding RPC for the next client. Once the RPC finishes, I destroy 
>> my BidiState/BidiContext using "delete this". 
>>
>> 2. I know the maximum number (n) of clients that could try to connect 
>> to my server, so I start n outstanding RPCs in the beginning. Once I 
>> get a client request, I do not fire up an outstanding RPC as in 1. 
>> Once the RPC finishes, I refresh the BidiState/BidiContext instead of 
>> calling "delete this". This makes sure that the number of outstanding 
>> RPCs always equals the number of clients that could connect. (A sketch 
>> of both patterns appears after the results below.)
>>
>> Now, I am using ghz benchmarking tool with the command:
>>
>> ghz -c 100 -n 100 --insecure --proto <>  --call <> 
>>
>> Approach 2:
>> Summary:
>>   Count:100
>>   Total:38.53 s
>>   Slowest:  12.01 ms
>>   Fastest:  0.33 ms
>>   Average:  3.08 ms
>>   Requests/sec: 25954.63
>>
>>
>> Latency distribution:
>>   10 % in 1.88 ms 
>>   25 % in 2.12 ms 
>>   50 % in 2.46 ms 
>>   75 % in 3.65 ms 
>>   90 % in 5.27 ms 
>>   95 % in 6.28 ms 
>>   99 % in 7.96 ms 
>>
>> Status code distribution:
>>   [OK]   100 responses 
>>
>> Approach 1:
>> Summary:
>>   Count:100
>>   Total:31.12 s
>>   Slowest:  10.21 ms
>>   Fastest:  0.88 ms
>>   Average:  2.68 ms
>>   Requests/sec: 32138.66
>>
>>
>> Latency distribution:
>>   10 % in 1.65 ms 
>>   25 % in 1.78 ms 
>>   50 % in 2.03 ms 
>>   75 % in 3.27 ms 
>>   90 % in 4.79 ms 
>>   95 % in 5.56 ms 
>>   99 % in 6.91 ms 
>>
>> Status code distribution:
>>   [OK]   100 responses   
>>
>>
>>
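
For readers following along, a minimal sketch of the two lifecycles being compared, written against a hypothetical Echo service with a bidirectional Chat method (message reads/writes and error handling elided):

    #include <grpcpp/grpcpp.h>
    #include "echo.grpc.pb.h"  // hypothetical generated header

    class BidiCall {
     public:
      BidiCall(echo::Echo::AsyncService* service, grpc::ServerCompletionQueue* cq)
          : service_(service), cq_(cq), stream_(&ctx_) {
        // One outstanding RPC: gRPC returns `this` on the cq when a client connects.
        service_->RequestChat(&ctx_, &stream_, cq_, cq_, this);
      }

      void Proceed(bool ok) {
        if (!ok) { delete this; return; }
        if (state_ == State::kConnect) {
          // Approach 1: post one new outstanding RPC for the next client...
          new BidiCall(service_, cq_);
          // ...then serve this one (reads/writes would go here) and finish.
          state_ = State::kFinish;
          stream_.Finish(grpc::Status::OK, this);
        } else {
          // Approach 2 would instead reset ctx_/stream_ here and call
          // RequestChat again, keeping the pool of outstanding RPCs fixed.
          delete this;
        }
      }

     private:
      enum class State { kConnect, kFinish };
      State state_ = State::kConnect;
      echo::Echo::AsyncService* service_;
      grpc::ServerCompletionQueue* cq_;
      grpc::ServerContext ctx_;
      grpc::ServerAsyncReaderWriter<echo::Msg, echo::Msg> stream_;
    };

    // Completion-queue drain loop, typically one per thread:
    void DrainLoop(grpc::ServerCompletionQueue* cq) {
      void* tag;
      bool ok;
      while (cq->Next(&tag, &ok)) static_cast<BidiCall*>(tag)->Proceed(ok);
    }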
