Thanks for sharing the link. I'm wondering whether the issue referenced in 
<https://github.com/grpc/grpc/issues/14899> has been addressed, and whether 
the numbers reported for C++ are correct. In other words, should I expect 
~260K QPS when running the streaming, secure `ping` benchmark against an 
8-core server? If so, why is the C++ implementation not as efficient as the 
Go and Java ones?

[image: Screenshot 2024-10-03 at 6.17.42 PM.png]

On Thursday, October 3, 2024 at 5:24:02 PM UTC-4 [email protected] wrote:

> https://grpc.io/docs/guides/benchmarking/ is what we have for this topic. 
> You can get continuous benchmark data from 
> https://grafana-dot-grpc-testing.appspot.com/?orgId=1
>
> On Tuesday, October 1, 2024 at 1:44:25 PM UTC-7 Amirsaman Memaripour wrote:
>
>> Pinging in case this didn't show up on your radar :)
>>
>> On Tuesday, September 24, 2024 at 1:11:21 PM UTC-4 Amirsaman Memaripour 
>> wrote:
>>
>>> Hi folks,
>>>
>>> Are there any published latency numbers for the baseline overhead of a 
>>> C++ ping server using gRPC's completion queues (CQs)? I'm primarily 
>>> interested in the per-request CPU overhead of gRPC's RPC stack on the 
>>> server side, and whether there are studies on tuning the number of 
>>> polling threads and CQs to minimize that cost and maximize throughput 
>>> on a few CPU cores.
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/322549d6-8110-4b35-95a6-b9b2f0bac33en%40googlegroups.com.