Those numbers look correct to me.

FYI, we're actively working on C++ performance enhancements (namely the 
EventEngine and Promise migrations), and we expect to see improvements once 
those land. I'm hoping those changes will be complete sometime next year.


On Thursday, October 3, 2024 at 3:21:34 PM UTC-7 Amirsaman Memaripour wrote:

> Thanks for sharing the link. I'm wondering whether the issue referenced in 
> this issue <https://github.com/grpc/grpc/issues/14899> has been addressed, 
> and whether the numbers reported for C++ are correct. In other words, should 
> I expect ~260K QPS when running the streaming, secure `ping` benchmark 
> against an 8-core server? If so, why is the C++ implementation not as 
> efficient as the Go and Java ones?
>
> [image: Screenshot 2024-10-03 at 6.17.42 PM.png]
>
> On Thursday, October 3, 2024 at 5:24:02 PM UTC-4 [email protected] wrote:
>
>> https://grpc.io/docs/guides/benchmarking/ is what we have for this 
>> topic. You can get continuous benchmark data from 
>> https://grafana-dot-grpc-testing.appspot.com/?orgId=1
>>
>> On Tuesday, October 1, 2024 at 1:44:25 PM UTC-7 Amirsaman Memaripour 
>> wrote:
>>
>>> Pinging in case this didn't show up on your radar :)
>>>
>>> On Tuesday, September 24, 2024 at 1:11:21 PM UTC-4 Amirsaman Memaripour 
>>> wrote:
>>>
>>>> Hi folks,
>>>>
>>>> Are there any published latency numbers for the baseline overhead of a 
>>>> C++ ping server using gRPC's completion queues (CQs)? I'm primarily 
>>>> interested in the per-request CPU overhead of gRPC's RPC stack on the 
>>>> server side, and whether there are studies on tuning the number of 
>>>> polling threads and CQs to optimize that cost and maximize throughput on 
>>>> a few CPU cores.
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/d482c882-d73a-4a91-889d-0f7c318a5ec6n%40googlegroups.com.
