That makes sense. Thank you Esun!

On Thu, Oct 3, 2024 at 6:55 PM Esun Kim <[email protected]> wrote:

> The benchmark results might not directly apply to your situation. Since
> the benchmark server only echoes data back and doesn't do any meaningful
> work, unlike real-workload servers, the actual QPS you'll see will likely
> be much lower, and the difference between C++ and Java/Go will become
> insignificant. The raw numbers from the benchmark itself should be
> correct, though.
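
One way to see this point is a back-of-the-envelope sketch. The per-request framework costs below (31 µs for C++, 15 µs for Go) are made-up illustrative assumptions, not measurements; the shape of the result is what matters:

```python
# Sketch: why framework overhead matters less under real workloads.
# The per-request framework CPU costs below are ASSUMED numbers for
# illustration only, not measured gRPC figures.
framework_us = {"cpp": 31.0, "go": 15.0}

def peak_qps(framework_cost_us, app_work_us, cores=8):
    """Idealized peak QPS when every core is saturated:
    cores / per-request CPU time."""
    per_request_s = (framework_cost_us + app_work_us) * 1e-6
    return cores / per_request_s

# Ping benchmark (zero app work): framework cost dominates, gap is ~2x.
# A handler doing 500 us of real work per request: gap shrinks to ~3%.
for work in (0, 500):
    ratio = peak_qps(framework_us["go"], work) / peak_qps(framework_us["cpp"], work)
    print(f"app work {work} us -> Go/C++ QPS ratio {ratio:.2f}")
```

As soon as the handler's own work dominates the per-request cost, the stack overhead difference is diluted away.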
>
>
> On Thu, Oct 3, 2024 at 3:43 PM 'Amirsaman Memaripour' via grpc.io <
> [email protected]> wrote:
>
>> Thanks again. Just to verify: we expect the C++ implementation to offer
>> half the throughput of the Go and Java implementations, peaking at ~260K
>> QPS on an 8-core server, correct?
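
As a quick sanity check on what that figure implies, assuming all 8 cores are fully saturated at peak (an idealization; real servers rarely reach 100% useful utilization):

```python
# Convert peak throughput into a per-request CPU budget.
# Assumes every core is fully busy serving requests at peak QPS.
cores = 8
qps = 260_000  # peak streaming secure ping QPS under discussion

cpu_us_per_request = cores / qps * 1e6  # CPU-microseconds per request
print(f"~{cpu_us_per_request:.1f} us of CPU per request")  # ~30.8 us
```

So ~260K QPS on 8 cores corresponds to roughly a 30 µs CPU budget per request for the entire RPC stack.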
>>
>> On Thursday, October 3, 2024 at 6:36:46 PM UTC-4 [email protected] wrote:
>>
>>> Those numbers look correct to me.
>>>
>>> FYI, we're actively working on C++ performance enhancements (namely the
>>> EventEngine and Promise migrations), and we anticipate improvements once
>>> those land. I'm hoping those changes will be complete sometime next year.
>>>
>>>
>>> On Thursday, October 3, 2024 at 3:21:34 PM UTC-7 Amirsaman Memaripour
>>> wrote:
>>>
>>>> Thanks for sharing the link. I'm wondering whether this issue
>>>> <https://github.com/grpc/grpc/issues/14899> has been addressed, and
>>>> whether the numbers reported for C++ are correct. In other words, should
>>>> I expect ~260K QPS when running the streaming, secure `ping` benchmark
>>>> against an 8-core server? If so, why is the C++ implementation not as
>>>> efficient as the Go and Java ones?
>>>>
>>>> [image: Screenshot 2024-10-03 at 6.17.42 PM.png]
>>>>
>>>> On Thursday, October 3, 2024 at 5:24:02 PM UTC-4 [email protected]
>>>> wrote:
>>>>
>>>>> https://grpc.io/docs/guides/benchmarking/ is what we have for this
>>>>> topic. You can get continuous benchmark data from
>>>>> https://grafana-dot-grpc-testing.appspot.com/?orgId=1
>>>>>
>>>>> On Tuesday, October 1, 2024 at 1:44:25 PM UTC-7 Amirsaman Memaripour
>>>>> wrote:
>>>>>
>>>>>> Pinging in case this didn't show up on your radar :)
>>>>>>
>>>>>> On Tuesday, September 24, 2024 at 1:11:21 PM UTC-4 Amirsaman
>>>>>> Memaripour wrote:
>>>>>>
>>>>>>> Hi folks,
>>>>>>>
>>>>>>> Are there any published latency numbers for the baseline overhead of
>>>>>>> a C++ ping server using gRPC's completion queues (CQs)? I'm primarily
>>>>>>> interested in the per-request CPU overhead of gRPC's RPC stack on the
>>>>>>> server side, and in any studies on tuning the number of polling
>>>>>>> threads and CQs to optimize that cost and maximize throughput on a
>>>>>>> few CPU cores.
>>>>>>>
>>>>>> --
>> You received this message because you are subscribed to a topic in the
>> Google Groups "grpc.io" group.
>> To unsubscribe from this topic, visit
>> https://groups.google.com/d/topic/grpc-io/N1q33b5qEP8/unsubscribe.
>> To unsubscribe from this group and all its topics, send an email to
>> [email protected].
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/grpc-io/d6763f5e-b05c-4633-82c7-ce08823be005n%40googlegroups.com
>> <https://groups.google.com/d/msgid/grpc-io/d6763f5e-b05c-4633-82c7-ce08823be005n%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
>
>
> --
> Regards,
> Esun.
>
>

-- 

*{* name     : "Amirsaman Memaripour",
  title    : "Staff Engineer",
  location : "New York, NY",
  twitter  : "@MongoDB <https://twitter.com/mongodb>",
  facebook : "MongoDB <https://www.facebook.com/mongodb>" *}*
