On 2/17/24 11:15, maobibo wrote:
On 2024/2/15 6:25 PM, WANG Xuerui wrote:
On 2/15/24 18:11, WANG Xuerui wrote:
Sorry for the late reply (and Happy Chinese New Year), and thanks for providing micro-benchmark numbers! But it seems the more comprehensive CoreMark results were omitted (they're also absent in v3)? While the

Of course, the benchmark suite should be UnixBench instead of CoreMark. Lesson: don't multi-task code reviews, especially not after consuming beer -- a cup of coffee won't fully cancel the influence. ;-)

Where is the rule about benchmark choices like UnixBench/CoreMark for IPI improvement?

Sorry for the late reply. The rules are mostly unwritten, but in general you can think of the preference among benchmark suites as a matter of "effectiveness" -- the closer a benchmark is to some real workload in the wild, the better. Micro-benchmarks are okay for illustrating a point, but without demonstrating the impact on realistic workloads, a change could turn out to be "useless" in practice, or even degrade various performance metrics (be it throughput, latency, or anything else that matters in the given case), yet get accepted without anyone noticing.

--
WANG "xen0n" Xuerui

Linux/LoongArch mailing list: https://lore.kernel.org/loongarch/
