On Wednesday, 9 September 2015 at 14:00:07 UTC, qznc wrote:
That is a good idea, if you want to measure compiler optimizations. Ideally g++ and gdc should always yield the same performance then?

Hopefully. As I understand it, GCC uses a high-level IR shared by all of its front ends, so g++ and gdc go through the same optimizer and back end. If the performance is equal, that is a pretty strong argument for adoption, provided the rest of the language is polished.

And you could measure regressions.

Suppose you consider using D, with C/C++ as the stable alternative. D lures you with its high-level features. However, you know that sooner or later you will have to really optimize some hot spots. Will D impose a penalty on you where C/C++ could have provided better performance?

Walter argues that there is no technical reason why D should be slower than C/C++. My experience with the benchmarks suggests there are such penalties. For example, D has nothing like gcc's __builtin_ia32_cmplepd or __builtin_ia32_movmskpd.
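For reference, here is roughly what those two builtins buy you in C. A minimal sketch using the emmintrin.h wrappers _mm_cmple_pd and _mm_movemask_pd, which GCC implements on top of exactly those builtins; the function count_le is made up for illustration.

/* Count how many elements of a[] are <= the matching elements of b[],
 * two doubles per iteration. _mm_cmple_pd and _mm_movemask_pd are the
 * portable emmintrin.h wrappers that GCC lowers to
 * __builtin_ia32_cmplepd and __builtin_ia32_movmskpd. */
#include <emmintrin.h>

int count_le(const double *a, const double *b, int n)
{
    int count = 0;
    for (int i = 0; i + 2 <= n; i += 2) {
        __m128d va = _mm_loadu_pd(a + i);
        __m128d vb = _mm_loadu_pd(b + i);
        __m128d le = _mm_cmple_pd(va, vb);  /* all-ones lanes where a <= b */
        int mask = _mm_movemask_pd(le);     /* sign bit of each lane */
        count += (mask & 1) + ((mask >> 1) & 1);
    }
    return count;
}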

Ok, I see your point. You want to measure maximum throughput for critical applications that might benefit from language-specific intrinsics.

Multithreaded applications could probably show some differences too, since D makes module-level variables thread-local by default and reserves shared for cross-thread data...
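To make the TLS point concrete, here is the distinction in C terms: a plain global versus a __thread one. D module-level variables behave like the __thread case unless marked shared or __gshared. A minimal sketch; the names are illustrative.

/* A plain global is one instance shared by every thread; a __thread
 * global gets one instance per thread, reached through an extra TLS
 * addressing step (e.g. %fs-relative on x86-64 Linux). */
#include <pthread.h>
#include <stdio.h>

long shared_counter;        /* one instance for all threads */
__thread long tls_counter;  /* one instance per thread */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        tls_counter++;      /* per-thread copy, extra indirection */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Each worker bumped its own tls_counter; main's copy is still 0. */
    printf("main's tls_counter: %ld\n", tls_counter);
    return 0;
}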

Maybe some kind of actor-based benchmark: essentially running thousands of fibers with lots of intercommunication. The inner loop of that is sketched below.
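The core of such a benchmark is cooperative context switching plus message passing. Here is a minimal two-fiber ping-pong sketch in C, using POSIX ucontext as a stand-in for D's core.thread Fiber (assumed: a glibc-style platform where ucontext is still available; all names are illustrative).

/* Two "fibers" alternate via swapcontext, passing a counter as the
 * message. A real benchmark would run thousands of these. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, fiber_ctx;
static long message;                  /* the "mailbox" */

static void fiber_body(void)
{
    for (;;) {
        message++;                    /* handle the message */
        swapcontext(&fiber_ctx, &main_ctx);  /* yield back to sender */
    }
}

int main(void)
{
    static char stack[64 * 1024];
    getcontext(&fiber_ctx);
    fiber_ctx.uc_stack.ss_sp = stack;
    fiber_ctx.uc_stack.ss_size = sizeof stack;
    fiber_ctx.uc_link = &main_ctx;
    makecontext(&fiber_ctx, fiber_body, 0);

    for (int i = 0; i < 1000000; i++)
        swapcontext(&main_ctx, &fiber_ctx);  /* send and await reply */

    printf("%ld messages handled\n", message);
    return 0;
}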
