On Friday, 20 June 2014 at 12:32:39 UTC, Nick Treleaven wrote:
Hi,
A Perlin noise benchmark was quoted in this reddit thread:

http://www.reddit.com/r/rust/comments/289enx/c0de517e_where_is_my_c_replacement/cibn6sr

It apparently shows the 3 main D compilers producing slower code than Go, Rust, gcc, clang, Nimrod:

https://github.com/nsf/pnoise#readme

I initially wondered about std.random, but got this response:

"Yeah, but std.random is not used in that benchmark, it just initializes 256 random vectors and permutates 256 sequential integers. What spins in a loop is just plain FP math and array read/writes. I'm sure it can be done faster, maybe D compilers are bad at automatic inlining or something. "

Obviously this is only one person's benchmark, but I wondered if people would like to check their code and suggest reasons for the speed deficit.

I came across this thread while searching for something on the site; it's been a few months since anyone posted, so here's an update.

I fixed the D compiler flags; GDC is now about 15% faster than the second-fastest entry in the benchmark (C compiled with gcc), which puts D in first place.
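For reference, the kind of invocation I mean looks roughly like this. The exact flag set is from memory and the file name is illustrative, so double-check against the repo's build script rather than treating this as the benchmark's actual build line:

    gdc  -O3 -frelease -fno-bounds-check -ffast-math pnoise.d -o pnoise_gdc
    ldc2 -O3 -release -boundscheck=off pnoise.d -of=pnoise_ldc

(-ffast-math on the GDC line is the fast-math point I get to below.)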
Some notes:

LDC is missing _tons_ of inlining opportunities, which kills it in comparison to GDC. I think GDC inlined pretty much everything, while LDC ends up about 50% slower.
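One workaround is to hint the hot leaf functions yourself. A minimal sketch, assuming a front end recent enough to support pragma(inline, true); lerp here is just an illustrative stand-in for the benchmark's small helpers, not the actual code from pnoise:

    // Forces the compiler to inline this helper (or error out if it
    // can't), rather than relying on its own inlining heuristics.
    pragma(inline, true)
    double lerp(double a, double b, double t)
    {
        // plain FP math; exactly the kind of tiny function the
        // benchmark's inner loop calls constantly
        return a + t * (b - a);
    }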

Also, AFAICT there's no fast-math switch for LDC (enabling -ffast-math for GDC might actually be compromising the comparison, though :) ).

I think LDC lowers the floor in std.math to the same code as the stdc one, but GDC does not. std.math.floor is still abysmally slow; I thought it was because it was still operating on reals, but that does not seem to be the case. GDC slows to a crawl (10-20x slower) if you replace the stdc floor with the one in std.math (just remove the alias in the benchmark source).
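By "the alias" I mean the binding in the benchmark source that points floor at the C runtime instead of Phobos. Roughly this (the exact spelling in pnoise.d may differ):

    static import core.stdc.math;

    // fast path: bind floor to the C runtime version
    alias floor = core.stdc.math.floor;

    // removing the alias and using std.math.floor instead is what
    // makes GDC 10-20x slower in the inner loop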

I thought this might be interesting to someone (e.g. the LDC/GDC folks or the Phobos math folks).

bye.
