On Friday, 19 January 2024 at 10:15:57 UTC, evilrat wrote:
On Friday, 19 January 2024 at 09:08:17 UTC, Renato wrote:

I forgot to mention: the Java version uses a Trie... and it consistently beats the Rust numeric algorithm (which means it's still faster than your D solution). However, the Java version that's equivalent to Rust's implementation is around 3x slower, i.e. it runs at about the same speed as my current fastest numeric algorithm in D.

This is what I would like to be discussing in this thread: why is D running at Java speeds, and not at D speeds, when using the same algorithm? I know there are small differences between the implementations, they are different languages after all, but not enough, IMO, to justify anything like a 3x difference from Rust.


My guess is that it's because int128 is not particularly well optimized, as it's not a very popular type, though answering what's actually wrong would require looking at the assembly code produced for both D and Rust.

Additionally, if you are comparing D by measuring DMD performance - don't. DMD is valuable for fast iteration during development, but it lacks many modern optimization techniques; for those we have LDC and GDC.

I am not using int128 anymore; I explained why a few posts back. I am using a byte array and computing the hash incrementally when trying different inputs, so that partially computed hashes are re-used on each try (this is a bit of a cheat, as Rust is not doing that, but I consider it acceptable since the code is still computing hashes and looking up entries in the associative array).
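To make the incremental-hashing idea concrete, here is a minimal Rust sketch (Rust rather than D only so it stays close to the implementation being compared against). The multiplier constant, the toy dictionary, and the function names are all my own assumptions for illustration, not Renato's actual code; the point is just that extending a key by one byte reuses the hash of the shorter prefix, so each successive try costs O(1) extra hashing work:

```rust
use std::collections::HashMap;

// Assumed polynomial-hash multiplier; any odd constant would do.
const BASE: u64 = 31;

// Extend the hash of a prefix by one more byte, reusing the
// previously computed value instead of rehashing from scratch.
fn extend_hash(prev: u64, byte: u8) -> u64 {
    prev.wrapping_mul(BASE).wrapping_add(byte as u64)
}

fn main() {
    // Toy dictionary keyed by the same incremental hash.
    let mut dict: HashMap<u64, &str> = HashMap::new();
    let mut h = 0u64;
    for &b in b"cat" {
        h = extend_hash(h, b);
    }
    dict.insert(h, "cat");

    // Scan the input, extending the partial hash one byte at a time
    // and probing the associative array at each step; the work done
    // for shorter prefixes is never redone.
    let mut h = 0u64;
    for (i, &b) in b"catalog".iter().enumerate() {
        h = extend_hash(h, b);
        if let Some(w) = dict.get(&h) {
            // prints: prefix of length 3 matches "cat"
            println!("prefix of length {} matches {:?}", i + 1, w);
        }
    }
}
```

A real implementation would also have to verify matches byte-for-byte (or use a collision-resistant scheme), since two different prefixes can hash to the same value.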

I tried all the D compilers and picked the fastest one (GDC in the int128 case, but LDC2 in the current case).
