On Wednesday, 30 December 2015 at 00:24:38 UTC, Ilya Yaroshenko wrote:
Awesome!

Please find my notes below.

Thanks for the feedback. Good thing I posted this here before releasing it.

Funny thing: when I made the D example use the mean lambda function, it got much faster. Even with the larger array size, the D code went from 138 ns (small array, named function) to 58 ns (large array, lambda). And the Python code, as expected, got slower: even when I made sure to time only the np.mean call, taking the mean of the large array took 145 µs, up from the 10.5 µs it took to allocate and take the mean of the smaller array.

So now the D version is 2474x faster.

Also, I was unable to get LDC numbers: when I compiled my test program with all of the optimization flags enabled, LDC reported 0 hnsecs.

the code:

===============

import std.range;
import std.algorithm.iteration;
import std.experimental.ndslice;
import std.stdio;
import std.datetime;
import std.conv : to;

void f0() {
    auto means = 100_000.iota
        .sliced(100, 1000)
        .transposed
        .map!(r => sum(r) / r.length);
}

void main() {
    auto r = benchmark!(f0)(10_000);
    auto f0Result = to!Duration(r[0]);
    f0Result.writeln;
}

===============

I'm assuming that LLVM realizes that the variable means is never used, so it eliminates the computation from the final version entirely.
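One way to defeat that (a sketch, not a definitive fix) is to make the result observable, e.g. by accumulating it into a module-level variable. Note also that std.algorithm.iteration.map is lazy, so until the range is actually consumed no averaging happens at all, even before dead-code elimination; the sum call below forces evaluation. The name sink is my own invention:

```d
import std.range;
import std.algorithm.iteration;
import std.experimental.ndslice;

// Module-level accumulator: storing the result here makes the
// computation observable, so the optimizer cannot discard it.
ulong sink;

void f0() {
    auto means = 100_000.iota
        .sliced(100, 1000)
        .transposed
        .map!(r => sum(r) / r.length);
    // map is lazy; summing the means forces the whole
    // computation and feeds the result into the sink.
    sink += means.sum;
}
```

With that change, the optimized LDC build should report a nonzero (and honest) timing.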
