On Sunday, 3 January 2016 at 00:17:23 UTC, Ilya Yaroshenko wrote:
On Sunday, 3 January 2016 at 00:09:33 UTC, Jack Stouffer wrote:
On Saturday, 2 January 2016 at 23:51:09 UTC, Ilya Yaroshenko wrote:
This benchmark is _not_ lazy, so ndslice is only 3.5 times faster than NumPy.

I don't know what you mean here, I made sure to call std.array.array to force allocation.

In the article:
    auto means = 100_000.iota   // 100_000.iota is a lazy range
        .sliced(100, 1000)
        .transposed
        .map!(r => sum(r) / r.length)
        .array;                 // allocation of the result

In GitHub:
means = data                    // data is an allocated array; a fair test for the real world
        .sliced(100, 1000)
        .transposed
        .map!(r => sum(r, 0L) / cast(double) r.length)
        .array;                 // allocation of the result
 -- Ilya

I still have to disagree with you; I think the example I submitted was fair. Accessing global memory in D is going to be much slower than accessing stack memory, and since most std.ndslice calculations are going to be on the stack, I believe my benchmark is indicative of normal use.
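For reference, a minimal, self-contained sketch of the allocated-array variant being discussed, with the data held in a local (stack-referenced) variable rather than a global. This assumes the 2016-era std.experimental.ndslice module and fills in the imports the thread's snippets omit; the constant fill value is just a placeholder for real data:

```d
import std.algorithm.iteration : map, sum;
import std.array : array;
import std.experimental.ndslice : sliced, transposed;

void main()
{
    // Allocated array, referenced from a stack local, mirroring the
    // GitHub benchmark rather than the lazy 100_000.iota version.
    auto data = new double[100_000];
    data[] = 1.0;

    // Column means of a 100 x 1000 matrix view over `data`.
    auto means = data
        .sliced(100, 1000)
        .transposed
        .map!(r => sum(r, 0.0) / cast(double) r.length)
        .array;             // allocation of the result

    assert(means.length == 1000);
}
```

Whether `data` is a module-level global or a local like this is exactly the difference the stack-versus-global-memory point above is about.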
