http://d.puremagic.com/issues/show_bug.cgi?id=9582



--- Comment #11 from bearophile_h...@eml.cc 2013-02-24 16:11:10 PST ---
(In reply to comment #10)

> But the way I see it, your argument is that loop unrolling justifies copying
> an entire array.

If your array is a ubyte[6], then I think copying it is fine; otherwise it's
better to take the fixed-size array by reference.


> Furthermore, is the performance gain *also* worth the template bloat,
> since you are instantiating 1 reduce algorithm per array *size*.

The compile-time knowledge of the array length gives a performance advantage
only for small arrays; that's why I used an int[3] in my example. One way to
keep template bloat low is to cap the length with a template constraint:


import std.stdio;

// Overload for small fixed-size arrays, taken by reference.
// The constraint caps N to limit template bloat.
void foo(T, size_t N)(ref T[N] items) if (N < 10) {
    writeln("foo 1");
}

// Fallback for dynamic arrays and large fixed-size arrays.
void foo(T)(T[] items) {
    writeln("foo 2");
}

void main() {
    int[3] a1;   // small fixed-size: matches the constrained overload
    int[20] a2;  // too large for the constraint: falls back to the slice overload
    int[] a3;    // dynamic array: slice overload
    foo(a1);
    foo(a2);
    foo(a3);
}

Prints:

foo 1
foo 2
foo 2
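
To illustrate why the compile-time length matters for small arrays, here is a
hypothetical sketch (sum is an illustrative name, not a Phobos function): when
N is a template parameter, the loop trip count is a compile-time constant, so
the backend is free to fully unroll it.

import std.stdio;

// Hypothetical: sum specialized for small fixed-size arrays.
// N is known at compile time, so the loop below is unrollable.
T sum(T, size_t N)(ref T[N] items) if (N < 10) {
    T total = 0;
    foreach (i; 0 .. N)  // compile-time trip count
        total += items[i];
    return total;
}

// Fallback for dynamic arrays and large fixed-size arrays.
T sum(T)(T[] items) {
    T total = 0;
    foreach (x; items)
        total += x;
    return total;
}

void main() {
    int[3] a = [1, 2, 3];
    writeln(sum(a));         // fixed-size overload, prints 6
    writeln(sum([4, 5, 6])); // slice overload, prints 15
}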


(There is another way to solve this problem, but I think it requires a small
improvement to the D language (an idea to help reduce template bloat in some
situations, like with fixed-size arrays). While designing reduce() you can't
assume such a change, which currently isn't even filed as a Bugzilla
enhancement request.)


> The ideal solution would be one akin to cycle: A specialized overload that
> takes static arrays by reference. You get your cheap pass by ref, but also
> keep your compile time info.
> 
> But still, that is a *very* specific use case, with a code deployment cost,
> and shouldn't be used for many other things.

OK.
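
A minimal sketch of what such a specialized overload might look like
(myReduce is a hypothetical name, not the Phobos reduce): the static-array
overload gets the cheap pass by ref while keeping N as compile-time info,
and overload resolution prefers it over the slice version.

import std.stdio;

// Hypothetical general version over a slice.
T myReduce(alias fun, T)(T seed, T[] items) {
    foreach (x; items)
        seed = fun(seed, x);
    return seed;
}

// Hypothetical specialized overload: fixed-size array by reference,
// with N available at compile time (unrollable loop).
T myReduce(alias fun, T, size_t N)(T seed, ref T[N] items) {
    foreach (i; 0 .. N)  // compile-time trip count
        seed = fun(seed, items[i]);
    return seed;
}

void main() {
    int[3] a = [1, 2, 3];
    writeln(myReduce!((x, y) => x + y)(0, a));      // static-array overload
    writeln(myReduce!((x, y) => x + y)(0, [4, 5])); // slice overload
}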
