On Sunday, 25 August 2013 at 14:54:04 UTC, bearophile wrote:
I have modified and improved and fixed your code a little:
[..]
Thank you for your analysis. I've run it on Linux with DMD64 2.063.2 (dmd -O -noboundscheck -inline -release), and the relative timings differ from those on Windows:

500000 7160
500000 18588
500000 18592
500000 14466
500000 28368

I'm also running it with LDC, but it reports timings that are too good to be true; something meaningful is being optimized away, and I'm trying to find out why (a guess and a small sketch of a possible workaround are further down, after the questions). I have a few newbie questions below:

int test(alias F)(in int nLoops) /*pure nothrow*/ {
    int[10] a;
    typeof(return) n = 0;

    foreach (immutable _; 0 .. nLoops) {
        a[4] = !a[4];
        n += F(a);
    }

    return n;
}

What is the purpose of "immutable _" above? Why not just "i"?

void main() {
    enum size_t nLoops = 60_000_000;
    StopWatch sw;

    foreach (alg; TypeTuple!(isPalindrome0, isPalindrome1,
                             isPalindrome2, isPalindrome3,
                             isPalindrome4)) {
        sw.reset;
        sw.start;
        immutable n = test!alg(nLoops);
        sw.stop;
        writeln(n, " ", sw.peek.msecs);
    }
}

Same question here: why immutable?
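
Coming back to the LDC timings: my current guess is that, with nLoops being a compile-time constant and the array contents fully predictable, the optimizer can fold most of the benchmark away. One thing I plan to try is feeding the loop count in at run time; this is only an untested sketch, reusing test() and isPalindrome0 from above:

import std.conv, std.stdio;

void main(string[] args)
{
    // Untested idea: a loop count that is only known at run time should
    // keep the optimizer from pre-computing the whole benchmark.
    immutable nLoops = args.length > 1 ? args[1].to!int : 60_000_000;
    writeln(test!isPalindrome0(nLoops));
}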

I wanted to get more stable run-to-run results, so I'm timing each algorithm N times and keeping the minimum (noise from the rest of the system should only ever make a run slower):

import std.algorithm, std.datetime, std.stdio, std.typetuple;

void main()
{
   enum      N = 100;
   enum      M = 1_000_000;
   StopWatch sw;

   foreach (alg; TypeTuple!(isPalindrome0, isPalindrome1, isPalindrome2,
                            isPalindrome3, isPalindrome4)) {
      int [N] results;
      long[N] timings;

      foreach (i; 0..N) {
         sw.reset;
         sw.start;
         results[i] = test!alg(M);
         sw.stop;
         timings[i] = sw.peek.usecs;
      }

      writeln(results.reduce!min, "  ", timings.reduce!min);
   }
}
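
To see how much the individual runs actually jitter, something like this could print the spread as well (just an untested sketch to go inside the alg loop; reduce with two functions returns a tuple holding both results):

      // after the timing loop: smallest and largest measured time
      auto spread = timings.reduce!(min, max);
      writeln("spread: ", spread[0], " .. ", spread[1], " usecs");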
