On Thursday, 18 June 2015 at 10:27:58 UTC, Russel Winder wrote:
On a given machine, the code:

double sequential_loop(const int n, const double delta) {
  auto sum = 0.0;
  foreach (immutable i; 1 .. n + 1) {
    immutable x = (i - 0.5) * delta;
    sum += 1.0 / (1.0 + x * x);
  }
  return 4.0 * delta * sum;
}

runs in about 6.70s. However the code:

double sequential_reduce(const int n, const double delta) {
  import std.algorithm : reduce;
  import std.range : iota;
  return 4.0 * delta * reduce!((double t, int i) {
    immutable x = (i - 0.5) * delta;
    return t + 1.0 / (1.0 + x * x);
  })(0.0, iota(1, n + 1));
}

runs in about 17.03s, whilst:

double sequential_reduce_alt(const int n, const double delta) {
  import std.algorithm : map, reduce;
  import std.range : iota;
  return 4.0 * delta * reduce!"a + b"(map!((int i) {
    immutable x = (i - 0.5) * delta;
    return 1.0 / (1.0 + x * x);
  })(iota(1, n + 1)));
}

takes about 28.02s. Unless I am missing something (very possible), this is not going to make a good advert for D as an imperative language with declarative (internal-iteration) expressions.
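For reference, all three variants compute the same thing: a midpoint-rule approximation of the integral of 4/(1 + x^2) over [0, 1], which equals pi. A minimal Python sketch of the equivalent computation (Python used here purely for illustration, not part of the original post):

```python
import math

def sequential_loop(n, delta):
    # Midpoint-rule quadrature of 4/(1+x^2) on [0, 1]; this integral
    # equals pi, so the result should converge to math.pi as n grows.
    # Mirrors the D sequential_loop quoted above.
    total = 0.0
    for i in range(1, n + 1):
        x = (i - 0.5) * delta
        total += 1.0 / (1.0 + x * x)
    return 4.0 * delta * total

n = 1_000_000
approx = sequential_loop(n, 1.0 / n)
print(approx)  # close to 3.141592653589793
```

Any timing differences between the three D versions are therefore purely about how the loop is expressed, not about what is computed.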

The first two run in about the same time on LDC; the third one runs about 30-40% faster.
