Googling to see the differences between *in* and *const ref*, I found a detailed explanation here:
http://stackoverflow.com/questions/8515579/difference-between-const-ref-and-in

One response is: "If you have identified the copying as a bottleneck and you want to optimize, using *const ref* is a good idea."

Doesn't this imply there is some other benefit to *in* - otherwise *const ref* would always be chosen?

Later on another response is: "A huge difference between *in* and *const ref* which you don't cover at all is the fact that *const ref* must take an lvalue, whereas *in* doesn't have to"

Why is this benefit huge? Is it just the convenience of being able to pass in literals or is it something more?
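If I understand that response correctly, the difference looks like the sketch below (the `Field`, `byCRef`, and `byIn` names are just mine for illustration):

```d
import std.stdio;

struct Field { int n; }

int byCRef(const ref Field f) { return f.n; } // lvalues only
int byIn(in Field f)          { return f.n; } // lvalues and rvalues

void main() {
    Field f = Field(1);
    writeln(byCRef(f));      // fine: f is an lvalue
    writeln(byIn(f));        // fine
    writeln(byIn(Field(2))); // fine: `in` accepts the temporary
    // byCRef(Field(2));     // Error: const ref needs an lvalue
}
```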

If I'm dealing with types T, where T is *not* a delegate, why not always just choose *const ref* and be done with it? If the only downside is that client code must declare variables where they would otherwise use literals, is that such a big price?

Also, it does not mention *in ref*, which I guess is the same as *const ref* but with *scope* added.

Assume I'm trying to decide on the signature of a setter property, I could do any of:
@property auto field(const ref Field f) {...}
@property auto field(in ref Field f) {...}
@property auto field(in Field f) {...}

If Field is big, *const ref* clearly wins. Even if Field is small now, that does not mean it will not grow in the future and become an issue. But when does *in* win?
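Concretely, my understanding (with a made-up `Widget` type as the example) is that a *const ref* setter would reject the usual assignment-from-a-temporary idiom, while *in* allows it:

```d
struct Field { int n; }

struct Widget {
    private Field _f;
    // `in` setter: accepts both named variables and temporaries
    @property void field(in Field f) { _f = f; }
    @property Field field() const { return _f; }
    // A `const ref` setter would force callers to write:
    //   auto tmp = Field(1); w.field = tmp;
}

void main() {
    Widget w;
    w.field = Field(1); // works with `in`; a const ref setter rejects this rvalue
    assert(w.field.n == 1);
}
```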

I wrote a small benchmark comparison and got the following results. If it is a bogus comparison for whatever reason, let me know. It seems that if performance is the only issue, just use *const ref* or *in ref*.

Thanks
Dan

---------
2 bytes: using cref_(int size) took 39[ms]
2 bytes: using inref(int size) took 40[ms]
2 bytes: using in___(int size) took 31[ms]

4 bytes: using cref_(int size) took 29[ms]
4 bytes: using inref(int size) took 29[ms]
4 bytes: using in___(int size) took 30[ms]

8 bytes: using cref_(int size) took 29[ms]
8 bytes: using inref(int size) took 28[ms]
8 bytes: using in___(int size) took 31[ms]

16 bytes: using cref_(int size) took 29[ms]
16 bytes: using inref(int size) took 29[ms]
16 bytes: using in___(int size) took 32[ms]

32 bytes: using cref_(int size) took 29[ms]
32 bytes: using inref(int size) took 29[ms]
32 bytes: using in___(int size) took 39[ms]

64 bytes: using cref_(int size) took 29[ms]
64 bytes: using inref(int size) took 29[ms]
64 bytes: using in___(int size) took 157[ms]

128 bytes: using cref_(int size) took 29[ms]
128 bytes: using inref(int size) took 29[ms]
128 bytes: using in___(int size) took 290[ms]

--------

import std.stdio;
import std.datetime;

struct S(int size) {
  char[size] c;
}

int x = 1; // side effect so the calls are not optimized away
void in___(int size)(in S!size t) { x++; }
void inref(int size)(in ref S!size t) { x++; }
void cref_(int size)(const ref S!size t) { x++; }

void compare(int size)() {
  callIt!(cref_, size);
  callIt!(inref, size);
  callIt!(in___, size);
  writeln();
}

// Times `iterations` calls of `func` on a size-byte struct.
void callIt(alias func, int size)() {
  const int iterations = 10_000_000;
  S!size s;
  auto sw = StopWatch(AutoStart.yes);
  for (size_t i = 0; i < iterations; ++i) {
    func(s);
  }
  sw.stop();
  writeln(size, " bytes: using ", func.stringof, " took ", sw.peek().msecs, "[ms]");
}

void main() {
  compare!(2)();
  compare!(4)();
  compare!(8)();
  compare!(16)();
  compare!(32)();
  compare!(64)();
  compare!(128)();
}



