Lars T. Kyllingstad wrote:
Michal Minich wrote:
Hello rmcguire,

why is this not a compiler bug?
because:
import std.stdio;

void main() {
    float f = 0.01;
    writefln("%0.2f->%d", f, cast(int)(f * 100f));
    writefln("%0.2f->%d", f, cast(int)(.01 * 100f));
    writefln("%0.2f->%f", f, f * 100f);
}
results in:
0.01->0
0.01->1
0.01->1.000000
I would say something is dodgy.

-Rory


I think this may be a case of: at compile time, floating-point
computations may be done at a higher precision than at run time.
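A minimal sketch of that difference, assuming DMD-style constant folding (which may happen at 80-bit real precision; results can vary by compiler and platform):

import std.stdio;

void main() {
    float f = 0.01;                        // run-time value, already rounded to 32-bit float
    enum folded = cast(int)(0.01 * 100f);  // forces compile-time evaluation of the literal
    int atRunTime = cast(int)(f * 100f);   // evaluated at run time from the rounded float
    writefln("compile time: %d, run time: %d", folded, atRunTime);
}

Here folded mirrors Rory's .01*100f line: the literal is never narrowed to float before the multiply, so it should print 1 while the run-time value prints 0, matching his output.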


Yes, if you do this:

float f = 0.01;
float g = f * 100f;  // product rounded back to 32-bit float
real  r = f * 100f;  // intermediate may be kept at 80-bit real precision
writefln("%s, %s, %s", f, cast(int) g, cast(int) r);

you get:

0.01, 0, 1

I believe just writing cast(int)(f*100f) is more or less the same as the
'real' case above.

-Lars
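For what it's worth, if the intent is the nearest integer rather than truncation toward zero, something like std.math.lround sidesteps the problem. A sketch:

import std.math : lround;
import std.stdio;

void main() {
    float f = 0.01;
    // cast(int) truncates toward zero, so a product just under 1 becomes 0;
    // lround rounds to the nearest integer, so the same product becomes 1.
    writefln("cast: %s, lround: %s", cast(int)(f * 100f), lround(f * 100f));
}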
Can that *really* be the explanation?? I know that float doesn't have all that much precision, but I thought it was more than 5 or 6 places... and this is, essentially, two places.
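Printing the stored value with more digits makes the explanation concrete: the problem is not the number of decimal places but that 0.01 has no exact binary representation, so the nearest float is slightly below it, and the truncating cast then drops the product to 0. A sketch (the exact digits, and whether excess x87 precision changes the picture, vary by compiler and platform, as this thread shows):

import std.stdio;

void main() {
    float f = 0.01;
    writefln("%.20f", f);         // roughly 0.00999999977648258209
    writefln("%.20f", f * 100f);  // just under 1 when f is truly rounded to float
}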
