On 10/23/2013 11:39 AM, Apollo Hogan wrote:
> There are a couple of points here:

> - it seems that whatever the semantics of floating-point arithmetic, they should
> be the same at compile time as at run time.

It's not very practical, especially considering that the compile-time environment may be nothing like the run-time one.
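
Here's a minimal sketch of the kind of divergence involved (nothing here is guaranteed; whether the two results actually differ depends on the compiler, the target, and the precision used for constant folding):

    import std.stdio;

    void main()
    {
        // Folded by the compiler, possibly at a higher intermediate
        // precision (e.g. 80-bit real on x87):
        enum float ct = 1.0f / 3.0f + 1.0f / 7.0f;

        // The same expression evaluated at run time:
        float a = 1.0f / 3.0f;
        float b = 1.0f / 7.0f;
        float rt = a + b;

        // Depending on compiler and target, this may print either
        // true or false:
        writeln(ct == rt);
    }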


> - I agree that the majority of floating-point code is only improved by
> increasing the working precision.  (If we don't worry about reproducibility
> across compilers/machines/etc.)

As I mentioned earlier, Java initially mandated that floating-point results be exactly reproducible across environments. This was a fine idea, but it turned out to be completely unworkable in practice.


The "real" data-type seems to be designed
exactly for this: use "real" in numerical code and the compiler will give you a
good answer at the highest performant precision.  However there _are_ cases
where it can be very useful to have precise control of the precision that one is
using.  Implementing double-double or quadruple-double data types is an example
here.

It's not that bad. You can also force a reduction in precision by calling a function like this:

    double identity(double d) { return d; }

and ensuring (via separate compilation) that the compiler cannot inline calls to identity().
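
As a sketch of how such a barrier could be used (hypothetical code, assuming the identity() above lives in a separately compiled module), here is the classic two-sum step at the heart of double-double arithmetic, where excess intermediate precision would otherwise corrupt the computed error term:

    // Computes sum = fl(a + b) and err = the exact rounding error,
    // forcing every intermediate back to 64-bit double via identity().
    void twoSum(double a, double b, out double sum, out double err)
    {
        sum = identity(a + b);
        double bv = identity(sum - a);  // the part of b that made it into sum
        double av = identity(sum - bv); // the part of a that made it into sum
        err = identity(identity(a - av) + identity(b - bv));
    }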

> Viewing D as a _systems_ language, I'd like to have the ability to just
> have it do what I ask (and being forced to go to assembler seems
> unreasonable...)

Perhaps, but I think you are treading on implementation-defined behavior here for most languages, and you will be hard-pressed to find a language that *guarantees* the loss of precision you desire, even though it may deliver that behavior on various platforms with various compiler switches.


> Anyway, thanks for the replies.  I guess I've got to go off and design the
> brand new D^^2 language to conform to my whims now.

Join the club! :-)
