On Wednesday, 6 November 2013 at 06:28:59 UTC, Walter Bright wrote:
> On 11/5/2013 8:19 AM, Don wrote:
>> On Wednesday, 30 October 2013 at 18:28:14 UTC, Walter Bright wrote:
>>> Not exactly what I meant - I mean the algorithm should be designed
>>> so that extra precision does not break it.
>>
>> Unfortunately, that's considerably more difficult than writing an
>> algorithm for a known precision. And it is impossible in any case
>> where you need full machine precision (which applies to practically
>> all library code, and most of my work).
>
> I have a hard time buying this. For example, when I wrote matrix
> inversion code, more precision always gave more accurate results.
>
> I had a chat with a fluid simulation expert (mostly plasma and
> microfluids) with a broad computing background, and the only
> algorithms he could think of that are by necessity fussy about
> maximum precision are elliptic curve algorithms.
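
To make Don's point concrete: the classic machine-epsilon probe is a
tiny algorithm that silently requires every operation to be rounded at
the working precision. A minimal sketch in plain D (machineEpsilon is
my name for the probe; the failure mode assumes x87-style 80-bit
intermediates):

import std.stdio;

// Find the smallest eps with 1 + eps != 1 at the working precision.
double machineEpsilon()
{
    double eps = 1.0;
    while (1.0 + eps / 2 != 1.0)  // must round to double every iteration
        eps /= 2;
    return eps;
}

void main()
{
    // Under strict double rounding this prints 2^-52 == double.epsilon.
    // If the compiler keeps `1.0 + eps / 2` in an 80-bit register, the
    // loop runs 11 halvings further and reports 2^-63 instead.
    writefln("probed: %s  expected: %s", machineEpsilon(), double.epsilon);
}

Extra precision doesn't make this probe more accurate; it changes the
answer outright, which is exactly the "needs full machine precision"
situation.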


A compiler intrinsic that generates no code (it simply inserts a
barrier for the optimiser) sounds like the correct approach.

Coming up with a name for this operation is difficult.

float toFloatPrecision(real arg)?
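
For what the intrinsic's effect would be, here's a sketch of an
emulation in today's D (my code, not a proposed implementation; it
leans on DMD declining to inline any function that contains inline
asm, and kahanAdd is just an illustrative caller):

float toFloatPrecision(real arg)
{
    float f = cast(float) arg;  // the actual 32-bit rounding happens here
    asm { nop; }                // opaque to the optimiser; also prevents
                                // DMD from inlining this function
    return f;
}

// Example use: a compensated (Kahan) summation step that stays honest
// even if surrounding expressions are evaluated at real precision.
float kahanAdd(float sum, ref float carry, float x)
{
    float y = toFloatPrecision(x - carry);
    float t = toFloatPrecision(sum + y);
    carry   = toFloatPrecision((t - sum) - y);
    return t;
}

The attraction of a true intrinsic is that on hardware which already
computes at float precision it costs nothing, while the emulation above
always pays a call plus a store and reload.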
