On Friday, 23 November 2012 at 06:10:38 UTC, Walter Bright wrote:
On 11/22/2012 7:03 PM, xenon325 wrote:
I would think it's actually not preferable.
Imagine you developed and tuned all the code on x86 and everything is fine. Then
you run it on ARM and suddenly all computations are inaccurate.

Floating point algorithms don't get less precise when precision is increased. I can't think of any that do.

Actually I meant the opposite: on x86 you would work with a (kind of) 32-bit `half`, and on ARM with a 16-bit one.

Then suppose I did the math analysis and picked algorithms which *should* work correctly, but I missed something, and for 0.01% of inputs the rounding error is too big at 16 bits, yet invisible at 32 bits.

If I develop on x86 (with 32-bit precision) and run my app on ARM (16-bit precision), then I'm messed up.
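
To make that concrete, here is a minimal sketch of the effect (my own illustration, using float and double as stand-ins for the narrower and wider precisions; an actual 16-bit half would drift far sooner):

```d
import std.stdio;

void main()
{
    // Naive accumulation of 0.1 at two precisions.  Here float plays the
    // lower-precision role (like a 16-bit half on ARM) and double the
    // higher-precision role (like a float-backed half on x86); the error
    // pattern is the same, only the thresholds differ.
    enum n = 10_000_000;

    float  sum32 = 0.0f;
    double sum64 = 0.0;
    foreach (_; 0 .. n)
    {
        sum32 += 0.1f;
        sum64 += 0.1;
    }

    writefln("float  sum: %s", sum32); // noticeably off from 1_000_000
    writefln("double sum: %s", sum64); // very close to 1_000_000
}
```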

Anyway, all this is beside the point, as Manu has explained to me in another reply.
