On 5/14/2016 3:16 AM, John Colvin wrote:
> This is all quite discouraging from a scientific programmer's point of view.
> Precision is important, more precision is good, but reproducibility and
> predictability are critical.

I used to design and build digital electronics out of TTL chips. Over time, TTL chips got faster and faster. The rule was to design the circuit around the parts' maximum rated signal propagation delay, and never to rely on a minimum delay. Therefore, putting in faster parts will never break the circuit.

Engineering is full of things like this; it's sound engineering practice. To give another example, I've never heard of a circuit that required a 20% tolerance resistor and would fail if a 10% tolerance one was put in.


> Tables of constants that change value if I put a `static` in front of them?
>
> Floating point code that produces different results after a compiler upgrade /
> with different non-fp-related switches?
>
> Ewwwww.

Floating point is not exact calculation. It just isn't. Designing an algorithm that relies on getting less accurate answers sounds absurd to me.
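
To make that concrete, here is a minimal D sketch (the variable name is mine, not from the thread) showing that even a value as ordinary as 0.1 is stored inexactly in a double:

```d
import std.stdio;

void main()
{
    double tenth = 0.1;
    // 0.1 has no finite binary representation, so the stored double is
    // the nearest representable value, not 0.1 itself.
    writefln("%.20f", tenth);   // prints 0.10000000000000000555...
}
```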

Results should be tested for a minimum number of correct bits in the answer, never a maximum. This is, in fact, how std.math checks the results of the algorithms it implements, and it is how such checks should be done.
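
As a sketch of that testing style: std.math provides feqrel, which reports how many mantissa bits two values agree on. Whether any particular std.math unittest uses exactly this pattern is my assumption, but the shape is the same: assert a floor on the number of correct bits, never an exact bit pattern.

```d
import std.math : feqrel, sin, PI;

void main()
{
    real computed = sin(PI / 6.0L);   // should be 0.5, up to rounding
    real expected = 0.5L;

    // Require at least mant_dig - 4 correct mantissa bits: a minimum
    // accuracy bound. Extra precision can only make this easier to pass.
    assert(feqrel(computed, expected) >= real.mant_dig - 4);
}
```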


This is not some weird, crazy idea of mine; as I said, the x87 FPU in every x86 chip has been computing intermediate results at higher precision for several decades.
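
Here is a small D illustration of the kind of headroom that buys, assuming an x86 target where real maps to the x87 80-bit format; note that the double result also depends on whether the compiler keeps intermediates in extended-precision registers, which is the very behaviour under discussion in this thread.

```d
import std.stdio;

void main()
{
    double d = 1.0e16;
    // With strict 53-bit double rounding, d + 1.0 rounds back to 1.0e16,
    // so this prints 0. A compiler that keeps the intermediate in an
    // 80-bit x87 register may print 1 instead.
    writeln((d + 1.0) - d);

    real r = 1.0e16L;
    // The 64-bit x87 mantissa holds 1.0e16 + 1 exactly, so the more
    // accurate answer, 1, comes out.
    writeln((r + 1.0L) - r);
}
```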
