On Sunday, 15 May 2016 at 18:30:20 UTC, Walter Bright wrote:
Can you provide an example of a legitimate algorithm that produces degraded results if the precision is increased?

I've just been observing this argument (over and over again) and I think the two sides are talking about different things.

Walter, you keep saying *degraded* results.

The scientific folks keep saying *consistent* results.


Think about a key goal in scientific experiments: to measure changes across repeated runs, to reproduce results, and to confirm or falsify them. They want to hold as many variables constant as they can.

I suppose people figure that if they use the same compiler, the same build options, and the same source code, and feed the same data into it, they will get the *same* results. It is a deterministic machine, right?

You might argue they should add "same hardware" to that list, but apparently it isn't that simple, or people wouldn't still be arguing about this.
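To make the point concrete, here is a minimal sketch (my own illustration, not from the thread) using Python's decimal module as a stand-in for hardware floating point: the same source, the same inputs, and the same left-to-right summation give a *different* answer when only the intermediate precision changes. Neither answer is "degraded" in isolation, but the two runs are no longer consistent with each other.

```python
from decimal import Decimal, getcontext

def naive_sum(values, prec):
    """Sum values left to right using `prec` significant digits
    for every intermediate result."""
    getcontext().prec = prec
    total = Decimal(0)
    for v in values:
        total += Decimal(v)
    return total

# Same code, same data -- only the working precision differs.
vals = ["1e20", "1", "-1e20"]

low  = naive_sum(vals, 16)   # double-like precision: the 1 is absorbed by 1e20
high = naive_sum(vals, 25)   # extra precision: the 1 survives the rounding

assert low == 0   # (1e20 + 1) rounds back to 1e20, then cancels
assert high == 1  # 1e20 + 1 is held exactly, so the 1 remains
```

Swap "16 vs. 25 decimal digits" for "64-bit double vs. 80-bit x87 temporaries" and you have exactly the reproducibility problem the scientific folks are worried about.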
