On 5/15/2016 1:49 PM, poliklosio wrote:
Can you provide an example of a legitimate algorithm that produces degraded
results if the precision is increased?
The real problem here is the butterfly effect (the chaos theory thing). Imagine
programming a multiplayer game. Ideally you only need to synchronize user
events, like key presses, etc. The rest of the computation can be duplicated on all
machines participating in a session. Now imagine that some logic other than
display (e.g. player-bullet collision detection) is using floating point. If
those computations are not reproducible, a higher precision on one player's
machine can lead to huge inconsistencies in game states between the machines
(e.g. my character is dead on your machine but alive on mine)!
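To make that concrete, here is a minimal sketch (the names and values are made up) of the kind of check that can diverge: if one machine carries the intermediate products in 80-bit x87 registers while another rounds every step to 64-bit double, a hit/miss decision near the boundary can come out differently on the two machines.

import std.stdio : writefln;

// Decide whether a bullet hits a player. The comparison is exact at the
// bit level, so any extra precision carried in the intermediates can flip
// the result for near-boundary inputs.
bool bulletHits(double px, double py, double bx, double by, double radius)
{
    immutable dx = px - bx;
    immutable dy = py - by;
    return dx * dx + dy * dy <= radius * radius;
}

void main()
{
    // Illustrative values only; the point is that the outcome of a
    // borderline call like this is what must match on every machine.
    writefln("hit = %s", bulletHits(10.0, 20.0, 10.0 + 3e-8, 20.0 - 4e-8, 5e-8));
}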
If the game developer cannot achieve reproducibility, or it takes too much work,
the workarounds can be very costly. He can, for example, convert the
implementation to soft float or increase the amount of synchronization over the
network.
If you use the same program binary, you *will* get the same results.
Also I think Adam is making a very good point about general reproducibility here.
If a researcher gets slightly different results, he has to investigate why,
because he needs to rule out all the serious mistakes that could be the cause of
the difference. If he finds out that the source was an innocuous refactoring of
some D code, he will be rightly frustrated that D has caused so much unnecessary
churn.
I think the same problem can occur in mission-critical software which undergoes
strict certification.
Frankly, I think you are setting unreasonable expectations. Today, if you take a
standard compliant C program, and compile it with different switch settings, or
run it on a machine with a different CPU, you can very well get different
answers. If you reorganize the code expressions, you can very well get different
answers.
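For example (a standard illustration of non-associativity, nothing specific to D): reassociating a sum changes the rounded answer, which is exactly the kind of difference an expression reorganization can introduce.

import std.stdio : writeln;

void main()
{
    double a = 1e16;
    double b = -1e16;
    double c = 1.0;

    // Mathematically both sums are 1, but the rounded results differ:
    writeln((a + b) + c); // 1 -- the cancellation happens first
    writeln(a + (b + c)); // 0 -- c is absorbed into b before the cancellation
}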