On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
> In this case, not increasing precision gets the more accurate
> result, but other examples could be constructed that _heavily_
> favor increasing precision. In fact, almost any real-world,
> non-toy calculation would favor it.
Please stop saying this. It is very wrong.
Algorithms that need higher accuracy need error correction
mechanisms, not unpredictable precision and rounding.
Unpredictable precision and rounding make error correction
difficult to implement, so they do not improve accuracy; they
harm accuracy exactly when you need it.
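The standard example of such an error-correction mechanism is Kahan
(compensated) summation: it recovers the low-order bits lost by each
addition, but only because every operation rounds once, at one known
precision. A sketch in Python, whose floats are IEEE doubles:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation.

    Carries a running correction term holding the low-order bits
    lost by each addition. The step (t - total) - y recovers those
    bits only if every operation rounds once at a fixed precision;
    if intermediates are silently kept at a higher precision (or a
    compiler algebraically simplifies the step to zero), the
    compensation is defeated.
    """
    total = 0.0
    c = 0.0                   # running compensation for lost bits
    for x in values:
        y = x - c             # apply correction from previous step
        t = total + y         # low-order bits of y may be lost here
        c = (t - total) - y   # recover exactly what was lost
        total = t
    return total

# 1000 repetitions of [1.0, 1e-16, 1e-16, 1e-16, 1e-16]:
# once the running total reaches 1.0, a naive sum drops every
# 1e-16 term entirely (1.0 + 1e-16 rounds back to 1.0).
data = [1.0, 1e-16, 1e-16, 1e-16, 1e-16] * 1000

naive = sum(data)             # 1000.0 -- all small terms lost
compensated = kahan_sum(data) # ~1000.0000000000004
```

The point the post is making shows up directly here: the algorithm's
correctness is an argument about *which* bits each rounding step
discards, so an implementation that raises intermediate precision
unpredictably invalidates that argument rather than helping it.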