Raymond Hettinger <raymond.hettin...@gmail.com> added the comment:

> Cheapest way I know of that "seemingly always" reproduces 
> the Decimal result (when that's rounded back to float) 
> combines fsum(), Veltkamp splitting, and the correction 
> trick from the paper.

That's impressive.  Do you think this is worth implementing?  Or should we 
declare victory with the current PR, which is both faster and more accurate 
than what we have now?

Having 1-ulp error 17% of the time and correctly rounded 83% of the time is 
pretty darned good (and on par with C library code for the two-argument case).

Unless we go all-out with the technique you described, the paper shows that 
we're already near the limit of what can be done by trying to make the sum of 
squares more accurate, "... the correctly rounded square root of the correctly 
rounded a**2+b**2 can still be off by as much as one ulp. This hints at the 
possibility that working harder to compute a**2+b**2 accurately may not be the 
best path to a better answer".  

FWIW, in my use cases, the properties that matter most are monotonicity, 
commutativity, cross-platform portability, and speed.  Extra accuracy would 
be nice to have but isn't essential and would likely never be noticed.
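For anyone following along, here's a rough sketch of the kind of technique Tim describes (fsum() over exact squares obtained via Veltkamp splitting and Dekker's two-product).  This is only an illustration, not the PR's code or Tim's exact recipe; the splitter constant 2**27 + 1 is the standard one for IEEE-754 doubles:

```python
import math

def split(x):
    # Veltkamp splitting: break x into hi + lo exactly,
    # with hi holding the top ~27 bits of the significand.
    t = x * 134217729.0          # 2**27 + 1
    hi = t - (t - x)
    lo = x - hi
    return hi, lo

def square_exact(x):
    # Dekker's two-product specialized to squaring:
    # returns p and e such that p + e == x*x exactly.
    hi, lo = split(x)
    p = x * x
    e = ((hi * hi - p) + 2.0 * hi * lo) + lo * lo
    return p, e

def hypot_sketch(a, b):
    # Sum the squares and their rounding errors with fsum()
    # before taking the square root.
    pa, ea = square_exact(a)
    pb, eb = square_exact(b)
    return math.sqrt(math.fsum([pa, pb, ea, eb]))
```

Note this still rounds twice (once in fsum's result, once in sqrt), which is exactly why the paper says a correctly rounded sum of squares can still leave the final result off by one ulp.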

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue41513>
_______________________________________