On 2/17/23 03:27, Stephen Tucker wrote:
> Thanks, one and all, for your responses.
>
> This is a hugely controversial claim, I know, but I would consider this
> behaviour to be a serious deficiency in the IEEE standard.
No matter how you do it, there are always tradeoffs and inaccuracies when moving from real numbers in base 10 to base 2. That's just the nature of the math: any binary floating point representation is going to have problems. There are techniques for mitigating this:

https://en.wikipedia.org/wiki/Floating-point_error_mitigation

Interestingly, that article points out that floating point error was first discussed as early as the 1930s. No matter what binary scheme you choose, there will be error; that is simply the nature of converting a real number from one base to another.

Also, we weren't clear on this, but the IEEE standard is not just implemented in software. It is how your CPU represents floating point numbers in silicon, and your GPU too (where speed is preferred over precision). So it's not as though Python could arbitrarily do something different without paying a huge penalty in speed. The decimal module, for example, gives you arbitrary precision, but it is far slower than hardware floats.

Have you tried numpy's cbrt() function? It is probably going to be more accurate than raising to the power 0.3333 (see the snippets at the end of this message).

> Perhaps this observation should be brought to the attention of the IEEE. I
> would like to know their response to it.

Rest assured, the IEEE committee that formalized the format decades ago knew all about the limitations and trade-offs. Over the years CPUs have increased in capacity, and we can now use 128-bit floating point numbers, which mitigate some of the accuracy problems simply by having more binary digits. But the fact remains that many numbers with terminating decimal expansions (0.1, for instance) have non-terminating, repeating binary expansions, so binary floating point cannot represent arbitrary decimal values exactly, no matter how many bits it has.
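To make the base-10/base-2 mismatch concrete, here is a small illustration (the printed digits assume a standard IEEE 754 double, which is what CPython's float is on common platforms):

    from decimal import Decimal

    # 0.1 has no terminating binary expansion, so the stored float
    # is only the nearest representable double, not exactly 0.1:
    print(format(0.1, ".20f"))   # 0.10000000000000000555
    print(0.1 + 0.2 == 0.3)      # False -- the rounding errors differ

    # The decimal module computes in base 10 with configurable
    # precision, trading speed for exactness on decimal inputs:
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

And a rough comparison of the two cube-root approaches (assuming NumPy is installed; the last digit can vary with platform and NumPy version):

    import numpy as np

    x = 27.0
    print(np.cbrt(x))        # 3.0 -- dedicated cube-root routine
    print(x ** (1.0 / 3.0))  # 3.0000000000000004 -- the exponent 1/3
                             #   is itself rounded before the power is taken
    print(np.cbrt(-27.0))    # -3.0 -- cbrt also handles negative input,
                             #   where ** (1.0 / 3.0) would return a complex

-- 
https://mail.python.org/mailman/listinfo/python-list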