Re: Precision Tail-off?

2023-02-18 Thread Oscar Benjamin
On Sat, 18 Feb 2023 at 11:19, Peter J. Holzer  wrote:
>
> On 2023-02-18 03:52:51 +, Oscar Benjamin wrote:
> > On Sat, 18 Feb 2023 at 01:47, Chris Angelico  wrote:
> > > On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list
> > > > To avoid it you would need to use an algorithm that computes nth
> > > > roots directly rather than raising to the power 1/n.
> > > >
> > >
> > > It's somewhat curious that we don't really have that. We have many
> > > other inverse operations - addition and subtraction (not just "negate
> > > and add"), multiplication and division, log and exp - but we have
> > > exponentiation without an arbitrary-root operation. For square roots,
> > > that's not a problem, since we can precisely express the concept
> > > "raise to the 0.5th power", but for anything else, we have to raise to
> > > a fractional power that might be imprecise.
> >
> > Various libraries can do this. Both SymPy and NumPy have cbrt for cube 
> > roots:
>
> Yes, but that's a special case. Chris was talking about arbitrary
> (integer) roots. My calculator has a button labelled [x√y], but my
> processor doesn't have an equivalent operation.

All three of SymPy, mpmath and gmpy2 can do this as accurately as
desired for any integer root:

  >>> n = 12345678900

  >>> sympy.root(n, 6)
  10*13717421**(1/6)*3**(1/3)
  >>> sympy.root(n, 6).evalf(50)
  22314431635.562095902499928269233656421704825692573

  >>> mpmath.root(n, 6)
  mpf('22314431635.562096')
  >>> mpmath.mp.dps = 50
  >>> mpmath.root(n, 6)
  mpf('22314431635.562095902499928269233656421704825692572746')

  >>> gmpy2.root(n, 6)
  mpfr('22314431635.562096')
  >>> gmpy2.get_context().precision = 100
  >>> gmpy2.root(n, 6)
  mpfr('22314431635.56209590249992826924',100)

There are also specific integer only root routines like
sympy.integer_nthroot or gmpy2.iroot.

  >>> gmpy2.iroot(n, 6)
  (mpz(22314431635), False)
  >>> sympy.integer_nthroot(n, 6)
  (22314431635, False)

Other libraries like the stdlib math module and numpy define some
specific examples like cbrt or isqrt but not a full root or iroot.
What is lacking is a plain 64-bit floating point routine like:

  def root(x: float, n: int) -> float:
      return x ** (1/n)  # except more accurate than this

It could be a good candidate for numpy and/or the math module. I just
noticed from the docs that the math module gained a cbrt function in
3.11 that I didn't know about, which suggests that a root function
might also be considered a reasonable addition in future. Similarly,
isqrt was new in 3.8 and it is not a big leap from there to see
someone adding iroot.
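
As a rough sketch (purely illustrative, not a tested or correctly
rounded implementation), such a routine could refine the naive power
with one Newton step on r**n == x:

  def root(x: float, n: int) -> float:
      # naive estimate: the exponent 1/n is not exactly representable
      r = x ** (1.0 / n)
      # one Newton-Raphson step on f(r) = r**n - x pulls the estimate
      # back towards the true nth root of x
      if r != 0.0:
          r -= (r ** n - x) / (n * r ** (n - 1))
      return r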

--
Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-18 Thread Peter J. Holzer
On 2023-02-18 03:52:51 +, Oscar Benjamin wrote:
> On Sat, 18 Feb 2023 at 01:47, Chris Angelico  wrote:
> > On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list
> > > To avoid it you would need to use an algorithm that computes nth
> > > roots directly rather than raising to the power 1/n.
> > >
> >
> > It's somewhat curious that we don't really have that. We have many
> > other inverse operations - addition and subtraction (not just "negate
> > and add"), multiplication and division, log and exp - but we have
> > exponentiation without an arbitrary-root operation. For square roots,
> > that's not a problem, since we can precisely express the concept
> > "raise to the 0.5th power", but for anything else, we have to raise to
> > a fractional power that might be imprecise.
> 
> Various libraries can do this. Both SymPy and NumPy have cbrt for cube roots:

Yes, but that's a special case. Chris was talking about arbitrary
(integer) roots. My calculator has a button labelled [x√y], but my
processor doesn't have an equivalent operation. Come to think of it, it
doesn't even have a y**x operation - just some simpler operations
which can be used to implement it. GCC doesn't inline pow(y, x) on
x86/64 - it just calls the library function.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Oscar Benjamin
On Sat, 18 Feb 2023 at 01:47, Chris Angelico  wrote:
>
> On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list
>  wrote:
> >
> > On 18/02/23 7:42 am, Richard Damon wrote:
> > > On 2/17/23 5:27 AM, Stephen Tucker wrote:
> > >> None of the digits in RootNZZZ's string should be different from the
> > >> corresponding digits in RootN.
> > >
> > > Only if the storage format was DECIMAL.
> >
> > Note that using decimal wouldn't eliminate this particular problem,
> > since 1/3 isn't exactly representable in decimal either.
> >
> > To avoid it you would need to use an algorithm that computes nth
> > roots directly rather than raising to the power 1/n.
> >
>
> It's somewhat curious that we don't really have that. We have many
> other inverse operations - addition and subtraction (not just "negate
> and add"), multiplication and division, log and exp - but we have
> exponentiation without an arbitrary-root operation. For square roots,
> that's not a problem, since we can precisely express the concept
> "raise to the 0.5th power", but for anything else, we have to raise to
> a fractional power that might be imprecise.

Various libraries can do this. Both SymPy and NumPy have cbrt for cube roots:

  >>> np.cbrt(12345678900.)
  4.979338592181745e+20

SymPy can also evaluate any rational power either exactly or to any
desired accuracy. Under the hood SymPy uses mpmath for the approximate
numerical evaluation part of this and mpmath can also be used directly
with its cbrt and nthroot functions to do this working with any
desired precision.
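
For example, a short mpmath session (illustrative) for the cube root
discussed in this thread:

  >>> import mpmath
  >>> mpmath.mp.dps = 30
  >>> print(mpmath.nthroot(123456789, 3))  # 497.93385921817447440... to 30 digits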

> But maybe, in practice, this isn't even a problem?

I'd say it's a small problem. Few people would use such a feature, but
it would have some usefulness for those people if it existed.
Libraries like mpmath and SymPy already provide this and offer a big
step up for those who are really concerned about exactness or
accuracy, so there are options for those who care. They are a lot
slower than working with plain old floats, but on the other hand they
offer vastly more than a math.cbrt function could to someone who needs
something more accurate than x**(1/3).

For those who are working with floats the compromise is clear: errors
can accumulate in calculations. Taking the OP's example to the extreme,
the largest result that does not overflow is:

  >>> (123456789. * 10**300) ** (1.0 / 3.0)
  4.979338592181679e+102

Only the last 3 digits are incorrect so the error is still small. It
is not hard to find other calculations where *all* the digits are
wrong though:

  >>> math.cos(3)**2 + math.sin(3)**2 - 1
  -1.1102230246251565e-16

So if you want to use floats then you need to learn to deal with this
as appropriate for your use case. IEEE standards do their best to make
results reproducible across machines as well as limiting avoidable
local errors so that global errors in larger operations are *less
likely* to dominate the result. Their guarantees are only local though
so as soon as you have more complicated calculations you need your own
error analysis somehow. IEEE guarantees are in that case also useful
for those who actually want to do a formal error analysis.

--
Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Michael Torrie
On 2/17/23 15:03, Grant Edwards wrote:
> Every fall, the groups were again full of a new crop of people who had
> just discovered all sorts of bugs in the way 
> implemented floating point, and pointing them to a nicely written
> document that explained it never did any good.

But to be fair, Goldberg's article is pretty obtuse and formal for most
people, even programmers.  I don't need all the formal proofs he
presents; just a summary would be sufficient, I'd think.  Although I've
been programming for many years, I have no idea what he means by most
of the notation in that paper.

Although I have a vague notion of what's going on, as my last post
shows, I don't know any of the right terminology.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Chris Angelico
On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list
 wrote:
>
> On 18/02/23 7:42 am, Richard Damon wrote:
> > On 2/17/23 5:27 AM, Stephen Tucker wrote:
> >> None of the digits in RootNZZZ's string should be different from the
> >> corresponding digits in RootN.
> >
> > Only if the storage format was DECIMAL.
>
> Note that using decimal wouldn't eliminate this particular problem,
> since 1/3 isn't exactly representable in decimal either.
>
> To avoid it you would need to use an algorithm that computes nth
> roots directly rather than raising to the power 1/n.
>

It's somewhat curious that we don't really have that. We have many
other inverse operations - addition and subtraction (not just "negate
and add"), multiplication and division, log and exp - but we have
exponentiation without an arbitrary-root operation. For square roots,
that's not a problem, since we can precisely express the concept
"raise to the 0.5th power", but for anything else, we have to raise to
a fractional power that might be imprecise.

But maybe, in practice, this isn't even a problem?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Greg Ewing via Python-list

On 18/02/23 7:42 am, Richard Damon wrote:

On 2/17/23 5:27 AM, Stephen Tucker wrote:

None of the digits in RootNZZZ's string should be different from the
corresponding digits in RootN.


Only if the storage format was DECIMAL.


Note that using decimal wouldn't eliminate this particular problem,
since 1/3 isn't exactly representable in decimal either.

To avoid it you would need to use an algorithm that computes nth
roots directly rather than raising to the power 1/n.
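
A small illustration of that with Python's decimal module: even in
decimal the rounded exponent falls just short of 1/3, so the "cube
root" of 1000 comes out a whisker below 10 rather than exactly 10.

from decimal import Decimal, getcontext

getcontext().prec = 28
one_third = Decimal(1) / Decimal(3)  # 0.3333...3, rounded, not exactly 1/3
print(Decimal(1000) ** one_third)    # slightly less than 10, not exactly 10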

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Grant Edwards
On 2023-02-17, Mats Wichmann  wrote:

> And... this topic as a whole comes up over and over again, like
> everywhere.

That's an understatement.

I remember it getting rehashed over and over again in various USENET
groups 35 years ago when when the VAX 11/780 BSD machine on which I
read news exchanged postings with peers using a half-dozen dial-up
modems and UUCP.

One would have thought it would be a time-saver when David Goldberg
wrote "the paper" in 1991, and you could tell people to go away and
read this:

  https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
  https://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf

It didn't help.

Every fall, the groups were again full of a new crop of people who had
just discovered all sorts of bugs in the way 
implemented floating point, and pointing them to a nicely written
document that explained it never did any good.

--
Grant


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Mats Wichmann

On 2/17/23 11:42, Richard Damon wrote:

On 2/17/23 5:27 AM, Stephen Tucker wrote:


The key factor here is that IEEE floating point stores numbers in BINARY,
not DECIMAL, so a multiply by 1000 will change the representation of the
number, and thus the possible resolution errors.


Store your numbers in IEEE DECIMAL floating point, and the variations by 
multiplying by powers of 10 go away.


The development of the original IEEE standard led eventually to 
consistent implementation in hardware (when they implement floating 
point at all, which embedded/IoT class chips in particular often don't) 
that aligned with how languages/compilers treated floating point, so 
that's been a really successful standard, whatever one might feel about 
the tradeoffs. Standards are all about finding a mutually acceptable way 
forward, once people admit there is no One Perfect Answer.


Newer editions of 754 (since 2008) have added this decimal floating 
point representation, which is supported by some software such as IBM 
and Intel floating-point libraries.  Hardware support has been slower to 
arrive.  The only ones I've heard of have been the IBM z series 
(mainframes) and somebody else mentioned Power though I'd never seen 
that. It's possible some of the GPU lines may be going this direction.


As far as Python goes... the decimal module has this comment:

> It is a complete implementation of Mike Cowlishaw/IBM's General
> Decimal Arithmetic Specification.


Cowlishaw was the editor of the 2008 and 2019 editions of IEEE 754, fwiw.
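
For anyone who hasn't tried it, a minimal example of working at a
chosen decimal precision with that module:

>>> import decimal
>>> decimal.getcontext().prec = 50
>>> decimal.Decimal(2).sqrt()
Decimal('1.4142135623730950488016887242096980785696718753769')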

And... this topic as a whole comes up over and over again, like 
everywhere.  See Stack Overflow for some amusement.

--
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Grant Edwards
On 2023-02-17, Richard Damon  wrote:
> [...]
>
>> Perhaps this observation should be brought to the attention of the IEEE. I
>> would like to know their response to it.
>
> That is why they have developed the Decimal Floating point format, to 
> handle people with those sorts of problems.
>
> They just aren't common enough for many things to have adopted the
> use of it.

Back before hardware floating point was common, support for decimal
floating point was very common.  All of the popular C, Pascal, and
BASIC compilers (for microcomputers) I remember let you choose (at
compile time) whether you wanted to use binary floating point or
decimal (BCD) floating point. People doing scientific stuff usually
chose binary because it was a little faster and you got more
resolution for the same amount of storage. If you were doing
accounting, you chose BCD (or used fixed-point).

Once hardware (binary) floating point became common, support for
software BCD floating point just sort of went away...

--
Grant




-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Peter J. Holzer
On 2023-02-17 14:39:42 +, Weatherby,Gerard wrote:
> IEEE did not define a standard for floating point arithmetic. They
> designed multiple standards, including a decimal float point one.
> Although decimal floating point (DFP) hardware used to be
> manufactured, I couldn’t find any current manufacturers.

Doesn't IBM make them any more? Their POWER processors used to implement decimal
FP (starting with POWER8, if I remember correctly).

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Peter J. Holzer
On 2023-02-17 10:27:08 +, Stephen Tucker wrote:
> This is a hugely controversial claim, I know, but I would consider this
> behaviour to be a serious deficiency in the IEEE standard.
> 
> Consider an integer N consisting of a finitely-long string of digits in
> base 10.
> 
> Consider the infinitely-precise cube root of N (yes I know that it could
> never be computed

However, computers exist to compute. Something which can never be
computed is outside of the realm of computing.

> unless N is the cube of an integer, but this is a mathematical
> argument, not a computational one), also in base 10. Let's call it
> RootN.
> 
> Now consider appending three zeroes to the right-hand end of N (let's call
> it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).
> 
> The *only *difference between RootN and RootNZZZ is that the decimal point
> in RootNZZZ is one place further to the right than the decimal point in
> RootN.

No. In mathematics there is no such thing as a decimal point. The only
difference is that RootNZZZ is RootN*10. But there is nothing special
about 10. You could multiply your original number by 512 and then the
new cube root would differ by a factor of 8 (which would show up as
shifted "binary point"[1] in binary but completely different digits in
decimal) or you could multiply by 1728 and then you would need base 12
to get the same digits with a shifted "duodecimal point".

hp

[1] It's really unfortunate that the point which separates the integer
and the fractional part of a number is called a "decimal point" in
English. Makes it hard to talk about non-integer numbers in other
bases.

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Peter J. Holzer
On 2023-02-17 08:38:58 -0700, Michael Torrie wrote:
> On 2/17/23 03:27, Stephen Tucker wrote:
> > Thanks, one and all, for your responses.
> > 
> > This is a hugely controversial claim, I know, but I would consider this
> > behaviour to be a serious deficiency in the IEEE standard.
> 
> No matter how you do it, there are always tradeoffs and inaccuracies
> moving from real numbers in base 10 to base 2.

This is phrased ambiguously. So just to clarify:

Real numbers are not in base 10. Or base 2 or base 37 or base e. A
positional system (which uses a base) is just a convenient way to write
a small subset of real numbers. By using any base you limit yourself to
rational numbers (no e or π or √2) and in fact only those rational
numbers where the denominator is a power of the base.

Converting numbers from one base to another with any finite precision
will generally involve rounding - so do that as little as possible.
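
You can see this directly from Python itself: the float literal 0.1 is
silently rounded to the nearest binary fraction, whose denominator is a
power of 2:

>>> from fractions import Fraction
>>> Fraction(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> 36028797018963968 == 2**55
True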


> That's just the nature of the math.  Any binary floating point
> representation is going to have problems.

Any decimal floating point representation is also going to have
problems.

There is nothing magical about base 10. It's just what we are used to
(which also means that we are used to the rounding errors and aren't
surprised by them as much).

> Also we weren't clear on this, but the IEEE standard is not just
> implemented in software. It's the way your CPU represents floating point
> numbers in silicon.  And in your GPUs (where speed is preferred to
> precision).  So it's not like Python could just arbitrarily do something
> different unless you were willing to pay a huge penalty for speed.

I'm pretty sure that compared to the interpreter overhead of CPython the
overhead of a software FP implementation (whether binary or decimal)
would be rather small, maybe negligible.


> > Perhaps this observation should be brought to the attention of the IEEE. I
> > would like to know their response to it.
> Rest assured the IEEE committee that formalized the format decades ago
> knew all about the limitations and trade-offs.  Over the years CPUs have
> increased in capacity and now we can use 128-bit floating point numbers

The very first IEEE compliant processor (the Intel 8087) had an 80 bit
extended type (in fact it did all computations in 80 bit and only
rounded down to 64 or 32 bits when storing the result). By the 1990s, 96
and 128 bit were quite common.

> which mitigate some of the accuracy problems by simply having more
> binary digits. But the fact remains that some rational numbers in
> decimal are irrational in binary,

Be careful: "Rational" and "irrational" have a standard meaning in
mathematics and it's independent of base.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Oscar Benjamin
On Fri, 17 Feb 2023 at 10:29, Stephen Tucker  wrote:
>
> Thanks, one and all, for your responses.
>
> This is a hugely controversial claim, I know, but I would consider this
> behaviour to be a serious deficiency in the IEEE standard.
[snip]
>
> Perhaps this observation should be brought to the attention of the IEEE. I
> would like to know their response to it.

Their response would be that they are well aware of what you are
saying and knew all about this before writing any standards.
The basic limitation of the IEEE standard in this respect is that it
describes individual operations rather than composite operations. Your
calculation involves composing operations, specifically:

result = x ** (n / d)

The problem is that there is more than one operation so we have to
evaluate this in two steps:

e = n / d
result = x ** e

Now the problem is that although n / d is correctly rounded, e has a
small error because the exact value of n / d cannot be represented. In
the second operation taking this slightly off value of e as the
intended input means that the correctly rounded result for x ** e is
not the closest float to the true value of the *compound* operation.
The exponentiation operator in particular is very sensitive to changes
in the exponent when the base is large so the tiny error in e leads to
a more noticeable relative error in x ** e.
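
To make the size of that error concrete, the fractions module can show
exactly how far the float exponent falls short of 1/3:

>>> from fractions import Fraction
>>> Fraction(1, 3) - Fraction(1.0 / 3.0)   # exact shortfall of the float exponent
Fraction(1, 54043195528445952)

Per the series expansion I showed previously, the relative error in
x ** e is roughly that shortfall multiplied by log(x), which is why it
grows as zeros are appended to the base.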

The only way to prevent this in full generality is to have a system
in which no intermediate inexact operations are computed eagerly which
means representing expressions symbolically in some way. That is what
the SymPy code I showed does:

In [6]: from sympy import cbrt

In [7]: e = cbrt(1234567890)

In [8]: print(e)
1000*123456789**(1/3)

In [9]: e.evalf(50)
Out[9]: 49793385921817.447440261250171604380899353243631762

Because the *entire* expression is represented here *exactly* as e it
is then possible to evaluate different parts of the expression
repeatedly with different levels of precision and it is necessary to
do that for full accuracy in this case. Here evalf will use more than
50 digits of precision internally so that at the end you have a result
specified to 50 digits but where the error for the entire expression
is smaller than the final digit. If you give it a more complicated
expression then it will use even more digits internally for deeper
parts of the expression tree because that is what is needed to get a
correctly rounded result for the expression as a whole.

This kind of symbolic evaluation is completely outside the scope of
what the IEEE floating point standards are for. Any system based on
fixed precision and eager evaluation will show the same problem that
you have identified. It is very useful though to have a system with
fixed precision and eager evaluation despite these limitations. The
context for which the IEEE standards are mainly intended (e.g. FPU
instructions) is one in which fixed precision and eager evaluation are
the only option.

--
Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Richard Damon

On 2/17/23 5:27 AM, Stephen Tucker wrote:

Thanks, one and all, for your responses.

This is a hugely controversial claim, I know, but I would consider this
behaviour to be a serious deficiency in the IEEE standard.

Consider an integer N consisting of a finitely-long string of digits in
base 10.

Consider the infinitely-precise cube root of N (yes I know that it could
never be computed unless N is the cube of an integer, but this is a
mathematical argument, not a computational one), also in base 10. Let's
call it RootN.

Now consider appending three zeroes to the right-hand end of N (let's call
it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).


The key factor here is that IEEE floating point stores numbers in BINARY, 
not DECIMAL, so a multiply by 1000 will change the representation of the 
number, and thus the possible resolution errors.


Store your numbers in IEEE DECIMAL floating point, and the variations by 
multiplying by powers of 10 go away.
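
(A small illustration with Python's decimal module, which implements
the same general decimal arithmetic: multiplying by a power of ten
leaves the stored digits untouched and only moves the exponent.)

>>> from decimal import Decimal
>>> Decimal('1.23456789').as_tuple()
DecimalTuple(sign=0, digits=(1, 2, 3, 4, 5, 6, 7, 8, 9), exponent=-8)
>>> (Decimal('1.23456789') * Decimal('1e3')).as_tuple()
DecimalTuple(sign=0, digits=(1, 2, 3, 4, 5, 6, 7, 8, 9), exponent=-5)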




The *only *difference between RootN and RootNZZZ is that the decimal point
in RootNZZZ is one place further to the right than the decimal point in
RootN.


No, since the floating point number is stored as a fraction times a 
power of 2, the fraction has changed as well as the power of 2.




None of the digits in RootNZZZ's string should be different from the
corresponding digits in RootN.


Only if the storage format was DECIMAL.



I rest my case.

Perhaps this observation should be brought to the attention of the IEEE. I
would like to know their response to it.


That is why they have developed the Decimal Floating point format, to 
handle people with those sorts of problems.


They just aren't common enough for many things to have adopted the use 
of it.




Stephen Tucker.


--
Richard Damon

--
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Michael Torrie
On 2/17/23 03:27, Stephen Tucker wrote:
> Thanks, one and all, for your responses.
> 
> This is a hugely controversial claim, I know, but I would consider this
> behaviour to be a serious deficiency in the IEEE standard.

No matter how you do it, there are always tradeoffs and inaccuracies
moving from real numbers in base 10 to base 2.  That's just the nature
of the math.  Any binary floating point representation is going to have
problems.  There are techniques for mitigating this:
https://en.wikipedia.org/wiki/Floating-point_error_mitigation
It's interesting to note that the article points out that floating point
error was first talked about in the 1930s.  So no matter what binary
scheme you choose there will be error. That's just the nature of
converting a real from one base to another.

Also we weren't clear on this, but the IEEE standard is not just
implemented in software. It's the way your CPU represents floating point
numbers in silicon.  And in your GPUs (where speed is preferred to
precision).  So it's not like Python could just arbitrarily do something
different unless you were willing to pay a huge penalty for speed.  For
example the decimal module which is arbitrary precision, but quite slow.

Have you tried the numpy cbrt() function?  It is probably going to be
more accurate than raising to the power 1.0/3.0.
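
Something along these lines (an untested sketch, with a made-up test
value) would show whether it makes a difference for your numbers:

import numpy as np

n = 1234567890.0
print(np.cbrt(n))        # dedicated cube-root routine
print(n ** (1.0 / 3.0))  # raising to an inexact power of one third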

> Perhaps this observation should be brought to the attention of the IEEE. I
> would like to know their response to it.
Rest assured the IEEE committee that formalized the format decades ago
knew all about the limitations and trade-offs.  Over the years CPUs have
increased in capacity and now we can use 128-bit floating point numbers
which mitigate some of the accuracy problems by simply having more
binary digits. But the fact remains that some rational numbers in
decimal are irrational in binary, so arbitrary decimal precision using
floating point is not possible.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Peter Pearson
On Fri, 17 Feb 2023 10:27:08, Stephen Tucker wrote: [Head-posting undone.]
> On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson 
> wrote:
>> On Tue, 14 Feb 2023 11:17:20 +, Oscar Benjamin wrote:
>> > On Tue, 14 Feb 2023 at 07:12, Stephen Tucker 
>> wrote:
>> [snip]
>> >> I have just produced the following log in IDLE (admittedly, in Python
>> >> 2.7.10 and, yes I know that it has been superseded).
>> >>
>> >> It appears to show a precision tail-off as the supplied float gets
>> bigger.
>> [snip]
>> >>
>> >> For your information, the first 20 significant figures of the cube root
>> in
>> >> question are:
>> >>49793385921817447440
>> >>
>> >> Stephen Tucker.
>> >> --
>> >> >>> 123.456789 ** (1.0 / 3.0)
>> >> 4.979338592181744
>> >> >>> 1234567890. ** (1.0 / 3.0)
>> >> 49793385921817.36
>> >
>> > You need to be aware that 1.0/3.0 is a float that is not exactly equal
>> > to 1/3 ...
>> [snip]
>> > SymPy again:
>> >
>> > In [37]: a, x = symbols('a, x')
>> >
>> > In [38]: print(series(a**x, x, Rational(1, 3), 2))
>> > a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))
>> >
>> > You can see that the leading relative error term from x being not
>> > quite equal to 1/3 is proportional to the log of the base. You should
>> > expect this difference to grow approximately linearly as you keep
>> > adding more zeros in the base.
>>
>> Marvelous.  Thank you.
[snip]
> Now consider appending three zeroes to the right-hand end of N (let's call
> it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).
>
> The *only *difference between RootN and RootNZZZ is that the decimal point
> in RootNZZZ is one place further to the right than the decimal point in
> RootN.
>
> None of the digits in RootNZZZ's string should be different from the
> corresponding digits in RootN.
>
> I rest my case.
[snip]


I believe the pivotal point of Oscar Benjamin's explanation is
that within the constraints of limited-precision binary floating-point
numbers, the exponent of 1/3 cannot be represented precisely, and
is in practice represented by something slightly smaller than 1/3;
and accordingly, when you multiply your argument by 1000, its
not-quite-cube-root gets multiplied by something slightly smaller
than 10, which is why the number of figures matching the "right"
answer gets steadily smaller.

Put slightly differently, the crux of the problem lies not in the
complicated process of exponentiation, but simply in the failure
to represent 1/3 exactly.  The fact that the exponent is slightly
less than 1/3 means that you would observe the steady loss of
agreement that you report, even if the exponentiation process
were perfect.
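
You can see that shortfall in the multiplier directly; on a typical
IEEE-754 double it comes out just under 10 rather than exactly 10:

>>> 1000.0 ** (1.0 / 3.0)
9.999999999999998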

-- 
To email me, substitute nowhere->runbox, invalid->com.
-- 
https://mail.python.org/mailman/listinfo/python-list


RE: Precision Tail-off?

2023-02-17 Thread avi.e.gross
 ten to the 80th or so
particles we think are in our observable universe. But knowing pi to that
precision may not be meaningful if an existing value already is so precise
that given an exact number for the diameter of something the size of the
universe (Yes, I know this is nonsense) you could calculate the
circumference (ditto) to less than the size (ditto) of a proton. Any errors
in such a measurement would be swamped by all kinds of things such as
uncertainties in what we can measure, or niggling details about how space
expands irregularly in the area as we speak and so on.

So if you want a new IEEE (or other such body) standard, would you be
satisfied with a new one for say a 16,384 byte monstrosity that holds
gigantic numbers with lots more precision, or hold out for a relatively
flexible and unlimited version that can be expanded until your computer or
planet runs out of storage room and provides answers after a few billion
years when used to just add two of them together?



-Original Message-
From: Python-list  On
Behalf Of Stephen Tucker
Sent: Friday, February 17, 2023 5:27 AM
To: python-list@python.org
Subject: Re: Precision Tail-off?

Thanks, one and all, for your responses.

This is a hugely controversial claim, I know, but I would consider this
behaviour to be a serious deficiency in the IEEE standard.

Consider an integer N consisting of a finitely-long string of digits in base
10.

Consider the infinitely-precise cube root of N (yes I know that it could
never be computed unless N is the cube of an integer, but this is a
mathematical argument, not a computational one), also in base 10. Let's call
it RootN.

Now consider appending three zeroes to the right-hand end of N (let's call
it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

The *only *difference between RootN and RootNZZZ is that the decimal point
in RootNZZZ is one place further to the right than the decimal point in
RootN.

None of the digits in RootNZZZ's string should be different from the
corresponding digits in RootN.

I rest my case.

Perhaps this observation should be brought to the attention of the IEEE. I
would like to know their response to it.

Stephen Tucker.


On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson 
wrote:

> On Tue, 14 Feb 2023 11:17:20 +, Oscar Benjamin wrote:
> > On Tue, 14 Feb 2023 at 07:12, Stephen Tucker 
> > 
> wrote:
> [snip]
> >> I have just produced the following log in IDLE (admittedly, in 
> >> Python
> >> 2.7.10 and, yes I know that it has been superseded).
> >>
> >> It appears to show a precision tail-off as the supplied float gets
> bigger.
> [snip]
> >>
> >> For your information, the first 20 significant figures of the cube 
> >> root
> in
> >> question are:
> >>49793385921817447440
> >>
> >> Stephen Tucker.
> >> --
> >> >>> 123.456789 ** (1.0 / 3.0)
> >> 4.979338592181744
> >> >>> 1234567890. ** (1.0 / 3.0)
> >> 49793385921817.36
> >
> > You need to be aware that 1.0/3.0 is a float that is not exactly 
> > equal to 1/3 ...
> [snip]
> > SymPy again:
> >
> > In [37]: a, x = symbols('a, x')
> >
> > In [38]: print(series(a**x, x, Rational(1, 3), 2))
> > a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))
> >
> > You can see that the leading relative error term from x being not 
> > quite equal to 1/3 is proportional to the log of the base. You 
> > should expect this difference to grow approximately linearly as you 
> > keep adding more zeros in the base.
>
> Marvelous.  Thank you.
>
>
> --
> To email me, substitute nowhere->runbox, invalid->com.
> --
> https://mail.python.org/mailman/listinfo/python-list
>
--
https://mail.python.org/mailman/listinfo/python-list

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Weatherby,Gerard
IEEE did not define a standard for floating point arithmetic. They designed 
multiple standards, including a decimal float point one.  Although decimal 
floating point (DFP) hardware used to be manufactured, I couldn’t find any 
current manufacturers. There was a company that seemed to be active until a few 
years ago, but they seem to have gone dark: https://twitter.com/SilMinds



From: Python-list  on 
behalf of Thomas Passin 
Date: Friday, February 17, 2023 at 9:02 AM
To: python-list@python.org 
Subject: Re: Precision Tail-off?

On 2/17/2023 5:27 AM, Stephen Tucker wrote:
> Thanks, one and all, for your responses.
>
> This is a hugely controversial claim, I know, but I would consider this
> behaviour to be a serious deficiency in the IEEE standard.
>
> Consider an integer N consisting of a finitely-long string of digits in
> base 10.

What you are not considering is that the IEEE standard is about trying
to achieve a balance between resource use (memory and registers),
precision, speed of computation, reliability (consistent and stable
results), and compatibility.  So there have to be many tradeoffs.  One
of them is the use of binary representation.  It has never been about
achieving ideal mathematical perfection for some set of special cases.

Want a different set of tradeoffs?  Fine, go for it.  Python has Decimal
and rational libraries among others.  They run more slowly than IEEE,
but maybe that's a good tradeoff for you.  Use a symbolic math library.
Trap special cases of interest to you and calculate them differently.
Roll your own.  Trouble is, you have to know one heck of a lot to roll
your own, and it may take decades of debugging to get it right.  Even
then it won't have hardware assistance like IEEE floating point usually has.

> Consider the infinitely-precise cube root of N (yes I know that it could
> never be computed unless N is the cube of an integer, but this is a
> mathematical argument, not a computational one), also in base 10. Let's
> call it RootN.
>
> Now consider appending three zeroes to the right-hand end of N (let's call
> it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).
>
> The *only *difference between RootN and RootNZZZ is that the decimal point
> in RootNZZZ is one place further to the right than the decimal point in
> RootN.
>
> None of the digits in RootNZZZ's string should be different from the
> corresponding digits in RootN.
>
> I rest my case.
>
> Perhaps this observation should be brought to the attention of the IEEE. I
> would like to know their response to it.
>
> Stephen Tucker.
>
>
> On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson 
> wrote:
>
>> On Tue, 14 Feb 2023 11:17:20 +, Oscar Benjamin wrote:
>>> On Tue, 14 Feb 2023 at 07:12, Stephen Tucker 
>> wrote:
>> [snip]
>>>> I have just produced the following log in IDLE (admittedly, in Python
>>>> 2.7.10 and, yes I know that it has been superseded).
>>>>
>>>> It appears to show a precision tail-off as the supplied float gets
>> bigger.
>> [snip]
>>>>
>>>> For your information, the first 20 significant figures of the cube root
>> in
>>>> question are:
>>>> 49793385921817447440
>>>>
>>>> Stephen Tucker.
>>>> --
>>>>>>> 123.456789 ** (1.0 / 3.0)
>>>> 4.979338592181744
>>>>>>> 1234567890. ** (1.0 / 3.0)
>>>> 49793385921817.36
>>>
>>> You need to be aware that 1.0/3.0 is a float that is not exactly equal
>>> to 1/3 ...
>> [snip]
>>> SymPy again:
>>>
>>> In [37]: a, x = symbols('a, x')
>>>
>>> In [38]: print(series(a**x, x, Rational(1, 3), 2))
>>> a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))
>>>
>>> You can see that the leading relative error term from x being not
>>> quite equal to 1/3 is proportional to the log of the base. You should
>>> expect this difference to grow approximately linearly as you keep
>>> adding more zeros in the base.
>>
>> Marvelous.  Thank you.
>>
>>
>> --
>> To email me, substitute nowhere->runbox, invalid->com.
>> --
>> https://mail.python.org/mailman/listinfo/python-list
>>

--
https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Thomas Passin

On 2/17/2023 5:27 AM, Stephen Tucker wrote:

Thanks, one and all, for your responses.

This is a hugely controversial claim, I know, but I would consider this
behaviour to be a serious deficiency in the IEEE standard.

Consider an integer N consisting of a finitely-long string of digits in
base 10.


What you are not considering is that the IEEE standard is about trying 
to achieve a balance between resource use (memory and registers), 
precision, speed of computation, reliability (consistent and stable 
results), and compatibility.  So there have to be many tradeoffs.  One 
of them is the use of binary representation.  It has never been about 
achieving ideal mathematical perfection for some set of special cases.


Want a different set of tradeoffs?  Fine, go for it.  Python has Decimal 
and rational libraries among others.  They run more slowly than IEEE, 
but maybe that's a good tradeoff for you.  Use a symbolic math library. 
Trap special cases of interest to you and calculate them differently. 
Roll your own.  Trouble is, you have to know one heck of a lot to roll 
your own, and it may take decades of debugging to get it right.  Even 
then it won't have hardware assistance like IEEE floating point usually has.



Consider the infinitely-precise cube root of N (yes I know that it could
never be computed unless N is the cube of an integer, but this is a
mathematical argument, not a computational one), also in base 10. Let's
call it RootN.

Now consider appending three zeroes to the right-hand end of N (let's call
it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

The *only *difference between RootN and RootNZZZ is that the decimal point
in RootNZZZ is one place further to the right than the decimal point in
RootN.

None of the digits in RootNZZZ's string should be different from the
corresponding digits in RootN.

I rest my case.

Perhaps this observation should be brought to the attention of the IEEE. I
would like to know their response to it.

Stephen Tucker.


On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson 
wrote:


On Tue, 14 Feb 2023 11:17:20 +, Oscar Benjamin wrote:

On Tue, 14 Feb 2023 at 07:12, Stephen Tucker 

wrote:
[snip]

I have just produced the following log in IDLE (admittedly, in Python
2.7.10 and, yes I know that it has been superseded).

It appears to show a precision tail-off as the supplied float gets

bigger.
[snip]


For your information, the first 20 significant figures of the cube root

in

question are:
49793385921817447440

Stephen Tucker.
--

123.456789 ** (1.0 / 3.0)

4.979338592181744

1234567890. ** (1.0 / 3.0)

49793385921817.36


You need to be aware that 1.0/3.0 is a float that is not exactly equal
to 1/3 ...

[snip]

SymPy again:

In [37]: a, x = symbols('a, x')

In [38]: print(series(a**x, x, Rational(1, 3), 2))
a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

You can see that the leading relative error term from x being not
quite equal to 1/3 is proportional to the log of the base. You should
expect this difference to grow approximately linearly as you keep
adding more zeros in the base.


Marvelous.  Thank you.


--
To email me, substitute nowhere->runbox, invalid->com.
--
https://mail.python.org/mailman/listinfo/python-list



--
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Stephen Tucker
As a follow-up to my previous message, I have just produced the following
log on IDLE, for your information:
--
>>> math.e ** (math.log (12345678900) / 3)
4.979338592181741e+16
>>> 10 ** (math.log10 (12345678900) / 3)
4.979338592181736e+16
>>> 12345678900 ** (1.0 / 3.0)
4.979338592181734e+16
>>> 123456789e42 ** (1.0 / 3.0)
4.979338592181734e+16
--

Stephen Tucker.


On Fri, Feb 17, 2023 at 10:27 AM Stephen Tucker 
wrote:

> Thanks, one and all, for your responses.
>
> This is a hugely controversial claim, I know, but I would consider this
> behaviour to be a serious deficiency in the IEEE standard.
>
> Consider an integer N consisting of a finitely-long string of digits in
> base 10.
>
> Consider the infinitely-precise cube root of N (yes I know that it could
> never be computed unless N is the cube of an integer, but this is a
> mathematical argument, not a computational one), also in base 10. Let's
> call it RootN.
>
> Now consider appending three zeroes to the right-hand end of N (let's call
> it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).
>
> The *only *difference between RootN and RootNZZZ is that the decimal
> point in RootNZZZ is one place further to the right than the decimal point
> in RootN.
>
> None of the digits in RootNZZZ's string should be different from the
> corresponding digits in RootN.
>
> I rest my case.
>
> Perhaps this observation should be brought to the attention of the IEEE. I
> would like to know their response to it.
>
> Stephen Tucker.
>
>
> On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson 
> wrote:
>
>> On Tue, 14 Feb 2023 11:17:20 +, Oscar Benjamin wrote:
>> > On Tue, 14 Feb 2023 at 07:12, Stephen Tucker 
>> wrote:
>> [snip]
>> >> I have just produced the following log in IDLE (admittedly, in Python
>> >> 2.7.10 and, yes I know that it has been superseded).
>> >>
>> >> It appears to show a precision tail-off as the supplied float gets
>> bigger.
>> [snip]
>> >>
>> >> For your information, the first 20 significant figures of the cube
>> root in
>> >> question are:
>> >>49793385921817447440
>> >>
>> >> Stephen Tucker.
>> >> --
>> >> >>> 123.456789 ** (1.0 / 3.0)
>> >> 4.979338592181744
>> >> >>> 1234567890. ** (1.0 / 3.0)
>> >> 49793385921817.36
>> >
>> > You need to be aware that 1.0/3.0 is a float that is not exactly equal
>> > to 1/3 ...
>> [snip]
>> > SymPy again:
>> >
>> > In [37]: a, x = symbols('a, x')
>> >
>> > In [38]: print(series(a**x, x, Rational(1, 3), 2))
>> > a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))
>> >
>> > You can see that the leading relative error term from x being not
>> > quite equal to 1/3 is proportional to the log of the base. You should
>> > expect this difference to grow approximately linearly as you keep
>> > adding more zeros in the base.
>>
>> Marvelous.  Thank you.
>>
>>
>> --
>> To email me, substitute nowhere->runbox, invalid->com.
>> --
>> https://mail.python.org/mailman/listinfo/python-list
>>
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-17 Thread Stephen Tucker
Thanks, one and all, for your responses.

This is a hugely controversial claim, I know, but I would consider this
behaviour to be a serious deficiency in the IEEE standard.

Consider an integer N consisting of a finitely-long string of digits in
base 10.

Consider the infinitely-precise cube root of N (yes I know that it could
never be computed unless N is the cube of an integer, but this is a
mathematical argument, not a computational one), also in base 10. Let's
call it RootN.

Now consider appending three zeroes to the right-hand end of N (let's call
it NZZZ) and NZZZ's infinitely-precise cube root (RootNZZZ).

The *only *difference between RootN and RootNZZZ is that the decimal point
in RootNZZZ is one place further to the right than the decimal point in
RootN.

None of the digits in RootNZZZ's string should be different from the
corresponding digits in RootN.

I rest my case.

Perhaps this observation should be brought to the attention of the IEEE. I
would like to know their response to it.

Stephen Tucker.


On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson 
wrote:

> On Tue, 14 Feb 2023 11:17:20 +, Oscar Benjamin wrote:
> > On Tue, 14 Feb 2023 at 07:12, Stephen Tucker 
> wrote:
> [snip]
> >> I have just produced the following log in IDLE (admittedly, in Python
> >> 2.7.10 and, yes I know that it has been superseded).
> >>
> >> It appears to show a precision tail-off as the supplied float gets
> bigger.
> [snip]
> >>
> >> For your information, the first 20 significant figures of the cube root
> in
> >> question are:
> >>49793385921817447440
> >>
> >> Stephen Tucker.
> >> --
> >> >>> 123.456789 ** (1.0 / 3.0)
> >> 4.979338592181744
> >> >>> 1234567890. ** (1.0 / 3.0)
> >> 49793385921817.36
> >
> > You need to be aware that 1.0/3.0 is a float that is not exactly equal
> > to 1/3 ...
> [snip]
> > SymPy again:
> >
> > In [37]: a, x = symbols('a, x')
> >
> > In [38]: print(series(a**x, x, Rational(1, 3), 2))
> > a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))
> >
> > You can see that the leading relative error term from x being not
> > quite equal to 1/3 is proportional to the log of the base. You should
> > expect this difference to grow approximately linearly as you keep
> > adding more zeros in the base.
>
> Marvelous.  Thank you.
>
>
> --
> To email me, substitute nowhere->runbox, invalid->com.
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-16 Thread Peter Pearson
On Tue, 14 Feb 2023 11:17:20 +, Oscar Benjamin wrote:
> On Tue, 14 Feb 2023 at 07:12, Stephen Tucker  wrote:
[snip]
>> I have just produced the following log in IDLE (admittedly, in Python
>> 2.7.10 and, yes I know that it has been superseded).
>>
>> It appears to show a precision tail-off as the supplied float gets bigger.
[snip]
>>
>> For your information, the first 20 significant figures of the cube root in
>> question are:
>>49793385921817447440
>>
>> Stephen Tucker.
>> --
>> >>> 123.456789 ** (1.0 / 3.0)
>> 4.979338592181744
>> >>> 1234567890. ** (1.0 / 3.0)
>> 49793385921817.36
>
> You need to be aware that 1.0/3.0 is a float that is not exactly equal
> to 1/3 ...
[snip]
> SymPy again:
>
> In [37]: a, x = symbols('a, x')
>
> In [38]: print(series(a**x, x, Rational(1, 3), 2))
> a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))
>
> You can see that the leading relative error term from x being not
> quite equal to 1/3 is proportional to the log of the base. You should
> expect this difference to grow approximately linearly as you keep
> adding more zeros in the base.

Marvelous.  Thank you.


-- 
To email me, substitute nowhere->runbox, invalid->com.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-15 Thread Weatherby,Gerard
All languages that use IEEE floating point will indeed have the same 
limitations, but it is not true that Python3 only uses IEEE floating point. 
Using the Decimal class and cribbing a method from StackOverflow, 
https://stackoverflow.com/questions/47191533/how-to-efficiently-calculate-cube-roots-using-decimal-in-python


import decimal
from decimal import Decimal

decimal.getcontext().prec = 1_000_000


def cube_root(A: Decimal):
    # Newton-Raphson iteration for the cube root:
    #   x_{n+1} = (2*x_n + A/x_n**2) / 3
    guess = (A - Decimal(1)) / Decimal(3)
    x0 = (Decimal(2) * guess + A / Decimal(guess * guess)) / Decimal(3.0)
    while 1:
        xn = (Decimal(2) * x0 + A / Decimal(x0 * x0)) / Decimal(3.0)
        if xn == x0:
            # converged: successive iterates agree at the working precision
            break
        x0 = xn
    return xn


float_root = 5 ** (1.0 / 3)
float_r3 = float_root * float_root * float_root
print(5 - float_r3)
five = Decimal(5.0)
r = cube_root(five)
decimal_r3 = r * r * r
print(5 - decimal_r3)


8.881784197001252e-16
1E-99

From: Python-list  on 
behalf of Michael Torrie 
Date: Tuesday, February 14, 2023 at 5:52 PM
To: python-list@python.org 
Subject: Re: Precision Tail-off?

On 2/14/23 00:09, Stephen Tucker wrote:
> I have two questions:
> 1. Is there a straightforward explanation for this or is it a bug?
To you 1/3 may be an exact fraction, and the definition of raising a
number to that power means a cube root which also has an exact answer,
but to the computer, 1/3 is 0.333 repeating in decimal,
which is some other fraction in binary.  And even rational numbers like
0.2, which are precise and exact in decimal, are not exactly representable
in binary: 0.2 is 0.001100110011... with the 0011 repeating on and on forever.

IEEE floating point has very well known limitations.  All languages that
use IEEE floating point will be subject to these limitations.  So it's
not a bug in the sense that all languages will exhibit this behavior.

> 2. Is the same behaviour exhibited in Python 3.x?
Yes. And Java, C++, and any other language that uses IEEE floating point.

--
https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-14 Thread Michael Torrie
On 2/14/23 00:09, Stephen Tucker wrote:
> I have two questions:
> 1. Is there a straightforward explanation for this or is it a bug?
To you 1/3 may be an exact fraction, and the definition of raising a
number to that power means a cube root which also has an exact answer,
but to the computer, 1/3 is 0.333 repeating in decimal,
which is some other fraction in binary.  And even rational numbers like
0.2, which are precise and exact in decimal, are not exactly representable
in binary: 0.2 is 0.001100110011... with the 0011 repeating on and on forever.

IEEE floating point has very well known limitations.  All languages that
use IEEE floating point will be subject to these limitations.  So it's
not a bug in the sense that all languages will exhibit this behavior.
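
You can ask Python to show exactly which binary fraction gets used in
place of 1/3 (the same fraction that appears in Oscar Benjamin's SymPy
output elsewhere in this thread):

>>> from fractions import Fraction
>>> Fraction(1.0 / 3.0)   # the exact value of the float 1.0/3.0
Fraction(6004799503160661, 18014398509481984)
>>> Fraction(1.0 / 3.0) < Fraction(1, 3)
True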

> 2. Is the same behaviour exhibited in Python 3.x?
Yes. And Java, C++, and any other language that uses IEEE floating point.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-14 Thread Weatherby,Gerard
Use Python3

Use the decimal module:  https://docs.python.org/3/library/decimal.html


From: Python-list  on 
behalf of Stephen Tucker 
Date: Tuesday, February 14, 2023 at 2:11 AM
To: Python 
Subject: Precision Tail-off?

Hi,

I have just produced the following log in IDLE (admittedly, in Python
2.7.10 and, yes I know that it has been superseded).

It appears to show a precision tail-off as the supplied float gets bigger.

I have two questions:
1. Is there a straightforward explanation for this or is it a bug?
2. Is the same behaviour exhibited in Python 3.x?

For your information, the first 20 significant figures of the cube root in
question are:
   49793385921817447440

Stephen Tucker.
--
>>> 123.456789 ** (1.0 / 3.0)
4.979338592181744
>>> 123456.789 ** (1.0 / 3.0)
49.79338592181744
>>> 123456789. ** (1.0 / 3.0)
497.9338592181743
>>> 123456789000. ** (1.0 / 3.0)
4979.338592181743
>>> 12345678900. ** (1.0 / 3.0)
49793.38592181742
>>> 1234567890. ** (1.0 / 3.0)
497933.8592181741
>>> 123456789. ** (1.0 / 3.0)
4979338.59218174
>>> 123456789000. ** (1.0 / 3.0)
49793385.9218174
>>> 12345678900. ** (1.0 / 3.0)
497933859.2181739
>>> 1234567890. ** (1.0 / 3.0)
4979338592.181739
>>> 123456789. ** (1.0 / 3.0)
49793385921.81738
>>> 123456789000. ** (1.0 / 3.0)
497933859218.1737
>>> 12345678900. ** (1.0 / 3.0)
4979338592181.736
>>> 1234567890. ** (1.0 / 3.0)
49793385921817.36
>>> 123456789. ** (1.0 / 3.0)
497933859218173.56
>>> 123456789000. ** (1.0 / 3.0)
4979338592181735.0
>>> 12345678900. ** (1.0 / 3.0)
4.979338592181734e+16
>>> 1234567890. ** (1.0 / 3.0)
4.979338592181734e+17
>>> 123456789. ** (1.0 / 3.0)
4.979338592181733e+18
>>> 123456789000. ** (1.0 / 3.0)
4.979338592181732e+19
>>> 12345678900. ** (1.0 / 3.0)
4.9793385921817313e+20
--
--
https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Precision Tail-off?

2023-02-14 Thread Oscar Benjamin
On Tue, 14 Feb 2023 at 07:12, Stephen Tucker  wrote:
>
> Hi,
>
> I have just produced the following log in IDLE (admittedly, in Python
> 2.7.10 and, yes I know that it has been superseded).
>
> It appears to show a precision tail-off as the supplied float gets bigger.
>
> I have two questions:
> 1. Is there a straightforward explanation for this or is it a bug?
> 2. Is the same behaviour exhibited in Python 3.x?
>
> For your information, the first 20 significant figures of the cube root in
> question are:
>49793385921817447440
>
> Stephen Tucker.
> --
> >>> 123.456789 ** (1.0 / 3.0)
> 4.979338592181744
> >>> 1234567890. ** (1.0 / 3.0)
> 49793385921817.36

You need to be aware that 1.0/3.0 is a float that is not exactly equal
to 1/3 and likewise the other float cannot have as many accurate
digits as is suggested by the number of zeros shown. Therefore you
should compare what exactly it means for the numbers you really have
rather than comparing with an exact cube root of the number that you
intended. Here I will do this with SymPy and calculate many more
digits than are needed. First here is the exact cube root:

In [29]: from sympy import *

In [30]: n = 1234567890

In [31]: cbrt(n).evalf(50)
Out[31]: 49793385921817.447440261250171604380899353243631762

So that's 50 digits of the exact cube root of the exact number and the
first 20 match what you showed. However in your calculation you use
floats so the exact expression that you evaluate is:

In [32]: e = Pow(Rational(float(n)), Rational(1.0/3.0), evaluate=False)

In [33]: print(e)
1234567888830049821836693930508288**(6004799503160661/18014398509481984)

Neither base nor exponent is really the number that you intended it to
be. The first 50 decimal digits of this number are:

In [34]: e.evalf(50)
Out[34]: 49793385921817.360106660998131166304296436896582873

All of the digits in the calculation you showed match with the first
digits given here. The output from the float calculation is correct
given what the inputs actually are and also the available precision
for 64 bit floats (53 bits or ~16 decimal digits).

The reason that the results get further from your expectations as the
base gets larger is that the exponent is always less than 1/3 and
the relative effect of that difference is magnified for larger bases.
You can see this in a series expansion of a^x around x=1/3. Using
SymPy again:

In [37]: a, x = symbols('a, x')

In [38]: print(series(a**x, x, Rational(1, 3), 2))
a**(1/3) + a**(1/3)*(x - 1/3)*log(a) + O((x - 1/3)**2, (x, 1/3))

You can see that the leading relative error term from x being not
quite equal to 1/3 is proportional to the log of the base. You should
expect this difference to grow approximately linearly as you keep
adding more zeros in the base.

--
Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list