Re: [Tutor] Decimals 'not equal to themselves' (e.g. 0.2 equals 0.200000001)
On Sun, 3 Aug 2008, CNiall wrote:

> >>> 0.2
> 0.20000000000000001
> >>> 0.33
> 0.33000000000000002
>
> As you can see, the last two decimals are very slightly inaccurate.
> However, it appears that when n in 1/n is a power of two, the decimal
> does not get 'thrown off'. How might I make Python recognise 0.2 as 0.2
> and not 0.20000000000000001?

It's not a Python thing, it's a computer thing. This would be present regardless of what computer language you're using.

An oversimplification is that computers use binary, i.e., base-2. The only non-1 factor of 2 is 2 itself, and computers can only give you an exact representation of a fraction whose denominator has only factors of 2. So these are exact representations:

>>> .5    # 1/2
0.5
>>> .25   # 1/4
0.25
>>> .125  # 1/8
0.125
>>> .875  # 7/8
0.875

But suppose you want 1/10. Well, 10 factors into 2 and 5, and that pesky 5 makes it impossible for a computer to store exactly; same with 1/5:

>>> .1    # 1/10
0.10000000000000001
>>> .2    # 1/5
0.20000000000000001

Do you remember when you learned decimal fractions in grade school, and realized you couldn't represent, for example, 1/3 or 1/7 exactly? You had to content yourself with 0.333... or 0.14285714285714285... with digits repeating forever? That's the equivalent problem in our usual base-10: we can't exactly represent any fraction unless the denominator factors only into 2s and 5s (which are the factors of 10). So representing 1/2, 1/4, and 1/20 is no problem in decimal; but 1/3 and 1/21 can't be exactly represented.

A corollary of this, by the way, is that, because there are so many fractions that can't be exactly represented in binary notation, you should never compare floating point numbers looking for equality. It just won't work. Consider the following code:

>>> x = 0.0
>>> while x != 1.0:
...     print x
...     x = x + 1.0/7.0

You might expect this to loop through 0.0, 0.142..., 0.285..., 0.428... up to 1.0, and then stop. But it won't, because x never quite equals 1.0.
It goes right up to a number near 1.0 that might display as 1.0 but is not really 1.0, and blasts on through to 1.142... and so on, in an endless loop. So when comparing floating point numbers you should either use a comparison like >= (if using the value as a limit) or a construct like

    if abs(x - y) < .1:

to see if they're close enough to equal to keep you happy.

___ Tutor maillist - Tutor@python.org http://mail.python.org/mailman/listinfo/tutor
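[Editor's note] The fixed loop described above can be sketched as follows; the `1e-9` tolerance is an arbitrary illustrative choice, not a value from the original post:

```python
# Step from 0.0 toward 1.0 in increments of 1/7, using a tolerance
# comparison against the limit instead of `x != 1.0`, which would
# loop forever because x never lands exactly on 1.0.
x = 0.0
step = 1.0 / 7.0
steps = 0
while x < 1.0 - 1e-9:   # tolerance-based limit test
    x += step
    steps += 1

print(steps)                   # 7 iterations
print(abs(x - 1.0) < 1e-9)     # True: close enough to 1.0 for our purposes
```

The loop terminates after seven additions even though the accumulated sum is not exactly 1.0.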
Re: [Tutor] Decimals 'not equal to themselves' (e.g. 0.2 equals 0.200000001)
On Sun, Aug 3, 2008 at 10:04 AM, CNiall <[EMAIL PROTECTED]> wrote:
> I want to make a simple script that calculates the n-th root of a given
> number (e.g. 4th root of 625--obviously five, but it's just an example :P),
> and because there is no nth-root function in Python I will do this with
> something like x**(1/n).
>
> However, with some, but not all, decimals, they do not seem to 'equal
> themselves'. This is probably a bad way of expressing what I mean, so I'll
> give an example:
>
> >>> 0.125
> 0.125
> >>> 0.2
> 0.20000000000000001
> >>> 0.33
> 0.33000000000000002
>
> As you can see, the last two decimals are very slightly inaccurate. However,
> it appears that when n in 1/n is a power of two, the decimal does not get
> 'thrown off'. How might I make Python recognise 0.2 as 0.2 and not
> 0.20000000000000001?

This is a limitation of floating point numbers. A discussion is here:

http://docs.python.org/tut/node16.html

Your root calculator can only find answers that are as accurate as the representation allows.

Kent
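[Editor's note] A small sketch of the accuracy limit Kent describes: when the true root happens to be representable (like 5.0), the result can come out exact on typical IEEE-754 platforms, but an irrational root squared back will miss its target:

```python
import math

# sqrt(2) is irrational, so the closest double to it, squared,
# does not give back exactly 2.0:
r = math.sqrt(2.0)
print(r * r)          # 2.0000000000000004 on IEEE-754 doubles
print(r * r == 2.0)   # False

# By contrast, 625 ** (1/4) typically comes out as exactly 5.0,
# because 5.0 itself is representable in binary floating point.
print(625 ** (1.0 / 4))
```

The root calculator is only as accurate as these representations allow, exactly as stated above.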
Re: [Tutor] Decimals 'not equal to themselves' (e.g. 0.2 equals 0.200000001)
CNiall wrote:
> I want to make a simple script that calculates the n-th root of a given
> number (e.g. 4th root of 625--obviously five, but it's just an example :P),
> and because there is no nth-root function in Python I will do this with
> something like x**(1/n).

Side note: of course there are Python built-in ways to do that. You just named one yourself:

In [6]: 625**(1.0/4)
Out[6]: 5.0

also:

In [9]: pow(625, 1.0/4)
Out[9]: 5.0

> However, with some, but not all, decimals, they do not seem to 'equal
> themselves'. As you can see, the last two decimals are very slightly
> inaccurate. However, it appears that when n in 1/n is a power of two,
> the decimal does not get 'thrown off'. How might I make Python recognise
> 0.2 as 0.2 and not 0.20000000000000001?

You just can't store 0.1 as a binary floating point number. You might want to read:

http://www.network-theory.co.uk/docs/pytut/FloatingPointArithmeticIssuesandLimitations.html
http://www.network-theory.co.uk/docs/pytut/RepresentationError.html

The decimal module provides decimal floating point arithmetic:

http://docs.python.org/lib/module-decimal.html

like in:

In [1]: 0.2 * 2
Out[1]: 0.40000000000000002

In [2]: from decimal import Decimal

In [3]: Decimal('0.2') * 2
Out[3]: Decimal("0.4")

thomas
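[Editor's note] The decimal module trick above generalizes; a minimal sketch, also showing the standard library's fractions module (not mentioned in the thread) for fully exact rational arithmetic:

```python
from decimal import Decimal
from fractions import Fraction

# Binary floats pick up representation error:
print(0.1 + 0.1 + 0.1 == 0.3)                  # False

# Decimal, constructed from strings, stores base-10 digits exactly,
# so decimal arithmetic behaves the way grade-school arithmetic does:
print(Decimal('0.1') * 3 == Decimal('0.3'))    # True

# Fraction represents any rational number exactly:
print(Fraction(1, 5) * 5 == 1)                 # True
```

Both types trade speed for exactness; for the original nth-root use case, plain floats with a tolerance check are usually sufficient.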
[Tutor] Decimals 'not equal to themselves' (e.g. 0.2 equals 0.200000001)
I am very new to Python (I started learning it just yesterday), but I have encountered a problem. I want to make a simple script that calculates the n-th root of a given number (e.g. 4th root of 625--obviously five, but it's just an example :P), and because there is no nth-root function in Python I will do this with something like x**(1/n).

However, with some, but not all, decimals, they do not seem to 'equal themselves'. This is probably a bad way of expressing what I mean, so I'll give an example:

>>> 0.5
0.5
>>> 0.25
0.25
>>> 0.125
0.125
>>> 0.2
0.20000000000000001
>>> 0.33
0.33000000000000002

As you can see, the last two decimals are very slightly inaccurate. However, it appears that when n in 1/n is a power of two, the decimal does not get 'thrown off'. How might I make Python recognise 0.2 as 0.2 and not 0.20000000000000001? This discrepancy is very minor, but it makes the whole n-th root calculator inaccurate. :\
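[Editor's note] Tying the thread together, the x**(1/n) approach from the original post can be made robust with a tolerance check instead of equality; `nth_root` is a hypothetical helper name, not a built-in:

```python
# Hypothetical helper using the x**(1/n) idea from the post above.
def nth_root(x, n):
    """Approximate the n-th root of x with binary floating point."""
    return x ** (1.0 / n)

r = nth_root(625, 4)

# Compare with a tolerance rather than ==, since the result may be
# off by a tiny representation error:
print(abs(r - 5) < 1e-9)   # True

# round() tidies the value for display:
print(round(r, 10))
```

Note the integer-division pitfall the post sidesteps: in Python 2, `1/n` with integer n is 0, so the exponent must be written `1.0/n`.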