Re: Python math is off by .000000000000045
Michael Torrie wrote: He's simply showing you the hex (binary) representation of the floating-point number's binary representation. As you can clearly see in the case of 1.1, there is no finite sequence that can store that. You end up with repeating numbers. Thanks for the explanation. This should help you understand why you get errors doing simple things like x/y*y doesn't quite get you back to x. I already understood that. I just didn't understand what point he was trying to make since he gave no explanation. ~Ethan~ -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On 02/27/2012 10:28 AM, Ethan Furman wrote: > jmfauth wrote: >> On 25 Feb, 23:51, Steven D'Aprano > +comp.lang.pyt...@pearwood.info> wrote: >>> On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote: >>> (2.0).hex() '0x1.0p+1' >>> (4.0).hex() '0x1.0p+2' >>> (1.5).hex() '0x1.8p+0' >>> (1.1).hex() '0x1.199999999999ap+0' jmf >>> What's your point? I'm afraid my crystal ball is out of order and I have >>> no idea whether you have a question or are just demonstrating your >>> mastery of copy and paste from the Python interactive interpreter. >> >> It should be enough to indicate the right direction >> for casual interested readers. > > I'm a casual interested reader and I have no idea what your post is > trying to say. He's simply showing you the hex (binary) representation of the floating-point number's binary representation. As you can clearly see in the case of 1.1, there is no finite sequence that can store that. You end up with repeating numbers. Just like 1/3, when represented in base 10 fractions (x1/10 + x2/100 + x3/1000, etc.), is a repeating sequence, the base 10 numbers 1.1 or 0.2, or many others that are represented by exact base 10 fractions, end up as repeating sequences in base 2 fractions. This should help you understand why you get errors doing simple things, like x/y*y not quite getting you back to x. -- http://mail.python.org/mailman/listinfo/python-list
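[Editor's note: Michael's point can be checked directly in the interpreter. The untruncated hex form of 1.1 shows the repeating pattern, and the decimal module can print the exact value the double actually stores; a minimal sketch using only the standard library:]

```python
from decimal import Decimal

# The full hex significand of 1.1: the 9s repeat until the 53-bit
# significand runs out, and the final digit rounds up to "a".
print((1.1).hex())   # 0x1.199999999999ap+0

# The exact binary double chosen for the literal 1.1, written out in decimal:
print(Decimal(1.1))  # 1.100000000000000088817841970012523233890533447265625
```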
Re: Python math is off by .000000000000045
jmfauth wrote: On 25 Feb, 23:51, Steven D'Aprano wrote: On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote: (2.0).hex() '0x1.0p+1' (4.0).hex() '0x1.0p+2' (1.5).hex() '0x1.8p+0' (1.1).hex() '0x1.199999999999ap+0' jmf What's your point? I'm afraid my crystal ball is out of order and I have no idea whether you have a question or are just demonstrating your mastery of copy and paste from the Python interactive interpreter. It should be enough to indicate the right direction for casual interested readers. I'm a casual interested reader and I have no idea what your post is trying to say. ~Ethan~ -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On 02/27/2012 08:02 AM, Grant Edwards wrote: > On 2012-02-27, Steven D'Aprano wrote: >> On Sun, 26 Feb 2012 16:24:14 -0800, John Ladasky wrote: >> >>> Curiosity prompts me to ask... >>> >>> Those of you who program in other languages regularly: if you visit >>> comp.lang.java, for example, do people ask this question about >>> floating-point arithmetic in that forum? Or in comp.lang.perl? >> >> Yes. >> >> http://stackoverflow.com/questions/588004/is-javascripts-math-broken >> >> And look at the "Linked" sidebar. Obviously StackOverflow users no >> more search the internet for the solutions to their problems than do >> comp.lang.python posters. >> >> http://compgroups.net/comp.lang.java.programmer/Floating-point-roundoff-error > > One might wonder if the frequency of such questions decreases as the > programming language becomes "lower level" (e.g. C or assembly). I think that most use cases of math in C or assembly are integer-based only. For example, counting, bit-twiddling, addressing character cells or pixel coordinates, etc. Maybe when programmers have to statically declare a variable type in advance, and the common use cases require only integers, integer types get used far more, so experiences with floats happen less often. Some of this could have to do with the fact that historically doing floating-point math required a special library, and since a lot of people didn't have floating-point coprocessors back then, most code was integer-only. Early BASIC interpreters defaulted to floating point for everything, and implemented all the floating point arithmetic internally with integer arithmetic, without the help of the x87 processor, but no doubt they did round the results when printing to the screen. They also did not have very much precision to begin with. Anyone remember Microsoft's proprietary floating-point binary format and how there were function calls to convert back and forth between it and the IEEE standard? 
Another key thing is that most C programmers don't normally just print out floating point numbers without a %.2f kind of notation that properly rounds a number. Now, of course, every processor has a floating-point unit, and the C compilers can generate code that uses it just as easily as integer code. No matter what language, or what floating point scheme you use, significant digits is definitely important to understand! -- http://mail.python.org/mailman/listinfo/python-list
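[Editor's note: the effect of a C-style format string is easy to demonstrate in Python too; the raw repr shows the accumulated error from the thread's example, while a rounded format hides it:]

```python
# The thread's original sum, printed raw and with %.2f-style rounding.
total = 1800.00 - 1041.00 - 555.74 + 530.74 - 794.95
print(repr(total))     # -60.950000000000045
print("%.2f" % total)  # -60.95  (the rounded display hides the error)
```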
Re: Python math is off by .000000000000045
On 2012-02-27, Steven D'Aprano wrote: > On Sun, 26 Feb 2012 16:24:14 -0800, John Ladasky wrote: > >> Curiosity prompts me to ask... >> >> Those of you who program in other languages regularly: if you visit >> comp.lang.java, for example, do people ask this question about >> floating-point arithmetic in that forum? Or in comp.lang.perl? > > Yes. > > http://stackoverflow.com/questions/588004/is-javascripts-math-broken > > And look at the "Linked" sidebar. Obviously StackOverflow users no > more search the internet for the solutions to their problems than do > comp.lang.python posters. > > http://compgroups.net/comp.lang.java.programmer/Floating-point-roundoff-error One might wonder if the frequency of such questions decreases as the programming language becomes "lower level" (e.g. C or assembly). -- Grant Edwards grant.b.edwards at gmail.com Yow! World War III? No thanks! -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On Sun, 26 Feb 2012 16:24:14 -0800, John Ladasky wrote: > Curiosity prompts me to ask... > > Those of you who program in other languages regularly: if you visit > comp.lang.java, for example, do people ask this question about > floating-point arithmetic in that forum? Or in comp.lang.perl? Yes. http://stackoverflow.com/questions/588004/is-javascripts-math-broken And look at the "Linked" sidebar. Obviously StackOverflow users no more search the internet for the solutions to their problems than do comp.lang.python posters. http://compgroups.net/comp.lang.java.programmer/Floating-point-roundoff-error -- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On 2/26/2012 7:24 PM, John Ladasky wrote: > I always found it helpful to ask someone who is confused by this issue > to imagine what the binary representation of the number 1/3 would be. > > 0.011 to three binary digits of precision: > 0.0101 to four: > 0.01011 to five: > 0.010101 to six: > 0.0101011 to seven: > 0.01010101 to eight: > > And so on, forever. So, what if you want to do some calculator-style > math with the number 1/3, that will not require an INFINITE amount of > time? You have to round. Rounding introduces errors. The more > binary digits you use for your numbers, the smaller those errors will > be. But those errors can NEVER reach zero in finite computational > time. Ditto for 1/3 in decimal: 0.3 to one decimal digit, 0.33 to two, ... 0.33333333 to eight. If ALL the numbers you are using in your computations are rational numbers, you can use Python's rational and/or decimal modules to get error-free results. Decimal floats are about as error prone as binary floats. One can only exactly represent a subset of rationals of the form n / (2**j * 5**k). For a fixed number of bits of storage, they are 'lumpier'. For any fixed precision, the arithmetic issues are the same. The decimal module decimals have three advantages (sometimes) over floats. 1. Variable precision - but there are multiple-precision floats also available outside the stdlib. 2. They better imitate calculators - but that is irrelevant or a minus for scientific calculation. 3. They better follow accounting rules for financial calculation, including a multiplicity of rounding rules. Some of these are laws that *must* be followed to avoid nasty consequences. This is the main reason for being in the stdlib. > Learning to use them is a bit of a specialty. Definitely true. -- Terry Jan Reedy -- http://mail.python.org/mailman/listinfo/python-list
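[Editor's note: Terry's distinction can be sketched in a few lines of standard library Python. Decimal is exact only when the denominator is of the form 2**j * 5**k, while Fraction is exact for any rational, including 1/3:]

```python
from decimal import Decimal
from fractions import Fraction

# 1/10 has denominator 2 * 5, so Decimal represents it exactly...
print(3 * Decimal('0.1'))      # 0.3, exactly

# ...but 1/3 does not: at any fixed precision it gets rounded
# (the default context carries 28 significant digits).
print(Decimal(1) / Decimal(3))  # 0.3333333333333333333333333333

# Fraction is exact for every rational, including 1/3.
print(Fraction(1, 3) * 3)       # 1
```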
Re: Python math is off by .000000000000045
Curiosity prompts me to ask... Those of you who program in other languages regularly: if you visit comp.lang.java, for example, do people ask this question about floating-point arithmetic in that forum? Or in comp.lang.perl? Is there something about Python that exposes the uncomfortable truth about practical computer arithmetic that these other languages obscure? For of course, arithmetic is surely no less accurate in Python than in any other computing language. I always found it helpful to ask someone who is confused by this issue to imagine what the binary representation of the number 1/3 would be. 0.011 to three binary digits of precision; 0.0101 to four; 0.01011 to five; 0.010101 to six; 0.0101011 to seven; 0.01010101 to eight. And so on, forever. So, what if you want to do some calculator-style math with the number 1/3, that will not require an INFINITE amount of time? You have to round. Rounding introduces errors. The more binary digits you use for your numbers, the smaller those errors will be. But those errors can NEVER reach zero in finite computational time. If ALL the numbers you are using in your computations are rational numbers, you can use Python's rational and/or decimal modules to get error-free results. Learning to use them is a bit of a specialty. But for those of us who end up with numbers like e, pi, or the square root of 2 in our calculations, the compromise of rounding must be accepted. -- http://mail.python.org/mailman/listinfo/python-list
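[Editor's note: the rounding sequence above can be reproduced mechanically. The helper below (an illustrative name, standard library only) rounds 1/3 to n binary fraction digits and shows the error shrinking by roughly half per digit without ever reaching zero:]

```python
from fractions import Fraction

def to_binary_digits(x, n):
    """Round the rational x to n binary fraction digits."""
    return Fraction(round(x * 2**n), 2**n)

third = Fraction(1, 3)
for n in range(3, 9):
    approx = to_binary_digits(third, n)
    # e.g. n=3 gives 3/8 = 0.011 in binary, matching the list above
    print(n, approx, float(abs(approx - third)))
```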
Re: Python math is off by .000000000000045
On 25 Feb, 23:51, Steven D'Aprano wrote: > On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote: > (2.0).hex() > > '0x1.0p+1' > (4.0).hex() > > '0x1.0p+2' > (1.5).hex() > > '0x1.8p+0' > (1.1).hex() > > '0x1.199999999999ap+0' > > > jmf > > What's your point? I'm afraid my crystal ball is out of order and I have > no idea whether you have a question or are just demonstrating your > mastery of copy and paste from the Python interactive interpreter. > It should be enough to indicate the right direction for casual interested readers. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On 2/25/2012 9:49 PM, Devin Jeanpierre wrote: What this boils down to is to say that, basically by definition, the set of numbers representable in some finite number of binary digits is countable (just count up in binary value). But the whole of the real numbers are uncountable. The hard part is then accepting that some countable thing is 0% of an uncountable superset. I don't really know of any "proof" of that latter thing, it's something I've accepted axiomatically and then worked out backwards from there. Informally, if the infinity of counts were some non-zero fraction f of the reals, then there would, in some sense, be 1/f times as many reals as counts, so the count could be expanded to count 1/f reals for each real counted before, and the reals would be countable. But Cantor showed that the reals are not countable. But as you said, this is all irrelevant for computing. Since the number of finite strings is practically finite, so is the number of algorithms. And even a countable number of algorithms would be a fraction 0, for instance, of the uncountable predicate functions on 0, 1, 2, ... . So we do what we actually can that is of interest. -- Terry Jan Reedy -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On Sat, Feb 25, 2012 at 2:08 PM, Tim Wintle wrote: > > It seems to me that there are a great many real numbers that can be > > represented exactly by floating point numbers. The number 1 is an > > example. > > > > I suppose that if you divide that count by the infinite count of all > > real numbers, you could argue that the result is 0%. > > It's not just an argument - it's mathematically correct. ^ this The floating point numbers are a finite set. Any infinite set, even the rationals, is too big to have "many" floats relative to the whole, as in the percentage sense. In fact, any number we can reasonably deal with must have some finite representation, even if the decimal expansion has an infinite number of digits. We can work with pi, for example, because there are algorithms that can enumerate all the digits up to some precision. But we can't really work with a number for which no algorithm can enumerate the digits, and for which there are infinitely many digits. Most (in some sense involving infinities, which is to say, one that is not really intuitive) of the real numbers cannot in any way or form be represented in a finite amount of space, so most of them can't be worked on by computers. They only exist in any sense because it's convenient to pretend they exist for mathematical purposes, not for computational purposes. What this boils down to is to say that, basically by definition, the set of numbers representable in some finite number of binary digits is countable (just count up in binary value). But the whole of the real numbers are uncountable. The hard part is then accepting that some countable thing is 0% of an uncountable superset. I don't really know of any "proof" of that latter thing, it's something I've accepted axiomatically and then worked out backwards from there. But surely it's obvious, somehow, that the set of finite strings is tiny compared to the set of infinite strings? 
If we look at binary strings, representing numbers, the reals could be encoded as the union of the two, and by far most of them would be infinite. Anyway, all that aside, the real numbers are kind of dumb. -- Devin -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On Sat, 25 Feb 2012 13:25:37 -0800, jmfauth wrote: (2.0).hex() > '0x1.0p+1' (4.0).hex() > '0x1.0p+2' (1.5).hex() > '0x1.8p+0' (1.1).hex() > '0x1.199999999999ap+0' > jmf What's your point? I'm afraid my crystal ball is out of order and I have no idea whether you have a question or are just demonstrating your mastery of copy and paste from the Python interactive interpreter. -- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
>>> (2.0).hex() '0x1.0p+1' >>> (4.0).hex() '0x1.0p+2' >>> (1.5).hex() '0x1.8p+0' >>> (1.1).hex() '0x1.199999999999ap+0' >>> jmf -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On 2/25/2012 12:56 PM, Tobiah wrote: It seems to me that there are a great many real numbers that can be represented exactly by floating point numbers. The number 1 is an example. Binary floats can represent an integer and any fraction with a denominator of 2**n within certain ranges. For decimal floats, substitute 10**n, or more exactly 2**j * 5**k, since if j < k, n / (2**j * 5**k) = (n * 2**(k-j)) / 10**k and similarly if j > k. -- Terry Jan Reedy -- http://mail.python.org/mailman/listinfo/python-list
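[Editor's note: Terry's criterion can be checked with a small helper (the function name is illustrative): a reduced fraction has a finite decimal expansion exactly when its denominator's only prime factors are 2 and 5.]

```python
from fractions import Fraction

def exact_in_decimal(n, d):
    """True if n/d equals a finite decimal fraction, i.e. the reduced
    denominator's only prime factors are 2 and 5."""
    d = Fraction(n, d).denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(exact_in_decimal(1, 8))   # True:  1/8  = 0.125
print(exact_in_decimal(3, 40))  # True:  3/40 = 0.075
print(exact_in_decimal(1, 3))   # False: 1/3 repeats forever in decimal
```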
Re: Python math is off by .000000000000045
On Sat, 2012-02-25 at 09:56 -0800, Tobiah wrote: > > For every floating point > > number there is a corresponding real number, but 0% of real numbers > > can be represented exactly by floating point numbers. > > It seems to me that there are a great many real numbers that can be > represented exactly by floating point numbers. The number 1 is an > example. > > I suppose that if you divide that count by the infinite count of all > real numbers, you could argue that the result is 0%. It's not just an argument - it's mathematically correct. The same can be said for ints representing the natural numbers, or positive integers. However, ints can represent 100% of integers within a specific range, where floats can't represent all real numbers for any range (except for the empty set) - because there's an infinite number of real numbers within any non-trivial range. Tim -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
> For every floating point > number there is a corresponding real number, but 0% of real numbers > can be represented exactly by floating point numbers. It seems to me that there are a great many real numbers that can be represented exactly by floating point numbers. The number 1 is an example. I suppose that if you divide that count by the infinite count of all real numbers, you could argue that the result is 0%. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On Wed, 22 Feb 2012 19:26:26 +0100, Christian Heimes wrote: > Python uses the platforms double precision float datatype. Floats are > almost never exact. Well, that's not quite true. Python floats are always exact. They just may not be exactly what you want :) Pedantic-but-unhelpful-as-always-ly y'rs, -- Steven -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On 2012-02-22, Alec Taylor wrote: > Simple mathematical problem, + and - only: > 1800.00-1041.00-555.74+530.74-794.95 > -60.950000000000045 > > That's wrong. Oh good. We haven't had this thread for several days. > Proof > http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74%2B530.74-794.95 > -60.95 aka (-(1219/20)) > > Is there a reason Python math is only approximated? http://docs.python.org/tutorial/floatingpoint.html Python uses binary floating point with a fixed size (64 bit IEEE-754 on all the platforms I've ever run across). Floating point numbers are only approximations of real numbers. For every floating point number there is a corresponding real number, but 0% of real numbers can be represented exactly by floating point numbers. > - Or is this a bug? No, it's how floating point works. If you want something else, then perhaps you should use rationals or decimals: http://docs.python.org/library/fractions.html http://docs.python.org/library/decimal.html -- Grant Edwards grant.b.edwards at gmail.com Yow! What I want to find out is -- do parrots know much about Astro-Turf? -- http://mail.python.org/mailman/listinfo/python-list
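[Editor's note: applying Grant's suggestion to the thread's example, the fractions module gives the exact answer Wolfram Alpha shows, treating the decimal literals as exact:]

```python
from fractions import Fraction

# Exact rational arithmetic over the thread's five terms.
terms = ['1800.00', '-1041.00', '-555.74', '530.74', '-794.95']
total = sum(Fraction(t) for t in terms)
print(total)         # -1219/20, i.e. exactly -60.95
print(float(total))  # -60.95
```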
Re: Python math is off by .000000000000045
Alec Taylor writes: > Simple mathematical problem, + and - only: > > >>> 1800.00-1041.00-555.74+530.74-794.95 > -60.950000000000045 > > That's wrong. Not by much. I'm not an expert, but my guess is that the exact value is not representable in binary floating point, which most programming languages use for this. Ah, indeed: >>> 0.95 0.94999999999999996 Some languages hide the error by printing fewer decimals than they use internally. > Proof > http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74%2B530.74-794.95 > -60.95 aka (-(1219/20)) > > Is there a reason Python math is only approximated? - Or is this a bug? There are practical reasons. Do learn about "floating point". There is a price to pay, but you can have exact rational arithmetic in Python when you need or want it - I folded the long lines by hand afterwards: >>> from fractions import Fraction >>> 1800 - 1041 - Fraction(55574, 100) + Fraction(53074, 100) - Fraction(79495, 100) Fraction(-1219, 20) >>> -1219/20 -61 >>> -1219./20 -60.950000000000003 >>> float(1800 - 1041 - Fraction(55574, 100) + Fraction(53074, 100) - Fraction(79495, 100)) -60.950000000000003 -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On Feb 22, 1:13 pm, Alec Taylor wrote: > Simple mathematical problem, + and - only: > > >>> 1800.00-1041.00-555.74+530.74-794.95 > > -60.950000000000045 > > That's wrong. > > Proof http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74%2B530.74-... > -60.95 aka (-(1219/20)) > > Is there a reason Python math is only approximated? - Or is this a bug? > > Thanks for all info, > > Alec Taylor I get the right answer if I use the right datatype: >>> import decimal >>> D=decimal.Decimal >>> D('1800.00')-D('1041.00')-D('555.74')+D('530.74')-D('794.95') Decimal('-60.95') -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On Wed, Feb 22, 2012 at 10:13 AM, Alec Taylor wrote: > Simple mathematical problem, + and - only: > 1800.00-1041.00-555.74+530.74-794.95 > -60.950000000000045 > > That's wrong. Welcome to the world of finite-precision binary floating-point arithmetic then! Reality bites. > Proof > http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74%2B530.74-794.95 > -60.95 aka (-(1219/20)) > > Is there a reason Python math is only approximated? Because vanilla floating-point numbers have a finite bit length (and thus finite precision) but they try to represent a portion of the real number line, which has infinitely many points. Some approximation therefore has to occur. It's not a problem specific to Python; it's inherent to your CPU's floating point numeric types. Read http://docs.python.org/tutorial/floatingpoint.html and http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html Wolfram Alpha is either rounding off its answer to fewer decimal places (thus merely hiding the imprecision), or using some different, more computationally expensive arithmetic type(s) in its calculations, hence why it gives the exact answer. Alternatives to floats in Python include: * Fractions: http://docs.python.org/library/fractions.html * Arbitrary-precision decimal floating point: http://docs.python.org/library/decimal.html These aren't the default for both historical and performance reasons. Cheers, Chris -- http://mail.python.org/mailman/listinfo/python-list
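[Editor's note: one caveat with the decimal alternative Chris mentions is to construct Decimals from strings. Decimal(0.1) faithfully converts the float, and so inherits the float's binary rounding error; a minimal sketch:]

```python
from decimal import Decimal

# String construction gives the intended decimal value...
print(Decimal('0.1') + Decimal('0.2'))  # 0.3

# ...while float construction preserves the float's exact binary value.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```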
Re: Python math is off by .000000000000045
On Feb 22, 2012 1:16 PM, "Alec Taylor" wrote: > > Simple mathematical problem, + and - only: > > >>> 1800.00-1041.00-555.74+530.74-794.95 > -60.950000000000045 > > That's wrong. > > Proof > http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74%2B530.74-794.95 > -60.95 aka (-(1219/20)) > > Is there a reason Python math is only approximated? - Or is this a bug? > > Thanks for all info, > > Alec Taylor > -- You aren't doing math with decimal numbers. You're using IEEE 754-compliant double precision floating point numbers. This isn't just a Python thing. You'd get the same results in C, Java, VB, and pretty much every other general purpose language written in the last 40 years. Floats are represented in a form similar to scientific notation (a * 2^b), so just like scientific notation, there's a finite number of significant figures. And just like there are rational numbers that can't be represented in decimal, like 1/3, there are numbers that can't be represented in binary, like 1/10. Double-precision numbers are accurate to about 15 decimal digits. If you need more precision, there is the decimal module, but it's way slower because your processor doesn't natively support it. -- http://mail.python.org/mailman/listinfo/python-list
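[Editor's note: the "about 15 decimal digits" figure comes straight from the IEEE 754 double layout, which Python exposes via sys.float_info:]

```python
import sys

print(sys.float_info.mant_dig)  # 53 significand bits...
print(sys.float_info.dig)       # ...about 15 reliable decimal digits
print(sys.float_info.epsilon)   # 2.220446049250313e-16

# 1/10 repeats in binary, so the error surfaces immediately:
print(0.1 + 0.2)                # 0.30000000000000004
```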
Re: Python math is off by .000000000000045
On Wed, Feb 22, 2012 at 11:13 AM, Alec Taylor wrote: > Simple mathematical problem, + and - only: > 1800.00-1041.00-555.74+530.74-794.95 > -60.950000000000045 > > That's wrong. > > Proof > http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74%2B530.74-794.95 > -60.95 aka (-(1219/20)) > > Is there a reason Python math is only approximated? - Or is this a bug? http://docs.python.org/faq/design.html#why-are-floating-point-calculations-so-inaccurate -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On 22.02.2012 19:13, Alec Taylor wrote: > Simple mathematical problem, + and - only: > 1800.00-1041.00-555.74+530.74-794.95 > -60.950000000000045 > > That's wrong. That's only the correct answer for unlimited precision, not for IEEE-754 semantics. http://en.wikipedia.org/wiki/IEEE_754 > Proof > http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74%2B530.74-794.95 > -60.95 aka (-(1219/20)) > > Is there a reason Python math is only approximated? - Or is this a bug? Python uses the platform's double precision float datatype. Floats are almost never exact. Christian -- http://mail.python.org/mailman/listinfo/python-list
Re: Python math is off by .000000000000045
On 22/02/2012 18:13, Alec Taylor wrote: Simple mathematical problem, + and - only: 1800.00-1041.00-555.74+530.74-794.95 -60.950000000000045 That's wrong. Proof http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74%2B530.74-794.95 -60.95 aka (-(1219/20)) Is there a reason Python math is only approximated? - Or is this a bug? Thanks for all info, Alec Taylor Please google for floating point numbers. -- Cheers. Mark Lawrence. -- http://mail.python.org/mailman/listinfo/python-list
Python math is off by .000000000000045
Simple mathematical problem, + and - only: >>> 1800.00-1041.00-555.74+530.74-794.95 -60.950000000000045 That's wrong. Proof http://www.wolframalpha.com/input/?i=1800.00-1041.00-555.74%2B530.74-794.95 -60.95 aka (-(1219/20)) Is there a reason Python math is only approximated? - Or is this a bug? Thanks for all info, Alec Taylor -- http://mail.python.org/mailman/listinfo/python-list