In a message of Mon, 01 Jun 2015 20:36:14 +1000, Chris Angelico writes:
>The problem isn't the decimal separator, though, because floats can
>have problems even without it (and can have no problems with a decimal
>separator). If you want to distinguish "computer numbers" from "real
>numbers", you'd do better to pick a different set of symbols for them
>- or at least a different numerical base. If all literals were written
>in octal, people would understand that there's something special going
>on here. But would that really help?
Maybe _your_ brain needs some resetting, too. :) You know too much, so you have lost the grasp of how the world looks to the new programmer.

Problem: I want to use a computer to add up a whole lot of money amounts so I can figure out how much money to send some place. This is an absolute, dead-simple first computer program newbies write. For a lot of people, this used to be the whole reason they bought a computer in the first place. Now they just want to do the same stuff on their phone, which they have anyway.

And they immediately grab floating-point numbers, and since adding a whole lot of them in the range of 'prices for stuff at the supermarket' is one of the best ways to guarantee you will not get the correct, exact amount you want for your answer, they don't get it.

You can pick any representation you want for floats, as long as it _isn't_ the same one as we use for money, and a whole lot of problems will go away. Because the problems are in the users' heads, and no place else, and the problem is 'This looks familiar. I will use it and expect it to behave like I am used to.' All human beings do this all the time, so the way to prevent the problem is to make it look less familiar.

But way back in time, you know, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic is preferable. I agree, but if John von Neumann couldn't win that argument, then there is no way on earth I could. So, if floating point is going in, at least we should represent it differently. But I am a bad arguer.

When incompatibilities were going into Python 3.0, I wanted y = 1.3 to give you a decimal, not a float. If you wanted a float you would have to write y = 1.3f or something. I lost that one too. I still think it would be great.

But, hell, I write accounting and bookkeeping systems. Your mileage may vary. :)

Laura
--
https://mail.python.org/mailman/listinfo/python-list
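[Editorial note: the "add up supermarket prices" problem Laura describes can be sketched in a few lines of Python; the prices here are made-up illustration values, not from the original post.]

```python
from decimal import Decimal

# Made-up supermarket-style prices; the exact total is 350.00.
prices = ["0.10", "1.10", "2.30"] * 100

float_total = sum(float(p) for p in prices)      # binary floating point
decimal_total = sum(Decimal(p) for p in prices)  # exact decimal arithmetic

print(float_total)    # very close to, but typically not exactly, 350.0
print(decimal_total)  # exactly Decimal('350.00')

# The classic one-liner version of the same surprise:
print(0.1 + 0.1 + 0.1 == 0.3)  # False
```

Because 0.10, 1.10, and 2.30 have no exact binary representation, each float addition can round, and the error accumulates across the whole receipt; `Decimal` stores the values in base ten, so the cents stay exact.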