Steven D'Aprano <st...@pearwood.info> writes:

> On Tue, 24 May 2016 12:29 am, Ben Bacarisse wrote:
>
>> Ian Kelly <ian.g.ke...@gmail.com> writes:
>> 
>>> On Mon, May 23, 2016 at 2:09 AM, Steven D'Aprano
>>> <steve+comp.lang.pyt...@pearwood.info> wrote:
>>>> Are you saying that the Egyptians, Babylonians and Greeks didn't know
>>>> how to work with fractions?
>>>>
>>>> http://mathworld.wolfram.com/EgyptianFraction.html
>>>>
>>>> http://nrich.maths.org/2515
>>>>
>>>> Okay, it's not quite 4000 years ago. Sometimes my historical sense of
>>>> the distant past is a tad inaccurate. Shall we say 2000 years instead?
>>>
>>> Those links give dates of 1650 BC and 1800 BC respectively, so I'd say
>>> your initial guess was closer.
>> 
>> Right, but this is to miss the point.  Let's say that for 4000 years
>> 1/3 has been defined to be one third, but Python 3 (like many
>> programming languages) defines 1/3 to be something very very very
>> very close to one third, and *that* idea is very very very very new!
>> It's unfortunate that the example in this thread does not illustrate
>> the main problem of shifting to binary floating point, because 1/2
>> happens to be exactly representable.
>
> That's not really the point. I acknowledge that floats do not represent all
> rational numbers (a/b) exactly. Neither do decimal floats -- most school
> children will learn that 0.333333333333 is not 1/3 exactly, and anyone who
> has used a calculator will experience calculations that give (say)
> 0.999999999 or 1.0000000001 instead of 1.

Yes, I got that.  I'm sure you're aware of the consequences of a
floating-point representation.
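
To make that concrete, here is roughly what a CPython 3 session shows
(assuming the usual IEEE-754 doubles; fractions.Fraction exposes the
exact value actually stored):

    >>> 1/2                  # a power of two: exactly representable
    0.5
    >>> 1/3                  # the nearest double, not the rational 1/3
    0.3333333333333333
    >>> from fractions import Fraction
    >>> Fraction(1/3)        # the exact value held in the float
    Fraction(6004799503160661, 18014398509481984)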

> And you know what? *People cope.*

Yes, I agree.

> For all the weirdness of floating point, for all the rounding errors and
> violations of mathematical properties, floating point maths is *still* the
> best way to perform numerical calculations for many purposes.

Indeed.

> In fact, even IEEE-754 arithmetic, which is correctly rounded and therefore
> introduces the least possible rounding error, is sometimes "too good" --
> people often turn to GPUs instead of CPUs for less accurate but faster bulk
> calculations.

Yes, they do.

> The point is, most people wouldn't really care that much whether 1/3
> returned a binary 0.3333333333, or decimal 0.3333333333, or an exact
> rational fraction 1/3. Most people wouldn't care too much if they got
> 32-bits of precision or 64, or 10 decimal places or 18. When cutting (say)
> a sheet of paper into three equal pieces, they are unlikely to be able to
> measure and cut with an accuracy better than 1/2 of a millimeter, so 15
> decimal places is overkill.

I agree here too.  Most people cope just fine with floating point.
Usenet magnifies the number of people reporting problems (usually from
testing floats for equality, or from using them for currency, and so
on), but there is almost certainly a silent majority who either just
figure it out or understand how to use floats from the get-go.
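
The two classic tripwires look something like this (a sketch;
math.isclose needs Python 3.5 or later):

    >>> 0.1 + 0.2 == 0.3     # naive equality test on floats
    False
    >>> 0.1 + 0.2            # because the sum is not exactly 0.3
    0.30000000000000004
    >>> import math
    >>> math.isclose(0.1 + 0.2, 0.3)   # compare with a tolerance instead
    True
    >>> from decimal import Decimal
    >>> Decimal('0.10') + Decimal('0.20')   # exact decimals for currency
    Decimal('0.30')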

But there is an issue, however small, and it is largely due to the
fact that people can't see the representation of floats very well.
All sorts of things conspire (for the very best of engineering
reasons) to make it hard to see what value you are actually holding.
Yes, people have been doing arithmetic for thousands of years, but
they've usually done it by manipulating the representation themselves.
The behind-the-scenes effects of floating point, though, are a
relatively new phenomenon.
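
You can see the exact value if you know where to look, though none of
these lenses is the default display (all standard library; the digits
below are what I'd expect for an IEEE-754 double):

    >>> from decimal import Decimal
    >>> Decimal(0.1)         # the exact decimal expansion of the double
    Decimal('0.1000000000000000055511151231257827021181583404541015625')
    >>> (0.1).hex()          # the stored bits, as a hex float literal
    '0x1.999999999999ap-4'
    >>> from fractions import Fraction
    >>> Fraction(0.1)        # the same value as an exact ratio
    Fraction(3602879701896397, 36028797018963968)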

> But one thing is certain: very few people, Jon Ribbens being one of
> them, expect 1/3 to return 0. And that is why Python changed the
> meaning of the / operator: because using it for integer division was
> deeply unpopular and a bug magnet.

Yup.  I agree with everything you've said.
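
For anyone who missed the transition, the change looks like this
(Python 2 versus Python 3; // keeps the old behaviour in both):

    >>> 1/3          # Python 2: integer division, quietly truncates
    0
    >>> 1/3          # Python 3: true division
    0.3333333333333333
    >>> 1//3         # floor division, available in both versions
    0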

-- 
Ben.