On Tuesday, May 31, 2011 8:57:57 PM UTC-7, Chris Angelico wrote:
> On Wed, Jun 1, 2011 at 1:30 PM, Carl Banks 
>  wrote:
> > I think you misunderstood what I was saying.
> >
> > It's not *possible* to represent a real number abstractly in any digital 
> > computer.  Python couldn't have an "abstract real number" type even it 
> > wanted to.
> 
> True, but why should the "non-integer number" type be floating point
> rather than (say) rational?

Python has several non-integer number types in the standard library.  The one 
we are talking about is called float.  If the type we were talking about had 
instead been called real, then your question might make some sense.  But the 
fact that it's called float really does imply that the underlying 
representation is floating point.


> Actually, IEEE floating point could mostly
> be implemented in a two-int rationals system (where the 'int' is
> arbitrary precision, so it'd be Python 2's 'long' rather than its
> 'int'); in a sense, the mantissa is the numerator, and the scale
> defines the denominator (which will always be a power of 2). Yes,
> there are very good reasons for going with the current system. But are
> those reasons part of the details of implementation, or are they part
> of the definition of the data type?

Once again, Python float is an IEEE double-precision floating point number.  
This is part of the language; it is not an implementation detail.  As I 
mentioned elsewhere, the Python library establishes this as part of the 
language because it includes several functions that operate on IEEE numbers.

And, by the way, the types you're comparing it to aren't as abstract as you say 
they are.  Python's int type is required to have a two's-complement binary 
representation and support bitwise operations.
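A quick sketch of what that requirement means in practice (a negative int behaves as if it had infinitely many sign bits):

```python
# Bitwise operators treat a negative int as an infinite two's-complement
# bit pattern: -1 is ...11111111.
print(-1 & 0xFF)   # 255: the low eight bits of -1 are all ones
print(~0)          # -1: inverting all bits of zero gives all ones
print(-8 >> 1)     # -4: right shift is arithmetic, preserving the sign
```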


> > (Math aside: Real numbers are not countable, meaning they 
> > cannot be put into one-to-one correspondence with integers.
> >  A digital computer can only represent countable things
> > exactly, for obvious reasons; therefore, to model
> > non-countable things like real numbers, one must use a
> > countable approximation like floating-point.)
> 
> Right. Obviously a true 'real number' representation can't be done.
> But there are multiple plausible approximations thereof (the best
> being rationals).

That's a different question.  I don't care to discuss it, except to say that 
your default real-number type would have to be called something other than 
float, if it were not a floating point.


> Not asking for Python to be changed, just wondering why it's defined
> by what looks like an implementation detail. It's like defining that a
> 'character' is an 8-bit number using the ASCII system, which then
> becomes problematic with Unicode.

It really isn't.  Unlike with characters (which are trivially extensible to 
larger character sets, just add more bytes), different real number 
approximations differ in details too important to be left to the implementation.

For instance, say you are using an implementation that uses floating point, and 
you define a function that uses Newton's method to find a square root:

def square_root(N, x=None):
    """Approximate sqrt(N) by 100 iterations of Newton's method."""
    if x is None:
        x = N/2          # initial guess
    for i in range(100):
        x = (x + N/x)/2  # Newton step for f(x) = x**2 - N
    return x

It works pretty well on your floating-point implementation.  Now try running it 
on an implementation that uses fractions by default....

(Seriously, try running this function with N as a Fraction.)
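To make the failure mode concrete, here is a minimal sketch of the same iteration using the stdlib Fraction type, cut down to six steps because a hundred would never finish:

```python
from fractions import Fraction

# Newton's method for sqrt(2), in exact rational arithmetic.
N = Fraction(2)
x = N / 2
for i in range(6):
    x = (x + N/x) / 2
    # The digit count of the denominator roughly doubles each step,
    # so every iteration costs far more than the last.
    print(i, len(str(x.denominator)))
```

Exact rationals never round, so the terms never simplify: after only six steps the denominator is already past twenty digits, and by a hundred steps the numbers are astronomically large.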

So I'm going to opine that the representation does not seem like an 
implementation detail.


Carl Banks
-- 
http://mail.python.org/mailman/listinfo/python-list