Re: [Tutor] Fraction - differing interpretations for number and string - presentation

2015-04-17 Thread Oscar Benjamin
On 17 April 2015 at 03:29, Steven D'Aprano st...@pearwood.info wrote:
 On Thu, Apr 16, 2015 at 03:11:59PM -0700, Jim Mooney wrote:

 So the longer numerator and denominator would, indeed, be more accurate if
 used in certain calculations rather than being normalized to a float - such
 as in a Fortran subroutine or perhaps if exported to a machine with a
 longer bit-length? That's mainly what I was interested in - if there is any
 usable difference between the two results.

If it's okay to use float and then go onto machines with higher/lower
precision then the extra precision must be unnecessary, in which case
why bother with the Fraction type? If you really need to transfer your
float from one program/machine to another then why not just send the
decimal representation? Your 64-bit float should be able to round-trip
to decimal text and back on any IEEE-754 compliant system. Writing
it as a fraction doesn't gain any extra accuracy. (BTW Python the
language guarantees that float is always an IEEE-754 64-bit float on
any machine).
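That round-trip guarantee is easy to check; a minimal sketch (standard
library only) showing that repr() of a float produces decimal text that
converts back to the identical float:

```python
import random

# repr() of a float yields the shortest decimal string that round-trips
# back to the bit-identical float (guaranteed since Python 3.1).
x = 1.64
assert float(repr(x)) == x

# It holds for arbitrary floats, not just "nice" ones:
random.seed(0)
for _ in range(1000):
    y = random.uniform(-1e12, 1e12)
    assert float(repr(y)) == y
```

So decimal text is a lossless interchange format for IEEE-754 doubles,
provided you use repr() (or str() in Python 3) rather than a rounded
format like '%.6f'.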

When you use floats the idea is that you're using fixed-width floating
point as an approximation of real numbers. You're expected to know
that there will be some rounding and not consider your computation to
be exact. So the difference between 1.64 and the nearest IEEE-754
64-bit binary float should be considered small enough not to worry
about.

It's possible that in your calculations you will also be using
functions from the math module such as sin, cos, etc. and these
functions cannot be computed *exactly* for an input such as 1.64.
However they can be computed up to any desired finite precision so we
can always get the nearest possible float which is what the math
module will do.

When you use the fractions module and the Fraction type the idea is
that floating point inexactness is unacceptable to you. You want to
perform *exact* arithmetic and convert from other numeric types or
text exactly. You won't be able to use functions like sin and cos but
that's no loss since you wouldn't be able to get an exact rational
result there anyway.

Because the fractions module is designed for the "I want everything to
be exact" use case, conversion from float to Fraction is performed
exactly. Conversion from string or Decimal to Fraction is also exact.
The Fraction will also display its exact value when printed and that's
what you're seeing.

If you really want higher accuracy than float consider ditching it
altogether in favour of Fraction which will compute everything you
want exactly. However there's no point in doing this if you're also
mixing floats into the calculations. float+Fraction coerces to float
discarding accuracy and then calculates with floating point rounding.
If that's acceptable then don't bother with Fraction in the first
place. If not make sure you stick to only using Fraction.
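A short session makes the contrast concrete (a sketch; the particular
numbers are just examples):

```python
from fractions import Fraction

# Pure Fraction arithmetic is exact:
a = Fraction('1.64')                # exactly 41/25
b = Fraction(1, 3)                  # exactly 1/3
assert a + b == Fraction(148, 75)   # 123/75 + 25/75, no rounding

# Mixing in a single float silently demotes the result to float,
# and the exactness is gone for good:
mixed = a + 0.1
assert isinstance(mixed, float)
```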

 You're asking simple questions that have complicated answers which
 probably won't be what you are looking for. But let's try :-)

 Let's calculate a number. The details don't matter:

 x = some_calculation(a, b)
 print(x)

 which prints 1.64. Great, we have a number that is the closest possible
 base 2 float to the decimal 164/100. If we convert that float to a
 fraction *exactly*, we get:

 py> Fraction(1.64)
 Fraction(7385903388887613, 4503599627370496)

 So the binary float which displays as 1.64 is *actually* equal to
 7385903388887613/4503599627370496, which is just a tiny bit less than
 the decimal 1.64:

 py> Fraction(1.64) - Fraction('1.64')
 Fraction(-11, 112589990684262400)

 That's pretty close. The decimal value that normally displays as 1.64 is
 perhaps more accurately displayed as:

 py> '%.23f' % 1.64
 '1.63999999999999990230037'

 but even that is not exact.

Just to add to Steven's point. The easiest way to see the exact value
of a float in decimal format is:
>>> import decimal
>>> decimal.Decimal(1.64)
Decimal('1.6399999999999999023003738329862244427204132080078125')

 The reality is that very often, the difference
 isn't that important. The difference between

 1.64 inches

 and

 1.63999999999999990230037 inches

 is probably not going to matter, especially if you are cutting the
 timber with a chainsaw.

I once had a job surveying a building site as an engineer's assistant.
I'd be holding a reflector stick while he operated the laser sight
machine (EDM) from some distance away and spoke to me over the radio.
He'd say "mark it 10mm to the left". The marker spray would make a
circle that was about 100mm in diameter. Then I would get 4 unevenly
shaped stones (that happened to be lying around) and drop them on the
ground to mark out the corners of a 1500mm rectangle by eye.
Afterwards the excavator would dig a hole there using a bucket that
was about 1000mm wide. While digging the stones would get moved around
and the operator would end up just vaguely guessing where the original
marks had been.

The engineer was absolutely insistent that it had to be 10mm to the
left though. I think that 

Re: [Tutor] Fraction - differing interpretations for number and string

2015-04-16 Thread Wolfgang Maier

On 04/16/2015 07:03 AM, Jim Mooney wrote:

Why does Fraction interpret a number and string so differently? They come
out the same, but it seems rather odd


>>> from fractions import Fraction
>>> Fraction(1.64)
Fraction(7385903388887613, 4503599627370496)
>>> Fraction('1.64')
Fraction(41, 25)
>>> 41/25
1.64
>>> 7385903388887613 / 4503599627370496
1.64

That is because 1.64 cannot be represented exactly as a float.
Try:

>>> x = 1.64
>>> format(x, '.60f')
'1.639999999999999902300373832986224442720413208007812500000000'

If you construct a Fraction from a str OTOH, Fraction assumes you meant 
exactly that number.

And in fact:

 
>>> Fraction('1.6399999999999999023003738329862244427204132080078125')
Fraction(7385903388887613, 4503599627370496)


see also https://docs.python.org/3/tutorial/floatingpoint.html for an 
explanation of floating-point inaccuracies.



___
Tutor maillist  -  Tutor@python.org
To unsubscribe or change subscription options:
https://mail.python.org/mailman/listinfo/tutor


Re: [Tutor] Fraction - differing interpretations for number and string

2015-04-16 Thread Danny Yoo
On Apr 16, 2015 1:52 AM, Danny Yoo danny@gmail.com wrote:


 On Apr 16, 2015 1:32 AM, Jim Mooney cybervigila...@gmail.com wrote:
 
  Why does Fraction interpret a number and string so differently? They
come
  out the same, but it seems rather odd
 
   >>> from fractions import Fraction
   >>> Fraction(1.64)
  Fraction(7385903388887613, 4503599627370496)
   >>> Fraction('1.64')
  Fraction(41, 25)
   >>> 41/25
  1.64
   >>> 7385903388887613 / 4503599627370496
  1.64

 In many systems, if everything is the same shape, then certain operations
might be implemented more quickly by making uniform assumptions.  If all my
clothes were the same, for example, maybe I'd be able to sorry my laundry
more quickly.  And if all my Tupperware were the same size, then maybe my
cabinets wouldn't be the nest of ill fitting plastic that it is now.

Substitute "sorry" with "sort".  Sorry!  :p


Re: [Tutor] Fraction - differing interpretations for number and string

2015-04-16 Thread Danny Yoo
On Apr 16, 2015 1:32 AM, Jim Mooney cybervigila...@gmail.com wrote:

 Why does Fraction interpret a number and string so differently? They come
 out the same, but it seems rather odd

  >>> from fractions import Fraction
  >>> Fraction(1.64)
 Fraction(7385903388887613, 4503599627370496)
  >>> Fraction('1.64')
 Fraction(41, 25)
  >>> 41/25
 1.64
  >>> 7385903388887613 / 4503599627370496
 1.64

In many systems, if everything is the same shape, then certain operations
might be implemented more quickly by making uniform assumptions.  If all my
clothes were the same, for example, maybe I'd be able to sorry my laundry
more quickly.  And if all my Tupperware were the same size, then maybe my
cabinets wouldn't be the nest of ill fitting plastic that it is now.

And if every number was represented with a fixed quantity of bits in a
computer, then maybe computer arithmetic could go really fast.

It's this last supposition that should be treated most seriously.  Most
computers use floating point, a representation of numbers that uses a
fixed set of bits.  This uniformity allows floating point math to be
implemented quickly.  But it also means that it's inaccurate.  You're
seeing evidence of the inaccuracy.

Read: https://docs.python.org/2/tutorial/floatingpoint.html and see if that
helps clarify.


Re: [Tutor] Fraction - differing interpretations for number and string

2015-04-16 Thread Dave Angel

On 04/16/2015 08:11 AM, Dave Angel wrote:

On 04/16/2015 01:03 AM, Jim Mooney wrote:

Why does Fraction interpret a number and string so differently? They come
out the same, but it seems rather odd


>>> from fractions import Fraction
>>> Fraction(1.64)
Fraction(7385903388887613, 4503599627370496)
>>> Fraction('1.64')
Fraction(41, 25)
>>> 41/25
1.64
>>> 7385903388887613 / 4503599627370496
1.64



When a number isn't an exact integer (and sometimes when the integer is
large enough), some common computer number formats cannot store the
number exactly.  Naturally we know about irrational numbers, which cannot
be stored exactly in any base.  Pi, e, and the square root of two are
three well-known examples.

But even rational numbers cannot be stored exactly unless they happen to
match the base you're using to store them.  For example, 1/3 cannot be
stored exactly in any common base.


By common I mean 2, 8, 10, or 16.  Obviously if someone implemented a 
base 3 floating point package, the number would be simply  0.1



 In decimal, it'd be a repeating set
of 3's.  And whenever you stopped putting down threes, you've made an
approximation.
 0.3333333333333333

Python defaults to using a float type, which is a binary floating point
representation that uses the special hardware available in most recent
computers.  And in fact, when you use a literal number in your source,
it's converted to a float by the compiler, not stored as the digits you
typed.

The number you specified in decimal, 1.64, is never going to be stored
in a finite number of binary bits, in a float.

>>> from fractions import Fraction
>>> from decimal import Decimal

>>> y = 1.64

Conversion to float happens at compile time, so the value given to y is
already approximate; the line above is roughly equivalent to:

>>> y = float("1.64")

>>> Fraction(y)
Fraction(7385903388887613, 4503599627370496)

If you converted it in string form instead to Decimal, then the number
you entered would be saved exactly.

>>> x = Decimal("1.64")

This value is stored exactly.

>>> x
Decimal('1.64')

>>> Fraction(x)
Fraction(41, 25)


Sometimes it's convenient to do the conversion in our head, as it were.
Since 1.64 is shorthand for  164/100, we can just pass those integers to
Fraction, and get an exact answer again.

  Fraction(164, 100)
Fraction(41, 25)


Nothing about this says that Decimal is necessarily better than float.
It appears better because we enter values in decimal form and to use
float, those have to be converted, and there's frequently a loss during
conversion.  But Decimal is slower and takes more space, so most current
languages use binary floating point instead.

I implemented the math on a machine 40 years ago where all user
arithmetic was done in decimal floating point.  I thought it was a good
idea at the time because of a principle I called "least surprise". There
were roundoff errors, but only in places where you'd get the same ones
doing it by hand.

History has decided differently.  When the IEEE committee first met,
Intel already had its 8087 implemented, and many decisions were based on
what that chip could and couldn't do.  So that standard became the
default that future implementations would use, whatever the company.




--
DaveA


Re: [Tutor] Fraction - differing interpretations for number and string

2015-04-16 Thread Dave Angel

On 04/16/2015 01:03 AM, Jim Mooney wrote:

Why does Fraction interpret a number and string so differently? They come
out the same, but it seems rather odd


>>> from fractions import Fraction
>>> Fraction(1.64)
Fraction(7385903388887613, 4503599627370496)
>>> Fraction('1.64')
Fraction(41, 25)
>>> 41/25
1.64
>>> 7385903388887613 / 4503599627370496
1.64



When a number isn't an exact integer (and sometimes when the integer is
large enough), some common computer number formats cannot store the
number exactly.  Naturally we know about irrational numbers, which cannot
be stored exactly in any base.  Pi, e, and the square root of two are
three well-known examples.


But even rational numbers cannot be stored exactly unless they happen to 
match the base you're using to store them.  For example, 1/3 cannot be 
stored exactly in any common base.  In decimal, it'd be a repeating set 
of 3's.  And whenever you stopped putting down threes, you've made an 
approximation.

0.3333333333333333

Python defaults to using a float type, which is a binary floating point 
representation that uses the special hardware available in most recent 
computers.  And in fact, when you use a literal number in your source, 
it's converted to a float by the compiler, not stored as the digits you 
typed.


The number you specified in decimal, 1.64, is never going to be stored 
in a finite number of binary bits, in a float.


>>> from fractions import Fraction
>>> from decimal import Decimal

>>> y = 1.64

Conversion to float happens at compile time, so the value given to y is
already approximate; the line above is roughly equivalent to:

>>> y = float("1.64")

>>> Fraction(y)
Fraction(7385903388887613, 4503599627370496)

If you converted it in string form instead to Decimal, then the number
you entered would be saved exactly.

>>> x = Decimal("1.64")

This value is stored exactly.

>>> x
Decimal('1.64')

>>> Fraction(x)
Fraction(41, 25)


Sometimes it's convenient to do the conversion in our head, as it were.
Since 1.64 is shorthand for  164/100, we can just pass those integers to 
Fraction, and get an exact answer again.


 Fraction(164, 100)
Fraction(41, 25)


Nothing about this says that Decimal is necessarily better than float. 
It appears better because we enter values in decimal form and to use 
float, those have to be converted, and there's frequently a loss during 
conversion.  But Decimal is slower and takes more space, so most current 
languages use binary floating point instead.


I implemented the math on a machine 40 years ago where all user
arithmetic was done in decimal floating point.  I thought it was a good
idea at the time because of a principle I called "least surprise".
There were roundoff errors, but only in places where you'd get the same
ones doing it by hand.


History has decided differently.  When the IEEE committee first met, 
Intel already had its 8087 implemented, and many decisions were based on 
what that chip could and couldn't do.  So that standard became the
default that future implementations would use, whatever the company.


--
DaveA


Re: [Tutor] Fraction - differing interpretations for number and string

2015-04-16 Thread Steven D'Aprano
On Thu, Apr 16, 2015 at 01:52:51AM -0700, Danny Yoo wrote:

 It's this last supposition that should be treated most seriously.  Most
 computers use floating point, a representation of numbers that uses a
 fixed set of bits.  This uniformity allows floating point math to be
 implemented quickly.  But it also means that it's inaccurate.  You're
 seeing evidence of the inaccuracy.

Hmmm. I wouldn't describe it as inaccurate. The situation is a lot 
more subtle and complicated than mere inaccuracy, especially these days.

Back when dinosaurs ruled the earth, it would be accurate to describe 
floating point arithmetic as inaccurate, pun intended. Some of the 
biggest companies in computing back in the 1970s had floating point 
arithmetic which was horrible. I'm aware of at least one system where 
code like this:

if x != 0:
print 1/x

could crash with a Divide By Zero error. And worse! But since IEEE-754 
floating point semantics has become almost universal, the situation is 
quite different. IEEE-754 guarantees that the four basic arithmetic 
operations + - * / will give the closest possible result to the exact 
mathematical result, depending on the rounding mode and available 
precision. If an IEEE-754 floating point system gives a result for some 
operation like 1/x, you can be sure that this result is the closest you 
can possibly get to the true mathematical result -- and is often exact.

The subtlety is that the numbers you type in decimal are not always the 
numbers the floating point system is actually dealing with, because they 
cannot be. What you get though, is the number closest possible to what 
you want.

Let me explain with an analogy. We all know that the decimal for 1/3 is 
0.3-repeating, with an infinite number of threes after the decimal 
point. That means any decimal number you can possibly write down is not 
1/3 exactly, it will either be a tiny bit less, or a tiny bit more.

0.3  # pretty close
0.333# even closer
0.3333333333  # closer, but still not exact
0.4  # too big

There is no way to write 1/3 exactly as a decimal, and no way to 
calculate it exactly as a decimal either. If you ask for 1/3 you will 
get something either a tiny bit smaller than 1/3 or a tiny bit bigger.

Computer floating point numbers generally use base 2, not base 10. That 
means that fractions like 1/2, 1/4, 1/8 and similar can be represented 
exactly (up to the limit of available precision) but many decimal 
numbers are like 1/3 in decimal and have an infinitely repeating binary 
form. Since we don't have an infinite amount of memory, we cannot 
represent them *exactly* as binary floats.

So the decimal fraction 0.5 means 5/10 or if you prefer, (5 * 1/10). 
That is the same as 1/2 or (1 * 1/2), which means that in base-2 we 
can write it as 0.1.

0.75 in decimal means (7 * 1/10 + 5 * 1/100). With a bit of simple 
arithmetic, you should be able to work out that 0.75 is also equal to 
(1/2 + 1/4), or to put it another way, (1 * 1/2 + 1 * 1/4) which can be 
written as 0.11 in base-2.

But the simple decimal number 0.1 cannot be written in an exact base-2 
form:

1/10 is smaller than 1/2, so base-2 0.1 is too big;
1/10 is smaller than 1/4, so base-2 0.01 is too big;
1/10 is smaller than 1/8, so base-2 0.001 is too big;

1/10 is bigger than 1/16, so base-2 0.0001 is too small;
1/10 is bigger than 1/16 + 1/32, so base-2 0.00011 is too small;
1/10 is smaller than 1/16 + 1/32 + 1/64, so base-2 0.000111 is too big;

likewise base-2 0.0001101 is too big (1/16 + 1/32 + 1/128);
base-2 0.00011001 is too small (1/16 + 1/32 + 1/256);
and so on.

What we actually need is the infinitely repeating binary number:

0.00011001100110011001100110011...

where the 0011s repeat forever. But we cannot do that, since floats only 
have a fixed number of bits. We have to stop the process somewhere, and 
get something a tiny bit too small:

0.0001100110011001100110

or a tiny bit too big:

0.0001100110011001100111

depending on exactly how many bits we have available.
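The digit-by-digit search above can be mechanized; here is a small
sketch that generates the binary expansion of 1/10 using exact Fraction
arithmetic (the helper name binary_digits is made up for illustration):

```python
from fractions import Fraction

def binary_digits(x, n):
    """Return the first n binary fraction digits of x, where 0 <= x < 1."""
    digits = []
    for _ in range(n):
        x *= 2                      # shift one binary place to the left
        if x >= 1:
            digits.append('1')
            x -= 1
        else:
            digits.append('0')
    return ''.join(digits)

# 1/10 in binary is 0.0001100110011... with '0011' repeating forever:
assert binary_digits(Fraction(1, 10), 20) == '00011001100110011001'
```

Because the intermediate values are Fractions, there is no rounding at
any step; the truncation only happens when we stop asking for digits.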

Is this inaccurate? Well, in the sense that it is not the exact true 
mathematical result, yes it is, but that term can be misleading if you 
think of it as a mistake. In another sense, it's not inaccurate, it is 
as accurate as possible (given the limitation of only having a certain 
fixed number of bits). 



-- 
Steve


Re: [Tutor] Fraction - differing interpretations for number and string - presentation

2015-04-16 Thread Wolfgang Maier

On 16.04.2015 19:24, Jim Mooney wrote:


Understood about the quondam inexactness of floating point bit
representation. I was just wondering why the different implementation of
representing it when using Fraction(float) as opposed to using
Fraction(string(float)).  In terms of user presentation, the string usage
has smaller numbers for the ratio, so it would be more understandable and
should, I assume, be chosen for GUI display.



The whole point of the discussion is that this is *not* a presentation 
issue. Fraction('1.64') and Fraction(1.64) *are* two different numbers
because one gets constructed from a value that is not quite 1.64.

What sense would it make for Fraction(1.64) to represent itself inexactly?
However, if you really want a shortened, but inexact answer: Fractions 
have a limit_denominator method that you could use like so:


>>> Fraction(1.64).limit_denominator(25)
Fraction(41, 25)
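limit_denominator is also the standard trick for recovering a "nice"
ratio from a float; for instance (using well-known values):

```python
import math
from fractions import Fraction

# Recover the intended 41/25 from the float's exact but ugly fraction:
assert Fraction(1.64).limit_denominator(100) == Fraction(41, 25)

# The classic example: the best rational approximation of pi with a
# denominator no larger than 1000 is 355/113.
assert Fraction(math.pi).limit_denominator(1000) == Fraction(355, 113)
```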




Re: [Tutor] Fraction - differing interpretations for number and string - presentation

2015-04-16 Thread Danny Yoo
On Apr 16, 2015 1:42 PM, Jim Mooney cybervigila...@gmail.com wrote:

 Understood about the quondam inexactness of floating point bit
 representation. I was just wondering why the different implementation of
 representing it when using Fraction(float) as opposed to using
 Fraction(string(float)).

Ah.  Correction.  You want to say: using Fraction(float) as opposed to
using Fraction(**string**).


Re: [Tutor] Fraction - differing interpretations for number and string - presentation

2015-04-16 Thread Jim Mooney

 Is this inaccurate? Well, in the sense that it is not the exact true
 mathematical result, yes it is, but that term can be misleading if you
 think of it as a mistake. In another sense, it's not inaccurate, it is
 as accurate as possible (given the limitation of only having a certain
 fixed number of bits).
 --
 Steve


---
Understood about the quondam inexactness of floating point bit
representation. I was just wondering why the different implementation of
representing it when using Fraction(float) as opposed to using
Fraction(string(float)).  In terms of user presentation, the string usage
has smaller numbers for the ratio, so it would be more understandable and
should, I assume, be chosen for GUI display.
-- 
Jim

The Paleo diet causes Albinism


Re: [Tutor] Fraction - differing interpretations for number and string - presentation

2015-04-16 Thread Jim Mooney
 The whole point of the discussion is that this is *not* a presentation
issue. Fraction('1.64') and Fraction(1.64) *are* two different numbers
because one gets constructed from a value that is not quite 1.64.

Wolfgang Maier
--
So the longer numerator and denominator would, indeed, be more accurate if
used in certain calculations rather than being normalized to a float - such
as in a Fortran subroutine or perhaps if exported to a machine with a
longer bit-length? That's mainly what I was interested in - if there is any
usable difference between the two results.

Jim

The Paleo diet causes Albinism


Re: [Tutor] Fraction - differing interpretations for number and string - presentation

2015-04-16 Thread Steven D'Aprano
On Thu, Apr 16, 2015 at 03:11:59PM -0700, Jim Mooney wrote:

 So the longer numerator and denominator would, indeed, be more accurate if
 used in certain calculations rather than being normalized to a float - such
 as in a Fortran subroutine or perhaps if exported to a machine with a
 longer bit-length? That's mainly what I was interested in - if there is any
 usable difference between the two results.

You're asking simple questions that have complicated answers which 
probably won't be what you are looking for. But let's try :-)

Let's calculate a number. The details don't matter:

x = some_calculation(a, b)
print(x)

which prints 1.64. Great, we have a number that is the closest possible 
base 2 float to the decimal 164/100. If we convert that float to a 
fraction *exactly*, we get:

py> Fraction(1.64)
Fraction(7385903388887613, 4503599627370496)

So the binary float which displays as 1.64 is *actually* equal to 
7385903388887613/4503599627370496, which is just a tiny bit less than 
the decimal 1.64:

py> Fraction(1.64) - Fraction('1.64')
Fraction(-11, 112589990684262400)

That's pretty close. The decimal value that normally displays as 1.64 is 
perhaps more accurately displayed as:

py> '%.23f' % 1.64
'1.63999999999999990230037'

but even that is not exact.

So which is the right answer? That depends.

(1) It could be that our initial calculation some_calculation(a, b) 
actually was 164/100, say, in which case the *correct* result should be 
decimal 1.64 = Fraction('1.64'), and the 7385blahblahblah/blahblahblah 
is just an artifact of the fact that floats are binary.

(2) Or it could be that the some_calculation(a, b) result actually was
738590337613/4503599627370496, say, in which case it is a mere 
coincidence that this number displays as 1.64 when treated as a binary 
float. The long 7385blahblahblah numerator and denominator is exactly 
correct, and decimal 1.64 is an approximation.

There is no way of telling in advance which interpretation is correct. 
You need to think about the calculation you performed and decide what it 
means, not just look at the final result.
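That ambiguity can be demonstrated directly: the two candidate exact
values are different fractions, yet they collapse to the very same
64-bit float (a quick sketch):

```python
from fractions import Fraction

exact_decimal = Fraction(41, 25)   # interpretation (1): really 164/100
exact_binary = Fraction(1.64)      # interpretation (2): the float's own value

# As exact rationals they differ...
assert exact_decimal != exact_binary

# ...but as floats they are indistinguishable, so the float result
# alone cannot tell you which one the calculation "meant":
assert float(exact_decimal) == float(exact_binary) == 1.64
```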

Although... 9 times out of 10, if you get something *close* to an exact 
decimal, a coincidence is not likely. If you get exactly 0.5 out of a 
calculation, rather than 0.500000000001, then it probably should be 
exactly 1/2. *Probably*. The reality is that very often, the difference 
isn't that important. The difference between

1.64 inches

and

1.63999999999999990230037 inches

is probably not going to matter, especially if you are cutting the 
timber with a chainsaw.



-- 
Steve


Re: [Tutor] Fraction - differing interpretations for number and string - presentation

2015-04-16 Thread Dave Angel

On 04/16/2015 01:24 PM, Jim Mooney wrote:


Is this inaccurate? Well, in the sense that it is not the exact true
mathematical result, yes it is, but that term can be misleading if you
think of it as a mistake. In another sense, it's not inaccurate, it is
as accurate as possible (given the limitation of only having a certain
fixed number of bits).
--
Steve



---
Understood about the quondam inexactness of floating point bit
representation. I was just wondering why the different implementation of
representing it when using Fraction(float) as opposed to using
Fraction(string(float)).


You didn't use str(float), you used a simple str.  So there was no 
quantization error since it was never converted to binary floating 
point.  If you have a number that happens to be an exact decimal number, 
don't convert it via float().  Either convert it via Decimal() or 
convert it directly to Fraction.
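In code, all of the exact routes agree with each other, while the float
route bakes in the quantization error first (a short check):

```python
from decimal import Decimal
from fractions import Fraction

# str -> Fraction, Decimal -> Fraction, and an integer ratio all agree:
assert (Fraction('1.64')
        == Fraction(Decimal('1.64'))
        == Fraction(164, 100)
        == Fraction(41, 25))

# Going through float() first captures the binary approximation instead:
assert Fraction(1.64) != Fraction(41, 25)
```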



  In terms of user presentation, the string usage

has smaller numbers for the ratio, so it would be more understandable and
should, I assume, be chosen for GUI display.



Presumably you didn't read my message.  When you use a literal 1.64 in 
your code, you're telling the compiler to call the float() function on 
the token.  You've already forced the quantization error.  Nothing to do 
with the fraction class.


--
DaveA