2017-11-10 20:58 GMT+01:00 Martin McClure <mar...@hand2mouse.com>:

> On 11/10/2017 11:33 AM, raffaello.giulie...@lifeware.ch wrote:
>
>> Doing only Fraction->Float conversions in mixed mode won't preserve = as
>> an equivalence relation and won't enable a consistent ordering with <=,
>> which probably most Smalltalkers consider important and enjoyable
>> properties.
>>
> Good point. I agree that Float -> Fraction is the more desirable mode for
> implicit conversion, since it can always be done without changing the value.
>
>> Nicolas gave some convincing examples on why most
>> programmers might want to rely on them.
>>
>>
>> Also, as I mentioned, most Smalltalkers might prefer keeping away from
>> the complex properties of Floats. Doing automatic, implicit
>> Fraction->Float conversions behind the scenes only exacerbates the
>> probability of encountering Floats and of having to deal with their
>> weird and unfamiliar arithmetic.
>>
> One problem is that we make it easy to create Floats in source code (0.1),
> and we print Floats in a nice decimal format but by default print Fractions
> in their reduced fractional form. If we didn't do this, Smalltalkers might
> not be working with Floats in the first place, and if they did not have any
> Floats in their computation they would never run into an implicit
> conversion to *or* from Float.
>
> As it is, if we were to uniformly do Float -> Fraction conversion on
> mixed-mode operations, we would get things like
>
> (0.1 * (1/1)) printString                       -->
> '3602879701896397/36028797018963968'
>
> Not incredibly friendly.
>

For those not practicing litotes: definitely a no-go.


> Regards,
> -Martin
>
>
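To make the quoted equivalence-relation point concrete (my own sketch, assuming a mixed-mode #= that rounds the Fraction to the nearest Float before comparing, and reusing Martin's fraction):

(1/10) = 0.1                                  --> true
(3602879701896397/36028797018963968) = 0.1    --> true
(1/10) = (3602879701896397/36028797018963968) --> false

So #= would stop being transitive, and sorting mixed collections with #<= would be equally shaky.
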
At the risk of repeating myself, a single, uniform conversion choice for all operations is a nice-to-have, but not a goal per se.

I mostly agree with Florin: having 0.1 represent a decimal rather than a Float might be a better path, one that meets more expectations.

But thinking that it will magically eradicate the problems is a myth.

Shall precision be limited, or shall we use ScaledDecimals?

With limited precision, we'll be back to having several Fractions converting to the same LimitedDecimal, so it won't solve anything with respect to the original problem.
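
That's the same many-to-one mapping we already get with #asFloat today, just in another radix; a LimitedDecimal would simply pick different victims:

(1/10) asFloat = (3602879701896397/36028797018963968) asFloat --> true
(1/10) = (3602879701896397/36028797018963968)                 --> false
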
With unlimited precision, we'll have the bad property that what we print does not re-interpret to the same ScaledDecimal (0.1s / 0.3s), but that's a detail.
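
For example, assuming ScaledDecimals keep the exact Fraction but print it rounded to their scale (as Squeak/Pharo do):

(0.1s2 / 0.3s2) printString --> '0.33s2'
0.33s2 = (0.1s2 / 0.3s2)    --> false

The division result is exactly 1/3, so the printed '0.33s2' re-reads to a different number.
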
The worse thing is that long chains of operations will tend to produce monster numerators and denominators.
And we will all have long chains when resizing a morph with proportional layout in scalable graphics.
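
A deliberately extreme but tiny sketch of that growth, with plain Fraction arithmetic squaring the denominator at each step:

| x |
x := 1/3.
10 timesRepeat: [x := x * x + 1].
x denominator printString size --> 489

Ten exact operations and the denominator already has nearly five hundred digits; a resize loop won't square at each step, but the denominators still multiply up unless something cancels or rounds.
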
We will then have to insert rounding operations manually to mitigate the problem, and somehow reinvent a more inconvenient Float...
It's boring to always play the role of Cassandra, but why do you think Scheme and Lisp did not choose that path?
