2014-07-12 13:42 GMT+02:00 Andres Valloud <avall...@smalltalk.comcastbiz.net>:

> Being consistent then means either abandoning 1/2 = 0.5, but we saw this
>> has nasty side effects.
>>
>
> Well ok, so letting 1 / 2 = 0.5 answer false has nasty side effects. Why
> is that?  Isn't the code that breaks trying to tell us something? That's
> what I've been trying to point out.  Why rush to defend numerically
> unstable code?  What are we going to preserve, and what will be the example
> given to others?
>
> Andres.
>
>
What apparently breaks Pharo graphics is letting (1 comparisonOp: 1.0) answer
false in Float>>adaptToInteger:andCompare:
(maybe my mistake, I should have answered ^ operator = #~= rather than ^ false).
At this point, we can't tell that the code is numerically unstable.
Maybe the Floats are very well formed and sufficiently far away from integers,
or exactly equal to integers...
Only if we see off-by-one errors can we conclude that there are design mistakes.
Or if we perform a deeper analysis, but that is not going to be easy.
To know where it happens, we would have to instrument the code (there is no
static typing).

It's not uninteresting,
but it would waste a lot of time on a problem we don't yet have...
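
For reference, a minimal sketch (hypothetical, not the shipped Pharo method) of
the variant described above, where every mixed Integer-vs-Float comparison is
treated as unordered; the parenthesized refinement answers true only for #~=:

    Float >> adaptToInteger: rcvr andCompare: operator
        "rcvr is the Integer operand, operator the comparison selector
         (#=, #<, #~=, ...). The experiment answered ^ false here for every
         operator; answering operator = #~= at least keeps 1 ~= 1.0 true."
        ^ operator = #~=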




>  Or denying the generality of Dictionary: not all objects can be used as
>> keys...
>> That's a possible choice, but IMO it would generate bad feedback from
>> customers.
>> Speaking of consistency, I strongly believe that Squeak/Pharo are on the
>> right track.
>>
>> OK, we cannot magically erase the inexactness of floating-point operations.
>> It is there on purpose, for the sake of speed and memory-footprint optimization.
>> Exceptional values like NaN and Inf add a great deal of complexity, and I'd
>> always have preferred exceptions to exceptional values...
>> But when we can preserve some invariants, we'd better preserve them.
>> Once again, I did not invent anything; that's the approach of the Lispers,
>> and it seems wise.
>>
>> And one last thing: I like your argumentation, it's very logical, so if you
>> have more, you're welcome,
>> but I have pretty much exhausted mine ;)
>>
>> Nicolas
>>
>>
>>         If we can maintain the invariant with a pair of double-dispatching
>>         methods and a coordinated hash, why shouldn't we?
>>         Why did the Lispers do it? (Schemers too.)
>>
>>         For me it's like saying: "since floats are inexact, we have a
>>         license to waste ulps".
>>         We do not. The IEEE 754 model insists that operations be exactly
>>         rounded.
>>         These are painful contortions too, but most useful!
>>         Extending the contortion to exact comparison sounds like a natural
>>         extension to me; the main difference is that we do not round true
>>         or false to the nearest float, so it's even nicer!
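
As an illustration of the "coordinated hash" part, a sketch (not the shipped
method; it assumes Squeak-style selectors such as isPowerOfTwo and
asTrueFraction): a Fraction that is exactly equal to some Float must answer
that Float's hash, so that x = y still implies x hash = y hash:

    Fraction >> hash
        "A Fraction with a power-of-two denominator may be exactly equal to a
         Float; answer that Float's hash in that case.  Otherwise no Float can
         be equal to me under exact comparison, so any stable mixing works."
        | float |
        denominator isPowerOfTwo ifTrue: [
            float := self asFloat.
            (float isFinite and: [float asTrueFraction = self])
                ifTrue: [^ float hash]].
        ^ numerator hash bitXor: denominator hash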
>>
>>
>>              On 7/11/14 17:19, Nicolas Cellier wrote:
>>
>>
>>
>>
>>                  2014-07-12 1:29 GMT+02:00 Andres Valloud
>>                  <avalloud@smalltalk.comcastbiz.net>:
>>
>>
>>                           I don't think it makes sense to compare floating
>>                           point numbers to other types of numbers with #=...
>>                           there's a world of approximations and other
>>                           factors hiding behind #=, and the occasional true
>>                           answer confuses more than it helps.  On top of
>>                           that, then you get x = y => x hash = y hash, and
>>                           so the hash of floating point values "has" to be
>>                           synchronized with integers, fractions, scaled
>>                           decimals, etc... _what a mess_...
>>
>>
>>                           Yes, that's true, hash gets more complex.
>>                           But then, this has been discussed before:
>>
>>                           {1/2 < 0.5. 1/2 = 0.5. 1/2 > 0.5} -> #(false false false).
>>
>>                           IOW, they are unordered.
>>                           Are we ready to lose the ordering of numbers?
>>                           Practically, this would have a big impact on the
>>                           code base.
>>
>>
>>                       IME, that's because loose code appears to work.
>>                       What enables that loose code to work is the loose
>>                       mixed-mode arithmetic.  I could understand integers
>>                       and fractions.  Adding floating point to the mix
>>                       stops making as much sense to me.
>>
>>                       Equality between floating point numbers does make
>>                       sense.  Equality between floating point numbers and
>>                       scaled decimals or fractions... in general, I don't
>>                       see how they could make sense.  I'd rather see the
>>                       scaled decimals and fractions explicitly converted to
>>                       floating point numbers, following a well defined
>>                       procedure, and then compared...
>>
>>                       Andres.
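
For illustration, that explicit-conversion style would look like this in a
workspace (assuming current Squeak/Pharo semantics, where mixed comparison is
exact):

    (1/10) asFloat = 0.1.    "true: both sides are the same rounded Float"
    (1/10) = 0.1.            "false: exact comparison, no rounding at the call site"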
>>
>>
>>                  Why do such mixed arithmetic comparisons make sense?
>>                  Maybe we used floating point in some low-level graphics
>>                  code for optimization reasons.
>>                  After these optimized operations we get a Float result by
>>                  contagion, but our intention is still to handle Numbers.
>>                  It would be possible to riddle the code with explicit
>>                  asFloat/asFraction conversions, but that does not feel
>>                  like a superior solution...
>>
>>                  OK, we could as well convert to inexact first before
>>                  comparing, for this purpose.
>>                  That's what C does, because C is too low level to ever
>>                  care about transitivity and equivalence relations.
>>                  It's not even safe in C, because the compiler can decide
>>                  to promote to a larger precision behind your back...
>>                  But let's ignore this "feature" and see what the Lispers
>>                  recommend instead:
>>
>>                  http://www.lispworks.com/documentation/lcl50/aug/aug-170.html
>>
>>
>>                  It says:
>>
>>                  In general, when an operation involves both a rational and
>>                  a floating-point argument, the rational number is first
>>                  converted to floating-point format, and then the operation
>>                  is performed. This conversion process is called
>>                  /floating-point contagion/
>>
>>                  <http://www.lispworks.com/reference/lcl50/aug/aug-193.html#MARKER-9-47>.
>>
>>                  However, for numerical equality comparisons, the arguments
>>                  are compared using rational arithmetic to ensure
>>                  transitivity of the equality (or inequality) relation.
>>
>>                  So my POV is not very new; it's an old thing.
>>                  It's also a well-defined procedure, and somehow better to
>>                  my taste because it preserves more mathematical properties.
>>                  If Smalltalk wants to be a better Lisp, maybe it should not
>>                  constantly ignore Lisp wisdom ;)
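
To make the transitivity argument concrete, a small workspace example
(assuming Squeak-style Float>>asTrueFraction, which answers the exact rational
value of a Float, and the exact mixed comparison discussed in this thread):

    | exact |
    exact := 0.1 asTrueFraction.    "3602879701896397/36028797018963968"
    0.1 = exact.                    "true: the very same exact value"
    (1/10) = exact.                 "false: 1/10 is not representable as a Float"
    (1/10) = 0.1.                   "false with rational comparison; under
                                     floating-point contagion it would answer
                                     true, breaking transitivity with the two
                                     results above"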
>>
>>                  We can of course argue about the utility of transitivity...
>>                  As a general library we provide tools like Dictionary that
>>                  rely on transitivity.
>>                  You can't tell how those Dictionaries will be used in real
>>                  applications, so my rule of thumb is the principle of least
>>                  astonishment.
>>                  I got bitten once by such a transitivity issue while
>>                  memoizing... I switched to a better strategy with double
>>                  indirection as a workaround: class -> value -> result, but
>>                  it was surprising.
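
A sketch of that class -> value -> result double indirection (the names here
are illustrative, not from the original code): keying the outer lookup on the
argument's class keeps, say, 1/2 and 0.5 in separate inner dictionaries even
though they answer true to #= and share the same hash:

    | cache memoized |
    cache := IdentityDictionary new.    "class -> (value -> result)"
    memoized := [:function :argument |
        (cache at: argument class ifAbsentPut: [Dictionary new])
            at: argument ifAbsentPut: [function value: argument]].
    memoized value: [:n | n * n] value: 0.5.    "stored under Float"
    memoized value: [:n | n * n] value: 1/2.    "stored separately under Fraction"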
>>
>>
>>                           I'm pretty sure a Squeak/Pharo image wouldn't
>>                           survive such a change for long
>>                           (well, I tried it: the Pharo 3.0 image survives,
>>                           but graphics are badly broken, as I expected).
>>
>>                           That's always what made me favour casual equality
>>                           over universal inequality.
>>
>>                           Also, should 0.1 = 0.1 ? In case those two floats
>>                           have been produced by different paths, with
>>                           different approximations, they might not be equal...
>>                           (| a b | a := 0.1. b := 1.0e-20. a+b=a.)
>>                           I prefer casual equality there too.
>>                           The two mathematical expressions a+b and a are
>>                           different, but both floating-point expressions
>>                           share the same floating-point approximation;
>>                           that's all that really counts because, in the end,
>>                           we cannot distinguish an exact from an inexact
>>                           Float, nor two inexact Floats.
>>                           We have lost the history...
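
A quick workspace check of that absorption (assuming Float>>ulp, which answers
the gap to the next representable Float): 1.0e-20 is far below half an ulp of
0.1, so the sum rounds back to exactly the same Float:

    | a b |
    a := 0.1.
    b := 1.0e-20.
    a + b = a.    "true: b is entirely absorbed by the rounding"
    a ulp.        "about 1.4e-17, several orders of magnitude larger than b"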
>>
>>                           Also, the inexact flag is not attached to a Float;
>>                           it's only the result of an operation.
>>                           Statistically, it would waste one bit for nothing;
>>                           most floats are the result of an inexact operation.
>>                           But who knows, both might be the result of exact
>>                           operations too ;)
>>
>>
>>
>>                                On 7/11/14 10:46, stepharo wrote:
>>
>>                                    I suggest you read the Small number
>>                                    chapter of Deep into Pharo.
>>
>>                                    Stef
>>
>>                                    On 11/7/14 15:53, Natalia Tymchuk wrote:
>>
>>                                        Hello.
>>                                          I found an interesting thing:
>>                                        Why is it like this?
>>
>>                                        Best regards,
>>                                        Natalia
>
