On Mon, Jan 22, 2007 at 08:47:22PM -0800, Darren Duncan wrote:
: At 5:56 PM -0800 1/22/07, Larry Wall wrote:
: >Whether a Num that happens to be an integer prints out with .0 is a
: >separate issue.  My bias is that a Num pretend to be an integer when
: >it can.  I think most folks (including mathematicians) think that
: >the integer 1 and the distance from 0 to 1 on the real number line
: >happen to be the same number most of the time.  And Perl is not about
: >forcing a type-theoretical viewpoint on the user...
: 
: Up front, I will say that all this stuff about 1 vs 1.0 won't matter 
: at all if the Int type is an actual subset of the Num type (but whose 
: implementation is system-recognized and optimized), meaning that Int 
: and Num are not disjoint, as "most folks" usually expect to be the 
: case, such that, eg, 1 === 1.0 returns true.
: 
: Of course if we did that, then dealing with Int will have a number of 
: the same implementation issues as with dealing with "subset"-declared 
: types in general.
: 
: I don't know yet whether it was decided one way or the other.

For various practical reasons I don't think we can treat Int as a
subset of Num, especially if Num is representing any of several
approximating types that may or may not have the "headroom" for
arbitrary integer math, or that lose low bits in the process of
gaining high bits.  It would be possible to make Int a subset of Rat
(assuming Rat is implemented as Int/Int), but I don't think Rats are
very practical either for most applications.  It is unlikely that the
universe uses Rats to calculate QM interactions.
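The headroom problem is easy to demonstrate with any hardware float
type.  A sketch in Python, whose 64-bit `float` stands in here for a
float-backed Num (an analogy only, not Perl 6 itself):

```python
# Above 2**53, a 64-bit IEEE float can no longer represent every
# integer: it gains high bits only by dropping low ones, so
# arbitrary integer math stops being exact.
big = 2 ** 53
print(big == big + 1)                # False -- exact integer math
print(float(big) == float(big + 1))  # True  -- the low bit is lost
```

Which is exactly why an Int that silently became such a Num could not
keep its arbitrary-precision guarantee.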

: Whereas, if Int is not an actual subset of Num, and so their values 
: are disjoint, then ...

Then 1.0 == 1 !=== 1.0.  I'm fine with that.
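A similar split shows up in Python, if one reads `==` as numeric
equality and a type check as a rough (and only rough) stand-in for
`===`:

```python
# 1 and 1.0 compare numerically equal, yet remain distinguishable values.
a, b = 1, 1.0
print(a == b)              # True  -- numeric equality holds
print(type(a) is type(b))  # False -- but they are not the same value
```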

: FYI, my comment about a stringified Num having a .0 for 
: round-tripping was meant to concern Perl code generation in 
: particular, such as what .perl() does, but it was brought up in a 
: more generic way, to stringification in general, for an attempt at 
: some consistency.

It seems like an unnecessary consistency to me.  The .perl method
is intended to provide a human-readable, round-trippable form of
serialization, and as a form of serialization it is required to
capture all the information so that the original data structure
can be recreated exactly.  This policy is something the type should
have little say in.  At most it should have a say in *which* way to
canonicalize all the data.

The purpose of stringification, on the other hand, is whatever the type
wants it to be, and it is specifically allowed to lose information,
as long as the remaining string is suggestive to a human reader of how
to reconstruct the information in question (or that the information is
none of their business).  A document object could decide to stringify
to a URI, for instance.  An imported Perl 5 code reference could
decide to stringify to CODE(0xdeadbeef).  :)
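Python happens to draw the same line between its two conversion hooks;
treating `repr` as a loose parallel to `.perl` and `str` as a parallel
to stringification (an analogy, not an equivalence):

```python
# repr aims to be a round-trippable, eval-able serialization;
# str is whatever the type wants and may lose information.
import datetime

d = datetime.date(2007, 1, 22)
print(repr(d))  # datetime.date(2007, 1, 22) -- reconstructs the value
print(str(d))   # 2007-01-22                 -- human-oriented, lossy
assert eval(repr(d)) == d
```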

: Whether or not that is an issue depends really on whether we consider 
: the literal 1.0 in Perl code to be an Int or a Num.  If, when we 
: parse Perl, we decide that 1.0 is a Num rather than an Int such as 1 
: would be, then a .perl() invoked on a Num of value 1.0 should return 
: 1.0 also, so that executing that code once again produces the Num we 
: started with rather than an Int.

I think the intent of 1.0 in Perl code is clearly more Numish than
Intish, so I'm with you there.  So I'm fine with Num(1).perl coming
out simply as "1.0" without further type annotation.  But ~1.0 is
allowed to say "1" if that's how Num likes to stringify.
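The same point can be made with Python floats standing in for Num:
keeping the ".0" in the serialized form is what preserves the type on
the way back in, while the lossy stringification is free to drop it.

```python
# Dropping the ".0" would change the type of the reconstructed value.
x = 1.0
print(repr(x))                        # 1.0 -- keeps the point
print(type(eval(repr(x))) is float)   # True  -- round-trips as a float
print(type(eval("1")) is float)       # False -- would come back an int
```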

: On the other hand, if .perl() produces some long-hand like "Int(1)" 
: or "Num(1)", then it won't matter whether it is "Num(1)" or 
: "Num(1.0)" etc.

Well, mostly, unless we consider that Num(1.0) might have to wait
till run time to know what conversion to Num actually means, if Num
is sufficiently delegational....  But I think the compiler can probably
require a tighter definition of basic types for optimization purposes,
at least by default.

Larry
