On 2009-03-24, at 18:11, John Cowan wrote:

>> in a Philip K Dick story who discovers that not only is the universe
>> unreal, but so too is the character him/herself.
>
> If he's not real, how can he know if the universe is real or not?

That is one of the major difficulties with reading Dick. I think that if one can actually answer such questions, the question ceases to exist. (Fortunately, my head explodes long before I get to that point :-)
>> (remember, the internal representation of C integers are not specified,
>> somebody might build a replica of an IBM 7094, and we might have
>> sign-magnitude integers again!).
>
> C integers can be represented however you like (even as sign-magnitude
> bignums) but in certain ways must act *as if* they were two's complement
> binary representations: things like casting from signed to unsigned types
> and shifting, e.g.

Actually, not. (To be truthful, I have read the C89 standard carefully, but the C99 standard much less so; for all I know, this might have changed, but I doubt it.) For example, the standard defines shifting of unsigned integers, but it does not define what happens when a negative (signed) integer is shifted. I have written code that depends upon that, and I very much doubt that somebody will ever resurrect a Univac 1108, with ones'-complement arithmetic, to prove me wrong. But technically, the C standard does not define it.

An even worse case is signed division, where the choice of remainder versus modulus is basically made by a hardware architect who may or may not have ever thought about which of the two is better. Again, C does not define signed `remainder'. (On this point, the British mathematician/computer scientist Brian Wichmann published an article in SIGPLAN Notices back in the 70s: he sent an arithmetic test around to his friends, asking them to evaluate things like 20/-3 and -17/2. He got some variation in the quotient answers, and a huge variation in the remainder answers, many of them using modulus, sometimes with rather odd rules about the sign of the result. Given that these answers get embedded into machine architectures, it is not a bad plan for the standard of a language that is supposed to be `efficient' to stay silent on which choice is made.)

This raises an interesting point. Perhaps there are language standards that have no implementation-defined behavior; I have certainly never seen one.
In reality, the language a programmer uses is a combination of the standard and common practice. A C programmer might be using some implementations on which ints are 16 bits and others on which they are 32 or 64; but every modern C implementation offers 8, 16, 32, and 64 as the set of available integer sizes. It is reasonable for a programmer not to worry about the case where an implementation provides 9, 18, 36, and 72 instead.

The obsession with defining exactly what R6RS `means', in the absence of saying what implementations actually do, detracts, I think, from the principal job of a standard, which is to establish some sort of consensus that programmers can use in ascribing meaning to programs.

Regarding string->number, my vote is therefore for an erratum clarifying that if string is not a string, the result is #f. This does not break the `doesn't raise an exception' statement, and is therefore the minimal change one can make. But I can live with other definitions.

-- vincent

_______________________________________________
r6rs-discuss mailing list
[email protected]
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss
