On 9/7/05, Kon Lovett <[EMAIL PROTECTED]> wrote:
> There isn't an intrinsic signed vs. unsigned property for numbers. So
> how can (u32vector-set! ...) accept 0 to (2^32)-1 when no way to
> distinguish from -(2^31) to (2^31)-1 by the bit-pattern of the value
> alone?
Since fixnums on 32-bit machines cannot cover the whole numeric range, we have to accept floating-point/inexact numbers.

cheers,
felix

_______________________________________________
Chicken-users mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/chicken-users
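To illustrate the point above: on a 32-bit build of Chicken, a fixnum has fewer than 32 usable bits (some bits go to the tag), so an exact literal near (2^32)-1 cannot be a fixnum, but an IEEE double can represent every integer up to 2^53 exactly, so the value can be supplied inexactly. A minimal sketch (assuming SRFI-4 is available via `(use srfi-4)` as in Chicken of that era):

```scheme
;; Sketch, not a definitive implementation: stores both a small exact
;; value and a large value passed as an exact-integer-valued flonum.
(use srfi-4)

(define v (make-u32vector 2))

;; Small values fit comfortably in a fixnum:
(u32vector-set! v 0 42)

;; (2^32)-1 exceeds the 32-bit fixnum range, so on a 32-bit build
;; it is passed as an inexact number; the flonum 4294967295.0 holds
;; the integer exactly, so no precision is lost:
(u32vector-set! v 1 4294967295.0)
```

Reading the second slot back with `u32vector-ref` likewise has to produce the value in whatever representation can hold it on that platform, which is why the signed/unsigned distinction lives in the numeric value, not in the stored 32-bit pattern.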
