Re: "cannot coerce inexact literal to fixnum"
On 2024-02-10 16:39, Peter Bex wrote:
> Again, these could still be bignums when run on a 32-bit platform
> (that's why the return type of s32vector-ref is defined as "integer"
> and not "fixnum")

Hm... does this mean that using s32vector-ref and -set! always incurs
typecheck costs, no matter how I type-declare the identifiers passed to
and from them? And, furthermore, that if I replace these with
custom-written (foreign-primitive ...)s having integer32 arguments and
returns, those would also incur typechecks, and thus never be as
efficient as C?

I've generally taken to avoiding the s32vector-ref/-set! builtins.
Instead I mostly keep data in SRFI-4 vectors and manipulate it with
C-backed foreign primitives, without ever extracting to fixnums. For
example, I have things like (s32vector-move1! vec src-idx dst-idx),
which copies an element from src-idx to dst-idx. But even for these, I
wonder what type the indexes should be declared as to minimize
typechecks at the interface between C and Chicken, and perhaps even how
to make them inlinable into simple C array operations.

Best, Al
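[For illustration, a minimal sketch of the kind of helper described
above, assuming CHICKEN 5's (chicken foreign) module and a
foreign-lambda* rather than a full foreign-primitive; s32vector-move1!
is the poster's own name, and the plain int index type and lack of
bounds checking are assumptions:

(import (chicken foreign))

;; Copy one element of an s32vector from src-idx to dst-idx entirely
;; on the C side, so no element value is ever extracted to Scheme.
(define s32vector-move1!
  (foreign-lambda* void ((s32vector v) (int src) (int dst))
    "v[dst] = v[src];"))
]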
Re: "cannot coerce inexact literal to fixnum"
On Sat, Feb 10, 2024 at 04:07:12PM +0200, Al wrote:
> On 2024-02-10 15:38, Peter Bex wrote:
> > That's because you're using fixnum mode. As I explained, using
> > literals that might be too large for fixnums breaks the fixnum
> > mode's premise that everything must be a fixnum.
>
> Oh. So it refuses to emit C code that might break on 32-bit at runtime
> (silent truncation of the atoi result, presumably), preferring instead
> to definitely break during Scheme compilation on any platform. OK, I
> get the rationale.

Yup. Like I said, it might be doable to add some sort of mode where it
could be used to compile in fixnum mode exclusively for 64-bit
platforms, but someone would have to spend the time to do so.

> > That's because string->number gets constant-folded and evaluated at
> > compile-time.
>
> Obviously; and I suppose there's no simple way to prevent that for
> just one line, rather than the entire unit?

Not really. The foreign-value you've been using is a good hack because
it performs exactly the conversion that you're looking for under the
hood. It also forms an optimization barrier, so the constant can't be
precalculated.

Note that foreign-value can still return bignums on 32-bit platforms if
the value doesn't fit in a fixnum (this is done by
C_unsigned_int_to_num in chicken.h) and you're using integer32. The
int32 version doesn't do this and assumes it'll just fit into a fixnum.

> I did mention twice that I'm using them to implement int->u32.

That should only be needed if you generated values bigger than 32 bits.
Again, these could still be bignums when run on a 32-bit platform
(that's why the return type of s32vector-ref is defined as "integer"
and not "fixnum").

> There are also places where I need to increment-and-wrap int32's by
> INT32_MIN. I'm writing a Forth compiler incidentally (which may or may
> not have been a good idea). I store values in s32vectors, but they get
> turned into Scheme numbers by s32vector-ref. I guess I'd prefer native
> s32int/u32int types, complete with wrap-around and meaningful
> conversion to Scheme ints, but I don't think that exists.

Unfortunately, no.

Another way of accomplishing what you want is to use the fx operations
directly. AFAICT those aren't constant-folded. So something like this
should work:

(define (->32bit x)
  (fxand (- (fxshl 1 32) 1) x))

Alternatively, put the conversion code (including big-fixnum literals)
in a separate Scheme file which is compiled *without* fixnum mode, and
call it from the file that is compiled in fixnum mode.

Cheers,
Peter
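[A minimal sketch of that last suggestion, under the assumption that
the two files are compiled as separate units; the file, unit, and
procedure names here are made up for illustration:

;; wrap32.scm -- compile WITHOUT fixnum mode, e.g.:
;;   csc -c -unit wrap32 wrap32.scm
(declare (unit wrap32))
(import (chicken bitwise))
;; The big literal is fine here; it may become a bignum on 32-bit.
(define (->u32 x) (bitwise-and x #xffffffff))

;; main.scm -- compiled in fixnum mode, calling into the other unit:
;;   csc main.scm wrap32.o
(declare (uses wrap32) (fixnum))
(display (->u32 -1))   ; => 4294967295
(newline)
]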
Re: "cannot coerce inexact literal to fixnum"
On 2024-02-10 15:38, Peter Bex wrote:
> That's because you're using fixnum mode. As I explained, using
> literals that might be too large for fixnums breaks the fixnum mode's
> premise that everything must be a fixnum.

Oh. So it refuses to emit C code that might break on 32-bit at runtime
(silent truncation of the atoi result, presumably), preferring instead
to definitely break during Scheme compilation on any platform. OK, I
get the rationale.

> That's because string->number gets constant-folded and evaluated at
> compile-time.

Obviously; and I suppose there's no simple way to prevent that for just
one line, rather than the entire unit?

> It would help if you tried to explain exactly _what_ you're trying to
> do here, instead of _how_ you're trying to do it. Why do you need
> these constants?

I did mention twice that I'm using them to implement int->u32. There
are also places where I need to increment-and-wrap int32's by
INT32_MIN. I'm writing a Forth compiler incidentally (which may or may
not have been a good idea). I store values in s32vectors, but they get
turned into Scheme numbers by s32vector-ref. I guess I'd prefer native
s32int/u32int types, complete with wrap-around and meaningful
conversion to Scheme ints, but I don't think that exists.

> It does that (more or less), as I explained. And it *wouldn't* work,

Yeah, I understand why now. I suppose the best way is to use foreign
values. Maybe I should switch the arithmetic code to C too. Thanks.

-- Al
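[One way to get the wrap-around semantics described here is to do the
arithmetic on the C side. A minimal sketch, assuming CHICKEN 5; s32+ is
a made-up name, and the unsigned casts avoid signed-overflow undefined
behaviour in C:

(import (chicken foreign))

;; Add two 32-bit values with two's-complement wrap-around, computed
;; in C; the integer32 result may still be a bignum on 32-bit builds.
(define s32+
  (foreign-lambda* integer32 ((integer32 a) (integer32 b))
    "C_return((int32_t)((uint32_t)a + (uint32_t)b));"))

;; (s32+ INT32_MAX 1) => -2147483648
]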
Re: "cannot coerce inexact literal to fixnum"
On Sat, Feb 10, 2024 at 02:32:16PM +0200, Al wrote:
> That would be fine but where does that happen? csc actually barfs on
> my Scheme code (as per the subject line), instead of emitting C code
> to encode/decode into a string at runtime, as you mention.

That's because you're using fixnum mode. As I explained, using literals
that might be too large for fixnums breaks the fixnum mode's premise
that everything must be a fixnum.

> It won't even let me use string->number by hand.

That's because string->number gets constant-folded and evaluated at
compile-time.

> The only thing that worked was
>
> (cond-expand
>   (csi
>    (define INT32_MAX #x7fffffff)
>    (define INT32_MIN #x-80000000)
>    (define UINT32_MAX #xffffffff))
>   (else
>    ; chicken csc only does 31-bit literals in fixnum mode
>    (define INT32_MIN (foreign-value "((int32_t) 0x80000000)" integer32))
>    (define INT32_MAX (foreign-value "((int32_t) 0x7fffffff)" integer32))
>    (define UINT32_MAX (foreign-value "((uint32_t) 0xffffffff)" unsigned-integer32))))

It would help if you tried to explain exactly _what_ you're trying to
do here, instead of _how_ you're trying to do it. Why do you need these
constants?

> ... and I'm not sure what the implications of using a "foreign value"
> further down in my program are. If I assign them to another variable,
> does that variable also become a "foreign value"?

A foreign value is simply a value that gets calculated using the FFI.
The value itself, once calculated, won't be "special" in any way. It's
just another fixnum.

> How about if I do (bitwise-and IMAX32 int) to truncate a signed number
> to unsigned32 (which is what I'm actually using them for)?

Again, what are you trying to accomplish?

> > There's (currently) no option to force fixnum mode in a way that
> > ignores the existence of 32-bit platforms. Theoretically, it should
> > be possible to compile your code assuming fixnums (so it emits C
> > integer literals) and make it barf at compilation time if one tried
> > to build for a 32-bit platform using a #ifdef or something. We just
> > don't have the required code to do this, and I'm not sure this is
> > something we'd all want.
>
> Well if csc emitted string->number code in fixnum mode when necessary,
> that would at least work.

It does that (more or less), as I explained. And it *wouldn't* work,
because it can't make the assumption it won't be compiled on a system
that's 32 bits.

Cheers,
Peter
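[A small sketch of that point, assuming the UINT32_MAX definition from
Al's snippet above, in a file compiled without fixnum mode:

(import (chicken bitwise))

;; The value computed via foreign-value is an ordinary number: it can
;; be rebound and used with bitwise-and like any other integer.
(define u32-mask UINT32_MAX)
(display (bitwise-and u32-mask -1))   ; => 4294967295
(newline)
]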
Re: "cannot coerce inexact literal to fixnum"
On 2024-02-10 13:00, Peter Bex wrote:
> These so-called "big-fixnums" are compiled into a string literal which
> gets decoded on-the-fly at runtime into either a fixnum (on 64-bit) or
> a bignum (on 32-bit).

That would be fine, but where does that happen? csc actually barfs on
my Scheme code (as per the subject line), instead of emitting C code to
encode/decode into a string at runtime, as you mention. It won't even
let me use string->number by hand. The only thing that worked was

(cond-expand
  (csi
   (define INT32_MAX #x7fffffff)
   (define INT32_MIN #x-80000000)
   (define UINT32_MAX #xffffffff))
  (else
   ; chicken csc only does 31-bit literals in fixnum mode
   (define INT32_MIN (foreign-value "((int32_t) 0x80000000)" integer32))
   (define INT32_MAX (foreign-value "((int32_t) 0x7fffffff)" integer32))
   (define UINT32_MAX (foreign-value "((uint32_t) 0xffffffff)" unsigned-integer32))))

... and I'm not sure what the implications of using a "foreign value"
further down in my program are. If I assign them to another variable,
does that variable also become a "foreign value"? How about if I do
(bitwise-and IMAX32 int) to truncate a signed number to unsigned32
(which is what I'm actually using them for)?

> There's (currently) no option to force fixnum mode in a way that
> ignores the existence of 32-bit platforms. Theoretically, it should be
> possible to compile your code assuming fixnums (so it emits C integer
> literals) and make it barf at compilation time if one tried to build
> for a 32-bit platform using a #ifdef or something. We just don't have
> the required code to do this, and I'm not sure this is something we'd
> all want.

Well if csc emitted string->number code in fixnum mode when necessary,
that would at least work. Although if I'm using fixnum mode, I'm
probably looking for performance, and I'm not sure the subsequent C
compiler is smart enough to optimize the "atoi" or whatever away into a
constant. Maybe it is nowadays.

Otherwise, how do I write Scheme code to truncate a signed number to
unsigned32? Resort to foreign values as I did above (or write foreign
functions)?

-- Al
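[One possible answer to that closing question, sketched as a foreign
function, assuming CHICKEN 5; int->u32 follows the thread's naming, and
the integer64 argument type is an assumption made to cover the full
signed range:

(import (chicken foreign))

;; Truncate a signed integer to its unsigned 32-bit representation on
;; the C side; no big-fixnum literal is needed in the Scheme source.
(define int->u32
  (foreign-lambda* unsigned-integer32 ((integer64 n))
    "C_return((uint32_t)n);"))

;; (int->u32 -1) => 4294967295
]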
Re: "cannot coerce inexact literal to fixnum"
On Sat, Feb 10, 2024 at 08:12:38AM +0200, Al wrote:
> On 2024-02-10 02:42, Al wrote:
> > ... if I enable fixnum, csc chokes on both the third and fourth
> > displays with: "Error: cannot coerce inexact literal `2147483647'
> > to fixnum". It compiles and runs fine if those lines are commented
> > out (or if fixnum is disabled).
>
> So the error comes from a check for (big-fixnum?) in
> chicken-core/core.scm. It is defined in chicken-core/support.scm:
>
> (define (big-fixnum? x) ;; XXX: This should probably be in c-platform
>   (and (fixnum? x)
>        (feature? #:64bit)
>        (or (fx> x 1073741823)
>            (fx< x -1073741824) ) ) )
>
> (define (small-bignum? x) ;; XXX: This should probably be in c-platform
>   (and (bignum? x)
>        (not (feature? #:64bit))
>        (fx<= (integer-length x) 62) ) )
>
> Maybe the condition in big-fixnum? should be negated? Apply the
> restrictive #x3fffffff limit only when NOT (feature? #:64bit)?

No, this code is correct. It emits a literal into the C code. That C
code must be able to be compiled on either 32-bit platforms or 64-bit
platforms. That's why on 64-bit platforms, we can't simply assume that
a fixnum is small enough to fit in a machine word - if the same code
will be compiled on a 32-bit machine, it wouldn't fit in a fixnum.

These so-called "big-fixnums" are compiled into a string literal which
gets decoded on-the-fly at runtime into either a fixnum (on 64-bit) or
a bignum (on 32-bit).

So you see, this wouldn't work with (declare fixnum) mode, because that
must be able to assume fixnums throughout, regardless of target
platform.

There's (currently) no option to force fixnum mode in a way that
ignores the existence of 32-bit platforms. Theoretically, it should be
possible to compile your code assuming fixnums (so it emits C integer
literals) and make it barf at compilation time if one tried to build
for a 32-bit platform using a #ifdef or something. We just don't have
the required code to do this, and I'm not sure this is something we'd
all want.

Cheers,
Peter
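[For concreteness, a sketch of the boundary this implies for a file
compiled in fixnum mode, i.e. with (declare (fixnum)); the exact error
wording is taken from the thread's subject line:

;; #x3fffffff = 1073741823 fits a fixnum even on 32-bit: accepted.
(display #x3fffffff) (newline)
;; #x40000000 = 1073741824 exceeds the 32-bit fixnum range, so csc
;; rejects the literal at compile time ("cannot coerce inexact
;; literal ... to fixnum") if the next line is uncommented:
; (display #x40000000) (newline)
]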
Re: "cannot coerce inexact literal to fixnum"
On 2024-02-10 02:42, Al wrote:
> ... if I enable fixnum, csc chokes on both the third and fourth
> displays with: "Error: cannot coerce inexact literal `2147483647' to
> fixnum". It compiles and runs fine if those lines are commented out
> (or if fixnum is disabled).

So the error comes from a check for (big-fixnum?) in
chicken-core/core.scm. It is defined in chicken-core/support.scm:

(define (big-fixnum? x) ;; XXX: This should probably be in c-platform
  (and (fixnum? x)
       (feature? #:64bit)
       (or (fx> x 1073741823)
           (fx< x -1073741824) ) ) )

(define (small-bignum? x) ;; XXX: This should probably be in c-platform
  (and (bignum? x)
       (not (feature? #:64bit))
       (fx<= (integer-length x) 62) ) )

Maybe the condition in big-fixnum? should be negated? Apply the
restrictive #x3fffffff limit only when NOT (feature? #:64bit)?

Also, I've looked at Ken's number-limits egg. It has

#define MOST_POSITIVE_INT32 ((int32_t) 0x3fffffffL)

I don't know why this (as opposed to 0x7fffffff) would apply to #:64bit
installs..? In any case, the documentation on call-cc.org says

* most-negative-integer32   Smallest negative int32_t value
* most-positive-integer32   Largest negative (sic!) int32_t value

... which is ALSO wrong for most-positive-integer32, but it does refer
to int32_t, as opposed to chicken's internal representation (which uses
up an extra bit).
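[As a quick cross-check of that extra bit, runnable in a 64-bit csi;
fxshl and fx- are from (chicken fixnum):

(import (chicken fixnum))
;; A 32-bit CHICKEN fixnum keeps one tag bit, so its maximum is
;; 2^30 - 1 = #x3fffffff, half of INT32_MAX (#x7fffffff = 2^31 - 1).
(display (fx- (fxshl 1 30) 1))   ; => 1073741823
(newline)
]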
"cannot coerce inexact literal to fixnum"
(import (chicken fixnum))
; (declare (fixnum))

(define mp most-positive-fixnum)
(display mp) (newline)
(display (string->number (number->string mp))) (newline)
(display #x7fffffff) (newline)
(display (string->number "#x7fffffff")) (newline)

Obviously the first number is much greater than INT32_MAX. However, if
I enable fixnum, csc chokes on both the third and fourth displays with:
"Error: cannot coerce inexact literal `2147483647' to fixnum". It
compiles and runs fine if those lines are commented out (or if fixnum
is disabled). Not sure what's wrong here. Tried with both 5.3.0 and
5.3.1-pre.

-- Al
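[For reference, compiled on a 64-bit platform without fixnum mode, the
program above prints something like the following; the first two values
are CHICKEN's 64-bit most-positive-fixnum, 2^62 - 1:

4611686018427387903
4611686018427387903
2147483647
2147483647
]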