On Thu, Feb 26, 2009 at 11:55 AM, Jonathan S. Shapiro <[email protected]>wrote:

> I have started trying to encode the interfaces of various libraries
> that are written in C, and I have hit a snag. The definition of C does
> not specify the size of short, int, long, and as a practical matter
> most libraries have used these "loose" definitions. This raises a
> problem of compatibility. On any given platform, there is a type in
> BitC that maps to "int" in that platform, but there is no type in BitC
> that maps to "int" generally.
>
> In abstract, there seem to be two ways to handle this:
>
> 1. Have a module that defines *aliases* for c_int, c_long, c_short, etc.
> 2. Introduce c_int and friends as first-class integer types that are
> *distinct from* the BitC types, but also define conversion functions
> and arithmetic operations over these types.
>
> The main disadvantage to [1] is that certain classes of portability
> error are not caught at compile time. If "word" is an alias, then an
> add of the form:
>
>   (+ x:int32 y:word)
>
> will compile on a 32-bit platform but not on a 64-bit platform. If
> "word" is a type in its own right, the compiler will complain about
> this usage. This is why we made word a distinct type for vector sizes.
>
> Introducing typealias raises some challenges related to name spaces.
> These challenges are surmountable, but I'm really trying not to make
> major language changes at this point. Adding new types for things like
> c_int is very easy.
>
> I would appreciate thoughts and reactions on this issue of
> C-compatible integer types.
>

Usually it is enough to have a native-size integer type that holds 32-bit
integers and to widen them implicitly on 64-bit architectures. This is safe
since BitC doesn't support 16-bit targets.

Most 64-bit targets handle mixed 32/64-bit arithmetic just fine.
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
