On 30.09.2010, at 12:20, MacArthur, Ian (SELEX GALILEO, UK) wrote:

>> Thanks for the explanation. I was wondering WTH this was about. They
>> define Atom as unsigned long, which is 8 bytes = 64 bits on some
>> 64-bit systems, but then they don't allow the (data) format to be
>> 64 bits?
>
> Yes - that appears to be the Way Things Are.
> You can understand, I think, why I was uncomfortable with that...
Yup.

> The implication also appeared to be that "internally" the X system
> "knew" that 32 might not mean 32 on some platforms.
> The intent seemed to be that you just pass the Atom as normal, but set
> the size to 32 on any host where the "true" size was >= 32, and leave
> it at that.
> That did not make me any happier about this as a solution.

Well, then I don't understand why they don't allow 'format' to be 64.

But, anyway, that's interesting. That would mean that we could, for
instance, use an array of (64-bit) Atoms (where applicable), say that
the format is 32, and it would find all the array elements correctly?
I mean: on 32-bit AND on 64-bit systems, little- and big-endian alike?
_That_ would be interesting to see documented somewhere.

This would at least help to make the code "better" (for some value of
"better"), because you wouldn't need to cast all the Atoms:

#define SIZEOF_ATOM (sizeof(Atom) * 8 > 32 ? 32 : sizeof(Atom) * 8)

Is this what we are supposed to (and maybe should) do? If this is
well-documented, I'd be inclined to do it this way rather than moving
and casting Atoms to other types.

Albrecht

_______________________________________________
fltk-dev mailing list
[email protected]
http://lists.easysw.com/mailman/listinfo/fltk-dev
