i'd suspect that the gcc 64-bit long forces more changes than the other 
approach,
because i tried both with plan 9, and subsequently looked at a fair bit of 
non-plan9 code.

in fact, i tried x86_64's approach first for just the reason given:
code that fiddles with pointers as ints (in plan 9) often uses ulong,
though not invariably so, and having long as 64 bits keeps that code working.

as it turns out, relatively few things these days--certainly in plan 9--
fiddle with pointers as integers (directly, that is, not say through varargs).
those can be found by the compiler (as gcc does with pointer -> int on amd64).
by contrast, it's very hard for a compiler to find, reliably, instances
where long is assumed to be 32 bits.  bitmasks, loop bounds, ...
there are surprisingly many; things break weirdly, often obscurely,
or only on boundary cases, and having found myself in a world of pain...
i changed my mind.  all is relative sweetness and light.
at least the compiler moans instead of me.

non-plan9 code might go a little better because most of (say) the FSF
code probably assumes int is 32 bits, so programmers typically
put the subtle 32-bit assumptions on `int', not `long'.
thus when long changes, it's less troublesome.
even so, quite a few programs do use long in contexts where 64 bits is
not particularly useful, probably not intended, and 32 bits would be a
better representation.
