Richard Purdie wrote:
> The kernel uses UINT_MAX, defined in kernel.h, in a variety of places.
> 
> While looking at the behaviour of the LZO code, I noticed it seemed to
> think an int was 8 bytes wide on my 32-bit i386 machine. It isn't, so
> why did it think that?
> 
> kernel.h says:
> 
> #define INT_MAX       ((int)(~0U>>1))
> #define INT_MIN       (-INT_MAX - 1)
> #define UINT_MAX      (~0U)
> #define LONG_MAX      ((long)(~0UL>>1))
> #define LONG_MIN      (-LONG_MAX - 1)
> #define ULONG_MAX     (~0UL)
> #define LLONG_MAX     ((long long)(~0ULL>>1))
> #define LLONG_MIN     (-LLONG_MAX - 1)
> #define ULLONG_MAX    (~0ULL)
> 
> If I try to compile the code fragment below, I see the error:
> 
> #define UINT_MAX      (~0U)
> #if (0xffffffffffffffff == UINT_MAX)
>   #error argh
> #endif
> 
> I've tested this on several systems with a variety of gcc versions, with
> the same result. I've tried various other ways of testing this, all with
> the same conclusion: UINT_MAX is wrong.
> 

C99 states that all arithmetic in the preprocessor is done in
(u)intmax_t, regardless of prefixes or suffixes. In a #if expression,
~0U is therefore evaluated at (u)intmax_t width, so the comparison above
succeeds even though the compiler proper still treats UINT_MAX as a
32-bit unsigned int.
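
A small stand-alone sketch (my wording, not Richard's original test case)
may make the split visible. It assumes a 32-bit int, a 64-bit uintmax_t,
and gcc's #warning extension; KERNEL_UINT_MAX is just a local name
mirroring the kernel.h definition:

#include <stdio.h>

#define KERNEL_UINT_MAX (~0U)   /* same definition as kernel.h's UINT_MAX */

#if (0xffffffffffffffff == KERNEL_UINT_MAX)
/* The preprocessor evaluates ~0U at (u)intmax_t width (C99 6.10.1p4),
   so on a host with a 64-bit uintmax_t this branch is taken. */
#warning "preprocessor sees ~0U as all 64 bits set"
#endif

int main(void)
{
        /* Outside of #if, ~0U has type unsigned int again. */
        printf("sizeof(~0U) = %zu\n", sizeof(~0U));        /* 4 on i386 */
        printf("KERNEL_UINT_MAX = %u\n", KERNEL_UINT_MAX); /* 4294967295 */
        return 0;
}

The #warning fires at translation time while the run-time output still
shows a 32-bit value, which is exactly the mismatch observed above.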

        -hpa