On Fri, May 2, 2008 at 2:52 AM, William A. Rowe, Jr. <[EMAIL PROTECTED]> wrote:
> Lucian Adrian Grijincu wrote:
> >
> > On Fri, May 2, 2008 at 2:18 AM, Roy T. Fielding <[EMAIL PROTECTED]> wrote:
> > >
> > > Why? The type char is defined by the C standard to be an 8bit signed integer.
> > > The type unsigned char is defined to be an 8bit unsigned integer. Why would
> > > we want to add a bunch of unnecessary casting?
> >
> > Not quite: http://home.att.net/~jackklein/c/inttypes.html
>
> That doesn't resolve Roy's question of "why overload signed char and
> unsigned char"?
>
> Can anyone point to a platform where int8_t/uint8_t != signed/unsigned char?
I've searched through the Linux kernel sources. This

    typedef __signed__ char __s8;
    typedef unsigned char __u8;
    ...
    typedef __u8 uint8_t;
    typedef __s8 int8_t;

seems to be the general way int8_t and uint8_t are defined (if I didn't
skip any by mistake), regardless of CPU architecture. This doesn't prove
anything, but it puts things in perspective :)

There was a discussion (flame war) on comp.lang.c a few months ago. I
skimmed through quite a few of the posts, but found no mention of a
conforming C implementation with CHAR_BIT != 8 (note that sizeof(char)
is always 1 by definition; the question is how many bits a char holds).
Wherever the machine's smallest addressable unit was larger than 8 bits,
the compiler would emulate the smaller types by inserting shifts and
bitwise operations (a supposed example being the Cray 90:
http://groups.google.com/group/comp.lang.c/msg/6cd4a4c8b2b0806c ).

Based on the above, I think that

    typedef signed char int8_t;
    typedef unsigned char uint8_t;

inserted directly into apr.h.in would do just fine, without needing any
support from the autotools.

--
Lucian

PS: At worst, this way we'd break an implementation somewhere and someone
would file a bug report on it :P I'm really curious to find a machine
where chars != 8 bits.