On Jul 24, 2009, at 4:18 PM, Geoffrey Garen wrote:

In JavaScriptCore, some structures have integer members that must be
32 bits in size, regardless of processor type. In those places, int32_t
and uint32_t are useful.

Less clear to me is whether clients of such structures should also use
int32_t / uint32_t. For example:

struct {
    int32_t i;
} s;

int32_t i32 = s.i; // option 1
int i = s.i;       // option 2

Technically, option 2, which converts from int32_t to int, requires a
"32-bit move with sign extension to 64-bit" instead of just a "32-bit
move", but Intel's documentation says that 32-bit to 64-bit sign
extension is the very fastest instruction on the processor, so maybe
it doesn't matter.

As Darin pointed out privately, that observation about sign extension was wrong. Since "int" still means 32 bits on LP64 platforms, you could just use int.
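For what it's worth, this is easy to check at compile time. A minimal sketch (assuming a C++11 compiler and an ILP32 or LP64 target, where plain int is 32 bits wide):

#include <cstdint>

// On ILP32 and LP64 targets, plain int is 32 bits, so int and int32_t have
// the same size and representation; int32_t is usually just a typedef for int.
static_assert(sizeof(int) == sizeof(int32_t), "expected plain int to be 32 bits on this target");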

So, I guess the question is: if you have a quantity that must be 32 bits, is it useful or harmful to say so explicitly by using int32_t instead of int?

Even though they are really the same type, I find it helpful to make clear when it's really important to have exactly 32 bits. Similarly, I like int16_t over short, and particularly uint8_t over unsigned char for the byte unit of binary data. If anything, I'd prefer to see us use the C standard's explicitly sized types throughout, rather than plain C types.
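As a purely hypothetical illustration of that preference (the struct and field names here are invented, not actual JavaScriptCore code):

#include <cstdint>

// A serialized header whose fields must have fixed widths on every platform.
// The explicitly sized types document that requirement directly, where plain
// int, short, and unsigned char would only satisfy it by coincidence.
struct WireHeader {
    uint32_t length;  // exactly 32 bits everywhere
    int16_t version;  // exactly 16 bits everywhere
    uint8_t flags;    // one byte of binary data
};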

A related point: for integer types whose size should change with the architecture, I prefer clear typedefs like size_t or ptrdiff_t over "long" or "unsigned long".
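A hypothetical sketch of that style, with invented names, assuming nothing beyond the standard headers:

#include <stddef.h>

// size_t and ptrdiff_t track the pointer size of the architecture, which is
// exactly what a byte count and a pointer difference should do; "unsigned long"
// and "long" only happen to match on some platforms.
size_t byteLength(const char* begin, const char* end)
{
    ptrdiff_t distance = end - begin;  // pointer difference is naturally ptrdiff_t
    return static_cast<size_t>(distance);
}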

Regards,
Maciej
