On Thu, Apr 18, 2024 at 8:47 PM Peter Eisentraut <pe...@eisentraut.org> wrote:
> Maybe this means something like our int64 is long long int but the
> system's int64_t is long int underneath, but I don't see how that would
> matter for the limit macros.
Agreed, so I don't think it's long vs long long (when they have the same width). I wonder if this comment is a clue:

static char *
inet_net_ntop_ipv6(const u_char *src, int bits, char *dst, size_t size)
{
    /*
     * Note that int32_t and int16_t need only be "at least" large enough to
     * contain a value of the specified size. On some systems, like Crays,
     * there is no such thing as an integer variable with 16 bits. Keep this
     * in mind if you think this function should have been coded to use
     * pointer overlays. All the world's not a VAX.
     */

I'd seen that claim before somewhere else, but I can't recall where. So there were systems using those names in an ad hoc, unspecified way before C99 nailed this stuff down? In modern C, int32_t is definitely an exact-width type (but there are other standardised variants like int_fast32_t to allow for Cray-like systems that would prefer to use a wider type, i.e. "at least" 32 bits wide, so I guess that's what happened to that idea?). Or perhaps it's referring to worries about the width of char, short, int, or the assumption of two's complement. I think if any of that stuff weren't as assumed we'd have many problems in many places, so I'm not seeing a problem.

(FTR, C23 finally nailed down two's complement as a requirement, and although C might not say so, POSIX says that char is a byte, and our assumption that int = int32_t is pretty deeply baked into PostgreSQL, so it's almost impossible to imagine short having a size other than 16 bits; but these are all assumptions made by the OLD coding, not by the patch I posted.)

In short, I guess that isn't what was meant.