I wrote:
>> I'm very hesitant to apply a volatile-qualification approach to
>> eliminate those issues, for fear of pessimizing performance-critical
>> code on more modern platforms.  I wonder whether there is a reasonable
>> way to tell at compile time if we have a platform with 80-bit math.
> Hmmm ... I find that dromedary's compiler predefines __FLT_EVAL_METHOD__
> as 2 not 0 when -mfpmath=387 is given.  This seems to be something
> that was standardized in C99 (without the double underscores), so
> maybe we could do something like
> #if __FLT_EVAL_METHOD__ > 0 || FLT_EVAL_METHOD > 0

After further poking around, it seems that testing FLT_EVAL_METHOD
should be sufficient --- <float.h> appears to define that correctly
even in very old C99 installations.

However, I'm losing interest in the problem after finding that I can't
reproduce it anywhere except on dromedary (with "-mfpmath=387" added).
For instance, I have a 32-bit FreeBSD system with what claims to be the
same compiler (gcc 4.2.1), but it passes regression just fine, with or
without -mfpmath=387.  Earlier and later gcc versions also don't show a
problem.  I suspect that Apple bollixed something with local mods to
their version of 4.2.1, or possibly they are allowing inlining of
isinf() in a way that nobody else does.

Also, using that compiler with "-mfpmath=387", I see that every
supported PG version back to 9.4 fails regression due to not detecting
float8 multiply overflow.  So this isn't a problem that we introduced
with v12's changes, as I'd first suspected; and it's not a problem that
anyone is hitting in the field, or we'd have heard complaints.

So, barring somebody showing that we have an issue on some platform
that people actually care about currently, I'm inclined not to do
anything more here.

			regards, tom lane