Date: Fri, 10 Jun 2011 23:43:38 -0700
From: Matt Birkholz <[email protected]>
The first patch implements alienate_float_environment() and calls it
before every callout and after every callback.  This is no surprise.
If we hadn't been running with all exceptions trapped since forever,
I'd be worried that we really need to mask all exceptions in
primitives by default anyway.  (A sketch of what such a function
might look like is below.)

The second patch implements BORKED_FENV.  That this is necessary is a
bit surprising.  Can you set a breakpoint in gdb on
alienate_float_environment, both with libc's fe* and with Scheme's
fe*, and step through the machine instructions to compare what they
do differently?  (You'll have to avoid fesetenv altogether, since
Scheme doesn't implement FE_DFL_ENV.  The gdb mechanics are sketched
below.)  I looked at the glibc source code, and I see no substantial
difference beyond fnstcw vs. fstcw in fedisableexcept (which
shouldn't make a difference here -- see the last sketch below).

commit 0d4959eb6a58c9d1e7c730f577e4a630d46ab5eb
Author: Matt Birkholz <[email protected]>
Date:   Wed Jun 1 20:54:13 2011 -0700

--- a/src/microcode/cmpauxmd/x86-64.m4
+++ b/src/microcode/cmpauxmd/x86-64.m4
-	fnstcw	IND(REG(esp))
-	OP(mov,w)	TW(IND(REG(esp)),REG(ax))
+	fnstcw	IND(REG(rsp))
+	OP(mov,w)	TW(IND(REG(rsp)),REG(ax))

Oops.  Too much garlic in my copy pasta -- *brain burp*.  (Using esp
as a base register truncates the effective address to the low 32
bits.)  I guess the only way to test this would be to allocate a
stack large enough that it doesn't fit in the low 4 GB of the virtual
address space, which would fall outside the domain of our automated
testing facilities...
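
Coming back to the first patch: for concreteness, here is roughly the
shape I'd expect alienate_float_environment() to take -- a minimal
sketch, assuming its only job is to clear sticky flags and mask all
exceptions before handing the FPU to foreign code.  The actual patch
may well do more (e.g. save the old environment for the callback
side):

    /* Minimal sketch, not the actual patch: give foreign code a
       quiet FP environment.  fedisableexcept() is a glibc extension
       and needs _GNU_SOURCE.  */
    #define _GNU_SOURCE
    #include <fenv.h>

    void
    alienate_float_environment (void)
    {
      feclearexcept (FE_ALL_EXCEPT);    /* drop any pending flags */
      fedisableexcept (FE_ALL_EXCEPT);  /* mask (stop trapping) all */
    }

fedisableexcept() is the natural primitive here precisely because,
unlike fesetenv(), it doesn't require FE_DFL_ENV.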
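
The gdb mechanics for the comparison are just stock commands, along
these lines (repeat stepi and compare the "info float" output between
the libc and Scheme runs):

    # Break where the FP environment gets reset, then single-step
    # machine instructions, watching the x87 state after each one.
    break alienate_float_environment
    run
    # auto-print the next instruction after every step
    display/i $pc
    stepi
    # dump the x87 control/status/tag words
    info float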
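
And for reference on the fnstcw vs. fstcw point: the x87 half of
fedisableexcept() is just a control-word read-modify-write, something
like this sketch in GCC inline assembly (the real glibc code also
updates SSE's MXCSR).  fstcw is simply fnstcw preceded by an implicit
FWAIT, which only matters if an unmasked exception is already
pending:

    /* Sketch of the x87 half of fedisableexcept (FE_ALL_EXCEPT).  */
    static void
    mask_all_x87_exceptions (void)
    {
      unsigned short cw;
      __asm__ volatile ("fnstcw %0" : "=m" (cw));  /* no-wait store */
      cw |= 0x3f;              /* set IM|DM|ZM|OM|UM|PM mask bits */
      __asm__ volatile ("fldcw %0" : : "m" (cw));  /* reload it */
    }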
