On 2/28/07, Paul Schlie <[EMAIL PROTECTED]> wrote:
Thus, although a loop that spins on a termination condition before returning a value (and thereby letting control flow proceed) may not be the best way to synchronize program execution with external events, it is seemingly valid; and it seems it should only be valid to optimize away its evaluation if the loop can be proven to terminate based solely on state computed within the program itself, and not on values imported from the outside world through I/O ports or other similar means, for example.
This seems weird. If I were to write a loop that just ran for timing purposes, I'd be insane to run it through an optimizing compiler.

As it stands, the Collatz conjecture hasn't been proven, but no divergent value has ever been discovered. Would it be OK to optimize out the code for the cases which are known to converge? Suppose a termination proof were found: would it be OK to optimize out the call in that case? Would it be OK even if someone wrote timing code that depended on such a proof not being found?

--
~jrm

_______________________________________________
r6rs-discuss mailing list
[email protected]
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss
