On 1/2/06, Paul Schlie <[EMAIL PROTECTED]> wrote:
> - at the most basic level, I feel like I've too often needlessly wasted
> time debugging programs at one level of optimization, to only see a
> different behavior needlessly expressed at a different level of
> optimization (which I understand means something isn't portable, but
> isn't the correct way to inform one of non-portable code, but is one
> hell of a way to unknowingly interject bugs into a program which didn't
> exist at a different level of optimization); however if a compiler
> supported the means by which a target could define the semantics left
> undefined by a language, an optimizing compiler could then both satisfy
> the formal constrains of language, while simultaneously enabling target
> specific semantics to be supported, and preserved through optimization.
> (which seems like a win-win to me)
Okay, this makes sense to me now. If there were a switch that changed the language from ISO C to a very similar language that actually specified a fixed behavior for all the behaviors that ISO C says are unspecified or undefined, then you'd have a language that might not be possible to compile as efficiently in some cases, but in which every program had a unique correct behavior. (Setting aside inherently unpredictable things like threads and signals.) For example, the language would actually have to specify some default value for all variables, or require them to be assigned before their first use in a way that the compiler could statically verify (as in Java).

This is what the Java folks were shooting for, if you ignore the non-determinism introduced by threads. Standard ML also specifies a particular result for all programs. If my copies of both those specs weren't still packed, I think I could actually find quotes from each where they state being fully defined as a goal. So I think it's clear there are a lot of people who think this is a worthwhile principle.

Paul is combining this suggestion with the further idea that the unspecified and undefined behaviors could be tied down in a way that is comfortable for the particular target; I guess he's trying to reduce the performance impact. That concession would allow the changes in behavior that annoy him now when he switches optimization levels to reappear when he switches targets.

The opposite extreme to Paul's concession would be to eliminate all target-dependent characteristics from the language, including type size differences and alignment requirements, yielding a language which specified a unique correct result for all programs (again, setting aside threads and signals) on all targets, as ML and Java do. Or, there could be a middle ground: you could specify some characteristics (say, integral type sizes and wrap-around on overflow) for all targets, but leave others (say, pointer size and alignment) target-specific.