On Thu, Aug 8, 2013 at 3:47 PM, David Jeske <[email protected]> wrote:
> On Thu, Aug 8, 2013 at 10:41 AM, Jonathan S. Shapiro <[email protected]> wrote:
>
>> I think I'm missing the problem you are trying to solve that is prompting
>> you to want to save the stack.
>>
>
> I don't want to save the stack. I'm explaining that I *believe* existing
> runtimes (JVM, CLR) have slow-throw specifically because they save the
> stack, to assure they can provide stack-backtraces.
>

Ah. OK. I eventually deduced that was how this discussion started, but thank
you for making it clear.

>>> Bennie, all I want is a fast structured error handling mechanism which
>>> includes assignment analysis. I don't much care if there is another slow
>>> one. I'm happy to ignore it.
>>>
>> Can you say what you mean by "assignment analysis" here?
>>
>
> I mean that returnable products of a function should be known by the
> compiler not to contain predictable values in the case of an error return
> (whether by structured error return, or structured exception handling)...
> Currently we only get this behavior when using slow exceptions. Here is an
> example..
>
> SomeType a, b;
> try {
>     a = doSomething(out b);
>     print(a);
> } catch {
>     // invalid to use a or b here, because the compiler knows they were
>     // not assigned..
> }
>

OK. I see what you are saying, but the problem is that you're wrong. A
programmer, using proper discipline, can guarantee that the values *are*
predictable, and it is very important to do this when writing transactional
code. We shouldn't build a feature into the compiler that precludes
transactional code!

A better way to capture this might be to have a way to mark OUT parameters
according to three possibilities:

- Assigned in all cases
- Untouched on exception (transactional style)
- Undefined on exception

Coming from a processor architect's point of view, the last possibility
strikes me as fraught with problems, and better removed from the language,
though if we make it the case that "undefined" means "unusable in the
caller", then it could be okay.
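
To make the transactional point concrete, here is a rough C# sketch along the
lines of the example above (DoSomething, Compute, and the values are invented
for illustration, not anyone's real API). The callee does every operation
that can throw into locals and only writes the by-reference result at a
single commit point, so on an exceptional exit the caller's variable still
holds its prior value:

using System;

class TransactionalOutSketch
{
    // Transactional ("untouched on exception") style: compute everything
    // that can throw into locals first, then commit to the ref parameter
    // once nothing else can fail.
    static int DoSomething(ref int b)
    {
        int newB = Compute();     // may throw; 'b' not yet touched
        int result = Compute();   // may throw; 'b' still not touched
        b = newB;                 // commit point; nothing throws after this
        return result;
    }

    // Stand-in for real work; here it always fails, to exercise the
    // exceptional path.
    static int Compute()
    {
        throw new InvalidOperationException("simulated failure");
    }

    static void Main()
    {
        int b = 42;               // known-good prior value
        try
        {
            int a = DoSomething(ref b);
            Console.WriteLine(a);
        }
        catch
        {
            // Predictable by discipline: 'b' still holds 42 here. With
            // 'out' instead of 'ref', the compiler would treat 'b' as
            // unassigned in this block, as in the quoted example.
            Console.WriteLine(b);
        }
    }
}

In these terms, C#'s out-parameter rules already give you roughly the
"undefined on exception, unusable in the caller" flavor, ref plus a commit
point gives "untouched on exception", and "assigned in all cases" is the one
that would need a new kind of checking.
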
> I'm expressing a preference to end slow exception mechanisms...
>

OK. Sorry about my misunderstanding, and I can definitely get behind this
goal. :-)

> I think there are a great many situations where runtime-stack-backtraces
> are useful. To this end, it might be reasonable to strike a compromise
> where we can, at run-time, decide globally between ultra-fast throw with no
> stack-preservation, and pretty-fast throw with stack preservation.
> (assuming there is no magic way to have ultra-fast throw and
> stack-preservation, which would be even better)
>

That's one possibility. Another is to admit that exceptions have two uses:
intended recoverable and intended fatal, and maybe distinguish those cases at
the point of raise. An intended recoverable exception doesn't produce a stack
trace unless it is completely unhandled (in which case note that the stack is
unmodified). An intended fatal exception *does* produce a stack trace,
possibly slowly.

Not sure that's a good idea - just pondering an option here.
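
For concreteness, here is roughly what that split could look like, sketched
in C# with invented exception classes (RecoverableException, FatalException);
the raise site picks the class, which stands in for the recoverable/fatal
distinction. The actual payoff - skipping backtrace capture on the
recoverable path - would need runtime cooperation that today's CLR does not
give you, so this only shows where the policy would sit:

using System;

// Invented classes for illustration only; not a real API.
class RecoverableException : Exception
{
    public RecoverableException(string msg) : base(msg) { }
}

class FatalException : Exception
{
    public FatalException(string msg) : base(msg) { }
}

class RaisePolicySketch
{
    static void Main()
    {
        try
        {
            DoWork();
        }
        catch (RecoverableException e)
        {
            // Expected, recoverable: we only want the message, not a
            // (potentially expensive) backtrace.
            Console.WriteLine("recovered: " + e.Message);
        }
        catch (FatalException e)
        {
            // Intended fatal: pay for a full trace, possibly slowly.
            Console.WriteLine("fatal: " + e.Message);
            Console.WriteLine(e.StackTrace);
            Environment.Exit(1);
        }
    }

    static void DoWork()
    {
        // Raise site chooses the recoverable flavor.
        throw new RecoverableException("resource temporarily unavailable");
    }
}

The "stack is unmodified when completely unhandled" half is really a question
of whether the runtime searches for a handler before unwinding (two-phase
unwind), which is a runtime design choice rather than something the surface
syntax can express.
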
>
>> I think there's another question to ask here. Is the CLR's approach to
>> exception handling the real culprit here, or is the real problem the fact
>> that dynamic introspection exists in CLR?
>>
>
> To understand the cause of CLR's incredibly slow exceptions, we need a
> real source of truth. Absent some digging, I think it's partially related
> to stack-backtrace-preservation (which is only slightly related to
> introspection), and partially related to the stack-walk based security
> model. I could be completely wrong though.
>

Yeah. I think backtrace is the culprit. Dynamic introspection leads to a
bunch of *other* perf issues, but I don't think it's the root cause in this
one.

shap

_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
