This is more of a language thang, so I've redirected your message there [here].
> The most fundamental feature throwing an exception is that it transfers
> program execution from the call site. Allowing the caller to resume
> execution at that site is a very dangerous form of action at a distance.

According to you. :-)

> I think you'd be better off a giving the caller an explicit way to
> inhibit throwing an exception. That is, use program design rather than
> language features to provide this functionality. e.g., a delegate:
>
>     $ex = new My::Exception: ...;
>     die $ex
>         unless .delegate && .delegate.should_ignore_ex($_, $ex));
>
> Not only is the above pattern clearer to read and less afflicted by
> action at a distance, but it provides a wider bidirectional interface.

It also has high inertia and is pretty verbose for error reporting. The
thing about reporting errors is that it needs to be short, easy to type,
and easy to think about, or else people won't do it. Everyone did error
checking in Perl 5 because it was a simple little C<or die>.

But I think you've got one thing right: giving the exception thrower
control. The thing about C<die> is that it denotes a fatal error --
saying C<die> in a function is basically saying "I can't continue after
this"... so you I<probably> shouldn't be allowed to continue it. If an
error isn't fatal, you're supposed to use C<warn>.

So maybe what's needed is a C<warn> catcher (C<WARNCATCH>... eew), where
C<warn> would throw an exception object with an attached continuation.
And of course, if a warning reached the top of the stack without being
caught, it would print itself and invoke its continuation.

Luke