Hi All,

On Monday 26 August 2013 14:20:18 Atgeirr Rasmussen wrote:
> On 23 August 2013 at 12:06, Andreas Lauser wrote:
> [...]
> (for example, a Newton solver not converging is not an exceptional thing).
> 
> That's a thing I see quite differently. The reason is that there are about a
> million things which could go wrong that make the Newton solver diverge. If
> any and all of these cases are protected by a classical "if(error) return
> error;" guard, you take quite a performance hit (both through the guards
> themselves and through missed optimization opportunities). On the other
> hand, C++ exceptions do not negatively impact performance (for details, see
> http://stackoverflow.com/questions/13835817/are-exceptions-in-c-really-slow
> ).
> 
> Semantics first: a Newton solver failing to converge is something that
> should be expected, and to a large extent any reasonable software must
> handle the possibility. Therefore it is not exceptional, and the event
> should not be hidden from the API.

Well, I think that it is exceptional in the sense that it only occurs about
once every fifty million invocations of the assembler for the local Jacobian
of an element. (And each single invocation of the local Jacobian would require
quite a few ifs in the traditional method.) IMHO that's exactly the use case
for C++ exceptions.

I agree that this case must be handled, but why is it so bad to do this in an
exception handler? Besides being more performant and more maintainable (the
code is not cluttered by ifs and you can use the function return values for
something useful), it is IMHO also conceptually clearer to do it this way
than the traditional way.
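
To illustrate what I mean by the cluttering, here is a rough sketch of the two
styles (the names are made up; this is not actual opm-core or eWoms code):

    #include <stdexcept>
    #include <vector>

    // hypothetical stand-in for what the throwing variant would throw
    struct MaterialError : std::runtime_error
    { using std::runtime_error::runtime_error; };

    int  evalMaterialRelations(double s);        // traditional: non-zero on error
    void evalMaterialRelationsOrThrow(double s); // may throw MaterialError

    // traditional style: every call site needs a guard and the return value
    // is occupied by the error code
    int assembleLocalJacobian(const std::vector<double>& dofs)
    {
        for (double s : dofs) {
            if (int err = evalMaterialRelations(s))
                return err;          // manual propagation at every level
            // ... accumulate the local Jacobian ...
        }
        return 0;
    }

    // exception style: the hot path contains no guards at all; the rare
    // failure propagates automatically when the throw happens
    void assembleLocalJacobianThrowing(const std::vector<double>& dofs)
    {
        for (double s : dofs) {
            evalMaterialRelationsOrThrow(s);
            // ... accumulate the local Jacobian ...
        }
    }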

I also agree that the possibility of an exception being thrown should not be
hidden, but I think that's rather a documentation problem. (BTW, judging from
the number of Linux kernel bugs, error code paths are also rarely tested when
the traditional way is used.)

> Performance: only when not being thrown can C++ exceptions have no overhead.

True, but if this gets thrown only once every 50 million invocations, and if
the traditional approach needs only a single "if" statement per invocation for
error detection, then C++ exceptions are still approximately 1 to 2.5 million
times faster than the traditional approach...

> Modern compilers will emit code that causes no performance penalty when
> nothing is thrown, but if an exception is thrown it will have major
> performance consequences.

True: according to the Stack Overflow post linked above, the overhead of a
thrown exception is equivalent to about 20 to 50 ifs (plus cache effects, so
let's call it a thousand ifs at worst).
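
Spelling out the back-of-the-envelope arithmetic with the numbers from above:

    cost per Newton failure, traditional:  5.0e7 invocations * 1 if ~ 5.0e7 ifs
    cost per Newton failure, exceptions:   1 thrown exception ~ 20 ... 1000 ifs

    ratio:  5.0e7 / 20   ~ 2.5e6
            5.0e7 / 50   ~ 1.0e6
            5.0e7 / 1000 ~ 5.0e4  (pessimistic, with cache effects)

So even with the pessimistic thousand-if estimate, the exception-based approach
stays about 50,000 times cheaper per failure.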

> This is quite all right usually, but it means
> that any code that uses exceptions for non-exceptional things, as
> alternative control flow, is going to be slow.

This is certainly also true, but I did not argue for using exceptions as
alternative control flow, did I?

> So over time I gravitated towards the approach followed now in most of the
> software I have made -- report errors with the THROW(something) macro from
> ErrorMacros.hpp. This just sends file and line number plus the 'something'
> part to std::cerr, and throws a pure, unadorned std::exception.
> 
> I also have a slightly different opinion about that. IMHO it is okay to
> encode some information in what the error was within the exception object.
> I think of this as a modern version of the old-school C 'errno' variable.
> 
> I can agree that a little context in the exception object does not hurt (I
> did characterise my approach as somewhat extreme, after all…).
> 
> 
> It could be argued that this is somewhat extreme in its lack of information
> being transmitted with the exception, but my reasoning is as follows: most
> of the time, the context of the exception is important for debugging, and
> this cannot be carried by the exception.
> 
> Punching 'catch throw' into GDB makes it break whenever an exception is
> thrown; this is something that is very hard to do for the traditional way.
> Also, if more complicated breakpoint conditions are required, putting
> 'raise(SIGINT);' into your code makes GDB break at this location. For these
> reasons I think exceptions are not hard to debug. (That is, of course, if
> you don't use them as a normal return mechanism.)
> 
> This is fine for a developer, but not for an end user. Trying to explain
> debugging in gdb to someone at a client company who just encountered a
> fatal error is not a good use of either's time.

I don't understand your argument: what is the end user going to do differently
if the program aborts because some function failed using the traditional
approach rather than because an exception was thrown?
 
> That is why the macro prints file
> and line number. On the other hand, the receiving end is rarely implemented
> (other than, perhaps at a very high level), and would not be able to use
> this information anyway.

I don't see why this is an argument against using exceptions: an exception
object can also carry that information, and the exception handler can print
it...
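
Just to sketch what I mean (a made-up macro, not the actual one from
ErrorMacros.hpp):

    #include <iostream>
    #include <sstream>
    #include <stdexcept>

    // hypothetical macro: it packs file, line and message into the what()
    // string of the exception object
    #define THROW_AT(Exception, message)                              \
        do {                                                          \
            std::ostringstream oss_;                                  \
            oss_ << __FILE__ << ":" << __LINE__ << ": " << message;   \
            throw Exception(oss_.str());                              \
        } while (false)

    int main()
    {
        try {
            THROW_AT(std::runtime_error, "saturation out of range");
        }
        catch (const std::exception& e) {
            std::cerr << e.what() << "\n"; // the handler prints the location
        }
    }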

> This might be true for opm-core and opm-porsol, but eWoms catches
> 'NumericalException' and tries again with a smaller timestep. As you can
> imagine, this is a relatively common case.
> 
> This may be a performance bomb, see above. Even if it turns out to not be,
> it is in my opinion abusing the exception concept, since it is an
> unexceptional event.

Only if this happened frequently; see above.
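
Roughly, the logic looks like the sketch below (simplified, with made-up names,
retry limit and reduction factor; it is not the actual eWoms code):

    #include <stdexcept>

    struct NumericalException : std::runtime_error
    { using std::runtime_error::runtime_error; };

    void solveTimeStep(double dt); // runs the Newton method, may throw

    void advance(double& dt)
    {
        for (int attempt = 0; attempt < 10; ++attempt) {
            try {
                solveTimeStep(dt);
                return;               // converged
            }
            catch (const NumericalException&) {
                dt /= 2.0;            // retry with a smaller time step
            }
        }
        throw std::runtime_error("giving up: the time step got too small");
    }

The throw only happens when a time step fails, so its cost is negligible
compared to the work done by the Newton iterations of a successful step.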

> Finally, if you really want to use a specific
> exception for this, it should be something like NewtonFailureMaxIter or
> something like that.

Well, we can talk about the naming, but in my experience most of these errors
occur in the assembly stage because the Newton method decided to use an
intermediate value which is out of range for the material relations. IMO,
throwing NewtonFailureMaxIter in this case would be quite a misnomer. Maybe
something like 'MaterialError' or so...

> Introducing a new exception hierarchy is of dubious value. I think that once
> you have digested the appropriate usage of the standard range_error vs.
> out_of_range vs. invalid_argument vs. length_error and so on (quick
> quiz: what is appropriate for an index check in an array), you will not
> feel the need to complicate any further. If one feels the need for
> specialty exceptions that is fine, but they should be defined close to
> where they are used and inherit logic_error or runtime_error.
> 
> I agree that most of these exceptions should not be caught. But I think
> there is some value in the class name of the exception object itself.
> Again, that's similar to the errno variable of libc. (What do you want to
> do if this one gets set to ENOMEM?)
> 
> I can agree to that. It should be specific then, not NumericalException.

Yeah, maybe something like "MaterialError", "LinearSolverError", etc. The
result would not be much different, though: the time step would be reduced and
the Newton method would be restarted. IME, the main value for analyzing what
went wrong is the error message (which gets printed anyway)...
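
Something along these lines, i.e. small classes defined close to where they
are used and derived from std::runtime_error, as you suggest (made-up names):

    #include <iostream>
    #include <stdexcept>

    struct MaterialError : std::runtime_error
    { using std::runtime_error::runtime_error; };
    struct LinearSolverError : std::runtime_error
    { using std::runtime_error::runtime_error; };

    // the reaction is the same in both cases (reduce the time step and
    // restart the Newton method), but the class name plus the what()
    // message still tell us what actually went wrong
    void reportNewtonFailure(const std::exception& e)
    {
        std::cerr << "Newton method failed: " << e.what() << "\n";
    }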

> Finally, I'll state the obvious that we do not need to discuss. Yes, we
> allow throwing exceptions. Yes, all code should satisfy the basic exception
> safety guarantee (and all developers are assumed to understand what that
> means). No, we do not require the strong guarantee (but perhaps we
> should).
> 
> IMHO these guarantees are quite easy to fulfill if you don't use C-style
> malloc()+free() memory management. Other options are welcome ;)
> 
> Yes, I agree. std::vector saves us all from this particular hell...

we should not forget the virtues of std::unique_ptr and std::shared_ptr :)
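
A minimal sketch of why RAII makes the basic guarantee almost automatic
(made-up names):

    #include <cstdlib>
    #include <memory>
    #include <vector>

    void mayThrow(); // anything downstream that can throw

    // fragile: if mayThrow() throws, 'data' leaks and even the basic
    // exception safety guarantee is lost
    void cStyle()
    {
        double* data = static_cast<double*>(std::malloc(1000 * sizeof(double)));
        mayThrow();
        std::free(data);
    }

    // robust: the destructors of std::vector and std::unique_ptr run during
    // stack unwinding, so nothing leaks no matter where the throw happens
    void raiiStyle()
    {
        std::vector<double> data(1000);
        std::unique_ptr<double[]> scratch(new double[1000]);
        mayThrow();
    }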

cheers
  Andreas

-- 
A programmer had a problem. He thought to himself, "I know, I'll solve it with 
threads!". has Now problems. two he
   -- Davidlohr Bueso


