On Sunday, 5 October 2014 at 21:16:17 UTC, Marco Leise wrote:
I don't get this. When we say logic error we are talking about
bugs in the program.

By what definition?

And what if I decide that I want my programs to recover from bugs in insignificant code sections and keep going?

Is a type error in a validator a bug? It makes perfect sense to let the runtime throw implicitly on conditions you cannot be bothered to check explicitly, because they should not occur for valid input. If that is a bug, then it is a good bug: it makes it easier to write code that responds properly. The less verbose a validator is, the easier it is to ensure that it responds in a desirable fashion. Why force the programmer to replicate work that the compiler and runtime already do anyway?

Is an out-of-range error when processing a corrupt file a bug, or is it a deliberate reliance on D's range-check feature? Isn't the range check more useful if you don't have to do explicit checks for valid input? Useful as in: it saves time and money with the same level of correctness, as long as you know what you are doing?
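Something like this minimal sketch, say (the record format and names are made up, and catching RangeError is of course exactly the practice under debate here):

import core.exception : RangeError;

// Validate a tiny length-prefixed record without writing a single
// explicit index check: D's bounds checking throws for us on
// corrupt or truncated input.
bool looksLikeValidRecord(const(ubyte)[] data)
{
    try
    {
        auto len = data[0];                  // throws on empty input
        auto payload = data[1 .. 1 + len];   // throws if truncated
        return data[1 + len] == 0;           // sentinel byte, throws if missing
    }
    catch (RangeError)
    {
        return false;   // corrupt input: reject the record, keep going
    }
}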

Is deep recursion a bug? Not really.

Is running out of memory a bug? Not really.

Is division by a very small number that is coerced to zero a bug? Not really.

Is hitting the worst-case running time, which causes timeouts, a bug? Not really, it is bad luck.

Can the compiler/library/runtime reliably determine what is a bug and what is not? Not in a consistent fashion.

Why would anyone turn an outright bug into
"cannot compute this"? When a function cannot handle division
by zero, it should not be fed a zero in the first place. That's
part of input validation before getting to that point.

I disagree. When you want computations to be performant, it makes a lot of sense to do speculative computation in a SIMD-like manner using the less robust method, then recompute only the cases that failed using a slower, more robust method.
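As a sketch of that two-pass pattern (the solvers here are invented stand-ins; only the shape of the code matters):

import std.math : isFinite;

// Hypothetical solvers: the fast one may return NaN or infinity on
// hard inputs; the robust one is slower but always yields a value.
double fastSolve(double x)   { return 1.0 / x; }
double robustSolve(double x) { return x == 0.0 ? 0.0 : 1.0 / x; }

void solveBatch(const(double)[] input, double[] output)
{
    // Pass 1: speculative, vectorizer-friendly loop over everything.
    foreach (i, x; input)
        output[i] = fastSolve(x);

    // Pass 2: recompute only the entries the fast method botched.
    foreach (i, x; input)
        if (!isFinite(output[i]))
            output[i] = robustSolve(x);
}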

Or simply ignore the results that were hard to compute: think of a ray tracer that solves very complex equations using a numerical solver that will not always produce a meaningful result. You are then better off using the faster solver and simply ignoring the rays that produce unreasonable results according to some heuristic. You can compensate by firing more rays per pixel with slightly different x/y coordinates. The alternative is to produce images with "pixel noise" or use a much slower solver.
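A sketch of that sampling strategy (traceRay and the acceptance heuristic are stand-ins, not anyone's actual renderer):

import std.math : isFinite;
import std.random : uniform;

// traceRay stands in for the numerical solver; it may return
// NaN or infinity when it fails to produce a meaningful result.
double shadePixel(double delegate(double, double) traceRay,
                  double px, double py, int wanted)
{
    double sum = 0.0;
    int accepted = 0;
    // Budget up to twice the wanted samples and drop the bad ones.
    foreach (_; 0 .. wanted * 2)
    {
        auto r = traceRay(px + uniform(0.0, 1.0),
                          py + uniform(0.0, 1.0));
        if (isFinite(r))          // crude "unreasonable result" filter
        {
            sum += r;
            if (++accepted == wanted) break;
        }
    }
    return accepted ? sum / accepted : 0.0;  // give up: black pixel
}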

Or do you vote for removing these validations and waiting for
the divide by zero to happen inside the callee, in order to
catch it in the caller and say in hindsight: "It seems like in
one way or another this input was not computable"?

There is a reason why the FP hardware lets this be configurable: it is up to the application to decide.
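D exposes that very switch through std.math (FloatingPointControl, ieeeFlags and resetIeeeFlags are the real names; the scenario is illustrative):

import std.math : FloatingPointControl, ieeeFlags, resetIeeeFlags;
import std.stdio : writeln;

void main(string[] args)
{
    double one = 1.0;
    double zero = args.length - 1.0;  // 0.0 when run with no arguments

    // Default IEEE behaviour: the division quietly yields infinity
    // and merely sets a status flag.
    resetIeeeFlags();
    writeln(one / zero, "  divByZero flag: ", ieeeFlags.divByZero);

    // Opt in to trapping: the same division would now raise a
    // hardware exception (SIGFPE on most platforms).
    FloatingPointControl ctrl;
    ctrl.enableExceptions(FloatingPointControl.divByZeroException);
    // writeln(one / zero);  // would trap here -- the application decides
}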
