On Monday, 6 June 2022 at 15:54:16 UTC, Steven Schveighoffer wrote:

> If it's an expected part of the sorting algorithm that it *may fail to sort*, then that's not an Error, that's an Exception.
No, it is not expected. Let me rewrite my answer to Sebastiaan to
fit with the sort scenario:
For instance, you may have a formally verified sort function, but it is too slow. So you optimize one selected bottleneck, but that optimization cannot be verified, because verification is hard. That specific unverified soft spot is guarded by an assert. The compiler may or may not remove it.
Your shipped product fails, because the hard-to-read optimization wasn't perfect. So you trap the thrown assert failure and call the reference implementation instead.
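A minimal sketch of that fallback pattern in D (the names `fastSort`, `referenceSort` and `robustSort` are illustrative, not from the post; the fast path's body is a placeholder):

```d
import core.exception : AssertError;
import std.algorithm.sorting : isSorted, sort;

// Unverified, hand-optimized path. The assert guards the soft spot;
// note that in a -release build the compiler may remove it, which is
// exactly the "compiler may remove it or not" caveat above.
int[] fastSort(int[] xs)
{
    auto r = xs.dup;
    // ... imagine a clever but hard-to-verify optimization here ...
    assert(isSorted(r), "fast path produced unsorted output");
    return r;
}

// Verified (here simply: trusted) reference implementation.
int[] referenceSort(int[] xs) @safe
{
    auto r = xs.dup;
    r.sort();
    return r;
}

// Trap the assert failure and fall back. Catching AssertError is only
// defensible if the whole task runs 100% @safe code, as noted below,
// so this wrapper is deliberately not @safe.
int[] robustSort(int[] xs)
{
    try
        return fastSort(xs);
    catch (AssertError e)
        return referenceSort(xs);
}
```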
The cool thing with actors/tasks is that you can make them small and targeted, and revert to fallbacks if they fail. (Assuming 100% @safe code.)
> It says that the programmer cannot attribute exactly where this went wrong, because otherwise he would have accounted for it, or thrown an Exception instead (or some other mitigation).
He can make a judgement. If this happened in a safe pure function, then it would most likely be the result of what the assert was meant to do: check that the assumptions of the algorithm hold.
> Anything from memory corruption, to faulty hardware, to bugs in the code, could be the cause.
That is not what asserts check! They will be removed if the static analyzer is powerful enough. All the information needed to remove the assert should be in the source code.
You are asserting that *given all the constraints of the type
system* then the assert should hold.
Memory corruption could make an assert succeed when it should not, because then anything can happen! Asserts cannot catch memory corruption reliably, because they are not excluded from optimization. You need something else for that, something that turns off optimization for all asserts.
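One possible shape for such an always-on guard, as a D sketch (`hardCheck` is a hypothetical name, not an existing library function; it relies on the fact that `assert(0)` is specified to halt execution even in `-release` builds, unlike ordinary asserts):

```d
// A hand-rolled check that survives -release: ordinary asserts may be
// stripped (or assumed true) by the compiler, but assert(0) lowers to
// an unconditional halt even in release builds.
void hardCheck(bool cond, string msg = "hard check failed") @safe pure
{
    if (!cond)
        assert(0, msg); // never removed; aborts unconditionally
}
```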
> Exactly. Use Exceptions if it's recoverable, Errors if it's not.
This is what is not true: an assert says something only about the algorithm it is embedded in. A failed assert says that the algorithm makes a wrong assumption, and that is all. It says nothing about the calling environment.
> A failed assert could be because of undefined behavior. It doesn't *imply* it, but it cannot be ruled out.
And a successful assert could happen because of undefined behaviour or optimization! If you want these kinds of guards, then you need to propose a type of assert that is excluded from optimization (which might be a good idea!).
In the case of UB anything can happen. It is up to the programmer to make that judgment based on the use scenario. It is a matter of probabilistic calculations in relation to the use scenario of the application.
As I pointed out elsewhere: «reliability» has to be defined in
terms of the use scenario by a skilled human being, not in terms
of some kind of abstract thinking about compiler design.