Walter Bright:
> That misses the point about reliability. Again, you're approaching from the
> point of view that you can make a program that cannot fail (i.e. prove it
> correct). That view is WRONG WRONG WRONG and you must NEVER NEVER NEVER rely
> on such for something important, like say your life. Software can (and will)
> fail even if you proved it correct, for example, what if a memory cell fails
> and flips a bit? Cosmic rays flip a bit?
> 
> Are you willing to bet your life?

If you have a critical system, you use most or all of the means you know to make 
it work, so if you can, you use theorem proving too. If you have the economic 
resources for those stronger techniques, refusing to use them is not wise. And on 
top of that you also use error-correcting memory, 3-way or 6-way redundancy, 
watchdogs and more.
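
In software terms, N-way redundancy usually means computing the same result 
independently N times and taking a majority vote. Just a tiny hypothetical sketch 
of the idea in D (the vote() function and its names are mine, not taken from any 
real safety-critical library):

import std.stdio;

// Majority vote over three independently computed results.
T vote(T)(T a, T b, T c)
{
    if (a == b || a == c) return a;  // a agrees with at least one other result
    if (b == c) return b;            // b and c agree, a is the odd one out
    // No two results agree: report the fault instead of guessing.
    throw new Exception("TMR disagreement: no two results match");
}

void main()
{
    // Pretend three redundant units computed the same value and one took a bit flip.
    int r1 = 42, r2 = 42, r3 = 43;
    writeln("voted result: ", vote(r1, r2, r3));  // prints 42
}

A real system would run the three computations on separate hardware and fail over 
(or fail safe) instead of throwing, but the voting idea is the same.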

If it is a critical system you can even design it to fail gracefully when no 
software at all is running; see the concrete channel built under the core in some 
recent nuclear reactor designs (the "core catcher"), which guides the molten core 
to where you want it to go (where it will be kept acceptably cool and contained, 
making accidents like Chernobyl very hard to repeat) :-)

Bye,
bearophile
