== Quote from Don (nos...@nospam.com)'s article
> Jim Balter wrote:
> >
> > "Walter Bright" <newshou...@digitalmars.com> wrote in message
> > news:i2nkto$8u...@digitalmars.com...
> >> bearophile wrote:
> >>> You have to think about proofs as another (costly) tool to avoid
> >>> bugs/bangs, but not as the ultimate and only tool you have to use
> >>> (I think dsimcha was trying to say that there are more cost-effective
> >>> tools. This can be true, but you can't be sure that is right in
> >>> general).
> >>
> >> I want to re-emphasize the point that keeps getting missed.
> >>
> >> Building reliable systems is not about trying to make components that
> >> cannot fail. It is about building a system that can TOLERATE failure
> >> of any of its components.
> >>
> >> It's how you build safe systems from UNRELIABLE parts. And all parts
> >> are unreliable. All of them. Really. All of them.
> >
> > You're being religious about this and arguing against a strawman. While
> > all parts are unreliable, they aren't *equally* unreliable. Unit tests,
> > contract programming, memory-safe access, and other reliability
> > techniques, *including correctness proofs*, all increase reliability.
> I have to disagree with that. "Correctness proofs" are based on a total
> fallacy. Attempting to prove that a program is correct (on a real
> machine, as opposed to a theoretical one) is utterly ridiculous.
> I'm genuinely astonished that such an absurd idea ever had any traction.
Yeah, here's a laundry list of stuff that theory doesn't account for, any of
which can go wrong on a real machine:

  - overflow
  - rounding error
  - compiler bugs
  - hardware bugs
  - OS bugs

I sincerely wish all my numerics code always worked whenever it was provably
mathematically correct.
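To make the first two items concrete, here's a minimal sketch in C (my own
illustration, not from the original post): the midpoint formula and the
associativity of addition are both provable facts about mathematical integers
and reals, yet both computations go wrong on actual int and double hardware.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Overflow: (a + b) / 2 is a correct midpoint formula over the
     * mathematical integers, but a + b exceeds INT_MAX here, which is
     * undefined behavior in C and in practice wraps to a negative value
     * on two's-complement hardware (the classic binary-search bug). */
    int a = INT_MAX - 1, b = INT_MAX;
    printf("naive midpoint: %d\n", (a + b) / 2);     /* garbage */
    printf("safer midpoint: %d\n", a + (b - a) / 2); /* INT_MAX - 1 */

    /* Rounding error: addition of reals is associative, addition of
     * doubles is not. A proof that assumes real arithmetic says these
     * two sums are equal; the hardware says 1.0 and 0.0. */
    double x = 1e16, y = -1e16, z = 1.0;
    printf("(x + y) + z = %.1f\n", (x + y) + z); /* 1.0 */
    printf("x + (y + z) = %.1f\n", x + (y + z)); /* 0.0 */

    return 0;
}

Neither program contradicts its "proof"; the proof simply talks about a
machine with unbounded integers and exact reals, which is not the machine
the code runs on.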