> [B]uffer overflows can always be avoided, because if there is ANY
> input whatsoever that can produce a buffer overflow, the proofs will
> fail and the problem will be identified.
Then either (a) there exist programs which never access out-of-bounds
but which the checker incorrectly flags as doing so, or (b) there exist
programs for which the checker never terminates (quite possibly both).
(This is simply the Halting Theorem rephrased.)

> In summary: If you can state what you mean by "secure" in terms of
> what must happen and what must not happen, then by using precise
> specifications and automatic proof, you can achieve complete security
> for all possible inputs - until the definition of "secure" needs to
> be expanded.

Or until you find a bug in your automated prover.  Or, worse, discover
that a vulnerability exists despite your proof, meaning that you either
missed a loophole in your spec or your prover has a bug, and you don't
have the slightest idea which.

Ultimately, this amounts to a VHLL, except that you're calling it a
"specification" rather than "code" - and that rather than "compiling"
the "code" with a program, you're using (fallible) humans, with a
program (the "prover") checking that the "compiler" output is correct.
And, as with any language, whoever writes this VHLL can write bugs.

At some point you _have_ to ground your assurance on "Smart people
looked at it and think it's OK".  You can shuffle that point around,
but it's always lurking somewhere.

/~\ The ASCII                           der Mouse
\ / Ribbon Campaign
 X  Against HTML               [EMAIL PROTECTED]
/ \ Email!           7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B