> On 17 August 2013 13:50, Jon Callas <j...@callas.org> wrote:
>> ...
>> I *cannot* provide an argument of security that can be verified on its
>> own. This is Godel's second incompleteness theorem. A set of statements S
>> cannot be proved consistent on its own. (Yes, that's a minor handwave.)
On Thu, Aug 22, 2013 at 1:32 AM, Ben Laurie <b...@links.org> wrote:
> That is totally not Godel's second incompleteness theorem. It is certainly
> possible to prove things in formal systems.
> This is not a "minor handwave", it is pure bullshit.


the software systems for which you can write a formal proof of
correctness (or have a "trusted" prover write one for you) are very
different from the systems in wide use today, especially those
published in app stores and running on common desktops.
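
(for a sense of scale, here is roughly what a prover handles
comfortably today: a toy lean 4 sketch of my own, with an invented
two-line function and a one-line arithmetic fact. nothing in it comes
from the thread; it only illustrates the "possible" part.)

    -- lean 4: a machine-checked proof of a trivial correctness property.
    -- illustration only; real verified artifacts (seL4, CompCert) took
    -- person-years of proof effort for codebases of modest size.
    def double (n : Nat) : Nat := n + n

    theorem double_correct (n : Nat) : double n = 2 * n := by
      unfold double   -- goal becomes: n + n = 2 * n
      omega           -- linear arithmetic over Nat closes it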

possible != practical, which is really the boundary where interesting
security decisions are made...


this is a very different subject, which always brings me back to
design flaws. an entire processor, operating system, and application
stack, each written in a security-friendly manner and checked end to
end by an automated prover, may (and likely will) still suffer design
flaws and misuse.
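
(a concrete toy in python: pretend `encrypt` below ships with a
machine-checked proof that it matches its stream-cipher spec. the
cipher, names, and keys are invented for illustration. the proof
constrains the primitive; nonce discipline is a design decision it
never sees, and reusing one breaks the system while every proven
property still holds.)

    import hashlib

    def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
        # toy hash-counter keystream, for illustration only (not a real cipher)
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def encrypt(key: bytes, nonce: bytes, pt: bytes) -> bytes:
        # pretend this carries a proof: ciphertext == pt XOR keystream(key, nonce).
        # the proof says nothing about how callers choose nonces.
        return bytes(a ^ b for a, b in zip(pt, keystream(key, nonce, len(pt))))

    key = b"k" * 32
    m1, m2 = b"attack at dawn!!", b"retreat at dusk!"
    c1 = encrypt(key, b"\x00" * 12, m1)
    c2 = encrypt(key, b"\x00" * 12, m2)  # design flaw: nonce reused
    # the primitive did exactly what was proven, yet c1 XOR c2 leaks m1 XOR m2:
    assert bytes(a ^ b for a, b in zip(c1, c2)) == bytes(x ^ y for x, y in zip(m1, m2))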

which is not to say these efforts are pointless, but rather that if
your threat model includes state-level actors with significant budget,
access, skill, and motive, then a perfect software system is but one of
your many opsec concerns.  i could go on, but why retread "Reflections
on Trusting Trust" further when it has been conveyed so well already
;)