On Aug 17, 2013, at 12:49 AM, Bryan Bishop <kanz...@gmail.com> wrote:

> On Sat, Aug 17, 2013 at 1:04 AM, Jon Callas <j...@callas.org> wrote:
> > It's very hard, even with controlled releases, to get an exact byte-for-byte 
> > recompile of an app. Some compilers make this impossible because they 
> > randomize branch prediction and other parts of code generation. Even when 
> > the compiler isn't making it literally impossible, without an exact copy of 
> > the exact tool chain, with the same linkers, libraries, and system, the code 
> > won't be byte-for-byte the same. Worst of all, smart development shops use 
> > the *oldest* possible tool chain, not the newest one, because tool sets are 
> > designed for forwards-compatibility (apps built with old tools run on the 
> > newest OS) rather than backwards-compatibility (apps built with new tools 
> > run on older OSes). Code reliability almost requires using tool chains that 
> > are trailing-edge.
> 
> Would providing (signed) build VM images solve the problem of distributing 
> your toolchain?

Maybe. The obvious counterexample is a compiler that doesn't deterministically 
generate code, but there's lots and lots of hair in there, including potential 
problems in distributing the tool chain itself: copyrighted tools, libraries, 
and so on.
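
To make the byte-for-byte test concrete: assuming you can rebuild the app 
yourself inside such a VM image, verifying a *deterministic* build is nothing 
more than hashing the vendor's artifact and your own and comparing. A minimal 
sketch in Python (file names hypothetical):

#!/usr/bin/env python3
"""Minimal sketch: are two build artifacts byte-for-byte identical?
File names below are hypothetical; the point is that reproducible-build
verification reduces to comparing digests of independent builds."""
import hashlib
import sys

def sha256(path):
    """Stream the file through SHA-256 so large binaries are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    vendor, rebuilt = sys.argv[1], sys.argv[2]
    a, b = sha256(vendor), sha256(rebuilt)
    print("vendor :", a)
    print("rebuilt:", b)
    sys.exit(0 if a == b else 1)

Run it as "python3 verify_build.py vendor.bin rebuilt.bin"; exit status 0 
means the builds match. Everything above about nondeterministic tool chains 
is the reason this trivial comparison so rarely succeeds in practice.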

But let's not rathole on that, and get to brass tacks.

I *cannot* provide an argument for security that can be verified on its own. 
This is Gödel's second incompleteness theorem: a sufficiently strong set of 
statements S cannot prove its own consistency. (Yes, that's a minor handwave.)

All is not lost, however. We can say, "Meh, good enough," and the problem is 
solved. Someone else can construct a *verifier*: some set of policies (I'm 
using the word "policy," but it could be a program) that checks the software. 
However, the verifier itself can only be verified by a further set of policies 
constructed to verify it, and so on up the chain. The only escape from the 
regress is to decide at some point, "Meh, good enough."
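
For the record, here is the standard statement of the theorem I'm waving at, 
in LaTeX since ASCII mangles the symbols. This is the textbook formulation, 
nothing specific to our case:

    S \text{ consistent, recursively axiomatizable, and interpreting
    enough arithmetic} \;\Longrightarrow\; S \nvdash \operatorname{Con}(S)

where Con(S) is S's own arithmetized claim "S is consistent." The verifier 
regress above has the same shape: correctness has to be certified from 
outside the system doing the claiming.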

I brought Ken Thompson into it because he actually constructed a compiler 
backdoor that would evade detection and described it in his Turing Award 
lecture, "Reflections on Trusting Trust." It's not *just* philosophy and 
theoretical computer science. Thompson flat-out says that at some point you 
have to trust the people who wrote the software, because if they want to hide 
things in the code, they can.
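
If you haven't read the lecture, here's a toy sketch of the trick in Python. 
Everything in it is hypothetical and drastically simplified (Thompson's real 
version was a C compiler and used a quine to carry its own source), but the 
two pattern matches are the whole idea:

"""Toy model of the trusting-trust attack. "Compiling" here just means
returning runnable Python source, so the whole game fits in one file.
All names and patterns are made up for illustration."""

# Attack 1: the backdoor planted in the login program.
LOGIN_BACKDOOR = ' or supplied == "backdoor"'

# Attack 2: the trojan carries its own source (Thompson used a
# quine-style trick; reading our own file is the lazy equivalent) so
# it can re-plant itself whenever the clean compiler is rebuilt.
TROJAN_SOURCE = open(__file__).read()

def clean_compile(source):
    """Stand-in for an honest compiler: output mirrors input."""
    return source

def evil_compile(source):
    if "stored == supplied" in source:
        # Compiling login: add a master password the source never shows.
        return clean_compile(source.replace(
            "stored == supplied", "stored == supplied" + LOGIN_BACKDOOR))
    if "def clean_compile" in source:
        # Compiling the compiler from pristine, audited source: emit
        # the trojan instead, so the attack survives the rebuild.
        return TROJAN_SOURCE
    return clean_compile(source)

if __name__ == "__main__":
    login = ("def check(stored, supplied):\n"
             "    return stored == supplied\n")
    print(evil_compile(login))            # output contains the backdoor
    pristine = "def clean_compile(source):\n    return source\n"
    assert evil_compile(pristine) == TROJAN_SOURCE  # the trojan persists

The punch line is attack 2: once the binary is infected, you can rebuild the 
compiler from fully audited, pristine source and the backdoor comes right 
back. No amount of source-level scrutiny finds it, which is exactly 
Thompson's point about having to trust people.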

I hope I don't sound like a broken record, but a smart attacker isn't going to 
attack there, anyway. A smart attacker doesn't break crypto, or suborn 
releases. They do traffic analysis and make custom malware. Really. Go look at 
what Snowden is telling us. That is precisely what all the bad guys are doing. 
Verification is important, but that's not where the attacks come from (ignoring 
the notable exceptions, of course).

One of my tasks is to get better source releases out there. However, I also 
have to prioritize it against other tasks, including actual software 
improvements. We're working on a release that will tie together some new 
anti-surveillance code along with a better source release. We're also testing 
the new source release process with some people outside our organization. It 
will get better; it *is* getting better.

        Jon
