On 18/12/08 12:09, Kyle Hamilton wrote:
Eddy's gone ahead and sent a signed PDF, according to a later message
in-thread.  I expect that it'll work without a hitch, though I would
like to hear of any anomalous behavior. :)

But, I'm struck again by a couple of questions.

Why does everything have to have an explicit 'threat model' before
cryptography can be applied?


Because otherwise we don't know what to protect against, and we end up reaching for the grab-bag of things that crypto is known to protect, rather than for what the user actually wants protected.

Classic case: the PKI view is that you want to protect the authenticity of the identity. But in some scenarios, that is a threat, not a protection. For example, consider self-help chat boards where people can sign up under a pseudonym like "PuddleBlob" and talk about difficult things. The net has created this wonderful ability for people with problems to talk about them, but they can only do so in conditions of safety, which to them means anonymity and/or untraceability. Think Alcoholics Anonymous, or divorced women.

So, PKI is precisely the wrong thing for them, because it identifies them and tries (badly) to ensure that they are traceable.

A deeper, more nuanced view is that once we identify a thing that needs protecting, we still have to compare the cost of the threat with the cost of the protection. MITM is a case in point: the protection against MITM in, say, secure browsing is too expensive, given that only in the last year or so (recent K case) have we seen any real evidence of attacks. Without a cost-based analysis, any sense of "protect against X" lacks foundation.

The other aspect to consider is that threat models and security models are notoriously fickle and complicated, so even if you do them, you could be wrong; often it is more economical to forget the security modelling and run with what works well for now. Fix up the bugs later.


In my view, cryptography is useful for
MUCH more than just "protecting against potential attack".

Er, what else is there?

(It's not
like we're trying to protect secrets with national security
implications.  It's not like we're trying to protect a financial
instrument.  It's not even like we're trying to keep an affair
secret.)


I don't know about that; sometimes we are, sometimes not. In my work, it's the second. I've seen the third, and alluded to it above. Protecting national security: no, that is pointless for us; let them do that.

Although, funny story: the guy who deals with the CIA wiki presented recently (I forget his name), and he said that one day an IT department programmer got annoyed with the standard browser, which was out of date, so he downloaded Firefox onto a local private server for programmers only. It quickly spread to other programmers ... and then out of the local environment. The security people raised hell because it was unauthorised, but by then it was too late: the Firefox virus was spreading throughout the CIA. Within a year, Firefox had taken over and forced the security people to declare defeat and make it the official browser. Regardless of any analysis they could do...

Sometimes users tell us stuff, but are we able to listen?


As I've said before, I view cryptography as a means of associating a
policy with data.

Well. You might want to consider a wider view like financial cryptography :)


The policy in this case would be: this is a
document version that someone working on behalf of Mozilla (currently
-- and with the tenacity and thoroughness she's exhibited, hopefully
for a LONG time -- Kathleen) prepared, it hasn't been corrupted, and
it's got a timestamp so that later revisions can be identified as
such.  Cryptography can give me a very good idea that these three
concepts can be relied upon.


Hopelessly unreliable, in my opinion. Crypto will tell you that someone with "Kathleen's key" made that PDF, but some time later we might discover that Kathleen now works for Microsoft and that nobody bothered to replace the key, because it still worked. Working practices beat halted practices; don't break something that isn't broken!
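
To make that gap concrete, here is a minimal sketch of what a verifier can actually establish. It assumes a plain RSA/SHA-256 signature over the raw PDF bytes and a PEM certificate, and the function name is invented for illustration; real PDF signatures (CMS/PKCS#7, RFC 3161 timestamps) are more involved than this.

# Sketch only, under the assumptions above. It can confirm that the bytes
# are intact, that someone holding the certified key signed them, and that
# the claimed signing time falls inside the certificate's validity window.
# It cannot say who holds the key today, or whom they work for.
from datetime import datetime
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_indication(cert_pem: bytes, pdf_bytes: bytes,
                      signature: bytes, claimed_time: datetime) -> bool:
    # claimed_time is taken as a naive UTC datetime, matching the
    # naive datetimes the certificate validity fields return.
    cert = x509.load_pem_x509_certificate(cert_pem)
    try:
        # Integrity, plus "someone holding this key signed these bytes".
        cert.public_key().verify(signature, pdf_bytes,
                                 padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        return False
    # The claimed signing time sits inside the certificate's validity window.
    if not (cert.not_valid_before <= claimed_time <= cert.not_valid_after):
        return False
    # That is all: whether the keyholder still acts for Mozilla, or whether
    # the key was handed on or revoked, is outside what this check can say.
    return True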


It doesn't have to be a "legal document".  It doesn't need a
contract-grade (i.e., Qualified Certificate in PKIX and EU parlance)
signature on it.  All that I need to know is that what I'm reading is
the actual working document with a means of determining if there's a
newer one, and a digisig countersigned by a timestamp authority is a
perfect means of accomplishing this.


Yeah, it's "indication grade." It's not reliable enough to write home about. There are many weaknesses, and scores of papers listing them.


Why does it have to be any more complex than this?  Why does there
have to be any more "meaning" assigned to the act of digitally signing
something?

There always has to be a meaning; otherwise, why would we do it? The problem is: which meaning?


(Why do we always treat the concept of digital signatures
as though we're signing away our firstborn?  What are we so afraid of?
That fear-among-the-experts is part of what makes cryptography so
inaccessible to the common user, and reduces confidence in the system
-- which leads to a lack of use, which leads to a dearth of innovation
in application.)


Oh, on that we are agreed. For this reason, I generally suggest that anyone using digsigs assume, as a default, that they mean nothing in policy or human terms. A signature carries a simple cryptographic or protocol-level meaning, which should not be interpreted as a higher-layer meaning without care. Add a higher-layer meaning if desired, but do it explicitly between the parties, not hidden in assumptions that can't later be tracked down.
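
One way to follow that advice in practice (a sketch only, not a standard; the field names, the MEANING text, and the helper are invented for the example) is to make the agreed meaning part of what gets signed, so it travels with the signature instead of living in somebody's head:

import json
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

MEANING = ("This signature indicates only that the signer believes this to be "
           "the current working draft; it carries no legal or contractual weight.")

def sign_with_stated_meaning(key: Ed25519PrivateKey, doc_sha256_hex: str) -> dict:
    statement = {
        "document_sha256": doc_sha256_hex,  # hash of the document being signed
        "meaning": MEANING,                 # the agreed higher-layer meaning, in the open
        "signed_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(statement, sort_keys=True).encode()
    return {"statement": statement, "signature": key.sign(payload).hex()}

# The verifier checks the signature over the same canonical payload and then
# reads the "meaning" field, rather than guessing what the signer intended.
key = Ed25519PrivateKey.generate()
signed = sign_with_stated_meaning(key, "0" * 64)  # placeholder hash for the example
key.public_key().verify(bytes.fromhex(signed["signature"]),
                        json.dumps(signed["statement"], sort_keys=True).encode())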



iang
