On Thu, Dec 18, 2008 at 7:29 AM, Ian G <i...@iang.org> wrote:
> On 18/12/08 12:09, Kyle Hamilton wrote:
>>
>> Eddy's gone ahead and sent a signed PDF, according to a later message
>> in-thread.  I expect that it'll work without a hitch, though I would
>> like to hear of any anomalous behavior. :)
>>
>> But, I'm struck again by a couple of questions.
>>
>> Why does everything have to have an explicit 'threat model' before
>> cryptography can be applied?
>
>
> Because otherwise we don't know what to protect against, and we end up
> reaching for the grab bag of things that are known to be protected by
> crypto, rather than what the user wants.
>
> Classic case:  PKI view is that you want to protect the authenticity of the
> identity.  But in some scenarios, this is a threat not a protection.  For
> example, consider self-help chat message boards where people can sign up
> using a nymous name like "PuddleBlob" and talk about difficult things.  The
> net has created this wonderful ability for people who have problems to talk
> about them, but they can only do this in conditions of safety, which to them
> means anonymity and/or untraceability.  Think Alcoholics Anonymous, or
> divorced women.

Self-help chat message boards are a rather odd concern, and they're
actually exactly where I want to try to put PKI.  The "problem", such
as it is, is this: I want to put PKI there.  I DON'T want to put real
names there.

Those nyms are valid identifiers, and thus "names" as far as they go,
within those environments -- and as long as you're in that
environment, you don't need PKI, since the board software has an
online link to its authenticator.  Once you drop out of that
environment, there are two routes you can go:
1) The OpenID route (separate the identity-consumer from the
authenticator, with an online verification system)
2) The PKI route (separate the identity-consumer from the
authenticator, without an online verification system)

One of my favorite examples is "the bank manager who writes slashfic
while not at work".  (slashfic is, essentially, fan fiction that
explores the writer's concepts of relationships between the
characters.  Some of it's quite explicit, some of it is quite tame,
but it's all 'copyright infringement' unless the rightsholder has
explicitly granted fans the right to create derivative works, which
they usually[*] don't because then they have a hard time actually
selling the "derivative work" right to someone who might want to pay
for it.)  Suppose that one of her superiors at corporate -- or worse,
a customer -- looks up her name on Google.

Linking the pen-name with legal/employment identity would be
deleterious, since courts have held that companies can reasonably
expect certain standards of behavior from their employees, even
without explicitly delineating those standards.  However, slashfic
writers often go to science fiction conventions, and when they do they
often want to be known by their pen-names.  The convention itself
needs to know the legal name, for its own protection -- but those
membership lists are generally not considered "freely accessible
information".

Now, suppose the bank manager/author takes a vacation, and goes to a
convention.  Then, suppose that (for whatever reason) she needs to
prove that she's the author of the stories that she wrote, even if
they were based in a world that she didn't have rights to use.  (Don't
knock this; it's occurred at least once that I'm aware of -- not with
a bank manager, but with a songwriter/lyricist.)  PKI could be
extended to cover this case, if we got away from the
overly-restrictive requirement that people seem to put into it that
require legal names in their identity certificates.

It could also be used to allow for authentication of instant message
sessions.  (I'm currently looking at the OTR protocol, from
http://www.cypherpunks.ca/otr/, and trying to create a
certificate-authentication system that integrates at the point where
they currently use the Socialist Millionaire Protocol -- i.e., after
the session's been generated, and without using the keys in the
certificate to create any kind of signed/nonrepudiable record.)
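The "authenticate without creating a nonrepudiable record" property can
be illustrated with a MAC instead of a signature: both endpoints can
compute the tag from the shared session key, so the tag convinces the
peer but proves nothing to a third party.  (A toy sketch of the idea
only -- OTR's actual wire protocol, and any certificate integration I
might build, are considerably more involved; all names here are made up.)

```python
import hashlib, hmac, os

def session_auth_tag(session_key: bytes, transcript: bytes) -> bytes:
    """Authenticate a session transcript with an HMAC keyed by the
    shared session key.  Because either party could have produced
    this tag, it convinces the peer but is deniable to outsiders --
    unlike a signature made with a certificate's private key."""
    return hmac.new(session_key, transcript, hashlib.sha256).digest()

# Both ends derive the same session key (e.g. from a DH exchange),
# so both can compute -- and therefore could have forged -- the tag.
session_key = os.urandom(32)
transcript = b"alice->bob: hello | bob->alice: hi"
tag_alice = session_auth_tag(session_key, transcript)
tag_bob = session_auth_tag(session_key, transcript)
assert hmac.compare_digest(tag_alice, tag_bob)
```

The certificate would only be used earlier, to authenticate the key
exchange itself; nothing signed survives the session.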

*this is slowly changing with such movements as the Creative Commons,
which can do BY-NC-SA licensing, but entertainment industry lawyers
haven't figured out what to do with that yet.

> So, PKI is the precise wrong thing for them, because it identifies them and
> tries (badly) to ensure that they are traceable.

It's only as "precisely wrong" as the identities that are embedded
into the certificate.

> A deeper more nuanced view would be that once we identify a thing that needs
> protecting, then we still have to compare the costs of the threat with the
> costs of the protection.  For example, MITM is in this camp; the protection
> for MITM in say secure browsing is too expensive, given that only in the
> last year or so (recent K case) have we seen any real evidence of attacks.
>  Without a costs based analysis, any sense of "protect against X" lacks
> foundation.

The case above is a case where an individual, not a corporation, has a
lot to lose by not doing it.  Loss of income, loss of job, loss of
reference, loss of career... corporations don't have to worry quite so
much, because they've GOT the controls in place.  Those controls are,
ironically, what the individual needs most to protect against.

Yes, I realize that I'm describing a pathological case.  However, it's
a case where the existing tools (software and hardware) could work, if
the policy encoded and embedded in the common usage of those tools
were to be relaxed.

> The other aspect that must be considered is that threat models and security
> models are notoriously fickle and complicated, so even if you do them, you
> could be wrong;  often it is more economic to forget the security modelling,
> and run with what works well for now.  Fix up the bugs later.

You're viewing it from the POV of a corporation again.

>> In my view, cryptography is useful for
>> MUCH more than just "protecting against potential attack".
>
> Er, what else is there?

Let's see.  Microsoft uses cryptography (at least hashing) for its
Single Instance Storage mechanism (it consolidates all duplicate
copies of a file into a single copy, and creates hard links to that
file -- with copy-on-write semantics).

Hashing is also used by rsync and in all sorts of other non-attack scenarios.

Other aspects of cryptography?  Probably not so much.  However, in the
PDF signature case the attack-protection is primarily a side-effect of
the timestamp, which is the most useful aspect for thwarting the
unwitting "attack" of human error.
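The timestamp-as-useful-artifact point can be sketched like this (a toy
illustration of binding a document's hash to a time; a real PDF
signature uses an X.509 signature and an RFC 3161 timestamp authority
rather than this hypothetical keyed-MAC stand-in):

```python
import hashlib, hmac, time

def stamp(document: bytes, key: bytes) -> dict:
    """Bind a document's hash to a timestamp with a keyed MAC.
    (A real timestamp authority would sign this instead.)"""
    ts = int(time.time())
    digest = hashlib.sha256(document).hexdigest()
    tag = hmac.new(key, f"{digest}|{ts}".encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "timestamp": ts, "tag": tag}

def verify(document: bytes, token: dict, key: bytes) -> bool:
    """Check that the document matches the stamped hash and the
    timestamp hasn't been tampered with."""
    digest = hashlib.sha256(document).hexdigest()
    expected = hmac.new(key, f"{digest}|{token['timestamp']}".encode(),
                        hashlib.sha256).hexdigest()
    return digest == token["digest"] and hmac.compare_digest(expected, token["tag"])
```

A later revision of the document hashes differently, so it can never
pass for the stamped one -- which is exactly the "which version was
this?" human-error case.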

>> (It's not
>> like we're trying to protect secrets with national security
>> implications.  It's not like we're trying to protect a financial
>> instrument.  It's not even like we're trying to keep an affair
>> secret.)
>
>
> I don't know about that;  sometimes we are, sometimes not.  In my work, the
> second.  I've seen the third, and alluded to it above.  Protecting national
> security:  no, that is pointless for us, let them do that.

I was referring specifically to the PDF signature issue as "not trying
to protect a financial instrument", actually, but in general you're
right.

> Although, funny story:  the guy who deals with the CIA wiki presented
> recently (I forget his name) and he said that one day, an IT department
> programmer got annoyed at the standard browser which was out of date, so he
> downloaded Firefox onto a local private server for programmers only.  It
> quickly spread to other programmers ... and then out of the local
> environment.  The security people raised hell, because it was unauthorised,
> but by then it was too late, the Firefox virus was spreading throughout the
> CIA.  Within one year, Firefox had taken over and forced the security people
> to declare defeat and make it the official browser.  Regardless of any
> analysis that they could do...
>
> Sometimes users tell us stuff, but are we able to listen?

If the response that I've gotten on this list over the past three
years is any indication, evidently not.

These things that I'm bringing up (that the PKI would be much more
useful to everyone if the stringent requirement that only the Legal
Name be used in the Subject be dropped), I've been bringing up for a
long while.  This is the first time that a real discussion about the
concept has come up, and the first time that anyone other than me has
publicly recognized even a single case where a real name in a
certificate is counterproductive.

>> As I've said before, I view cryptography as a means of associating a
>> policy with data.
>
> Well.  You might want to consider a wider view like financial cryptography
> :)

Even that's a means of associating a policy with data: The policy is
"this is financial information and must be treated as such, and if you
have the key to decrypt it, you understand this and agree."

Two of my friends work for banks in the EU.  I met them in a
pseudonymous environment, and I didn't learn their legal names (or
even a tiny bit of information about where they work) for more than
six years... and I also met them in-person at science fiction
conventions several times in that time.  In fact, the only reason why
I learned one of their names is because he signed up for GoodReads,
which doesn't allow pseudonyms.  (Hence, the "confusion" aspect: I got
a mail from GoodReads, saying that some name I'd never heard of had
invited me to keep up with what he was reading.  I mentioned this in
the chatroom where we often hang out in the evenings, and one of them
PM'ed me with a "er, sorry... that's me".)

>> The policy in this case would be: this is a
>> document version that someone working on behalf of Mozilla (currently
>> -- and with the tenacity and thoroughness she's exhibited, hopefully
>> for a LONG time -- Kathleen) prepared, it hasn't been corrupted, and
>> it's got a timestamp so that later revisions can be identified as
>> such.  Cryptography can give me a very good idea that these three
>> concepts can be relied upon.
>
>
> Hopelessly unreliable, in my opinion.  Crypto will tell you that someone
> with "Kathleen's key" made that PDF, but some time later we might discover
> that Kathleen now works for Microsoft.  Nobody bothered to replace the key,
> because it worked.  Working practices are better than halted practices,
> don't break something that isn't broken!

Just because someone might work at Microsoft doesn't mean that they
can't contribute to Mozilla, as well.  Or at least that's my
impression from my acquaintances who worked there.  (Granted,
Kathleen's a full-time Mozilla employee, but I assume that if she left
that Frank would inform us all.)

By the way, Frank?  Is there a pool for a Secretary's Day gift for her? :)

>> Why does it have to be any more complex than this?  Why does there
>> have to be any more "meaning" assigned to the act of digitally signing
>> something?
>
> There always has to be a meaning.  Otherwise we wouldn't do it?  The problem
> is, which meaning?

Usually, everyone who discusses a PKI expects that the "meaning"
always includes "financial" and "legal commitment".

If you build something for corporations and business, consumers won't
beat a path to your door -- but if you build something consumers can
use for fun, and your door happens to be right there, they'll come in.

As a side effect, because non-cryptography-savvy employees never use
it in their recreational lives, every business or organization that
wants to use it needs an increased training budget.  Compare computers
themselves: there's already a generalized requirement that
knowledge-work/administrative employees arrive knowing how to use one,
precisely because the training cost is too high for employers to
absorb and the value of the knowledge gained is extremely portable.

>> (Why do we always treat the concept of digital signatures
>> as though we're signing away our firstborn?  What are we so afraid of?
>>  That fear-among-the-experts is part of what makes cryptography so
>> inaccessible to the common user, and reduces confidence in the system
>> -- which leads to a lack of use, which leads to a dearth of innovation
>> in application.)
>
> Oh, in that we are agreed.  For this reason, I generally suggest that anyone
> using digsigs assumes they mean nothing, in policy or human terms, as a
> default.  It is a simple cryptographic or protocol meaning, which is not to
> be interpreted as a higher layer meaning without care. Add a higher layer
> meaning if desired, but do it between the persons, not hidden in assumptions
> that can't later be tracked down.

Part of this is presentation in the tools.  The presentation makes it
look Very Official and Very Proper and Very Intimidating.

Part of this is poor presentation of failure modes in the tools (I'd
like to see a warning that a given email is in HTML that's just as
overwhelmingly workflow-damaging as the "this message has been
modified in transit, click here to read it with that understanding"
warning).

Part of this is an awareness requirement.  The job of security is to
be as unintrusive as possible while still providing the tools and
information necessary to allow humans to make decisions related to
policy.  (Computers do not create policy, computers do not create
exceptions to policy, computers simply implement policy that's
programmed into them.)

-Kyle H
_______________________________________________
dev-tech-crypto mailing list
dev-tech-crypto@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-crypto