Hi Martin,

I think we may all be looking at this elephant from several different angles.  
We each end up writing essays on our own perspective to explain the broad 
generalizations we are making.

>> >> One of the better definitions I've heard.  I would question whether
>> >(c) is even in scope; seems like a relying party function.
>> >
>> >We should run screaming from (c). Not only do there be dragons there,
>> >but there be dragons even in saying what "trustworthiness" means.
>> >Surely this is not a real-world reputation system.
>> >
>> >    Jon
>>
>> Yes! ... but, we can define "who we trust for what" ... who being a
>key, what being some Domain of Discourse with appropriate constraints.
>>
>> Trustworthiness as a probability or metric yields contradictory and
>nondeterministic evaluations.
>>
>> Paul
>
>Paul, is this necessarily bad?

I read "trustworthiness" as a potential metric - I'd rather see a formal 
framework closer to first-order logic than to fuzzy logic.

That said, risk probabilities might be useful, but these seem better as an 
overlay than as a foundational design element.

More of my point above was that we need to look at the bounds and constraints 
of the "trust" with a better model than we have in today's PKI.  For example, 
not all top-level keys should be able to make assertions about all possible 
domains - we need constraints.  We need a notion of domains of discourse to 
separate the different uses of cryptographic trust mechanisms.
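As a minimal sketch of what I mean (the names and structure here are purely
illustrative, not a proposal for a concrete format): each top-level key is
authorized only for an explicit set of domains of discourse, and any assertion
outside those domains is simply rejected.

```python
from dataclasses import dataclass, field

# Hypothetical domains of discourse; a real system would define these formally.
DOMAINS = {"dns_names", "code_signing", "email"}

@dataclass
class TrustAnchor:
    key_id: str
    # The anchor may only make assertions inside these domains.
    domains: set = field(default_factory=set)

def may_assert(anchor: TrustAnchor, domain: str) -> bool:
    """A top-level key may only assert within its authorized domains."""
    return domain in anchor.domains

root = TrustAnchor("root-1", {"dns_names"})
print(may_assert(root, "dns_names"))     # a DNS-scoped root can speak about names
print(may_assert(root, "code_signing"))  # ...but not about code signing
```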

>
>All three statements above are sort of correct IMHO. A slight
>modification:
>  "As security *protocol* engineers, we have to [...] and (c), give the
>users the capacity and ability to enforce their own trust-policies, for
>their own purposes, according to their own desires."
>
>Let's go deeper.
>
>I (still, thankfully!) have the ability to remove root CA's from my
>/etc/ssl/cert when I see fit to do so. And I do so.  But it's extremely
>cumbersome and obviously "does not scale".  Let's do better?
>Obviously, not all users on the planet can by decree of a small group be
>mandated to verify trust in the same manner, at least not very
>successfully. The choice of what you want to trust has always been
>there.
>
>From this I can't say that I either fully - or not at all - trust any CA
>in my root-cert store on an individual basis -- some are even there, not
>because I trust them indefinitely, but because of a combination of
> a) my browser becomes full of extra clicks without them,
> b) they do provide a varying-but-greater-than-zero % expectation that a
>TLS-connection that validates to them can be reasonably expected not to
>also be MITM:ed somewhere.

Yes ... PKI with lots of roots in browsers is broken.  

Individual points of trust (keys) should be able to be added and deleted by the 
owner of the system.  Each key should have specific limitations on its domain 
of discourse, with constraints possible within each domain (e.g. code signing 
versus DNS).
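A toy illustration of that combination - an owner-editable trust store where
every entry carries its own domain constraints (class and method names are my
own invention, just to make the idea concrete):

```python
# Illustrative only: an owner-editable trust store where each trust point
# carries explicit domain constraints (e.g. code signing vs. DNS).
class TrustStore:
    def __init__(self):
        self._keys = {}  # key_id -> set of permitted domains

    def add(self, key_id, domains):
        """The system owner adds a trust point, scoped to the given domains."""
        self._keys[key_id] = set(domains)

    def remove(self, key_id):
        """...and can delete it again at will."""
        self._keys.pop(key_id, None)

    def permits(self, key_id, domain):
        """Is this key trusted for this domain of discourse?"""
        return domain in self._keys.get(key_id, set())

store = TrustStore()
store.add("ca-42", {"dns"})
assert store.permits("ca-42", "dns")
assert not store.permits("ca-42", "code_signing")
store.remove("ca-42")
assert not store.permits("ca-42", "dns")
```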

>
>This doesn't mean *I* have 100% trust in any of them.  These are
>different things.
>
>Adding to that: having multiple inputs to your trust policy generation
>engine, as has been discussed here, can't possibly hurt the objective
>here?  Similarly, having an output from this trust resolution engine
>that motivates its calculation (according to your policy), can't hurt
>either?
>
>  configuration bar = list of various input sources;
>  trust_policy_compile(policy foo, configuration bar) ->
>trust_resolve_data[policy=foo]
>
>(Policy and configuration may actually just be one and the same.)
>
>Obviously, a computer program wouldn't work very well if a 67% trust
>implies that it should proceed with a computation in 67% of the
>instances:
>  The validation result through my trust verification engine using
>policy X ought to be that I either can (based on my policy!) proceed
>with the computation, or, that I have to abort it because of a lack of
>trust.
>
>  validate_with_policy(trust_resolve_data data, policy foo, certificate
>bar) -> (valid|invalid, {details})
>
>The policy can define things such as, a >X% fail-rate from
>convergence/observatory (using convergence/observatory-configuration
>foobar...) -> invalid cert verification, and list of root-CA's, etc.
>Note that this does not at all exclude putting multiple certificates in
>the handshake! That seems smart, for those who can afford the extra
>bytes transferred.

This is where I don't see how users will fully understand a % or analog metric. 
I can see results of: Yes, No, can't tell/resolve, etc.

We need to step back a little to get a wider view.  A "trust" metric 
is not the end result.  We are building systems to answer policy questions.
Like an oracle, the machinery we define needs to give an answer to the
questions posed.  Questions might be of the form: 
"Does *this_key have DNS address foo.com?"
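A sketch of that oracle shape, with a small enumerated result set instead of a
percentage (the assertion database and function names are hypothetical):

```python
from enum import Enum

class Answer(Enum):
    YES = "yes"
    NO = "no"
    CANNOT_RESOLVE = "cannot_resolve"

# Hypothetical assertion database: (key_id, attribute) -> asserted value.
ASSERTIONS = {("key-A", "dns_name"): "foo.com"}

def has_dns_name(key_id: str, name: str) -> Answer:
    """Oracle-style query: 'Does *this_key have DNS address foo.com?'"""
    value = ASSERTIONS.get((key_id, "dns_name"))
    if value is None:
        return Answer.CANNOT_RESOLVE  # no assertion either way
    return Answer.YES if value == name else Answer.NO

print(has_dns_name("key-A", "foo.com"))   # Answer.YES
print(has_dns_name("key-A", "bar.com"))   # Answer.NO
print(has_dns_name("key-B", "foo.com"))   # Answer.CANNOT_RESOLVE
```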


>
>So, perhaps having browser vendors and operating systems declare who The
>list of trustworthy entities according to their Process is, is not the
>best solution for all purposes?  Having defaults is fine, I guess, but
>for all I care this entire apparatus could (successfully, overall, I
>bet) be externalized to institutions dealing more solely with trust, and
>individual users would then get the option to choose between these. The
>feedback-loop this could provide is not without merit.
>  For example, there appears to me to be a natural place for someone to
>define these foo's, that users can use. And also [inputlist]
>configurations.

Feedback is a different matter ... metrics might be possible for more 
subjective observations about the behavior of a key's owner.  I think we have 
different underlying models.  Determination of the validity of the assignment 
of attributes (like a name or capability) has a small enumerated set of results.

Aggregation of other attribute types (risk, rating, perceived value, 
reputation, etc.) could be such a metric.  It might be possible to have a policy 
that used these types of attributes (e.g. eat at a restaurant only if its rating 
is greater than x).  Use or non-use of a name (per most of our discussed examples 
relating to TLS and DNS names) does not seem like a good place to have a metric 
(e.g. there's an 88% chance that foo.com is the authorized DNS name for *this_key).
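The contrast can be made concrete (everything below is invented for
illustration): a threshold policy fits an aggregated metric, while name
assignment stays a plain yes/no question.

```python
# Subjective, aggregated attributes (ratings, reputation) can feed a
# threshold policy...
def eat_at(restaurant: dict, min_rating: float) -> bool:
    """Policy over a metric: eat there only if the rating exceeds x."""
    return restaurant.get("rating", 0.0) > min_rating

# ...but name assignment is binary: there is no useful "88% chance"
# that foo.com belongs to this key.
def name_assigned(key_attrs: dict, name: str) -> bool:
    return key_attrs.get("dns_name") == name

assert eat_at({"rating": 4.5}, min_rating=4.0)
assert not eat_at({"rating": 3.2}, min_rating=4.0)
assert name_assigned({"dns_name": "foo.com"}, "foo.com")
assert not name_assigned({"dns_name": "foo.com"}, "bar.com")
```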

I may be looking at the back end of the elephant, but terms like "pinning" and 
such seem wrong.  With a "key centric" view, the DNS address and other 
information are attributes that can be assigned to a key, versus a name-centric 
perspective that has multiple keys per name and may need pinning.  
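One way to see the difference in data-structure terms (both layouts below are
contrived examples, not real formats):

```python
# Key-centric: attributes (including the DNS name) hang off the key.
key_centric = {
    "key-A": {"dns_name": "foo.com", "can_sign_code": False},
}

# Name-centric: one name maps to many keys, which is what invites "pinning".
name_centric = {
    "foo.com": ["key-A", "key-B", "key-C"],  # which one do you pin?
}

# Key-centric lookup: the name is just an attribute assigned to the key.
assert key_centric["key-A"]["dns_name"] == "foo.com"
# Name-centric lookup yields a set of keys, forcing an extra pinning decision.
assert len(name_centric["foo.com"]) > 1
```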

Paul

>
>Offline, people trust different things.  Would there be a
>protocol/standard/method to model this in a more scalable way online, so
>much better, I think.
>
>Best,
>Martin


_______________________________________________
therightkey mailing list
therightkey@ietf.org
https://www.ietf.org/mailman/listinfo/therightkey