Hi Hannes,

Some random thoughts about your message below:

On 08/12/2011 14:18, Hannes Tschofenig wrote:

Hi all,
I read through this rather long mail thread again to see whether we are 
reaching any conclusion in this discussion.
It turns out that there are actually two discussions that relate to 
each other, namely TLS version support and the token type.

Let me go back in time a little bit when I was still chairing another working 
group (years ago), namely the KEYPROV working group. There we ran into a 
similar issue, which looked fairly simple at the beginning. We worked on 
Portable Symmetric Key Container (PSKC), later published as RFC 6030. We were 
at the stage where we thought we had to decide on a mandatory-to-implement 
cryptographic algorithm and, similar to the OAuth case, PSKC is one building 
block in a larger protocol suite. As you can imagine, everyone had their own 
deployment environment in mind and did not like the suggestions others made 
about what should be mandatory to implement.

Russ (now IETF chair and at that time security area director) told the group 
not to worry - we don't need to pick an algorithm. He suggested just providing 
a recommendation of what is best in a specific deployment environment (at the 
time of writing). In fact, he proposed publishing a separate document instead 
of discussing it in that document.

I was surprised because I couldn't see how one would accomplish 
interoperability. Russ told me that this is in practice not a problem because 
implementations often implement a range of cryptographic algorithms anyway. 
Then, as part of the algorithm negotiation procedure (or some discovery) they 
will figure out what both end points support. He further argued that algorithm 
preferences will change (as algorithms get old) and we don't want to update our 
specifications all the time. (This was a bit motivated by the MD5 clean-up that 
happened at that time.) [Please forgive me if I do not recall the exact words 
he said many years ago.]

I believe we are having a similar discussion here as well, both with the token 
type and with the TLS version. We look at individual specifications 
because we are used to adding security considerations sections to each and every 
document. Unfortunately, the most useful statements about security (for these 
multi-party protocols where the functionality is spread over multiple 
documents) can really only be made at a higher level. Our approach for 
describing security threats and for recommending countermeasures isn't suitable 
for all the documents we produce.

Let me list a few desirable results of our standardization work:

1) Everyone wants interoperability. We can do interop testing of building 
blocks to see whether a client and a server are able to interact. For example, 
we could write a few test cases to see how two independent implementations of 
the bearer token specification work with each other (a sketch follows below). 
That approach works for some of our building blocks. I do, however, believe 
that we are more interested in an interoperable system consisting of several 
components than in interop between random components. Even if we do not like 
it, OAuth is an application-level protocol that requires a number of things to 
be in place to make sense.
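
To make the interop-testing idea a bit more concrete, here is a rough sketch of 
what such a test case could look like (Python, using the third-party "requests" 
library; the resource URL and the token value are placeholders for whatever the 
deployment under test provides):

    import requests  # third-party HTTP library, used here for brevity

    RESOURCE = "https://server.example.com/resource"   # placeholder URL
    TOKEN = "mF_9.B5f-4.1JqM"                           # placeholder access token

    def test_bearer_token_accepted():
        # The bearer scheme carries the access token in the Authorization header.
        resp = requests.get(RESOURCE,
                            headers={"Authorization": "Bearer " + TOKEN})
        assert resp.status_code == 200

    def test_missing_token_rejected():
        # Without a token, the resource server should respond with 401 and a
        # WWW-Authenticate: Bearer challenge.
        resp = requests.get(RESOURCE)
        assert resp.status_code == 401
        assert resp.headers["WWW-Authenticate"].startswith("Bearer")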

2) We want libraries to be available that allow implementers to quickly 
"OAuth-enable" their Web applications; by "quickly" I mean that an 
application developer shouldn't have to write everything from scratch. Most of the 
development time will be spent on aspects that are not subject to standardization in the 
working group (such as the user interface and the actual application semantics -- the data 
sharing that happens once the authorization step is completed). These libraries are 
likely to support various extensions, and getting two different implementations to 
interwork will IMHO in practice not be a problem. However, for a real deployment it seems 
that the direction people are currently heading is more along the lines of trust frameworks, 
where much more than just technical interoperability is needed for a working system. See 
the discussions around NSTIC for that matter.

3) We want the ability for algorithm negotiation/discovery, at least up to a 
certain degree. For example, it would be nice if, when a client talks to a server 
and they both implement TLS 1.2, they actually use it. The requirement for 
crypto-agility fits in here as well.
Algorithm negotiation/discovery is always a good thing.

TLS already has this capability built in, so doing TLS version negotiation at the application layer would be wrong.
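
To illustrate the point with a sketch (the host name is a placeholder and the 
exact calls depend on the TLS library in use), the application simply hands the 
socket to the TLS stack, and the handshake settles on the highest version both 
sides support:

    import socket
    import ssl

    HOST = "server.example.com"   # placeholder host

    # The TLS stack offers the protocol versions it supports in the ClientHello;
    # the handshake then picks the highest version the server also supports --
    # no application-layer negotiation involved.
    ctx = ssl.create_default_context()

    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print(tls.version())   # e.g. "TLSv1.2" if both ends implement it
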
4) We want to separate the protocol specification from the cryptographic 
algorithms and other faster changing components. We don't want to update our 
protocol specification just because an algorithm becomes obsolete or the 
protocol suddenly gets used in a different environment where other constraints 
may be prevalent.
Separating the requirements on crypto from the rest of the protocol is generally a good thing.
5) The security analysis and the solution approaches will vary based on the 
deployment environment. During the Taipei OAuth WG meeting I tried to explain 
what I mean by this statement with my reference to NIST SP 800-63. For some 
reason I failed to get the story across, so I will try again here.

The authors of NIST SP 800-63 (one of whom is Tim Polk, former IETF security AD) noticed 
that identity management protocols will be used for a variety of different purposes, each 
with different security properties and varying privacy requirements. For this purpose, 
NIST introduced the well-known "Level of Assurance" (LoA) concept. 
Different levels put different requirements on different parts of the protocol suite. 
There is no expectation that bearer assertions will be issued by an authorization server 
for usage with a client at LoA level 4. A client implementation for the health care 
environment may likewise not accept mechanisms that are suitable only for LoA 1.
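
As a rough illustration only (the table below is invented for this example; 
neither SP 800-63 nor any OAuth document defines such a mapping), a deployment 
profile could express these requirements as a simple policy consulted by the 
client or the resource server:

    # Hypothetical mapping of assurance level to acceptable token mechanisms.
    # Names and levels are illustrative, not taken from any specification.
    ACCEPTED_MECHANISMS = {
        1: {"bearer"},
        2: {"bearer", "mac"},
        3: {"mac", "holder-of-key"},   # plain bearer no longer acceptable
        4: {"holder-of-key"},          # proof-of-possession only
    }

    def mechanism_allowed(loa, mechanism):
        """True if the given token mechanism is acceptable at this LoA."""
        return mechanism in ACCEPTED_MECHANISMS.get(loa, set())

    # A health-care deployment pinned at LoA 3 would then refuse a bearer token:
    assert not mechanism_allowed(3, "bearer")
    assert mechanism_allowed(3, "holder-of-key")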

While it may be fine for certain environments not to care about the installed 
code size, there are certainly cases where the size of the code matters. I am not only 
thinking about the Internet of Things space but also about the vulnerabilities 
that are introduced by unnecessary code.
If the WG is making a claim that OAuth is always going to be a part of bigger environments (e.g. healthcare, military, etc), each with its own requirements on security mechanisms, then I think this needs to be captured in one of the OAuth documents. This is your escape clause from the de-facto requirement to specify mandatory-to-implement mechanisms.
While I understand that it would be great if anything interworked with anything 
else out of the box, I don't see how to get there.

Hence, I suggest that we

a) skip specifying a mandatory-to-implement token-type, TLS version, etc. in 
the individual specifications,
b) complete re-chartering and get some of the other needed building blocks done 
that get us closer to a more complete "system",
c) develop OAuth profiles and security recommendations for different security 
levels (in the style of what SP 800-63 outlines),
d) capture this discussion on mandatory-to-implement security mechanisms in a 
draft and socialize it with the rest of the IETF security community,
If I were your AD, I would have asked for some demonstrated effort on d) and possibly c) [e.g. some drafts written] before allowing a) and b) to go forward.
e) have a broader discussion about what we envision the Web identity ecosystem 
to look like. http://tools.ietf.org/html/draft-tschofenig-secure-the-web-00 
tries to make a first step, but it is still at an early stage.

Ciao
Hannes

_______________________________________________
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth
