Hi Barry.  Thanks for your response.  I believe we share the same goals here 
(the "what").  Where I think we need to focus the discussion then is on the 
mechanisms to achieve that (the "how").  Let me fill in the details, as I see 
them, by responding to a few things you wrote:



If you do this by profiling, we need to get to a point where two things are 
true: (1) the profile chain has ended, and what's left is well defined, and (2) 
I have to be able to, *from the specs and the protocol alone*, determine what 
profile will be used.  I don't see how this protocol gives me any way to 
determine what to send or what to expect to receive.  And if an out-of-band 
understanding is required for that, that doesn't interoperate.



I completely agree with you that the profile chain has to end, with what's left 
being well-defined and that from the protocol specs alone, one can determine 
how to interoperate.



Now, the way we usually handle the need for "different kinds of things" is to 
have different fields for each, or to have one field tagged so that it's 
self-defining (as URIs have a scheme that says what to do with them).  If the 
Audience field might look like this in one application:

   urn:ietf:params:banana

...and like this in another

   abra:cadabra

...where the first is understood to be a URI and the second is something 
else, then please explain how you and I can each write applications to that.


You can write applications to that by having the profile chain end, and by 
having the contents of the Audience field completely specified somewhere in 
the profile chain in use.  Also, I'll observe that we are using the "tagged 
field" approach that you mention in the assertions specs, using the defined 
tag values urn:ietf:params:oauth:grant-type:saml2-bearer, 
urn:ietf:params:oauth:client-assertion-type:saml2-bearer, 
urn:ietf:params:oauth:grant-type:jwt-bearer, and 
urn:ietf:params:oauth:client-assertion-type:jwt-bearer to declare both the 
token type and how that token type is used.  (The OpenID Connect profile also 
uses a "tagged field": an OAuth scope value of "openid" that dynamically 
declares to the OAuth implementation that the OpenID Connect profile is being 
used.  Other profiles may similarly indicate their usage through different 
scope values.)
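
To make the "tagged field" concrete, here's a minimal sketch of how a client 
might build a token-endpoint request body carrying one of these tags.  The 
function name is mine, and the assertion value is a placeholder standing in 
for a real compact-serialized, signed JWT:

```python
from urllib.parse import urlencode

# Sketch of an access token request body for the JWT-bearer grant.
# The grant_type URN is the "tag" that tells the server which
# assertion profile the client is using.
def jwt_bearer_request_body(assertion, scope=None):
    params = {
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": assertion,
    }
    if scope is not None:
        params["scope"] = scope
    return urlencode(params)

# "header.payload.signature" is a placeholder, not a real signed JWT.
body = jwt_bearer_request_body("header.payload.signature", scope="openid")
```

The same shape applies to the SAML2-bearer grant by swapping in its URN.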



I am proposing that there must be a way for someone writing an application to 
know what to use in these fields that will work with your application (or will 
work with a server in the same way as your application does, or whatever) 
*without* having to go to the documentation of your application to figure it 
out.



I agree with what I think you mean, but possibly not with how you're saying 
it.  Using the TCP analogy again: to understand the contents of the TCP 
stream for port 25, one does, in fact, have to go to the documentation of the 
application communicating on port 25.  In this case, that documentation is 
RFC 821 and its successors.  SMTP is a profile of TCP that further specifies 
the contents of the data being exchanged.  An analogous situation exists when 
using OAuth Assertions.



I'll also observe that the working group is working on a specification that 
enables an OAuth client to dynamically register itself with the Authorization 
Server (draft-ietf-oauth-dyn-reg), and that registration declares information 
about which profile is being used, as John Bradley pointed out in his 
response.  That's a key piece of the whole solution to enable interoperable 
implementations.
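
As a rough illustration of that declaration, a registration request might 
name the assertion profile up front.  This is only a sketch: the metadata 
field names follow draft-ietf-oauth-dyn-reg as of this writing and may change 
between draft revisions, and the client_name value is invented:

```python
import json

# A client registering itself can declare, via grant_types, which
# assertion profile it will use -- so the server knows what to expect
# before any token request arrives.
registration = {
    "client_name": "Example JWT-bearer client",  # illustrative value
    "grant_types": ["urn:ietf:params:oauth:grant-type:jwt-bearer"],
}
registration_json = json.dumps(registration, indent=2)
```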



So using your Ukrainian dolls analogy, yes, the OAuth Assertions spec and the 
OAuth SAML2 Profile and OAuth JWT Profile specs are dolls inside other dolls - 
not the outer doll.  That's by design, and not a spec defect, at least as I see 
it.  We already do have mechanisms for dynamically declaring what profile is 
being used, and we are using them.



I agree with Stephen that we should let this conversation run for a while to 
make sure everyone comes to a common understanding, but ultimately, I hope 
that you'll withdraw your DISCUSS because, in fact, interoperable 
implementations can be written by reading the specs in use alone.



                                                            Best wishes,

                                                            -- Mike



-----Original Message-----
From: barryle...@gmail.com [mailto:barryle...@gmail.com] On Behalf Of Barry 
Leiba
Sent: Sunday, February 17, 2013 9:38 PM
To: Mike Jones
Cc: Stephen Farrell; oauth@ietf.org; oauth-cha...@tools.ietf.org
Subject: Re: [OAUTH-WG] oauth assertions plan



OK, I have some time to respond to this on a real computer.



Let's look at the general mechanism that oauth provides, using one use case:

A client asks an authorization server for authorization to do something.

The authorization server responds with an authorization token, which the client 
is required to understand.



We have talked about three kinds of tokens: bearer tokens, MAC tokens, and 
assertion tokens.

How does a client know what kind of token it will get from a particular 
authorization server?

How does a server that supports multiple token types know what kind of token it 
should give to a particular client?



Now, suppose that the server decides to give back an assertion:

How does the server know whether to give a SAML assertion or a JWT assertion?  
How does the client know which it's going to get?



And now you're saying that even if everyone knows they're going to get an 
assertion token, and specifically a JWT assertion, the semantics of particular 
fields in those tokens are undefined.  If you have different meanings 
("different kinds of things", as you've said) for the Audience field, how is 
the server supposed to communicate which meaning *it* is using?  How is there 
any assurance that a client will understand it in the same way?



These are the sorts of things I'm concerned about, and this is what I mean by 
saying that it's like a Ukrainian doll: you open the oauth doll and find the 
token doll; you open the token doll and find the assertions doll; you open 
the assertions doll and find the JWT doll; you open the JWT doll and find the 
Audience field doll... at what point do the dolls end and interoperability 
begin?



I understand that you want certain things to be handled differently by 
different applications, and I'm fine with that in principle.  But the other 
part of the principle is that I have to be able to write an application that 
interoperates with yours *purely by reading the specs*.



If you do this by profiling, we need to get to a point where two things are 
true: (1) the profile chain has ended, and what's left is well defined, and (2) 
I have to be able to, *from the specs and the protocol alone*, determine what 
profile will be used.  I don't see how this protocol gives me any way to 
determine what to send or what to expect to receive.  And if an out-of-band 
understanding is required for that, that doesn't interoperate.



Now, the way we usually handle the need for "different kinds of things" is to 
have different fields for each, or to have one field tagged so that it's 
self-defining (as URIs have a scheme that says what to do with them).  If the 
Audience field might look like this in one application:

   urn:ietf:params:banana

...and like this in another

   abra:cadabra

...where the first is understood to be a URI and the second is something 
else, then please explain how you and I can each write applications to that.



For your specific questions:



> Barry, are you proposing that we require that the Audience contain a
> specific data format tailored to a particular application of
> assertions?  If so, what format are you proposing, and for which
> application of assertions?  Likewise, are you proposing that the
> Subject field contain a particular data format tailored to a particular
> application?  And also the Issuer field?



I am proposing that there must be a way for someone writing an application to 
know what to use in these fields that will work with your application (or will 
work with a server in the same way as your application does, or whatever) 
*without* having to go to the documentation of your application to figure it 
out.



That's why I say that as I see it, it's not an issue of MTI.  I'm not saying 
that I want you to require that any particular thing be implemented.  I'm 
saying that both sides need to be able to know which variations *are* 
implemented.  How else do you get interoperability?



Your TCP/ports example is a perfect one to show how this works: the assignments 
for port numbers are *exactly* to create the interoperability I'm talking about 
here.  TCP doesn't say anything about the protocol ("profile", perhaps) that's 
used over it.  But we've defined that port 110 is used for POP and 143 is used 
for IMAP and 25 is used for SMTP, and so on.  So I know that if my IMAP 
application wants to talk with your IMAP server, I can accomplish that by using 
port 143.  If you decide to run your IMAP server on port 142 instead, or if you 
run your POP server on port 143, we will not interoperate.



But all I need to know is (1) how to do IMAP, (2) how to do it over TCP, and 
(3) what port to use... and I can build an IMAP client that will work with any 
IMAP server that follows the same specs.
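
The port registry itself can be sketched as a simple lookup table (just the 
three assignments named above, not the full IANA registry):

```python
# Well-known TCP port assignments are the "tag": two parties
# interoperate because they consult the same registry, not because
# one has read the other's documentation.
WELL_KNOWN_PORTS = {"smtp": 25, "pop3": 110, "imap": 143}

def port_for(protocol):
    # Interoperability fails exactly when the two sides' lookups
    # disagree (e.g., an IMAP server listening on port 142 instead).
    return WELL_KNOWN_PORTS[protocol]
```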



Can you do that here?  Please explain how.



Barry
_______________________________________________
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth
