Barry,

I will point out that in RFC 6749 the client is specifically not required to 
understand the access token; it is required to understand the token_type in the 
response from the authorization server in order to know how to present the 
token to the RS.

Perhaps you were speaking metaphorically though.

It seems that the scope of your criticism has more to do with RFC 6749 and RFC 
6750 overall than with the assertions drafts themselves.

In OpenID Connect we implemented a discovery and registration layer for clients 
to discover what the authorization server supports.

Things like:
token_endpoint_auth_methods_supported  OPTIONAL.  JSON array
      containing a list of authentication types supported by this Token
      Endpoint.  The options are "client_secret_post",
      "client_secret_basic", "client_secret_jwt", and "private_key_jwt",
      as described in Section 2.2.1 of OpenID Connect Messages 1.0
      [OpenID.Messages].  Other authentication types may be defined by
      extension.  If unspecified or omitted, the default is
      "client_secret_basic" -- the HTTP Basic Authentication Scheme as
      specified in Section 2.3.1 of OAuth 2.0 [RFC6749].

token_endpoint_auth_signing_alg_values_supported  OPTIONAL.  JSON
      array containing a list of the JWS signing algorithms ("alg"
      values) supported by the Token Endpoint for the "private_key_jwt"
      and "client_secret_jwt" methods to encode the JWT [JWT].  Servers
      SHOULD support "RS256".
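To make this concrete, here is a minimal sketch of how a client might consume that discovery metadata to pick a mutually supported token endpoint auth method. The metadata dict is a hand-written example, and the client preference order is an assumption for illustration, not anything the specs mandate.

```python
# Sketch: choosing a token endpoint auth method from discovery metadata.
# The server_metadata dict below is a made-up example response, not
# fetched from a real authorization server.
server_metadata = {
    "token_endpoint": "https://as.example.com/token",
    "token_endpoint_auth_methods_supported": [
        "client_secret_basic",
        "client_secret_post",
        "private_key_jwt",
    ],
    "token_endpoint_auth_signing_alg_values_supported": ["RS256"],
}

# Client preference, strongest first (an assumption for this sketch).
CLIENT_PREFERENCE = ["private_key_jwt", "client_secret_jwt",
                     "client_secret_basic", "client_secret_post"]

def pick_auth_method(metadata):
    # Per the metadata definition above, the default when the field is
    # unspecified or omitted is "client_secret_basic".
    supported = metadata.get("token_endpoint_auth_methods_supported",
                             ["client_secret_basic"])
    for method in CLIENT_PREFERENCE:
        if method in supported:
            return method
    raise ValueError("no mutually supported auth method")

print(pick_auth_method(server_metadata))  # private_key_jwt
```

The point is that with discovery metadata the choice becomes a deterministic function of published capabilities rather than out-of-band agreement.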


The client can then declare at registration which auth method it intends to 
use, preventing downgrade attacks.
token_endpoint_auth_method  OPTIONAL.  Requested authentication
      method for the Token Endpoint.  The options are
      "client_secret_post", "client_secret_basic", "client_secret_jwt",
      and "private_key_jwt", as described in Section 2.2.1 of OpenID
      Connect Messages 1.0 [OpenID.Messages].  Other Authentication
      methods may be defined by extension.  If unspecified or omitted,
      the default is "client_secret_basic" HTTP Basic Authentication
      Scheme as specified in Section 2.3.1 of OAuth 2.0 [RFC6749].
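A registration request pinning that choice might look like the following sketch. The field name follows the registration metadata quoted above; the URLs, client name, and client_id handling are made-up placeholders, and a real registration exchange would of course go over HTTPS to the AS's registration endpoint.

```python
import json

# Sketch of a dynamic client registration request body that pins the
# token endpoint auth method at registration time.  Once registered,
# the AS can reject any later attempt by (or on behalf of) this client
# to authenticate with a weaker method -- the downgrade-prevention
# property described above.  All values here are illustrative.
registration_request = {
    "redirect_uris": ["https://client.example.org/cb"],
    "client_name": "Example Client",
    "token_endpoint_auth_method": "private_key_jwt",
    "jwks_uri": "https://client.example.org/jwks.json",
}

body = json.dumps(registration_request)
print(body)
```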


To achieve real interoperability, these things need to be profiled, or you need 
a negotiation mechanism.

Are you saying that Assertions needs a low-level mechanism to negotiate 
capabilities?
Many will argue that specs at a higher level, like Dynamic Client 
Registration https://datatracker.ietf.org/doc/draft-ietf-oauth-dyn-reg/, are 
better places to address these capability negotiations because, as you point 
out, there is more to be negotiated than just assertions.

In your message you ask how the server knows what format of assertion the 
client needs; I think you meant to ask how a client will know what format of 
assertion a token endpoint can accept.  It is the client that creates the 
assertion in the spec under discussion.  Signed SAML or JWS assertions from 
the AS to the RS are a whole separate issue.
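To make concrete what "the client creates the assertion" means here, the sketch below builds a minimal client-authentication JWT in the style of client_secret_jwt: an HMAC-SHA-256 JWS whose iss/sub are the client_id and whose aud is the token endpoint. The client_id, endpoint, and secret are placeholders, and a real client would use a vetted JOSE library rather than hand-rolling this.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # Base64url without padding, as JWS requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_client_assertion(client_id: str, token_endpoint: str,
                          secret: bytes) -> str:
    # Header and claims for a client authentication assertion:
    # iss and sub are the client_id, aud is the token endpoint URL.
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {
        "iss": client_id,
        "sub": client_id,
        "aud": token_endpoint,
        "exp": now + 300,
        "iat": now,
    }
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(claims).encode()))
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

assertion = make_client_assertion("s6BhdRkqt3",
                                  "https://as.example.com/token",
                                  b"shared-client-secret")
print(assertion.count("."))  # 2: header.claims.signature
```

The client then POSTs this to the token endpoint as the client_assertion parameter; nothing in the flow requires the client to understand assertions coming back the other way.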

It has been stated by one of the WG chairs that some of us are 
anti-interoperability, so some of us may be a touch sensitive around this, as 
that is not the case.

I am in favour of interoperability, but not interoperability theatre.

If there are specific improvements we can make to assertions, then let's 
discuss them, without offloading all of OAuth's perceived interoperability 
issues onto them.

I am also happy to engage in the broader discussion of how OAuth can use things 
like Discovery and dynamic registration to address some of the wider interop 
issues. 

I am happy to make time for that in Orlando if that works for people.

Regards
John B.

On 2013-02-18, at 2:38 AM, Barry Leiba <barryle...@computer.org> wrote:

> OK, I have some time to respond to this on a real computer.
> 
> Let's look at the general mechanism that oauth provides, using one use case:
> A client asks an authorization server for authorization to do something.
> The authorization server responds with an authorization token, which
> the client is required to understand.
> 
> We have talked about three kinds of tokens: bearer tokens, MAC tokens,
> and assertion tokens.
> How does a client know what kind of token it will get from a
> particular authorization server?
> How does a server that supports multiple token types know what kind of
> token it should give to a particular client?
> 
> Now, suppose that the server decides to give back an assertion:
> How does the server know whether to give a SAML assertion or a JWT
> assertion?  How does the client know which it's going to get?
> 
> And now you're saying that even if everyone knows they're going to get
> an assertion token, and specifically a JWT assertion, the semantics of
> particular fields in those tokens are undefined.  If you have
> different meanings ("different kinds of things", as you've said) for
> the Audience field, how is the server supposed to communicate which
> meaning *it* is using?  How is there any assurance that a client will
> understand it in the same way?
> 
> These are the sorts of things I'm concerned about, and this is what I
> mean by saying that it's like a Ukrainian doll: you open the oauth
> doll and find the token doll; you open the token doll and find the
> assertions doll; you open the assertions doll and find the JWT doll;
> you open the JWT doll and find the Audience field doll... at what
> point do the dolls end and interoperability begin?
> 
> I understand that you want certain things to be handled differently by
> different applications, and I'm fine with that in principle.  But the
> other part of the principle is that I have to be able to write an
> application that interoperates with yours *purely by reading the
> specs*.
> 
> If you do this by profiling, we need to get to a point where two
> things are true: (1) the profile chain has ended, and what's left is
> well defined, and (2) I have to be able to, *from the specs and the
> protocol alone*, determine what profile will be used.  I don't see how
> this protocol gives me any way to determine what to send or what to
> expect to receive.  And if an out-of-band understanding is required
> for that, that doesn't interoperate.
> 
> Now, the way we usually handle the need for "different kinds of
> things" is to have different fields for each, or to have one field
> tagged so that it's self-defining (as URIs have a scheme that says
> what to do with them).  If the Audience field might look like this in
> one application:
>   urn:ietf:params:banana
> ...and like this in another
>   abra:cadabra
> ...where the first is understood to be a URI and the second is
> something else, then please explain how you and I can each write
> applications to that?
> 
> For your specific questions:
> 
>> Barry, are you proposing that we require that the Audience contain a 
>> specific data
>> format tailored to a particular application of assertions?  If so, what 
>> format are you
>> proposing, and for which application of assertions?  Likewise, are you 
>> proposing
>> that the Subject field contain a particular data format tailored to a 
>> particular
>> application?  And also the Issuer field?
> 
> I am proposing that there must be a way for someone writing an
> application to know what to use in these fields that will work with
> your application (or will work with a server in the same way as your
> application does, or whatever) *without* having to go to the
> documentation of your application to figure it out.
> 
> That's why I say that as I see it, it's not an issue of MTI.  I'm not
> saying that I want you to require that any particular thing be
> implemented.  I'm saying that both sides need to be able to know which
> variations *are* implemented.  How else do you get interoperability?
> 
> Your TCP/ports example is a perfect one to show how this works: the
> assignments for port numbers are *exactly* to create the
> interoperability I'm talking about here.  TCP doesn't say anything
> about the protocol ("profile", perhaps) that's used over it.  But
> we've defined that port 110 is used for POP and 143 is used for IMAP
> and 25 is used for SMTP, and so on.  So I know that if my IMAP
> application wants to talk with your IMAP server, I can accomplish that
> by using port 143.  If you decide to run your IMAP server on port 142
> instead, or if you run your POP server on port 143, we will not
> interoperate.
> 
> But all I need to know is (1) how to do IMAP, (2) how to do it over
> TCP, and (3) what port to use... and I can build an IMAP client that
> will work with any IMAP server that follows the same specs.
> 
> Can you do that here?  Please explain how.
> 
> Barry
> _______________________________________________
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth

