Re: [OAUTH-WG] Looking for a compromise on signatures and other open issues

2010-09-28 Thread John Panzer
+1.
--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer



On Mon, Sep 27, 2010 at 11:25 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:

 (Please take a break from the other threads and read this with an open
 mind. I have tried to make this both informative and balanced.)



 --- IETF Process



 For those unfamiliar with the IETF process, we operate using rough
 consensus. This means most people agree and no one strongly objects. If
 someone strongly objects, it takes a very unified group to ignore that
 person, with full documentation of why the group chose to do so. That person
 can raise the issue again during working group last call, area director
 review, and IETF last call - each has the potential to trigger another round
 of discussions with a wider audience. That person can also appeal the
 working group decision before it is approved as an RFC.



 The process is managed by the working group chairs. The chairs elect the
 editor and make consensus calls. So far this working group has had only a few
 consensus calls (breaking the 1.0a RFC into two parts and dropping these in
 favor of a unified WRAP + 1.0a draft). From my experience and understanding
 of the process, this working group does not have rough consensus on any of
 the open items to make consensus calls and close the issues. Simply
 dismissing the few objections raised will not get the document finished
 sooner; it will only add more rounds of debate now and later.



 One of the problems we have is that we work without a charter. Usually, the
 charter is the most useful tool chairs have when limiting scope and closing
 debates. For example, had we fixed the charter last year to explicitly say
 that we will publish one document with both bearer tokens and signatures,
 the chairs could have ended this argument by pointing to the charter. Since
 we have no charter, the chairs have little to offer in terms of ending these
 disagreements. We never officially agreed what we are here to solve.



 The reality of this working group is that we need to find a way to make
 everyone happy. That includes every one of those expressing strong opinions.
 Any attempt to push people around, dismiss their views, or reject reasonable
 compromises will just keep the issues open. If this small group cannot reach
 agreement, the specification will surely fall apart during working group
 last call, area director review, IETF last call, application area review,
 security area review, general area review, IANA review, and IESG review.



 It’s a long process, and at each step, anyone can raise their hand and
 object. A united working group is the most important tool to end discussions
 over objections and concerns raised at each step. It also gives people the
 confidence to implement a working group final draft before it is published
 as an RFC (because it is much less likely to change).



 --- Open Issues



 This working group has failed to reach consensus on a long list of items,
 among them the inclusion of signatures, signature format, use of HTTP
 authentication headers, restrictions on bearer tokens, support for specific
 profiles, etc. While many of these items faded away, I would not be
 surprised to see them all come back.



 The current argument over signatures ignores compromises and agreements
 reached over the past two years. This working group explicitly rejected WRAP
 as the replacement for OAuth 1.0 and the whole point of combining 1.0a with
 WRAP was the inclusion of signatures. We reached another agreement to keep
 signatures at the Anaheim meeting. The current draft is a version of WRAP
 alone.



 There are currently three separate threads going on:



 1. OAuth 1.0a style signatures vs. JSON proposals

 2. Including a signature mechanism in core

 3. Concerns about bearer tokens and HTTPS



 The first item will not be resolved because we are not going to reach
 industry consensus over a single signature algorithm (this is a general
 comment, not specific to these two proposals). The only thing we can do is
 let those who care about each solution publish their own specification and
 let the market decide.

 The second item, while it was part of the very first compromise this
 working group made (when we combined the two specifications), cannot be
 resolved because of #1. We can’t agree on which signature method to include,
 and including all is not practical. For these reasons, including a signature
 algorithm in core is not likely to happen. I have made some proposals, but
 they received instant negative feedback, which means we have no consensus.



 The third item has also been debated and blogged about for a long time and
 is not going to be resolved by consensus. Instead, we will need to find the right
 language to balance security concerns with the reality that many providers
 are going to deploy bearer tokens no matter what the IETF says. The OAuth
 1.0a RFC

Re: [OAUTH-WG] Google's view on signatures in the core OAuth2 spec

2010-09-24 Thread John Panzer
Richard,

I'm a bit confused because the made-up example you give below is,
essentially, what Magic Signatures does.  The algorithm you present is
basically the correct one IMHO.  Are you assuming that the recipient is
_also_ using the HTTP-level method and URL path for some  important security
decision?

(Note:  I'm assuming it's fine to use this unverified host/path data for
tentative routing to an intended recipient, because the worst thing a MITM
attacker can possibly do is to route it to the wrong recipient.  As long as
the recipient uses only signed information to decide whether it will
actually ACCEPT the data, it will be fine.  MITM attackers can always
mis-route even signed messages of course, given that firewalls etc. are not
aware of signatures, so I don't see this as a distinction.)

--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer



On Fri, Sep 24, 2010 at 8:26 AM, Richard L. Barnes rbar...@bbn.com wrote:

  Yes, there is certainly a risk if someone just checks the signature and
 does not verify the content of the message. This is a bad implementation
 of an authorization system, to be sure, and it's an issue that people
 need to be aware of. But simply signing metadata doesn't completely
 solve the problem, either. In both cases there can be parameters that
 are outside of the signed request that need to be checked and treated
 appropriately.


 Ah, perhaps I was unclear.  I didn't mean *signing* metadata, I meant
 *sending* metadata.  Using a completely made-up syntax:

 1. Signer computes signature sig_val over data object:
   { user_agent: Mozilla, method: GET }
 2. Signer sends { signed_fields: ['user_agent', 'method'], sig: sig_val }
 3. Recipient reconstructs data object using signed_fields
 4. Recipient verifies sig_val == sign(reconstructed_object)

 --Richard
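Richard's four steps can be made concrete with a short sketch (hypothetical Python using HMAC-SHA256 and sorted-key JSON canonicalization; the made-up syntax above specifies neither an algorithm nor a serialization, so both are assumptions here):

```python
import hashlib
import hmac
import json

def sign(key: bytes, obj: dict) -> str:
    # Canonical serialization: sorted keys, no whitespace. This is an
    # assumption; the made-up syntax above leaves canonicalization open.
    payload = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

key = b"shared-secret"

# 1. Signer computes sig_val over the data object.
data = {"user_agent": "Mozilla", "method": "GET"}
sig_val = sign(key, data)

# 2. Signer sends only the field *names* plus the signature.
message = {"signed_fields": ["user_agent", "method"], "sig": sig_val}

# 3. Recipient reconstructs the data object from its own view of the
#    HTTP request, using signed_fields to pick which values to include.
request = {"user_agent": "Mozilla", "method": "GET", "path": "/photos"}
reconstructed = {f: request[f] for f in message["signed_fields"]}

# 4. Recipient verifies sig_val against the reconstruction. The check
#    fails unless the recipient's own view matches what was signed.
assert hmac.compare_digest(sign(key, reconstructed), message["sig"])
```

The point of step 3 is that the recipient verifies against its *own* view of the request, so the signature cannot validate data the recipient did not itself observe.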

 ___
 OAuth mailing list
 OAuth@ietf.org
 https://www.ietf.org/mailman/listinfo/oauth



Re: [OAUTH-WG] Google's view on signatures in the core OAuth2 spec

2010-09-24 Thread John Panzer
On Fri, Sep 24, 2010 at 9:06 AM, Eran Hammer-Lahav e...@hueniverse.com wrote:

 Validating the signed data delivered with the request is as much magic as
 the current 1.0 approach. Go write the validation rules and see.


Done that (for the specific case of Salmon).  The only trick is to write the
code so that if the message were delivered via a note tied to a rock thrown
through your window, it would have the same security properties as the ones
delivered via HTTP.  This is very amenable to static analysis.

Is there a specific attack vector that we could discuss that you're worried
about?



 I'm getting really tired of this argument because it is not grounded in
 reality. Any attempt to compare the HTTP request to what was signed, whether
 it is sent with the request or not, is as difficult. Sending the signed data
 just makes it easier for the server to make bad assumptions and be flexible
 in an insecure way.

 EHL

 On Sep 24, 2010, at 10:51, Justin Richer jric...@mitre.org wrote:

  I think that any signature method that we end up using needs to rely
  less on magic and anecdote and more on explicit declaration.
 
  This is certainly correct ...
 
 
  I think
  that Brian Eaton's approach of sending the bare string that was
  signed,
  which was also a JSON element that could be parsed and validated,
  was an
  essential simplification.
 
  ... but this does not follow.  The signer can specify what was signed
  without sending the data...
 
  Even OpenID states which of the parameters on
  the request were signed, which makes it easier to validate.
 
  ... as in this pattern.  There are some other examples of elements of
  the signed object being conditionally included:
  1. HTTP Digest authentication [1]
  2. The IKEv2 key exchange messages [2]
 
  And I'm marginally OK with either pattern, though I think that the
  former is a much cleaner way to do it. I believe either case to be
  worlds better than the OAuth 1.0 method, which has caused countless
  problems. I actually had to help someone debug between sending that
  first email and this one, so I can attest that the problem has not gone
  away! :)
 
  As has been pointed out before, there is a security risk in sending
  the signed request data itself (as opposed to metadata that allows the
  recipient to reconstruct the data), because the recipient can choose
  not to verify the binding between the signed data and the request.
 
  Yes, there is certainly a risk if someone just checks the signature and
  does not verify the content of the message. This is a bad implementation
  of an authorization system, to be sure, and it's an issue that people
  need to be aware of. But simply signing metadata doesn't completely
  solve the problem, either. In both cases there can be parameters that
  are outside of the signed request that need to be checked and treated
  appropriately.
 
  -- Justin
 



Re: [OAUTH-WG] Google's view on signatures in the core OAuth2 spec

2010-09-24 Thread John Panzer
Ah ok, I misread your field names as field values :).

Of course recipients can always choose to ignore the result of verifying
signatures (or not verify them at all) no matter what scheme you use.

Note that Magic Signatures is unabashedly an envelope, and well written
libraries make it hard to shoot yourself in the foot by opening an invalid
envelope.

Then there's the question of checking to see if the data in the envelope is
consistent with data outside the envelope.  A good way to do this is to
ignore the data outside the envelope and only use the verified, enveloped
data.  There are other ways if you have more complicated requirements; but
you only need to do complicated things if you have those complicated
requirements.

Of course, if some other part of your system is using data outside the
envelope to make important security decisions before your signature-aware
code gets to it, you may have issues.  But I think you have issues under ALL
these proposals in that case.

--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer



On Fri, Sep 24, 2010 at 9:20 AM, Richard L. Barnes rbar...@bbn.com wrote:

 Maybe I've misunderstood the Magic Signatures proposal.  I thought that the
 MagicSig blob actually contained the data that was signed, so that step (3)
 below would be unnecessary.  (Note that the object in Step 2 has only field
 *names*, not *values*.)  Including the data is the part of that scheme that
 causes some heartburn, since the recipient can choose not to verify the
 match against the original data (the HTTP request).

 Like I said earlier, though, Magic Signatures / JSON token could probably
 still be useful, as long as the signed data is reconstructed, not provided
 in the token.  The correspondence in my example was deliberate :)

 --Richard




 On Sep 24, 2010, at 12:11 PM, John Panzer wrote:

 Richard,

 I'm a bit confused because the made-up example you give below is,
 essentially, what Magic Signatures does.  The algorithm you present is
 basically the correct one IMHO.  Are you assuming that the recipient is
 _also_ using the HTTP-level method and URL path for some  important security
 decision?

 (Note:  I'm assuming it's fine to use this unverified host/path data for
 tentative routing to an intended recipient, because the worst thing a MITM
 attacker can possibly do is to route it to the wrong recipient.  As long as
 the recipient uses only signed information to decide whether it will
 actually ACCEPT the data, it will be fine.  MITM attackers can always
 mis-route even signed messages of course, given that firewalls etc. are not
 aware of signatures, so I don't see this as a distinction.)

 --
 John Panzer / Google
 jpan...@google.com / abstractioneer.org / @jpanzer



 On Fri, Sep 24, 2010 at 8:26 AM, Richard L. Barnes rbar...@bbn.com wrote:

  Yes, there is certainly a risk if someone just checks the signature and
 does not verify the content of the message. This is a bad implementation
 of an authorization system, to be sure, and it's an issue that people
 need to be aware of. But simply signing metadata doesn't completely
 solve the problem, either. In both cases there can be parameters that
 are outside of the signed request that need to be checked and treated
 appropriately.


 Ah, perhaps I was unclear.  I didn't mean *signing* metadata, I meant
 *sending* metadata.  Using a completely made-up syntax:

 1. Signer computes signature sig_val over data object:
   { user_agent: Mozilla, method: GET }
 2. Signer sends { signed_fields: ['user_agent', 'method'], sig: sig_val }
 3. Recipient reconstructs data object using signed_fields
 4. Recipient verifies sig_val == sign(reconstructed_object)

 --Richard







Re: [OAUTH-WG] Basic signature support in the core specification

2010-09-24 Thread John Panzer
-1 on requiring it to be part of core OAuth2.  Reasoning: It won't be a MUST
or even SHOULD requirement for either client or server, so adding it later
does not affect interop.  The actual schedule to finalize the signature
mechanism should not be affected either way -- it's fine for a WG to produce
2 or more RFCs if that's the right thing to do.  (If there were consensus
today on what exactly the signing mechanism should be I'd think differently,
but I don't believe there is.)

Caveat:  If there were consensus that OAuth 2 should simply adopt the OAuth
1.0a signature mechanism today, I'd be okay with that, just because there is
some proven code out there.

This is of course a trade-off.  My bias:  I really want us to stabilize what
has been spec'd so far and move forward with that while additional work
happens.  There are already multiple mutually incompatible implementations of
OAuth2 floating around, and I'd rather resolve that quickly.
--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer



On Thu, Sep 23, 2010 at 6:43 PM, Eran Hammer-Lahav e...@hueniverse.com wrote:

 Since much of this recent debate was done off list, I'd like to ask people
 to simply express their support or objection to including a basic signature
 feature in the core spec, in line with the 1.0a signature approach.

 This is not a vote, just taking the temperature of the group.

 EHL




Re: [OAUTH-WG] Returning HTTP 200 on Error for JSONP

2010-08-17 Thread John Panzer
Well, no, jsonp is a special transport layer inside http.  Making that
fuzzy will cause (interop and security) problems.

On Tuesday, August 17, 2010, Luke Shepard lshep...@facebook.com wrote:
 From the perspective of OAuth, a JSONP endpoint is just another protected
 resource. I'd rather we not have to write an extension for every type of
 protected resource we might need to access.

 I think the wordsmithing you discussed is what Paul's proposing - just saying
 essentially "look, these are the HTTP error codes you can expect, but it's
 okay for the server sometimes to give 200 on an error response anyway." That
 would be necessary even if we wrote an extension.

 On Aug 17, 2010, at 12:43 AM, Torsten Lodderstedt wrote:

 Good point. The server will have to provide special JSONP support anyway. 
 This is the only place where the requested status code handling is needed.

 +1 for a JSONP extension spec

 This might also result in much cleaner JSONP support.

 regards,
 Torsten.

 Am 17.08.2010 um 09:28 schrieb John Panzer jpan...@google.com:

 Except you cannot guarantee that result of course (proxies, Apache
 plus Tomcat as separate processes, etc. will all produce error codes).

 Doesn't this all depend on a jsonp extension in the first place - the
 client has to request a special jsonp response by specifying the
 callback, thus making the server both use 200s where possible and also
 wrap its response data in a callback call?  That's not part of the
 spec either, why not just define both pieces of behavior in a jsonp
 extension spec?  (Assuming this can be done without violating the
 letter of the core spec, which might take some wordsmithing.)

 On Monday, August 16, 2010, Torsten Lodderstedt tors...@lodderstedt.net 
 wrote:
 Paul Tarjan schrieb:

 Yes, I'm talking about 5.2.1

 For JSONP the user's browser is the client. It will make a request by 
 executing some HTML like this:

 <script src="http://graph.facebook.com/me?access_token=...&callback=jsonp_cb"></script>
 <script>
 function jsonp_cb(response) {
   if (response.error) {
     // error out
     return;
   }
   // do cool things
 }
 </script>

 (this is done instead of an AJAX request, because of cross-domain 
 restrictions).

 As to Aaron's point, Google sends 3 parameters to the callback function, 
 which I kind of like since the user can choose to get the code or not. 
 Something like:

 jsonp_cb({
   error: "invalid_request",
   error_description: "An active access token must be used to query
     information about the current user."
 },
 400,
 'Bad Request');

 which you can grab with

 function jsonp_cb(response, code, status) {
 }

 or ignore it with

 function jsonp_cb(response) {
 }

 But all of this is outside of the spec. I just want to make sure the spec
 says that the HTTP status code can be sent as 200 if the server+client need
 it for errors.


 I think this can be achieved in two ways: (a) either the client tells the 
 server using a parameter or (b) the server always responds with status 
 code 200 in some cases. From my understanding, status code 200 is relevant 
 for requests following the rules of section 5.1.2 only. So my suggestion
 would be to go with option (b) and modify the spec to always return status
 code 200 for such requests. This keeps the spec simpler and preserves the
 behavior of requests following the rules of section 5.1.1.

 regards,
 Torsten.
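A minimal sketch of option (b), assuming a hypothetical `callback` query parameter and a made-up handler name (Python for illustration only):

```python
import json

def error_response(error, callback=None):
    # When the client requested JSONP by supplying a callback, return
    # HTTP 200 and wrap the error so the browser still executes the
    # callback; otherwise report the error via the status code.
    if callback:
        body = "%s(%s, 400, 'Bad Request');" % (callback, json.dumps(error))
        return 200, body
    return 400, json.dumps(error)

status, body = error_response({"error": "invalid_request"}, "jsonp_cb")
assert status == 200 and body.startswith("jsonp_cb({")
```

This mirrors the Google-style three-argument callback quoted above: the real status code and reason phrase ride along as extra arguments, so the client can still inspect them.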


 Paul

 On Aug 16, 2010, at 3:09 PM, Torsten Lodderstedt wrote:



 I would like to furthermore track down the relevant use cases. Assuming 
 you are referring to section 5.2.1, how does your client send the access 
 token to the resource server? I'm asking because I think error handling 
 for URI query parameters, Body parameters and Authorization headers could 
 be handled differently. For URI query parameters and Body parameters, 
 returning the error code in the payload instead of the status code would 
 be acceptable from my point of view since authentication is also pushed to 
 the application level. In contrast when using HTTP authentication, 40(x) 
 status codes together with WWW-Authenticate are a must have.

 Would such a

-- 
--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer


Re: [OAUTH-WG] Returning HTTP 200 on Error for JSONP

2010-08-17 Thread John Panzer
Is there any legit reason other than jsonp specifically?

In the wild I mean.

On Tuesday, August 17, 2010, Brian Eaton bea...@google.com wrote:
 On Tue, Aug 17, 2010 at 11:48 AM, David Recordon record...@gmail.com wrote:
 Luke's point still holds true of the core spec needing to allow a 200 status
 code on an error in this scenario. I'd also rather see this as part of the
 core spec as it reduces the number of things that implementors will need to
 read for common use cases.

 For the record, I think any implementer that is relying on protected
 resources returning special response codes for any type of OAuth
 protocol issue is probably going to get burned.  Variation in
 protected resource behavior has been a consistent problem in OAuth
 1.0, and I doubt that can change in OAuth 2.

 It's tough to get protected resource servers to be consistent; they
 frequently have good reasons (e.g. jsonp) to be inconsistent.

 Authorization servers are simpler beasts.


-- 
--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer


Re: [OAUTH-WG] Understanding the reasoning for Base64

2010-06-25 Thread John Panzer
There are two characters that differ between base64 and base64url.
 Many good libraries support both (as they're both useful, and both are in
the base64 RFC spec); the ability to eliminate a class of encoding problems
seems like a good trade-off for an additional substitution of two characters
in languages without full base64 support.
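The two-character difference is easy to check (a Python sketch; the input bytes are chosen so both differing characters appear in the output):

```python
import base64

def b64url_from_b64(s):
    # base64url (RFC 4648 section 5) differs from standard base64 only in
    # two alphabet characters: '+' becomes '-' and '/' becomes '_'.
    return s.replace("+", "-").replace("/", "_")

raw = bytes([0xFB, 0xEF, 0xFF])        # encodes to "++//" in standard base64
std = base64.b64encode(raw).decode()
assert std == "++//"
assert b64url_from_b64(std) == base64.urlsafe_b64encode(raw).decode()
```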

--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer



On Fri, Jun 25, 2010 at 12:15 PM, Naitik Shah n...@daaku.org wrote:

 On Fri, Jun 25, 2010 at 11:39 AM, Breno breno.demedei...@gmail.com wrote:

 On Fri, Jun 25, 2010 at 10:49 AM, Luke Shepard lshep...@facebook.com
 wrote:
  Brian, Dirk - just wondering if you had thoughts here?
 
  The only strong reason I can think of for base64 encoding is that it
 allows for a delimiter between the body and the signature. Is there any
 other reason?

 Without base64 encoding we have to define canonicalization procedures
 around spaces and we still have to URL encode separator characters
 such as {. There is also the risk that developers might be confused
 whether the URL encoding is to be performed before or after
 computation of the signature.  If you say that the signature is
 computed on the base64 encoded blob, there's less scope for confusion
 and interoperability issues.


 Yep, I get that the web version makes the url encoding a no-op. But I
 fear we're trading one spec (urlencoding) for another one (web base64). I'm
 imagining the sample code (that does not rely on an SDK) we'ed give out to
 developers in our docs, and the thing that stands out is the web part in
 the web_base64. It means that our sample code will look like

   str_replace("+", "_", base64(json_encode(data)))

 or for validating signatures:

   json_decode(decode64(str_replace("_", "+", data)))

 The str_replace() really stands out. From my quick read, it seemed like
 there were one or two other characters that needed to get replaced too.
 While some languages (like PHP) support arrays to specify multiple
 replacement patterns, in other languages you'll end up with a few
 str_replace calls. It would be nice if that wasn't necessary.

 I'm wondering if we can get away with urlencode(json_encode(data) + '.' +
 sig) as the value. then, instead of str_replace for getting normal base64
 logic to work, we would instead need a rsplit or something, since the dot is
 not a reserved character in the json blob. Was that approach considered?


 -Naitik





Re: [OAUTH-WG] proposal for signatures

2010-06-22 Thread John Panzer
On Tue, Jun 22, 2010 at 2:36 AM, Ben Laurie b...@google.com wrote:

 On 22 June 2010 07:03, David Recordon record...@gmail.com wrote:
  Thanks for writing this. A few questions...
 
  Do we need both `issuer` and `key_id`? Shouldn't we use `client_id`
  instead at least for OAuth?
 
  Can we write out algorithm instead of `alg`?
 
  How do you generate the body hash?
 
   Does websafe-base64-encoded mean that I can't just blindly use my
   language's built-in base64 encode function?

 No, you need the websafe alphabet, which substitutes '-' and '_' for
 '+' and '/' in the standard alphabet. Which reminds me, Dirk needs to
 specify whether padding is used.


(Padding _is_ part of the base64 specification IIRC; I think it'd be
sufficient to artfully include it in the primary example -- and have a
second example crafted so that there happens to be zero padding :) ).

You can construct base64url() from base64() plus a substitution pass ('+' to
'-' and '/' to '_'), so it doesn't seem too onerous.  (Make sure to include
one of these characters in the examples too.)
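Both points are easy to demonstrate: padding brings the output to a multiple of four characters, and inputs can be crafted to hit each padding case and to exercise the substituted characters (Python sketch):

```python
import base64

# Output length is always a multiple of 4, padded with '='.
assert base64.urlsafe_b64encode(b"a").decode() == "YQ=="    # two pad chars
assert base64.urlsafe_b64encode(b"ab").decode() == "YWI="   # one pad char
assert base64.urlsafe_b64encode(b"abc").decode() == "YWJj"  # zero padding

# An input whose encoding exercises the url-safe substitute characters:
assert base64.urlsafe_b64encode(b"\xfb\xef\xff").decode() == "--__"
```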



  Don't we still have the more fundamental question to answer about
  decoupling what's being signed from the underlying HTTP request?


Aside/my $.02: This is a key issue which Salmon+Magic Signatures evades by
essentially treating the HTTP request (the method, URL, headers, etc.) as
advisory/transport hints, to be ignored when reading the data, and making
sure the protocol works even if the data is sent via carrier pigeon; all
important information must be contained in the signed, structured body.
 This is much much harder if you have to deal with totally arbitrary kinds
of requests with arbitrary semantics.

This also means that you're effectively using HTTP as a simple transport to
move envelopes around, in much the same way you can use the ocean to
transport messages in bottles around, but a bit more efficiently.  I've
banged my head against this a bit and have not come up with a better
solution but if there is one I'd love to hear it.


 
  --David
 
 
  On Mon, Jun 21, 2010 at 12:04 AM, Dirk Balfanz balf...@google.com
 wrote:
  Hi guys,
  I think I owe the list a proposal for signatures.
  I wrote something down that liberally borrows ideas from Magic
 Signatures,
  SWT, and (even the name from) JSON Web Tokens.
  Here is a short document (called JSON Tokens) that just explains how
 to
  sign something and verify the signature:
 
 http://docs.google.com/document/pub?id=1kv6Oz_HRnWa0DaJx_SQ5Qlk_yqs_7zNAm75-FmKwNo4
 
  Here is an extension of JSON Tokens that can be used for signed OAuth
  tokens:
 
 http://docs.google.com/document/pub?id=1JUn3Twd9nXwFDgi-fTKl-unDG_ndyowTZW8OWX9HOUU
  Here is a different extension of JSON Tokens that can be used for
 2-legged
  flows. The idea is that this could be used as a drop-in replacement for
 SAML
  assertions in the OAuth2 assertion flow:
 
 http://docs.google.com/document/pub?id=1s4kjRS9P0frG0ulhgP3He01ONlxeTwkFQV_pCoOowzc
  I also have started to write some code to implement this as a
  proof-of-concept.
 
  Thoughts? Comments?
  Dirk.
 
 
 
 



Re: [OAUTH-WG] OAuth 2.0 Mobile WebApp Flow

2010-06-09 Thread John Panzer
So the thinking is that this is just a generic "include" or "one level of
indirection" feature that is orthogonal to other flows?

FWIW, I really like that notion.  It's also very easy to describe and
understand conceptually.
--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer



On Mon, Jun 7, 2010 at 7:53 PM, Nat Sakimura sakim...@gmail.com wrote:

 I fully agree on it.

 Instead of doing as a flow, defining request_url as one of the core
 variable would be better.
 The question then is, whether this community accepts the idea.


 On Mon, Jun 7, 2010 at 10:51 PM, Manger, James H 
 james.h.man...@team.telstra.com wrote:

 Nat,

  On the other hand, you are starting to think of it as a generic
 include mechanism, are you?

 Yes. That feels like the simplest mental model for this functionality, and
 the simplest way to specify it.

 --
 James Manger




 --
 Nat Sakimura (=nat)
 http://www.sakimura.org/en/
 http://twitter.com/_nat_en





Re: [OAUTH-WG] modifying the scope of an access token

2010-05-11 Thread John Panzer
(Note that in my use case, it's actually the client who wants to dispose of
its dangerous full-access token as quickly as possible, retaining only the
least-authority token it needs to continue its ongoing work.  This would be
the case even if the token granting service is handing out tokens like free
candy.)

On Mon, May 10, 2010 at 10:43 PM, David Recordon record...@gmail.com wrote:

 I'm wondering if this could be achieved by adding an optional scope
 parameter to the existing refresh request versus creating a new
 request type. Both because Dick's proposed text requires a refresh
 token and it seems like services worried about this sort of risk would
 not want to issue long lived access tokens.

 --David


 On Mon, May 10, 2010 at 10:39 PM, John Panzer jpan...@google.com wrote:
  Yes; a service that does a one time configuration step, requiring
  extensive access, followed by an ongoing lower level of access (say,
  read-only).  Lowering access means it only needs to store low-risk
  tokens in its data store, limiting exposure (and liability).
 
  On Monday, May 10, 2010, Eran Hammer-Lahav e...@hueniverse.com wrote:
  Are there actual use cases for this? Either way sounds like it belongs
 in an extension.
 
  EHL
 
  -Original Message-
  From: Marius Scurtescu [mailto:mscurte...@google.com]
  Sent: Monday, May 10, 2010 12:49 PM
  To: Eran Hammer-Lahav
  Cc: Dick Hardt; OAuth WG (oauth@ietf.org)
  Subject: Re: [OAUTH-WG] modifying the scope of an access token
 
  On Sun, May 9, 2010 at 10:17 PM, Eran Hammer-Lahav
  e...@hueniverse.com wrote:
    This would only work for the client credentials flow (because you keep
    the same authorization source). For all other flows you are breaking the
    authorization boundaries.
  
   If the requested scope is a subset of the original scope associated with
   the refresh token then it should be acceptable, right?
  
   This would allow a client to request a larger set of scopes, needed for
   all the API calls required for its function, but then get sub-scoped
   access tokens, particular to each API. This will prevent an API from
   receiving a too-powerful access token. A compromised API could use access
   tokens to place calls against other APIs, but not if it is narrowly
   scoped.
 
  Marius
 
  
    What would be useful is to allow asking for more scope. For example,
    when asking for a token (the last step of each flow), also include a
    valid token to get a new token with the combined scope (new approval
    and previous).
  
   EHL
  
   -Original Message-
   From: oauth-boun...@ietf.org [mailto:oauth-boun...@ietf.org] On
   Behalf Of Dick Hardt
   Sent: Sunday, May 09, 2010 7:19 PM
   To: OAuth WG (oauth@ietf.org)
   Subject: [OAUTH-WG] modifying the scope of an access token
  
   There has been some discussion about modifying the scope of the
   access token during a refresh. Perhaps we can add another method
 to
   what the AS MAY support that allows modifying the scope of an access
   token. Type of request is modify and the scope parameter is
   required to indicate the new scope required. Suggested copy below:
  
   type
 REQUIRED. The parameter value MUST be set to modify
  
   client_id
 REQUIRED. The client identifier as described in Section 3.4.
  
   client_secret
 REQUIRED if the client was issued a secret. The client secret.
  
   refresh_token
 REQUIRED. The refresh token associated with the access token
 to
   be refreshed.
  
   scope
 REQUIRED. The new scope of the access request expressed as a
   list of space-delimited strings. The value of the scope parameter is
   defined by the authorization server. If the value contains multiple
   space-delimited strings, their order does not matter, and each
 string
   adds additional access range to the requested scope.
  
   secret_type
 OPTIONAL. The access token secret type as described by Section
 8.3.
   If omitted, the authorization server will issue a bearer token (an
   access token without a matching secret) as described by Section 8.2.
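 
    Assembled as a form-encoded token-endpoint request body, the proposal
    above would look roughly like this (a sketch only; the client
    credentials, refresh token, and scope values are placeholders):

```python
from urllib.parse import urlencode

# Parameter names follow Dick's proposal above; all values are placeholders.
params = {
    "type": "modify",
    "client_id": "s6BhdRkqt3",
    "client_secret": "gX1fBat3bV",
    "refresh_token": "n4E9O119d",
    "scope": "calendar.read contacts.read",  # the new scope being requested
}
body = urlencode(params)
# e.g. type=modify&client_id=...&scope=calendar.read+contacts.read
print(body)
```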
  
   ___
   OAuth mailing list
   OAuth@ietf.org
   https://www.ietf.org/mailman/listinfo/oauth
  
 
  --
  --
  John Panzer / Google
  jpan...@google.com / abstractioneer.org / @jpanzer
  ___
  OAuth mailing list
  OAuth@ietf.org
  https://www.ietf.org/mailman/listinfo/oauth
 

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Comments on Web Callback Client Flow

2010-04-08 Thread John Panzer
On Wed, Apr 7, 2010 at 10:46 PM, Evan Gilbert uid...@google.com wrote:




 On Wed, Apr 7, 2010 at 9:29 AM, John Panzer jpan...@google.com wrote:

 I'm assuming that tokens need not be bound to a specific user (typically
 they are, but imagine a secretary granting an app access to his boss's
 calendar and then leaving the company and therefore being removed from the
 calendar ACL, but the app still keeping access by virtue of the capability
 granted by the token).  In this case, the proposed wording seems kind of
 problematic for a MUST.


 Note that the resource owner in this case is the secretary, not his boss.
 Resource owner is An entity capable of granting access to a protected
 resource.

 Since access tokens are bound to the resource owner (A unique identifier
 used by the client to make authenticated requests on behalf of the
 resource owner.), I think in this case the system would have a clear way to
 revoke access to the document.


Sorry, I was unclear.  In the scenario I'm imagining, the boss has delegated
calendar access to his secretary, who uses this delegated access to hook up
an app to the calendar.  The boss is very happy with this situation because
now his Zombieville reminders show up on his calendar.  If the token is tied
to the secretary, and the secretary's account is shut down when the
secretary leaves, then the boss no longer sees the Zombieville reminders pop
up.  He then complains that this sort of thing never happened with
password-based access!

I can imagine either scenario upon termination of the secretary -- you may
want to revoke all the tokens, some of the tokens, or none of the tokens.



 Updating the wording on the proposal slightly to clarify (also changing
 format to new parameter formatting)

 Before:
 username
   The resource owner's username. The authorization server MUST only send
 back refresh tokens or access tokens for the user identified by username

 Current:
 username
   OPTIONAL. The resource owner's username. The authorization server MUST
 only send back refresh tokens or access tokens *capable of making requests
 on behalf of* the user identified by username
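 
 The MUST in the updated wording amounts to a check at token issuance time
 against the set of principals the token can act for, which is what makes
 the delegation scenario work (a hypothetical sketch; the function name and
 addresses are illustrative):

```python
def may_issue(token_acts_for, requested_username):
    """The AS MUST only return tokens capable of making requests on
    behalf of the user named by the optional `username` parameter;
    when the parameter is omitted, no restriction applies."""
    return requested_username is None or requested_username in token_acts_for

# A token obtained by the secretary, to whom the boss delegated calendar
# access, can act on behalf of either principal:
acts_for = {"secretary@example.com", "boss@example.com"}
print(may_issue(acts_for, "boss@example.com"))      # True
print(may_issue(acts_for, "stranger@example.com"))  # False
```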



 On Wed, Apr 7, 2010 at 8:22 AM, Evan Gilbert uid...@google.com wrote:



 On Wed, Apr 7, 2010 at 12:08 AM, Eran Hammer-Lahav 
  e...@hueniverse.com wrote:

  What about an attacker changing the username similar to the way a
 callback can be changed?


  I don't think there is a danger here.

  We still use all of the safeguards in place from the rest of the flow -
  adding this parameter will never log you in when omitting the parameter
  would not have. It will just create more error responses.



 EHL



 On 4/6/10 11:14 PM, Evan Gilbert uid...@google.com wrote:



 On Tue, Apr 6, 2010 at 11:07 PM, Eran Hammer-Lahav e...@hueniverse.com
 wrote:




 On 4/6/10 5:24 PM, Evan Gilbert uid...@google.com wrote:

  Proposal:
  In 2.4.1 & 2.4.2, add the following OPTIONAL parameter
  username
The resource owner's username. The authorization server MUST only
 send back
  refresh tokens or access tokens for the user identified by username.

 What are the security implications? How can the client know that the
 token
 it got is really for that user?


 Think the client has to trust the auth server, in the same way as with
 the username + password profile. The auth server can always send back a
 scope for a different user.

 Worst case is that there is an identity mismatch between client and the
 identity implicit in the authorization token. This mismatch is already
 possible, and I don't think the username parameter makes the problem worse.


 EHL





 ___
 OAuth mailing list
 OAuth@ietf.org
 https://www.ietf.org/mailman/listinfo/oauth




___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Signatures, Why?

2010-03-16 Thread John Panzer
I'm confused by one pro for signatures:

Protect integrity of whole request - authorization data and payload when
communicating over unsecure channel

I do not believe there is an existing concrete proposal that will protect
the whole request, unless you add additional restrictions on the request
types -- e.g., only HTTP GET, or POST with form-encoded data variables.

If the assertion is that signatures will actually provide integrity for
arbitrary HTTP request bodies as well as the URL, authority, and HTTP
method:   I would like to see at least one concrete proposal that will
accomplish this.   IIRC there's only one that I think is possibly
implementable in an interoperable way, and it supports only JSON payloads.
 In other words, anyone using body signing would need to wrap their data in
JSON to do it.  (This is not necessarily the worst thing in the world, of
course, but it is something to be taken into account when listing pros and
cons.)

On Mon, Mar 15, 2010 at 3:50 PM, Torsten Lodderstedt 
tors...@lodderstedt.net wrote:

  Hi all,

 I composed a detailed summary at
 http://trac.tools.ietf.org/wg/oauth/trac/wiki/SignaturesWhy. Please review
 it.

 @Zachary: I also added some of your recent notes.

 regards,
 Torsten.

  I volunteer to write it up.

 <hat type='chair'/>

 On 3/4/10 1:00 PM, Blaine Cook wrote:


   One of the things that's been a primary focus of both today's WG call
  and last week's call is: what are the specific use cases for
  signatures?

 - Why are signatures needed?
 - What do signatures need to protect?

 Let's try to outline the use cases! Please reply here, so that we have
 a good idea of what they are as we move towards the Anaheim WG.


  This was a valuable thread. Perhaps someone could write up a summary of
 the points raised, either on the list or at the wiki?

 Peter







 ___
 OAuth mailing list
 OAuth@ietf.org
 https://www.ietf.org/mailman/listinfo/oauth


___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Signatures, Why?

2010-03-08 Thread John Panzer
On Mon, Mar 8, 2010 at 5:38 AM, Torsten Lodderstedt tors...@lodderstedt.net
 wrote:

 ...
 1. Connection latency to bootstrap the connection (from the
 asymmetric/public-key encryption operations)


  Bootstrapping an SSL session is expensive, but every session can be
  reused for multiple HTTPS connections. Thus an application can establish
  the first HTTPS connection in the background, before any user interaction
  takes place, and reuse the session for further communication.


I think this point is worth calling out (and doing a bit of prototyping on)
-- if the use case is a latency-sensitive client app that wishes to avoid
cold-start HTTP(s) connections, then a warmup connect() or just an
idempotent GET while the app is starting up / coming to the foreground could
be a very good idea.  Good even without SSL, due to DNS overhead, and even
more useful with SSL.  This could allow many apps to hide the latency hit
from the user almost completely.

If this is true, then it may mean that the SSL overhead would be a problem
in far fewer cases than it might appear at first glance.
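The warm-up idea can be sketched with a background thread that performs the
connection setup while the app finishes starting up (a sketch only: the host
name and the simulated latency are placeholders, and a real client would use
e.g. http.client.HTTPSConnection and keep the resulting connection, or its
TLS session, for the first API call):

```python
import threading
import time

def open_connection(host):
    """Stand-in for DNS + TCP + TLS setup; this is where the cold-start
    latency that John describes would actually be absorbed."""
    time.sleep(0.05)  # simulate network round trips
    return f"connection to {host}"

holder = {}
warmer = threading.Thread(
    target=lambda: holder.update(conn=open_connection("api.example.com")))
warmer.start()        # kick off the handshake at app startup...
time.sleep(0.05)      # ...while the UI finishes loading in parallel
warmer.join()
print(holder["conn"])  # the first API call finds a ready connection
```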
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Signatures, Why?

2010-03-08 Thread John Panzer
Though not of course the body of a POST, unless it is form-encoded
data, which merely pushes the problem of canonicalization (and thus
interoperability) outside the spec.

On Monday, March 8, 2010, Dick Hardt dick.ha...@gmail.com wrote:

 On 2010-03-08, at 6:39 PM, Ethan Jewett wrote:

  Request hijacking: I actually significantly understated the protection
  against request hijacking that the HMAC-SHA1 method of OAuth 1.0a
 provides. In the worst case, a MITM can hijack a request but cannot
 change the request method, URL, query parameters, nonce, or timestamp.
 In the best case (a single-part form-encoded request body or a request
 consisting only of query parameters), the MITM cannot modify the
 request at all because it is fully signed. It is not true, as Dick
 contends, that a MITM who has captured a signed OAuth 1.0a request can
 use a signed access token as if it were a bearer token. It is far more
 limited in the worst case, and useless in the best case.

  After reviewing section 3.4 of draft-hammer-oauth I see that the query
  string is part of the string being signed, minimizing the attack surface.
  Thanks for pointing out my misunderstanding.
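 
 The property Ethan describes -- the signature covering the method, URL, and
 query parameters -- can be seen in a simplified HMAC-SHA1 sketch (this
 deliberately abbreviates the base-string normalization rules of
 draft-hammer-oauth section 3.4; the URL, parameters, and key are
 placeholders):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign(method, url, params, key):
    """Simplified OAuth 1.0a-style signature: percent-encode the method,
    URL, and query-parameter string, join with '&', then HMAC-SHA1."""
    base = "&".join(quote(p, safe="") for p in (method.upper(), url, params))
    digest = hmac.new(key, base.encode("ascii"), hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

key = b"client-secret&token-secret"
sig = sign("GET", "https://api.example.com/cal", "oauth_nonce=abc&user=42", key)
# A MITM who alters any covered component invalidates the signature:
tampered = sign("GET", "https://api.example.com/cal", "oauth_nonce=abc&user=99", key)
print(sig != tampered)  # True
```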

 -- Dick


 ___
 OAuth mailing list
 OAuth@ietf.org
 https://www.ietf.org/mailman/listinfo/oauth


-- 
--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] OAuth XRD?

2010-02-24 Thread John Panzer
Www-Authenticate plus Link headers seem like they would involve
minimal wheel re-invention.
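
The combination John suggests can be sketched from the client's side: a 401
challenge names the scheme, and a Link header points at the endpoint, so no
new discovery format is needed. (A hypothetical sketch -- the realm, URL,
and link relation name are all illustrative, not from any OAuth draft.)

```python
import re

# Hypothetical 401 response headers combining the two mechanisms:
headers = {
    "WWW-Authenticate": 'OAuth realm="http://sp.example.com/"',
    "Link": '<https://sp.example.com/oauth/token>; rel="oauth-token-endpoint"',
}

# The challenge says authentication is required; the Link header says
# where to obtain a token.
m = re.match(r'<([^>]+)>;\s*rel="([^"]+)"', headers["Link"])
token_endpoint, rel = m.group(1), m.group(2)
print(token_endpoint)  # https://sp.example.com/oauth/token
```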

On Wednesday, February 24, 2010, Paul C. Bryan em...@pbryan.net wrote:
 I've been mostly following the pattern in your XRD-Based OAuth Discovery
 Sneak Peek, where a client can discover how to interact with an OAuth
 token service in reaction to the OAuth challenge. What got me
 second-guessing this was a passing reference to OAuth in the example in
 XRD version 1.0.

 I'm asking now because in the process of simplifying the UMA spec, I
 need to decide on some of the OAuth discovery mechanisms it will use.

 Paul

 On Wed, 2010-02-24 at 19:40 -0700, Eran Hammer-Lahav wrote:
 No idea. I am not sure yet what the discovery requirements are. For
 example, are clients expected to be familiar with each of the
 token-obtaining profiles and just need a single URI for each supported
 mechanism? XRD is useful for describing a bunch of links for such
 endpoints, but it might just be that the discovery information can be
 provided directly in the header.

 Until we know what OAuth 2.0 looks like, we can't really discuss
 discovery much.

 Is there a reason why you are asking now?

 EHL

  -Original Message-
  From: oauth-boun...@ietf.org [mailto:oauth-boun...@ietf.org] On
 Behalf
  Of Paul C. Bryan
  Sent: Wednesday, February 24, 2010 9:30 AM
  To: oauth@ietf.org
  Subject: [OAUTH-WG] OAuth XRD?
 
  This is a message directed mostly at Eran:
 
  Is the OAuth 2.0 discovery mechanism still expected to be via a
 provider
  attribute in the WWW-Authenticate header, or is host-meta expected
 to
  take over?
 
  Paul
 
  ___
  OAuth mailing list
  OAuth@ietf.org
  https://www.ietf.org/mailman/listinfo/oauth


 ___
 OAuth mailing list
 OAuth@ietf.org
 https://www.ietf.org/mailman/listinfo/oauth


-- 
--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] FW: Salmon signatures proposal - base64url

2010-02-11 Thread John Panzer
James,

Thanks for the feedback.

+salmon-proto...@googlegroups.com
bcc: oauth@ietf.org

--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer



On Wed, Feb 10, 2010 at 10:59 PM, Manger, James H 
james.h.man...@team.telstra.com wrote:

 John,

 I like your choice of base64url as a way to armour binary data and avoid
 escaping issues.

 It might be nicer to sign the bytes that get armoured, instead of the ASCII
 output of the armouring.
 I don't think this compromises the robustness aim of Magic Signatures.
 It would mean you sign the binary message, then use base64url armouring to
 ensure the exact bytes signed make it unaltered to the other end. The
 armouring applies equally to the data and signature, but isn't actually
 involved in the crypto.

 An extra little advantage is that the code to remove the armour [eg byte[]
 dearmour(String)] can also skip whitespace. With the current arrangement, a
 separate step to remove the whitespace is required as you need the base64url
 encoding without whitespace to verify the signature.

  The example used throughout the spec is wrong. It looks like two "-"
  characters have been accidentally dropped (which is not a good advertisement
  for the robustness that base64url offers!)
  4th line:  change bWUPHVy to bWU-PHVy (decodes to "me><ur")
  10th line: change bGUU2Fs to bGU-U2Fs (decodes to "le>Sal")

 Curiously, there are no _ characters in the examples (data or sig).
 Changing the title to end with a ?, instead of a !, would introduce
 one. Base64url is uncommon so examples using its differences from normal
 base64 might help catch a few implementation bugs.
 [actually I just noticed a _ in the example modulus, but one in the
 example data would be even better]

  The signature is wrong, but not random so it is misleading.
  Decrypting the example signature with the example key produces a 20-byte
  value -- which happens to be the SHA-1 hash of the empty string (I don't
  recognize many hash values on sight, but this is one of them!).
  It should be the hash of the (armoured) data.
  It should be a SHA-256 hash, not SHA-1, as per <me:alg>RSA-SHA256</me:alg>.
  It should be wrapped in a DER-encoded DigestInfo structure (basically
  includes an id for SHA-256).
  It should have the PKCS#1 v1.5 block type 1 prefix (01 FF FF… 00
  DigestInfo), making the value a similar size to the modulus.
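 
  James's two observations -- that base64url differs from standard base64
  exactly in the "-" and "_" characters, and that verification needs the
  armour with whitespace stripped -- can be illustrated with a generic
  sketch (not the Magic Signatures wire format; the payload bytes are
  chosen only to exercise the base64url alphabet):

```python
import base64

def armour(data: bytes) -> str:
    """base64url without padding, as Magic Signatures uses."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def dearmour(text: str) -> bytes:
    """Strip whitespace, restore padding, decode -- one step, as James
    suggests the dearmouring code can do."""
    compact = "".join(text.split())
    return base64.urlsafe_b64decode(compact + "=" * (-len(compact) % 4))

payload = b"Hello, Salmon?\xfb\xff"  # the trailing bytes force '_' into the armour
armed = armour(payload)
print("-" in armed or "_" in armed)                # True: base64url-specific chars
print(dearmour("  " + armed + "\n") == payload)    # True: whitespace-tolerant round trip
```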

 Sorry for being picky ;)

 --
 James Manger

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] FW: Salmon signatures proposal - base64url

2010-02-10 Thread John Panzer
Thanks!

On Wednesday, February 10, 2010, Manger, James H
james.h.man...@team.telstra.com wrote:
 John,

 I like your choice of base64url as a way to armour binary data and avoid 
 escaping issues.

 It might be nicer to sign the bytes that get armoured, instead of the ASCII 
 output of the armouring.
 I don't think this compromises the robustness aim of Magic Signatures.
 It would mean you sign the binary message, then use base64url armouring to 
 ensure the exact bytes signed make it unaltered to the other end. The 
 armouring applies equally to the data and signature, but isn't actually 
 involved in the crypto.

 An extra little advantage is that the code to remove the armour [eg byte[] 
 dearmour(String)] can also skip whitespace. With the current arrangement, a 
 separate step to remove the whitespace is required as you need the base64url 
 encoding without whitespace to verify the signature.

  The example used throughout the spec is wrong. It looks like two "-"
  characters have been accidentally dropped (which is not a good advertisement
  for the robustness that base64url offers!)
  4th line:  change bWUPHVy to bWU-PHVy (decodes to "me><ur")
  10th line: change bGUU2Fs to bGU-U2Fs (decodes to "le>Sal")

 Curiously, there are no _ characters in the examples (data or sig). 
 Changing the title to end with a ?, instead of a !, would introduce 
 one. Base64url is uncommon so examples using its differences from normal 
 base64 might help catch a few implementation bugs.
 [actually I just noticed a _ in the example modulus, but one in the example 
 data would be even better]

  The signature is wrong, but not random so it is misleading.
  Decrypting the example signature with the example key produces a 20-byte
  value -- which happens to be the SHA-1 hash of the empty string (I don't
  recognize many hash values on sight, but this is one of them!).
  It should be the hash of the (armoured) data.
  It should be a SHA-256 hash, not SHA-1, as per <me:alg>RSA-SHA256</me:alg>.
  It should be wrapped in a DER-encoded DigestInfo structure (basically
  includes an id for SHA-256).
  It should have the PKCS#1 v1.5 block type 1 prefix (01 FF FF… 00
  DigestInfo), making the value a similar size to the modulus.

 Sorry for being picky ;)

 --
 James Manger


-- 
--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Allowing Secrets in the Clear Over Insecure Channels

2010-01-15 Thread John Panzer
I think the question at hand is:  If a server says it wants to do bearer
tokens and no TLS, is a client obligated to interop in order to claim spec
compliance?

--
John Panzer / Google
jpan...@google.com / abstractioneer.org / @jpanzer



On Fri, Jan 15, 2010 at 10:01 AM, Hurliman, John john.hurli...@intel.com wrote:

 +1 to this as well. Any implementation is free to deviate from the spec, at
 the risk of breaking interoperability.

 John

 -Original Message-
 From: oauth-boun...@ietf.org [mailto:oauth-boun...@ietf.org] On Behalf Of
 John Panzer
 Sent: Friday, January 15, 2010 8:43 AM
 To: Eve Maler
 Cc: OAuth WG
 Subject: Re: [OAUTH-WG] Allowing Secrets in the Clear Over Insecure
 Channels

 +1 to MUST implement TLS on both sides.

 I thought we were only discussing whether the server could decide to
 skip TLS for a particular use case.  No?

 On Friday, January 15, 2010, Eve Maler e...@xmlgrrl.com wrote:
  The points about matching security to use case are excellent.  This is
 why I think we're maybe misinterpreting Eran's argument for MUST.  It's not
 an argument from security alone (we must always have highest security all
 the time); it's an argument from interoperability of security features at
 Internet scale (in the general/at-scale case, we should not accept deployed
 instances that do not support this important security feature).
 
  On this basis, it's reasonable to argue for MUST for implementing TLS
 (with no weasel words about or equivalent, since this isn't a testable
 protocol clause), for the broad ecosystem benefits.
 
  Eve
 
  On 15 Jan 2010, at 8:06 AM, John Kemp wrote:
 
  On Jan 14, 2010, at 7:39 PM, Richard L. Barnes wrote:
 
  As such, I can't see how *not* requiring SSL for unsigned requests
  could pass muster at an IETF security review.
 
  Speaking as someone who does IETF security reviews ...  :)
 
  If I were reviewing a document that defines an optional insecure mode
 of operation (in this case, operating without TLS), I would be looking for
 basically two things: (1) A discussion of the risks if the insecure mode is
 used, and (2) a motivation for why these risks might be acceptable in
 certain cases.  This is in the spirit of MUST=SHOULD+exception -- if it's
 a SHOULD, you need to explain the exception.  In this case, the risks (1)
 are pretty obvious: A passive observer can steal your password and use it to
 authenticate as you.  The motivations (2) are what this thread is about.
 
   I would also observe that "MUST use TLS or equivalent" is actually the
  same as "SHOULD use TLS", since the "or equivalent" isn't really specified.
 
  Right. Which is why I'm currently OK with Eran's text either way as I
 think it allows bearer tokens with *any* other security protections, unless
 we specify exactly which security properties should be provided by the
 channel, vs. via other mechanisms.
 
  SAML's confirmation method is sort of the equivalent idea to what
 we've been talking about here (where the method could be holder-of-key+a
 signature algorithm, bearer or something else) but we also haven't
 separated out security properties such as integrity or confidentiality for
 the purposes of this discussion.
 
  (This is really obvious when you think about it from the perspective of
 an implementor: If you're going to cover the or equivalent cases, then you
  have to be able to operate in non-TLS mode.)  The or equivalent cases
 are the ones where not using TLS might be acceptable, i.e., the ones that
 should be cited as motivations (2) for allowing the non-secure mode.
 
  Taking the SECDIR hat off, it seems to me that there are some
 motivations appearing in this thread:
  -- Appropriate key management (frequent key refresh)
  -- Private trusted networks
  -- John's observation that URL+token == private URL
  So ISTM that SHOULD use TLS could be motivated here.
 
  (Now, all that said, it probably wouldn't hurt to have TLS as a MUST
 implement so that it's there if people want to use it.)
 
  +1
 
  - johnk
 
 
  Eve Maler
  e...@xmlgrrl.com
  http://www.xmlgrrl.com/blog
 
  ___
  OAuth mailing list
  OAuth@ietf.org
  https://www.ietf.org/mailman/listinfo/oauth
 

 --
 --
 John Panzer / Google
 jpan...@google.com / abstractioneer.org / @jpanzer
 ___
 OAuth mailing list
 OAuth@ietf.org
 https://www.ietf.org/mailman/listinfo/oauth

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Allowing Secrets in the Clear Over Insecure Channels

2010-01-15 Thread John Panzer
On Fri, Jan 15, 2010 at 10:41 AM, Richard L. Barnes rbar...@bbn.com wrote:

 I would think not; it's a matter of the client's policy what signatures it
 will issue, and the server's policy which it will accept.


That doesn't answer the interop question, though, which is the interesting
one from the standpoint of the spec.  If there is no requirement for a
common minimal-subset then there is no guarantee of interop between a random
client and a random server that both speak the same data protocol.  Most use
cases of OAuth have a custom client for every data API so this isn't an
issue, but that's not necessarily the case, especially as new protocols
start to depend on OAuth for their security needs.



 And in any case, OAuth doesn't have any in-band security negotiation, so
 there's no way for the server to ask for a given set of features (e.g.,
 bearer tokens and no TLS) within the protocol.


See however https://groups.google.com/group/oauth-key-discovery for a
much-needed extension that adds key discovery and rotation.  Even without
that, the question is just punted to out-of-band documentation:  Is a
server allowed to support only bearer tokens and no TLS for some use case?



--Richard




 On Jan 15, 2010, at 1:29 PM, John Panzer wrote:

 I think the question at hand is:  If a server says it wants to do bearer
 tokens and no TLS, is a client obligated to interop in order to claim spec
 compliance?

 --
 John Panzer / Google
 jpan...@google.com / abstractioneer.org / @jpanzer



 On Fri, Jan 15, 2010 at 10:01 AM, Hurliman, John 
  john.hurli...@intel.com wrote:

 +1 to this as well. Any implementation is free to deviate from the spec,
 at the risk of breaking interoperability.

 John

 -Original Message-
 From: oauth-boun...@ietf.org [mailto:oauth-boun...@ietf.org] On Behalf Of
 John Panzer
 Sent: Friday, January 15, 2010 8:43 AM
 To: Eve Maler
 Cc: OAuth WG
 Subject: Re: [OAUTH-WG] Allowing Secrets in the Clear Over Insecure
 Channels

 +1 to MUST implement TLS on both sides.

 I thought we were only discussing whether the server could decide to
 skip TLS for a particular use case.  No?

 On Friday, January 15, 2010, Eve Maler e...@xmlgrrl.com wrote:
  The points about matching security to use case are excellent.  This is
 why I think we're maybe misinterpreting Eran's argument for MUST.  It's not
 an argument from security alone (we must always have highest security all
 the time); it's an argument from interoperability of security features at
 Internet scale (in the general/at-scale case, we should not accept deployed
 instances that do not support this important security feature).
 
  On this basis, it's reasonable to argue for MUST for implementing TLS
 (with no weasel words about or equivalent, since this isn't a testable
 protocol clause), for the broad ecosystem benefits.
 
  Eve
 
  On 15 Jan 2010, at 8:06 AM, John Kemp wrote:
 
  On Jan 14, 2010, at 7:39 PM, Richard L. Barnes wrote:
 
  As such, I can't see how *not* requiring SSL for unsigned requests
  could pass muster at an IETF security review.
 
  Speaking as someone who does IETF security reviews ...  :)
 
  If I were reviewing a document that defines an optional insecure mode
 of operation (in this case, operating without TLS), I would be looking for
 basically two things: (1) A discussion of the risks if the insecure mode is
 used, and (2) a motivation for why these risks might be acceptable in
 certain cases.  This is in the spirit of MUST=SHOULD+exception -- if it's
 a SHOULD, you need to explain the exception.  In this case, the risks (1)
 are pretty obvious: A passive observer can steal your password and use it to
 authenticate as you.  The motivations (2) are what this thread is about.
 
   I would also observe that "MUST use TLS or equivalent" is actually the
  same as "SHOULD use TLS", since the "or equivalent" isn't really specified.
 
  Right. Which is why I'm currently OK with Eran's text either way as I
 think it allows bearer tokens with *any* other security protections, unless
 we specify exactly which security properties should be provided by the
 channel, vs. via other mechanisms.
 
  SAML's confirmation method is sort of the equivalent idea to what
 we've been talking about here (where the method could be holder-of-key+a
 signature algorithm, bearer or something else) but we also haven't
 separated out security properties such as integrity or confidentiality for
 the purposes of this discussion.
 
  (This is really obvious when you think about it from the perspective
 of an implementor: If you're going to cover the or equivalent cases, then
  you have to be able to operate in non-TLS mode.)  The or equivalent
 cases are the ones where not using TLS might be acceptable, i.e., the ones
 that should be cited as motivations (2) for allowing the non-secure mode.
 
  Taking the SECDIR hat off, it seems to me that there are some
 motivations appearing in this thread:
  -- Appropriate key management