Re: [OAUTH-WG] redirect_uri matching algorithm

2015-05-20 Thread David Waite

> On May 16, 2015, at 1:43 AM, Patrick Gansterer  wrote:
> 
> "OAuth 2.0 Dynamic Client Registration Protocol” [1] is nearly finished and 
> provides the possibility to register additional “Client Metadata”.
> 
> OAuth 2.0 does not define any matching algorithm for the redirect_uris. The 
> latest information on that topic I could find is [1], which is 5 years old. 
> Is there any more recent discussion about it?
> 
> I’d suggest to add an OPTIONAL “redirect_uris_matching_method” client 
> metadata. Possible valid values could be:
> * “exact”: The “redirect_uri" provided in a redirect-based flow must match 
> exactly one of the provided strings in the “redirect_uris” array.
> * “prefix”: The "redirect_uri" must begin with one of the “redirect_uris”. 
> (e.g. "http://example.com/path/subpath” would be valid with 
> [“http://example.com/path/“, “http://example.com/otherpath/”])
> * “regex”: The provided “redirect_uris” are treated as regular 
> expressions, which the “redirect_uri” will be matched against. (e.g. 
> “http://subdomain.example.com/path5/“ would be valid with 
> [“^http:\\/\\/[a-z]+\\.example\\.com\\/path\\d+\\/“])
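The three proposed matching modes can be sketched as follows. This is illustrative only, not code from the thread; the function name and signature are hypothetical:

```python
import re

def redirect_uri_allowed(uri, registered, method="exact"):
    """Check a presented redirect_uri against registered values using one of
    the three proposed matching modes (illustrative, not normative)."""
    if method == "exact":
        # The URI must equal one registered string byte-for-byte.
        return uri in registered
    if method == "prefix":
        # The URI must begin with one of the registered strings.
        return any(uri.startswith(r) for r in registered)
    if method == "regex":
        # Registered values are treated as regular expressions.
        return any(re.search(p, uri) for p in registered)
    raise ValueError("unknown matching method: %s" % method)

# The examples from the proposal above:
assert redirect_uri_allowed(
    "http://example.com/path/subpath",
    ["http://example.com/path/", "http://example.com/otherpath/"],
    method="prefix",
)
assert redirect_uri_allowed(
    "http://subdomain.example.com/path5/",
    [r"^http://[a-z]+\.example\.com/path\d+/"],
    method="regex",
)
```

Note how permissive the prefix and regex modes are compared to exact matching; this breadth is exactly what the security objections below are about.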

I don’t know if this is appropriate. For example, if a server is unwilling to 
support arbitrary regex matching, how would a client which required it be 
able to register dynamically? Or conversely: if a client did not require regex 
matching, why would it request this from a server?

If a client requests regex or prefix, it was built to rely on these to work. If 
some set of servers choose not to support regex or prefix for scope or security 
reasons, this hurts interoperability from the perspective of dynamic 
registration. And we already have a workaround: make your client rely 
on the state parameter instead.

A client doing code or implicit should specify exact return URLs in its 
registration, and if it needs to send the user someplace else after 
authentication, that destination should be represented by its state param.
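Carrying the post-login destination in state (rather than varying the redirect_uri) can be done by packing it into an integrity-protected blob. A minimal sketch, assuming an HMAC key held by the client; all names here are hypothetical:

```python
import base64
import hashlib
import hmac
import json
import secrets

STATE_KEY = secrets.token_bytes(32)  # client-side secret (illustrative storage)

def make_state(return_to):
    """Pack a post-login destination plus a CSRF nonce into the state value."""
    payload = json.dumps({"rt": return_to, "n": secrets.token_hex(8)}).encode()
    tag = hmac.new(STATE_KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + tag).decode()

def read_state(state):
    """Verify integrity and recover the destination after the redirect back."""
    raw = base64.urlsafe_b64decode(state.encode())
    payload, tag = raw[:-32], raw[-32:]
    expected = hmac.new(STATE_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("state has been tampered with")
    return json.loads(payload)["rt"]

s = make_state("/app/settings")
assert read_state(s) == "/app/settings"
```

With this approach the registered redirect_uri stays an exact match; only the state payload varies per request.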

> If not defined the server can choose any supported method, so we do not break 
> existing implementations. On the other hand it allows a client to make sure 
> that a server supports a specific matching algorithm required by the client. 
> At the moment a client has no possibility to know how a server handles the 
> redirect_uris.

Clients should be more than reasonably safe in assuming exact matching 
works. If the server won’t support exact matching on the redirect_uris supplied, 
it should fail registration.

-DW
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Second OAuth 2.0 Mix-Up Mitigation Draft

2016-01-21 Thread David Waite
Question: 

I understand how “iss" helps mitigate this attack (client knows response was 
from the appropriate issuer and not an attack where the request was answered by 
another issuer). 

However, how does passing “state” on the authorization_code grant token request 
help once you have the above in place? Is this against some alternate flow of 
this attack I don’t see, or is it meant to mitigate some entirely separate 
attack?

If one is attempting to work statelessly (e.g. your “state” parameter is actual 
state and not just a randomly generated value), a client would have always 
needed some way to differentiate which issuer the authorization_code grant 
token request would be sent to.

However, if an AS was treating “code” as a token (for instance, encoding: 
client, user, consent time and approved scopes), the AS now has to include the 
client’s state as well. This would effectively double (likely more with 
encoding) the state sent in the authorization response back to the client 
redirect URL, adding more pressure against maximum URL sizes.

-DW

> On Jan 20, 2016, at 11:28 PM, Mike Jones  wrote:
> 
> John Bradley and I collaborated to create the second OAuth 2.0 Mix-Up 
> Mitigation draft.  Changes were:
> ·   Simplified by no longer specifying the signed JWT method for 
> returning the mitigation information.
> ·   Simplified by no longer depending upon publication of a discovery 
> metadata document.
> ·   Added the “state” token request parameter.
> ·   Added examples.
> ·   Added John Bradley as an editor.
>  
> The specification is available at:
> ·   http://tools.ietf.org/html/draft-jones-oauth-mix-up-mitigation-01 
> 
>  
> An HTML-formatted version is also available at:
> ·   http://self-issued.info/docs/draft-jones-oauth-mix-up-mitigation-01.html 
> 
>  
>   -- Mike
>  
> P.S.  This note was also posted at http://self-issued.info/?p=1526 and as 
> @selfissued.
>  


Re: [OAUTH-WG] Second OAuth 2.0 Mix-Up Mitigation Draft

2016-01-21 Thread David Waite
> On Jan 21, 2016, at 2:50 PM, John Bradley  wrote:
> 
> In that case you probably would put a hash of the state in the code to manage 
> size.  The alg would be up to the AS, as long as it used the same hash both 
> places it would work.
Yes, true. 
> 
> Sending the state to the token endpoint is like having nonce and c_hash in 
> the id_token, it binds the issued code to the browser instance.
I think I understand what you are saying. Someone won’t be able to frankenstein 
up a state and a token from two different responses from an AS, and have a 
client successfully fetch an access token based on the amalgamation.
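One way to realize this binding, assuming a self-encoded (stateless) code as discussed above, is for the AS to embed a hash of the client's state in the code and check the state presented at the token endpoint against it. A sketch with hypothetical names:

```python
import base64
import hashlib
import hmac
import json
import secrets

AS_KEY = secrets.token_bytes(32)  # AS integrity key (illustrative)

def issue_code(client_id, user, state):
    """Self-encoded authorization code binding in a hash of the state."""
    body = json.dumps({
        "client": client_id,
        "user": user,
        "s_hash": hashlib.sha256(state.encode()).hexdigest(),
    }).encode()
    sig = hmac.new(AS_KEY, body, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(body + sig).decode()

def redeem_code(code, presented_state):
    """Token endpoint: verify the code and that state matches what was bound."""
    raw = base64.urlsafe_b64decode(code.encode())
    body, sig = raw[:-32], raw[-32:]
    if not hmac.compare_digest(sig, hmac.new(AS_KEY, body, hashlib.sha256).digest()):
        raise ValueError("code integrity check failed")
    claims = json.loads(body)
    # Reject a code pasted together with state from a different response.
    if claims["s_hash"] != hashlib.sha256(presented_state.encode()).hexdigest():
        raise ValueError("state does not match the one bound at authorization")
    return claims

code = issue_code("client-1", "alice", state="abc123")
assert redeem_code(code, "abc123")["user"] == "alice"
```

Hashing the state rather than embedding it whole keeps the code (and hence the redirect URL) a fixed size, per John's point about managing size.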
 
> This protects against codes that leak via redirect uri pattern matching 
> failures etc.  It prevents an attacker from being able to replay a code from 
> a different browser.
Yes, if a party intercepts the redirect_url, or the AS fails to enforce one-time 
use (which even for a compliant implementation could just mean the attacker was 
faster than the state propagated within the AS).

Makes sense. Thanks John.

-DW

> If the client implements the other mitigations on the authorization endpoint, 
> then it wouldn't be leaking the code via the token endpoint. 
> 
> The two mitigations are for different attacks, however some of the attacks 
> combined both vulnerabilities.
> 
> Sending the iss and client_id is enough to stop the confused client attacks, 
> but sending state on its own would not have stopped all of them.
> 
> We discussed having them in separate drafts, and may still do that.   However 
> for discussion having them in one document is I think better in the short run.
> 
> John B.
> 
>> On Jan 21, 2016, at 4:48 PM, David Waite <da...@alkaline-solutions.com> wrote:
>> 
>> Question: 
>> 
>> I understand how “iss" helps mitigate this attack (client knows response was 
>> from the appropriate issuer and not an attack where the request was answered 
>> by another issuer). 
>> 
>> However, how does passing “state” on the authorization_code grant token 
>> request help once you have the above in place? Is this against some 
>> alternate flow of this attack I don’t see, or is it meant to mitigate some 
>> entirely separate attack?
>> 
>> If one is attempting to work statelessly (e.g. your “state” parameter is 
>> actual state and not just a randomly generated value), a client would have 
>> always needed some way to differentiate which issuer the authorization_code 
>> grant token request would be sent to.
>> 
>> However, if an AS was treating “code” as a token (for instance, encoding: 
>> client, user, consent time and approved scopes), the AS now has to include 
>> the client’s state as well. This would effectively double (likely more with 
>> encoding) the state sent in the authorization response back to the client 
>> redirect URL, adding more pressure against maximum URL sizes.
>> 
>> -DW


Re: [OAUTH-WG] Second OAuth 2.0 Mix-Up Mitigation Draft

2016-01-22 Thread David Waite
It’s pronounced FronkenSTEEN-ian.

-DW

> On Jan 22, 2016, at 10:02 AM, George Fletcher  wrote:
> 
> "Frankensteinian Amalgamation" -- David Waite
> 
> I like it! :)
> 
> On 1/22/16 8:11 AM, William Denniss wrote:
>> +1 ;)
>> On Fri, Jan 22, 2016 at 8:45 PM John Bradley <ve7...@ve7jtb.com> wrote:
>> Perhaps Frankenstein response is a better name than cut and paste attack.
>> 
>> John B.  
>> 
>> On Jan 22, 2016 1:22 AM, "David Waite" > <mailto:da...@alkaline-solutions.com>> wrote:
>>> [earlier quoted messages trimmed]
> 
> -- 
> Chief Architect
> Identity Services Engineering    Work: george.fletc...@teamaol.com
> AOL Inc.                         AIM: gffletch
> Mobile: +1-703-462-3494          Twitter: http://twitter.com/gffletch
> Office: +1-703-265-2544          Photos: http://georgefletcher.photography



Re: [OAUTH-WG] WGLC for "OAuth 2.0 Security Best Current Practice"

2019-11-08 Thread David Waite
Hello Daniel!

> 1. The client makes an ajax HEAD request to the OAuth authorization
> endpoint, which will silently create the authorization grant (this was
> the security exploit that was patched).

> Anyway, I'm trying to find guidance on transparent redirects for
> authorization code grants. There's a whole host of both security and
> application logic issues that could come up from such behavior, so I'd
> like to ask for clarification in best practices.

OAuth does not provide a way to recover from an expired access token barring a 
refresh token, which also can be invalidated. In particular, the only front 
channel ‘continuation’ parameter I know of is ‘id_token_hint’ in OIDC.

There are deployments today (admittedly mostly using implicit flow) which do 
not have refresh tokens. A mandate that you SHOULD ask for re-consent would be 
a recommendation that they have to interrupt the user periodically to continue 
access - which would just create another security vs usability decision.

Per your point above, the actual security issue was that GitHub 1) had the 
authorization endpoint serve double-duty and 2) treated HEAD requests as a 
quasi GET/POST to create a grant in their database to the client without user 
confirmation. The solution for this is not to ask the user to re-confirm on 
every request.

That said, it does make sense for some deployments to periodically invalidate a 
refresh token, even for the purpose of bringing the user back to re-consent 
permissions (aka self-audit). An application could theoretically distinguish 
between tokens granted by the protected user (which may need to be invalidated 
to drive the user back to re-consent) and operationally granted tokens (which 
are assumed to be actively managed and not tied to any user account).

-DW

> On Nov 8, 2019, at 5:49 AM, Daniel Roesler 
>  wrote:
> 
> Howdy,
> 
> In the "3.1 Protecting Redirect-Based Flows" > "3.1.1. Authorization
> Code Grant" section, is there guidance on when it is appropriate (if
> ever) to automatically generate a new authorization code and redirect
> back to the client?
> 
> A recent exploit[1] on Github's OAuth implementation was practical
> because if you make an authorization request and the resource owner is
> already authenticated and the scope is already authorized, Github will
> silently generate a new authorization code and redirect the user back
> to the redirect_uri without asking them to click "Authorize" again.
> 
> How the exploit worked:
> 
> 
> 2. However, since the ajax response was blocked via CORS, the client
> couldn't receive the authorization code in the response parameters.
> 
> 3. So, the client then redirected the user to Github's authorization
> endpoint with the same authorization code request (only this time as a
> real GET redirect).
> 
> 4. Github instantly redirected the user back to the client's
> redirect_uri with a new authorization code and without asking for any
> user interaction.
> 
> It seems strange to me that OAuth should allow for transparent
> authorization code redirects without resource owner confirmation. This
> situation only comes up when something weird is happening, such as
> when a client loses their valid access|refresh_token, but isn't that
> all the more reason to clarify that you should always ask for resource
> owner confirmation of the scope, even in scenarios where you are just
> re-authorizing the same scope as before?
> 
> Had Github asked for confirmation on step 4 above, the practicality of
> the HEAD exploit would have been reduced because the user would have
> been presented with an unexpected Allow/Deny Github OAuth dialogue,
> possibly alerting them to the fact that something strange was going
> on.
> 
> 
> [1]: https://blog.teddykatz.com/2019/11/05/github-oauth-bypass.html
> 
> Daniel Roesler
> Co-founder & CTO, UtilityAPI
> dan...@utilityapi.com
> 
> 
> 
> On Wed, Nov 6, 2019 at 2:27 AM Hannes Tschofenig
>  wrote:
>> 
>> Hi all,
>> 
>> this is a working group last call for "OAuth 2.0 Security Best Current 
>> Practice".
>> 
>> Here is the document:
>> https://tools.ietf.org/html/draft-ietf-oauth-security-topics-13
>> 
>> Please send you comments to the OAuth mailing list by Nov. 27, 2019.
>> (We use a three week WGLC because of the IETF meeting.)
>> 
>> Ciao
>> Hannes & Rifaat
>> 
>> IMPORTANT NOTICE: The contents of this email and any attachments are 
>> confidential and may also be privileged. If you are not the intended 
>> recipient, please notify the sender immediately and do not disclose the 
>> contents to any other person, use it for any purpose, or store or copy the 
>> information in any medium. Thank you.
>> 

Re: [OAUTH-WG] WGLC for "OAuth 2.0 Security Best Current Practice"

2019-11-09 Thread David Waite
On Nov 9, 2019, at 1:08 PM, Torsten Lodderstedt  wrote:
> But what does “same client” mean? Is it the client_id? Sounds reasonable for 
> a web app, but would also mean instances of the same native app residing on 
> different devices could share the consent. That’s great from a convenience 
> perspective but the AS has to really make sure it’s the same user on the 
> other device using the 3rd party app and it’s the same app again otherwise an 
> attacker could easily abuse the grant.
> 
> This in turn would call for client authentication, which in this case (same 
> client_id shared among instances) means the OAuth dialog must happen in a 
> backend otherwise the secret could be obtained by an attacker from the 
> installed app.



Very true, and this is one reason why in Native Apps and Web Apps deployments 
(which use public clients) the AS may use the ability to redirect to a unique 
redirect URI as a form of lightweight identifier/ownership proof. That means for 
redirect URI forms where unique ownership cannot be proven (custom schemes on 
most platforms, localhost redirects), the AS should assume other software can 
imitate the given client to the user; as a result, allowed authorizations might 
be reduced and SSO/consent managed more carefully.

> I hear “dynamic client registration” to give every instance client_id and 
> secret? Well, looks like an alternative, but one needs to establish the 
> relationship to the legal entity in a secure manner, otherwise sharing the 
> consent is dangerous. Software statements or registration access tokens are 
> the means at hand but both are shared secrets one would need to deploy with 
> all app instances ... not advisable at all.

As DCR gives you the means to uniquely identify a client, I would expect each 
instance to be represented as a separate client. That brings you up from 
‘consent on every token request by every client instance’ to ‘consent once for 
each client instance’. You could still use something like unique redirect URI 
or platform attestation to make it 'consent once per client’.

DCR does give you another handle for tracking a client instance across 
authorization requests, which can otherwise be difficult in the scenario of 
public clients relying on a browser user agent. A native app may be able to 
persist that identity better than, say, what you get via browser storage.

> Let’s talk about “same scope”: equality can be defined as byte level string 
> equality or by interpreting the scope. The first approach will cause another 
> user consent dialog if the order of the scope values change (or just a space 
> is added). The latter approach is highly implementation specific since left 
> undefined in RFC 6749.
> 
> In the case of Open Banking and similar scenarios, this scope will be fine 
> grained, dynamic or even transactional meaning storing a consent and issuing 
> another code in subsequent authorization transactions might be possible but 
> scope value specific.
> 
> Constraints regarding the duration of a consent can easily be enforced by the 
> AS. It just won’t issue further codes or access tokens (in case of refresh 
> token grant) if the consent needs to be refreshed.
> 
> bottom line: to define when an AS can issue an authorization code without 
> asking for user consent again is easy. Implementing a policy that is secure 
> and convenient is not.
> 
> We are working on more sophisticated ways to represent and compare scopes 
> (https://tools.ietf.org/html/draft-lodderstedt-oauth-rar-03 
> ). The client 
> identification problem will most likely stay.

One of the benefits of the OAuth abstraction is that it puts the authorization 
business logic (including things like presentation) squarely in the hands of 
the AS. OAuth gives the flexibility for the AS to implement that logic 
appropriately, as well as to evolve that logic without impacting the protocol 
contract with the clients/protected resources.

That there even is consent is itself part of the business requirements on an 
organization, so it would be difficult to give formal recommendations outside 
of business needs (e.g. regulatory compliance). Beyond that, recommendations 
would be non-prescriptive considerations for AS implementors, such as “limit 
the scopes available to a client based on client need, client audit, business 
relationship, and regulatory restrictions/requirements”.

There are also cases where general user interaction (e.g. non-transparent 
SSO/authorization) is technically desirable, such as fortifying the AS from 
being classified as a tracker in Safari ITP or ensuring Universal Links will 
work on iOS.  If you are requiring the user to click to continue as a technical 
solution, you might use that as an opportunity for information or informed 
consent as well.

-DW

Re: [OAUTH-WG] WGLC for "OAuth 2.0 Security Best Current Practice"

2019-11-10 Thread David Waite
On Nov 10, 2019, at 2:02 PM, Lee McGovern  wrote:
> 
> 
> 3.1 - "Clients MUST memorize which authorization server they sent an 
> authorization request to" - is memorize the best synonym here, perhaps store 
> or retain is more aligned with computational language?

Store, retain, persist are all common.

> 
> 3.1.2 How does the draft 
> https://tools.ietf.org/html/draft-parecki-oauth-browser-based-apps-02 align 
> with this guidance and will a future BCP update include a direct reference to 
> the final published version of this spec?

The dependency will be the other way - Browser-Based Apps will inform AS and RS 
implementors/operators what they need to do to allow javascript clients, and 
browser clients will have guidance toward meeting the Security BCP, where 
possible. Other drafts like DPoP exist to try to reduce the delta between the 
security BCP and what is feasible to deploy in browsers today.

> 3.5, 3.6 Since there is a reference to the MTLS draft could there also be 
> some guidance on the usage of token exchange best practise and also for the 
> contents of the access token to be aligned 
> https://tools.ietf.org/html/draft-ietf-oauth-access-token-jwt-02
> 

-DW



Re: [OAUTH-WG] New Version Notification for draft-fett-oauth-dpop-03.txt

2019-11-16 Thread David Waite
On Nov 15, 2019, at 8:32 AM, Paul Querna  wrote:
> Supporting `HS256` or similar signing of the proof would be one way to
> reduce the CPU usage concerns.

There are a number of other potential asymmetrically signed messages, such as 
the access token. Is the assumption that these are also symmetrically 
protected, or that the cost here is amortized by caching?

If you are changing either your access tokens or dPoP proofs to use symmetric 
keys, you want to limit the number of parties who know that secret to the 
client, AS, and a single resource server. You’ll be audience-scoping either 
way, so it may make sense to use a symmetric algorithm for both. It starts to 
look like Kerberos in HTTP and JSON when you squint.
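Audience-scoping with symmetric keys is commonly done by deriving a distinct per-resource-server key from a master secret, so compromise of one RS never exposes another's key. A minimal labeled-KDF sketch; the label and function names are illustrative, not from any specification:

```python
import hashlib
import hmac
import secrets

MASTER = secrets.token_bytes(32)  # AS master secret (illustrative)

def audience_key(audience):
    """Derive a distinct symmetric key per resource server (simple labeled KDF)."""
    return hmac.new(MASTER, b"rs-key:" + audience.encode(), hashlib.sha256).digest()

k_api = audience_key("https://api.example.com")
k_img = audience_key("https://images.example.com")
assert k_api != k_img                                     # keys are per-audience
assert audience_key("https://api.example.com") == k_api   # and deterministic
```

The AS can then recompute any RS's key on demand, so only the derivation secret needs protecting; each RS still only ever learns its own key.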

> 
> The challenge seems to be getting the symmetric key to the RS in a
> distributed manner.

Yes, you need the same infrastructure for HMAC and AEAD in this case.

> 
> This use case could be scoped as a separate specification if that
> makes the most sense, building upon DPoP.
> 
> Throwing out a potential scheme here:
> 
> - **5.  Token Request (Binding Tokens to a Public Key)**: The request
> from the client is unchanged. If the AS decides this access token
> should use a symmetric key it:
> 1) Returns the `token_type` as `DPoP+symmetric`
> 2) Adds a new field to the token response: `token_key`.  This should
> be a symmetric key in JWK format, encrypted to the client's DPoP-bound
> asymmetric key using JWE.  This means the client still must be able to
> decrypt this JWE before proceeding using its private key.

If you encrypt the key to the resource, then there is a risk that the key is 
retained while unprotected in memory. ECDH may be better here, although then we 
are making assumptions on the types of keys being used.

> - **6.  Resource Access (Proof of Possession for Access Tokens)**: The
> DPoP Proof from the client would use the `token_key` issued by the AS.
> 
> - **7.  Public Key Confirmation**: Instead of the `jkt` claim, add a
> new `cnf` claim type: JSON Encrypted Key or  `jek`.  The `jek` claim
> would be an JWE encrypted value, containing the symmetric key used for
> signing the `DPoP` proof header in the RS request.   The JWE
> relationship between the AS and RS would be outside the scope of the
> specification -- many AS's have registries of RS and their
> capabilities, and might agree upon a symmetric key distribution system
> ahead of time, in order to decrypt the `jek` confirmation.

If you are negotiating a symmetric key with the RS for access tokens (again, 
why not at this point, just call it a JOSE Service Ticket) you can just use 
AEAD and not bother with wrapping/encrypting the client-negotiated key within 
the access token.

> I think this scheme would change RS validation of an DPoP-bound proof
> from one asymmetric key verify, into two symmetric key operations: one
> signature verify on the DPoP token, and potentially one symmetric
> decrypt on the `jek` claim.

-DW



Re: [OAUTH-WG] DPoP symmetric key idea

2019-11-21 Thread David Waite
There seems two prevailing approaches:

1. The AS generates a symmetric key and encrypts it to a specific audience as 
part of/associated with the access token (KDC-type model).
2. The client attempts asymmetric use, and the resource server negotiates a 
symmetric key specific to it.

The first model has advantages in terms of potentially eliminating all 
API-level asymmetric crypto and of simplifying/optimizing the first 
client/resource interaction. 

The second model has an advantage of being an extension of the asymmetric 
model, leaving the AS out of a non-authorization requirement of the resource 
server, and amortizes the cost of the crypto over the lifetime of the 
authorization (since the negotiated key can be reused with the next access 
token). The audience target for the token is no longer restricted by the shared 
secret, since you can negotiate a separate symmetric key per resource server. 
The first access of a resource does however have an extra round-trip.

The first model has a single shared secret between the AS and RS, which would 
need to be somehow negotiated.  The private key in the second model can create 
issues where a resource server is actually a distributed system like a CDN - 
draft-ietf-tls-subcerts is an effort to try to make that more robust in the TLS 
space. The second model's protocol may wind up using a ’service ticket’ style 
sharing of a symmetric key so that each RS node does not have to do their own 
challenge and key derivation on first communication, and to lighten the need 
for caching.

Both systems wind up adding complexity around key rotation. The first model can 
report an issue with key rotation by using a 401 to trigger a refresh of the 
access token - the AS would know in this case the RS has a new symmetric key 
and take that into account with the new access token. The second model would 
trigger a renegotiation on the RS itself.

Finally, it is worth considering that some secure elements (such as on iOS 
devices) do not expose support for symmetric keys, and SubtleCrypto in browsers 
will likely require any symmetric key to be imported such that the key itself 
exists in the Javascript sandbox unencrypted, at least for some period of time. 
Use of symmetric keys thus increases the risk of exfiltration, so the time 
between refreshes (or the access token lifetime in environments without refresh 
tokens) may be reduced in consideration. Under this reduced lifetime, 
amortization of asymmetric crypto may have less of an effect.

-DW

> On Nov 21, 2019, at 3:07 AM, Dick Hardt  wrote:
> 
> One take away I had from the meeting today, and form the mail list, is the 
> concern of doing asymmetric crypto on API calls. How about if we use the 
> Client's public key to encrypt a symmetric key and pass that back to the 
> Client in the token request response?
> 
> EG: 
> 
> In response to the token request, the AS additionally returns a derived 
> symmetric key (SK) encrypted in a JWE using the Client's public key from the 
> DPoP Proof. 
> 
> The SK = hash( salt, R )
> 
> R and the salt version V are included in the access token
> 
> The AS and the RS share salts with versions.
> 
> The Client decrypts the JWE and now has a symmetric key to sign a Symmetric 
> DPoP Proof.
> 
> The RS take R and V to calculate SK, and verify the signature of the 
> Symmetric DPoP
> 
> Here is an updated flow:
> 
> ++  +---+
> ||--(A)-- Token Request --->|   |
> | Client |(DPoP Proof)  | Authorization |
> ||  | Server|
> ||<-(B)-- DPoP-bound Access Token --|   |
> ||(token_type=DPoP) +---+
> ||PoP Refresh Token for public clients
> ||Symmetric Key JWE
> 
> Client decrypts DPoP Symmetric Key
> 
> ||
> ||  +---+
> ||--(C)-- DPoP-bound Access Token ->|   |
> ||(Symmetric DPoP Proof) |Resource   |
> ||  | Server|
> ||<-(D)-- Protected Resource ---|   |
> ||  +---+
> ++
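The SK derivation in the proposed flow above can be sketched end-to-end. This assumes the access token is representable as a simple claims object and that the symmetric DPoP proof is an HMAC-SHA256 over the request; both are simplifications of the proposal, not a definitive design:

```python
import hashlib
import hmac
import secrets

# Versioned salts shared out-of-band between AS and RS, per the proposal.
SALTS = {1: secrets.token_bytes(32)}

def derive_sk(version, r):
    """SK = hash(salt, R), as in the proposed flow."""
    return hashlib.sha256(SALTS[version] + r).digest()

# AS side: pick R, place (R, salt version) in the access token,
# and return SK to the client (encrypted in a JWE in the real proposal).
r = secrets.token_bytes(32)
access_token = {"r": r.hex(), "v": 1}
sk_client = derive_sk(1, r)

# Client side: sign a symmetric DPoP proof over the request with SK.
proof = hmac.new(sk_client, b"POST /resource", hashlib.sha256).hexdigest()

# RS side: recompute SK from the token contents and verify the proof.
sk_rs = derive_sk(access_token["v"], bytes.fromhex(access_token["r"]))
expected = hmac.new(sk_rs, b"POST /resource", hashlib.sha256).hexdigest()
assert hmac.compare_digest(proof, expected)
```

Note the RS never needs the client's asymmetric key for per-request verification; it only needs the versioned salt, which is where the key-rotation complexity discussed above concentrates.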


Re: [OAUTH-WG] Meeting Minutes

2019-12-17 Thread David Waite
+1 to adopting PAR.

For RAR I have a number of questions myself with the approach and with some of 
the ramifications. I’m most concerned with the coupling of business-specific 
presentation, process validation and workflow within the AS, but also with the 
mixing of single transactional approval into access tokens, which are normally 
meant for longer-lived, coarser client authorizations.

To stick with the primary payment example - there are payment cases which model 
well for single resource authorization, such as a PayPal-style transaction 
where the client is also the recipient of funds. For other types of 
transactions, I would worry this may become primarily an AS-executed action 
rather than a client authorization.

Before the device flow and before CIBA, I’d probably try to make a case for 
not adopting it. The decoupling of the client from any user-agent that could 
ask for user authorization outside of OAuth has made an increase in scope (of 
scopes) a higher need - the current communication pipe between the client and 
user-agent is only defined in the scope of the actual OAuth grant processes.

-DW


> On Dec 16, 2019, at 9:26 AM, Brian Campbell wrote:
> 
> With respect to the Pushed Authorization Requests (PAR) draft the minutes do 
> capture an individual comment that it's a "no brainer to adopt this work" but 
> as I recall there was also a hum to gauge the room's interest in adoption, 
> which was largely in favor of such. Of course, one hum in Singapore isn't the 
> final word but, following from that, I was hoping/expecting to see a call for 
> adoption go out to the mailing list? 
> 
> On Tue, Dec 3, 2019 at 1:26 AM Hannes Tschofenig wrote:
> Here are the meeting minutes from the Singapore IETF meeting:
> 
> https://datatracker.ietf.org/meeting/106/materials/minutes-106-oauth-03 
> 
>  
> 
> Tony was our scribe. Thanks!
> 
>  
> 
>  
> 



Re: [OAUTH-WG] Call for Adoption: OAuth 2.0 Pushed Authorization Requests

2019-12-17 Thread David Waite
I support the adoption of PAR

> On Dec 17, 2019, at 5:59 AM, Rifaat Shekh-Yusef  wrote:
> 
> All,
> 
> This is a call for adoption of for the OAuth 2.0 Pushed Authorization 
> Requests document.
> https://datatracker.ietf.org/doc/draft-lodderstedt-oauth-par/ 
>  
> 
> There was good support for this document during the Singapore meeting, and 
> on the mailing list in the Meeting Minutes thread.
> 
> Please, let us know if you support or object to adopting this document as a 
> working group document by Dec 27th.
> 
> If you have already indicated your support on the Meeting Minutes thread, you 
> do not need to do it again on this thread.
> 
> Regards,
>  Rifaat & Hannes
> 
> 
>  



Re: [OAUTH-WG] PKCE and refresh tokens

2020-02-28 Thread David Waite


> On Feb 28, 2020, at 8:46 AM, Albin Nilsson  wrote:
> 
> Hello,
> 
> I'm having some trouble with oauth and the Authorization Code flow and PKCE. 
> How can I get a refresh token? The refresh token flow requires a 
> client_secret, but PKCE prohibits client_secret. Is refresh token a no go?

PKCE provides XSRF protection and proof that the two parts of the code flow 
came from the same client instance. It does not forbid the use of client 
secrets, and the security BCP recommends it for both public and confidential 
clients because of that XSRF protection.
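For reference, the S256 verifier/challenge mechanics from RFC 7636 can be
sketched as follows (illustrative Python, not tied to any particular library):

```python
import base64
import hashlib
import hmac
import secrets

def b64url(data: bytes) -> str:
    # Base64url without padding, as RFC 7636 requires
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Client side: create the verifier, derive the S256 challenge, and send
# the challenge with the authorization request
verifier = b64url(secrets.token_bytes(32))
challenge = b64url(hashlib.sha256(verifier.encode("ascii")).digest())

# AS side: at the token endpoint, recompute the challenge from the
# presented verifier and compare against the one stored with the
# authorization request
def pkce_ok(stored_challenge: str, presented_verifier: str) -> bool:
    recomputed = b64url(hashlib.sha256(presented_verifier.encode("ascii")).digest())
    return hmac.compare_digest(stored_challenge, recomputed)
```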

Refresh token grant requests only require authentication (such as with a client 
secret) for confidential clients. Public clients are permitted to refresh 
without providing a secret or other credentials.
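A public-client refresh request is then just the form body below - a client_id
but no client_secret (endpoint and values are placeholders):

```python
from urllib.parse import parse_qs, urlencode

# Form body of a refresh grant for a public client: the client_id is
# sent, but there is no client_secret (all values are placeholders)
form = urlencode({
    "grant_type": "refresh_token",
    "refresh_token": "8xLOxBtZp8",
    "client_id": "s6BhdRkqt3",
})
# POST this to the AS token endpoint with
# Content-Type: application/x-www-form-urlencoded
params = parse_qs(form)   # what the AS sees after parsing
```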

Because some implementations initially made no allowance for public clients, 
the AppAuth BCP and the browser-based apps draft allow the use of a secret in 
both the code request and the refresh request - with the understanding by the 
AS that, policy-wise, this must still be treated as a public client.

-DW



Re: [OAUTH-WG] OAuth 2.0 Token Introspection in RFC7662 : Refresh token?

2020-03-01 Thread David Waite
I would expect the AS to invalidate the refresh token in this case, which would 
not require a refresh token mode nor necessarily any signaling back to the 
resource.

-DW

> On Mar 1, 2020, at 12:12 AM, Andrii Deinega  wrote:
> 
> Hello Bill,
> 
> I'm just thinking out loud about possible scenarios for a protected
> resource here... It may decide to revoke a refresh token if a client
> application tried to use it instead of an access token when the
> protected resource is paranoid about security. In order to do that an
> introspection response should include a non-standard parameter which
> indicates that the requested token is refresh_token.
> 
> A user of the introspection endpoint should rely only on a value of
> the active parameter (which is a boolean indicator) of the endpoint
> response. This applies to both types of tokens. Note, the expiration
> date, as well as other parameters, are defined as optional in the
> specification. Both token types can be revoked before the expiration
> date comes even if this parameter is presented as part of the
> response. In my opinion, there are a number of reasons why this check
> (for a refresh token) can be useful on the client application side.
> 
> --
> Regards,
> Andrii
> 
> 
> On Fri, Feb 28, 2020 at 1:59 AM Bill Jung wrote:
>> 
>> Hello, hopefully I am using the right email address.
>> 
>> Simply put, can this spec be enhanced to clarify "Who can use the 
>> introspection endpoint for a refresh token? A resource provider or a client 
>> app or both?"
>> 
>> RFC7662 clearly mentions that the user of introspection endpoint is a 
>> 'protected resource' and that makes sense for an access token. If we allow 
>> this to client apps, it'll give unnecessary token information to them.
>> However, the spec also mentions that refresh tokens can also be used against 
>> the endpoint.
>> In case of refresh tokens, user of the endpoint should be a client app 
>> because refresh tokens are used by clients to get another access token. 
>> (Cannot imagine how/why a resource server would introspect a refresh token)
>> 
>> Is it correct to assume that the endpoint should be allowed to client apps 
>> if they want to examine refresh token's expiry time? Then the RFC should 
>> clearly mention it.
>> 
>> Thanks in advance.
>> 
>> 
>> In https://tools.ietf.org/html/rfc7662
>> In '1.  Introduction' section says,
>> "This specification defines a protocol that allows authorized
>> protected resources to query the authorization server to determine
>> the set of metadata for a given token that was presented to them by
>> an OAuth 2.0 client."
>> Above makes clear that user of the endpoint is a "protected resource".
>> 
>> And under 'token' in '2.1.  Introspection Request' section says,
>> "For refresh tokens,
>> this is the "refresh_token" value returned from the token endpoint
>> as defined in OAuth 2.0 [RFC6749], Section 5.1."
>> So looks like a refresh token is allowed for this endpoint.
>> 
>> 
>> Bill Jung
>> Manager, Response Engineering
>> bj...@pingidentity.com
>> w: +1 604.697.7037
>> Connect with us:
>> 



Re: [OAUTH-WG] OAuth 2.0 Token Introspection in RFC7662 : Refresh token?

2020-03-01 Thread David Waite
On Mar 1, 2020, at 10:11 PM, Andrii Deinega  wrote:
> 
> How would the authorization server know who actually uses the
> introspection endpoint assuming that a protected resource and a client
> application use the same credentials (client_id and client_secret)?

In the external context, you have a client accessing a protected resource with 
an access token. The client should treat the token as opaque, and RFC7662 makes 
no allowances for that client to introspect its tokens.

If you control both the client and protected resource, you may decide to 
short-cut and have them share credentials. However, the client logic still 
should never be introspecting the tokens.

The security considerations also say that the protected resource must be 
authenticated, which I have interpreted to mean that access tokens used to 
authorize the call to the introspection endpoint must be issued to a 
confidential client - public clients cannot protect the credentials needed to 
perform an authentication. You want to limit introspection to prevent denial 
of service and probing attacks, and to limit the amount of information on 
viable attacks conveyed if someone steals a token.

-DW

> 
> Regards,
> Andrii
> 
> On Sun, Mar 1, 2020 at 7:38 PM David Waite wrote:
>> 
>> I would expect the AS to invalidate the refresh token in this case, which 
>> would not require a refresh token mode nor necessarily any signaling back to 
>> the resource.
>> 
>> -DW
>> 
>>> On Mar 1, 2020, at 12:12 AM, Andrii Deinega  
>>> wrote:
>>> 
>>> Hello Bill,
>>> 
>>> I'm just thinking out loud about possible scenarios for a protected
>>> resource here... It may decide to revoke a refresh token if a client
>>> application tried to use it instead of an access token when the
>>> protected resource is paranoid about security. In order to do that an
>>> introspection response should include a non-standard parameter which
>>> indicates that the requested token is refresh_token.
>>> 
>>> A user of the introspection endpoint should rely only on a value of
>>> the active parameter (which is a boolean indicator) of the endpoint
>>> response. This applies to both types of tokens. Note, the expiration
>>> date, as well as other parameters, are defined as optional in the
>>> specification. Both token types can be revoked before the expiration
>>> date comes even if this parameter is presented as part of the
>>> response. In my opinion, there are a number of reasons why this check
>>> (for a refresh token) can be useful on the client application side.
>>> 
>>> --
>>> Regards,
>>> Andrii
>>> 
>>> 
>>> On Fri, Feb 28, 2020 at 1:59 AM Bill Jung wrote:
>>>> 
>>>> Hello, hopefully I am using the right email address.
>>>> 
>>>> Simply put, can this spec be enhanced to clarify "Who can use the 
>>>> introspection endpoint for a refresh token? A resource provider or a 
>>>> client app or both?"
>>>> 
>>>> RFC7662 clearly mentions that the user of introspection endpoint is a 
>>>> 'protected resource' and that makes sense for an access token. If we allow 
>>>> this to client apps, it'll give unnecessary token information to them.
>>>> However, the spec also mentions that refresh tokens can also be used 
>>>> against the endpoint.
>>>> In case of refresh tokens, user of the endpoint should be a client app 
>>>> because refresh tokens are used by clients to get another access token. 
>>>> (Cannot imagine how/why a resource server would introspect a refresh token)
>>>> 
>>>> Is it correct to assume that the endpoint should be allowed to client apps 
>>>> if they want to examine refresh token's expiry time? Then the RFC should 
>>>> clearly mention it.
>>>> 
>>>> Thanks in advance.
>>>> 
>>>> 
>>>> In https://tools.ietf.org/html/rfc7662
>>>> In '1.  Introduction' section says,
>>>> "This specification defines a protocol that allows authorized
>>>> protected resources to query the authorization server to determine
>>>> the set of metadata for a given token that was presented to them by
>>>> an OAuth 2.0 client."
>>>> Above makes clear that user of the endpoint is a "protected resource".
>>>> 
>>>> And under 'token' in '2.1.  Introspection Request' 

Re: [OAUTH-WG] Corona Virus and Vancouver

2020-03-09 Thread David Waite
I will be there in person.

> On Mar 9, 2020, at 12:33 PM, Daniel Fett  wrote:
> 
> Hi all,
> 
> can we do a quick roll call on who is coming or not coming to Vancouver?
> 
> For me, at the current point in time, it depends on whether a significant 
> portion of the working group is attending in-person.
> 
> -Daniel
> 


Re: [OAUTH-WG] Full Third-Party Cookie Blocking

2020-03-25 Thread David Waite
More specifically, SSO will not work anymore without one of:
- prompting the user (via the Storage Access API)
- using explicit front-channel mechanisms (popups and redirects)
- using back-channel mechanisms (refresh tokens and some back-channel logout 
infrastructure)

(FWIW, I proposed a back-channel session management mechanism which would work 
for SPA apps under Connect, 
https://bitbucket.org/openid/connect/src/default/distributed-token-validity-api.txt)

In my experience, the vast majority of apps only care about SSO from a 
user-experience perspective and don’t want synchronized session management. 
Many that do want session management are hosted _mostly_ under one origin, 
since the organization is trying to hide that they are disparate applications 
- but many have exceptions, such as *.google.com and YouTube.com.

-DW


> On Mar 25, 2020, at 7:55 AM, Dominick Baier  wrote:
> 
> This
> 
> https://webkit.org/blog/10218/full-third-party-cookie-blocking-and-more/ 
> 
> 
> Really means that “modern” SPAs based on a combination of OIDC and OAuth will 
> not work anymore
> 
> both
> 
> * silent-renew for access token management
> * OIDC JS session notifications
> 
> Will not work anymore. Or don’t work anymore already today - e.g. in Brave.
> 
> This means SPAs would need to be forced to do refresh tokens - and there is 
> no solution right now for session notifications.
> 
> Maybe the browser apps BCP / OAuth 2.1 should strictly advice against the 
> “browser apps without a back-end” scenario and promote the BFF style 
> architecture instead.
> 
> Cheers 
> ———
> Dominick Baier


Re: [OAUTH-WG] OAuth Security BCP -15

2020-04-05 Thread David Waite
On Apr 5, 2020, at 12:42 PM, Aaron Parecki  wrote:
> Aside from that, I'm struggling to understand what this section is actually 
> saying to do. Since this is in the "Authorization Code Grant" section, is 
> this saying that using response_type=code is fine as long as the client 
> checks the "nonce" in the ID Token obtained after it uses the authorization 
> code? It seems like that would still allow an authorization code to be 
> injected. I don't see how the "nonce" parameter solves anything to do with 
> the authorization code, it seems like it only solves ID token injections via 
> response_type=id_token.

With PKCE, the client sends a challenge value to the authorization endpoint 
and a verifier value to the token endpoint. The authorization server can thus 
verify that the authorization request and the token request were made by the 
same client instance, and refuse to issue tokens if they were not.

With OIDC nonce, the client sends a nonce value to the auth endpoint and 
_receives_ that nonce back inside the id_token. The client can verify that the 
response corresponds to the request it made and reject requesting/using tokens 
if there is an issue.
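A minimal sketch of that client-side nonce check (payload decoding only; a
real client must first validate the id_token signature and the standard
claims, which is elided here):

```python
import base64
import json
import secrets

# Client: generate a nonce, remember it, and send it with the
# authorization request
expected_nonce = secrets.token_urlsafe(16)

def nonce_matches(id_token: str, expected: str) -> bool:
    # Decode only the JWT payload segment; signature and claim
    # validation must already have happened in a real client
    payload_b64 = id_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get("nonce") == expected
```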

The first solution will almost inarguably lead to better implementations, as 
the single AS is taking responsibility for verification before issuing tokens, 
vs relying on potentially many client instances to do proper verification. But 
both do solve the problem.

The problem I have with this choice is that a client library will not 
necessarily know whether the AS honors or ignores PKCE, so a client would need 
to always do _both_ to be a good citizen in some cases. I would therefore 
prefer that clients and servers MUST support PKCE, but that servers MAY also 
allow nonce usage for security in the case of non-compliant clients (with 
appropriately vetted conformance to OIDC).

> In any case, this section could benefit from some more explicit instructions 
> on how exactly to prevent authorization code injection attacks.

Indeed.

-DW


Re: [OAUTH-WG] Second WGLC on "JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens"

2020-04-19 Thread David Waite
There are a number of ambiguities and statements around using JWTs in various 
contexts:

1. Some implementations interpret “iat" to also have the meaning of “nbf” in 
the absence of “nbf”, although this is AFAIK not prescribed by any spec
2. The DPoP draft’s client-generated tokens have the resource servers use their 
own nbf/exp heuristics around “iat”, since the tokens are meant for immediate 
one time use by a party that may not have clock synchronization.
3. There are recommendations in the JWT profile for OAuth that the AS may 
reject tokens based on an “iat” too far in the past or “exp” too far in the 
future, but not that “nbf” was too far in the past or that the interval between 
nbf and exp was too large.

The JWT spec also allows implementers to provide some leeway for clock skew. 
Presumably this meant validators and not JWT creators, although there is 
history of messages setting similar values to account for clock skew (e.g. SAML 
IDPs setting notBefore to one minute before issuance and notOnOrAfter 5 minutes 
after issuance). 
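A validator reflecting the three points above plus skew leeway might look like
this sketch (the leeway and max-age values are arbitrary assumptions):

```python
import time

def jwt_time_valid(claims: dict, leeway: int = 60, max_age: int = 300) -> bool:
    # "iat" doubles as "nbf" when "nbf" is absent - a common but
    # non-standardized interpretation (point 1)
    now = time.time()
    nbf = claims.get("nbf", claims.get("iat"))
    if nbf is not None and now + leeway < nbf:
        return False                       # not yet valid
    exp = claims.get("exp")
    if exp is not None and now - leeway >= exp:
        return False                       # expired
    # Points 2/3: reject tokens whose "iat" is too far in the past
    iat = claims.get("iat")
    if iat is not None and now - iat > max_age + leeway:
        return False
    return True
```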

-DW

> On Apr 19, 2020, at 2:50 AM, Vladimir Dzhuvinov wrote:
> 
> On 16/04/2020 10:10, Dominick Baier wrote:
>> iat vs nbf
>> What’s the rationale for using iat instead of nbf. Aren’t most JWT libraries 
>> (including e.g. the .NET one) looking for nbf by default?
> Developers often tend to intuitively pick up "iat" over "nbf" because it 
> sounds more meaningful (my private observation). So given the empirical 
> approach of Vittorio to the spec, I suspect that's how "iat" got here.
> 
> If we bother to carefully look at the JWT spec we'll see that "iat" is meant 
> to be "informational" whereas it's "nbf" that is intended to serve (together 
> with "exp") in determining the actual validity window of the JWT.
> 
> https://tools.ietf.org/html/rfc7519#section-4.1.5 
> 
> My suggestion is to require either "iat" or "nbf". That shouldn't break 
> anything, and deployments that rely on one or the other to determine the 
> validity window of the access token can continue using their preferred claim 
> for that.
> 
> Vladimir
> 


Re: [OAUTH-WG] Microsoft feedback on DPoP during April 2020 IIW session

2020-04-30 Thread David Waite
To add: there was discussion of whether the “htu" parameter should contain 
scheme/host/port/path, or just scheme/host/port. Dmitri indicated that 
eliminating the path would aid their implementation. 

During JTI scale discussions, it was pointed out that some implementations may 
have individual protected resources at different paths behind a reverse proxy - 
attempting to implement one-time-use semantics would either require 
coordination between the path-bound protected resources, or for DPoP processing 
to be added as a function of the reverse proxy. One possible option proposed 
was separating out the scheme/host/port and the path to two parameters, so that 
they could have different recommendations around enforcement.
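The proposed split could be as simple as the following (the function and
return shape are hypothetical, not from the draft):

```python
from urllib.parse import urlsplit

def split_htu(htu: str) -> tuple[str, str]:
    # Split an "htu" value into its origin (scheme://host[:port]) and
    # its path, so each part could carry different enforcement
    # recommendations - e.g. a reverse proxy checking the origin and
    # one-time-use, with each protected resource checking its own path
    parts = urlsplit(htu)
    origin = f"{parts.scheme}://{parts.netloc}".lower()
    return origin, parts.path or "/"
```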

It was also alluded to that if there is eventually a round trip to negotiate a 
symmetric key with a resource, it is possible that system could leverage the 
secret to scope the token for JTI enforcement. I also wonder if the 
cryptographic requirement to use a different IV per message could be enforced 
by the recipient in lieu of a separate JTI as well (but admittedly I did not 
articulate this as well as I would have liked during the session)

-DW

> On Apr 30, 2020, at 20:29, Mike Jones wrote:
> 
> Daniel Fett and David Waite (DW) hosted a great session on OAuth 2.0 
> Demonstration of Proof-of-Possession at the Application Layer (DPoP) 
> <https://tools.ietf.org/html/draft-ietf-oauth-dpop-00> at the virtualized IIW 
> <https://internetidentityworkshop.com/> this week.  Attendees also included 
> Vittorio Bertocci, Justin Richer, Dmitri Zagidulin, and Tim Cappalli.
>  
> After Daniel and DW finished doing their overview of DPoP, I used some of the 
> time to discuss feedback on DPoP from Microsoft Azure Active Directory (AAD) 
> engineers.  We discussed:
> How do we know if the resource server supports DPoP?  One suggestion was to 
> use a 401 WWW-Authenticate response from the RS.  We learned at IIW that some 
> are already doing this.  People opposed trying to do Resource Metadata for 
> this purpose alone.  However, they were supportive of defining AS Metadata to 
> declare support for DPoP and Registration Metadata to declare support for 
> DPoP.  This might declare the supported token_type values.
> How do we know what DPoP signing algorithms are supported?  This could be 
> done via AS Metadata and possibly Registration Metadata.  People were also in 
> favor of having a default algorithm – probably ES256.  Knowing this is 
> important to preventing downgrade attacks.
> Can we have server nonces?  A server nonce is a value provided by the server 
> (RS or AS) to be signed as part of the PoP proof.  People agreed that having 
> a server nonce would add additional security.  It turns out that Dmitri is 
> already doing this, providing the nonce as a WWW-Authenticate challenge value.
> Difficulties with jti at scale.  Trying to prevent replay with jti is 
> problematic for large-scale deployments.  Doing duplicate detection across 
> replicas requires ACID consistency, which is too expensive to be 
> cost-effective.  Instead, large-scale implementations often use short 
> timeouts to limit replay, rather than performing reliable duplicate detection.
> Is the DPoP signature really needed when requesting a bound token?  It seems 
> like the worst that could happen would be to create a token bound to a key 
> you don’t control, which you couldn’t use.  Daniel expressed concern about 
> this enabling substitution attacks.
> It seems like the spec requires the same token_type for both access tokens 
> and refresh tokens.  Whereas it would be useful to be able to have DPoP 
> refresh tokens and Bearer access tokens as a transition step.  Justin pointed 
> out that the OAuth 2 protocol only has one token_type value – not separate 
> ones for the refresh token and access token.  People agreed that this 
> deserves consideration.
> Symmetric keys are significantly more efficient than asymmetric keys.  In 
> discussions between John Bradley, Brian Campbell, and Mike Jones at IETF 106, 
> John worked out how to deliver the symmetric key to the Token Endpoint 
> without an extra round trip, however it would likely be more complicated to 
> deliver it to the resource without an extra round trip.  At past IETFs, both 
> Amazon and Okta have also advocated for symmetric key support.
> What are the problems resulting from PoP key reuse?  The spec assumes that a 
> client will use the same PoP key for signing multiple token requests, both 
> for access token and refresh token requests.  Is this a security issue?  
> Daniel responded that key reuse is typically only a problem when the same key 
> is used for different algorithms or in different application contexts, when 
> this reuse enables substitution attacks.

Re: [OAUTH-WG] Implementation questions around refresh token rotation

2020-10-10 Thread David Waite
On Oct 6, 2020, at 16:05, Aaron Parecki  wrote:
> However that also kind of defeats the purpose since attacks within that grace 
> period would be hard to detect. I'm looking for an idea of where people have 
> landed on that issue in practice.

This is effectively a race condition, and a grace period hides your ability to 
detect the race. Because of the race condition there is no guarantee that the 
second refresh token is the one that is retained, so the client could still 
fail once it needs its next access token.

Instead, an ideal system would allow you to make a security exception and turn 
off rotation, possibly only until the client revises their logic.

-DW


Re: [OAUTH-WG] Implementation questions around refresh token rotation

2020-10-12 Thread David Waite
An AS may decide refresh token rotation is useful for other reasons (such as if 
the token is an encrypted value and the AS cluster does key rotation), or may 
decide to rotate all tokens for consistency.

Eventually best practices may indicate sender-constrained tokens for public 
clients. At that point, refresh token rotation may no longer be a security 
practice, but it could still be something an AS does as part of its own 
design. An AS may also elect to rotate often so that broken clients fail (and 
correct their logic) faster.

-DW

> On Oct 12, 2020, at 03:15, Torsten Lodderstedt wrote:
> 
>> 
>> On 12.10.2020 at 09:04, Dave Tonge wrote:
>> 
>> 
>> Hi Neil
>> 
>>  > refresh token rotation is better thought of as providing protection 
>> against insecure token storage on the client
>> 
>> I agree with your reasoning - and that was more the intent of what I said. 
>> We've seen refresh token rotation used for confidential clients that have 
>> secure storage (i.e. are run in a data center not on a mobile device) and it 
>> has caused problems with zero additional security benefits. 
> 
> Those are good examples of why refresh token rotation should not be used if 
> there are better ways available to protect refresh tokens from replay. Client 
> authentication or sender constrained refresh tokens are the better choice.



Re: [OAUTH-WG] We appear to still be litigating OAuth, oops

2021-02-26 Thread David Waite


> On Feb 26, 2021, at 9:32 AM, Aaron Parecki  wrote:

> The point is that basically nobody uses it because they don't want to allow 
> arbitrary client registration at their ASs. That's likely due to a 
> combination of pre-registration being the default model in OAuth for so long 
> (the Dynamic Client Registration draft was published several years after 
> OAuth 2.0), as well as how large corporations have decided to run their ASs 
> where they want to have (what feels like) more control over the things 
> talking to their servers.

Do you disagree that this gives them control over which things talk to their 
servers?

FWIW my personal mental model here is pretty simple:

With users, there are services you provide anonymously and services you provide 
only to registered/authenticated/trusted parties for various reasons. Once you 
are delegating user access, you still have many of the same reasons to provide 
access to anonymous or registered/authenticated/trusted delegates.

Dynamic registration arriving later, and requiring additional complexity, has 
unfortunately encouraged registration in use cases where anonymous clients 
might have been acceptable. But shifting the timelines or the complexity 
balance would not have changed the business need for authentication and trust 
of delegates; omitting registration would simply have pushed businesses toward 
other protocols that met their needs.

If AS’s are only getting what feels like proper control for their business 
needs, we should attempt to give them the actual control they require.

-DW


Re: [OAUTH-WG] Re-creation of Access Token on Single Page Application

2021-03-14 Thread David Waite


> On Mar 14, 2021, at 8:36 PM, Tatsuya Karino  wrote:
> 
> On Safari, you have no workaround.
> 3rd-party cookie is dead, and all JS-writable data is removed in 7 days there.
> 
> As you stated, option 1 does not work in cross-site scenarios in Safari & 
> Brave at the moment. Other browsers are likely to follow the same pattern in 
> the future.
> Option 2 only works if there are already tokens available, which is typically 
> not the case at first load. Also, keeping long-lived refresh tokens in a 
> browser is not always the best idea.
> 
> I see... Thank you. I understand now that it is difficult to re-create an 
> Access Token silently. Reading your workarounds, I feel that handling an 
> Access Token inside an SPA is a little difficult.

Refresh tokens wind up being a bit more valuable to bridge the state in the two 
sandboxes and let you get new access tokens.

The AS policy can also be adjusted potentially to give longer-lived access 
tokens when the type of access being granted has less security impact if leaked.

> Generally speaking, preparing Backend Server looks better from a security and 
> UIUX point of view. If there is a Backend Server for the SPA, we can use the 
> Backend as a Confidential Client, and create a session and save it on http 
> only cookie for the SPA. If we have a human resource to do so... 

A back-end server only changes access token availability in the case where the 
AS security policy is that only confidential clients can be issued refresh 
tokens.

If you use a backend to expose/proxy API requests via a set cookie, remember 
that you need to protect against CSRF attacks. Some sites use the need for a 
manually-applied OAuth header as CSRF protection.
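The idea is that a cross-site form post cannot attach a custom Authorization
header, so requiring one acts as a CSRF check; as a sketch (deployments would
combine this with other measures such as SameSite cookies):

```python
def csrf_safe(headers: dict) -> bool:
    # Cross-site form posts and top-level navigations cannot set a
    # custom Authorization header; requiring one (rather than relying
    # on an ambient cookie alone) therefore blocks classic CSRF.
    # Sketch only - the exact header and scheme are per deployment.
    return headers.get("Authorization", "").startswith("Bearer ")
```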

-DW



Re: [OAUTH-WG] self-issued access tokens

2021-09-30 Thread David Waite
Are you using DPoP at issuance of the credential and embedding the public key 
as the means to verify the subject? Are you going so far as using DPoP in lieu 
of Verifiable Presentation wrappers?

-DW

> On Sep 30, 2021, at 12:47 AM, Nikos Fotiou  wrote:
> 
> FYI, this is exactly what we are doing in [1] to manage Verifiable 
> Credentials using OAuth2.0. The AS issues a verifiable credential that stays 
> (for long time) in the client. The client uses DPoP to prove ownership of the 
> credential. We just started a new project funded by essif [2] that will 
> further develop this idea and provide implementations.
> 
> Best,
> Nikos
> 
> [1] N. Fotiou, V.A. Siris, G.C. Polyzos, "Capability-based access control for 
> multi-tenant systems using Oauth 2.0 and Verifiable Credentials," Proc. 30th 
> International Conference on Computer Communications and Networks (ICCCN), 
> Athens, Greece, July 2021 
> (https://mm.aueb.gr/publications/0a8b37c5-c814-4056-88a7-19556221728c.pdf)
> [2]https://essif-lab.eu
> --
> Nikos Fotiou - http://pages.cs.aueb.gr/~fotiou
> Researcher - Mobile Multimedia Laboratory
> Athens University of Economics and Business
> https://mm.aueb.gr



Re: [OAUTH-WG] self-issued access tokens

2021-10-01 Thread David Waite


> On Oct 1, 2021, at 11:06 AM, Dick Hardt  wrote:

> If there is really only one service, then there is little value in an AS. I 
> would have the client post a JWT that has the request payload in it, or a 
> detached signature if it is a large payload. Personally, I like sending the 
> request as a JWT as it allows services further down the processing pipeline 
> to independently verify the request from the client.
> 
> This assumes sufficient computing power on the IoT device, and reasonably low 
> call volume.

One interpretation of the purpose of the AS is to create tokens based on its 
authorization decisions, while direct submission of client-authored JWTs would 
be more in line with having the RS make those decisions directly.

Even if they were hosted on the same hardware, I’d still push to use an AS-role 
component in order to optimize the decision-making process and to avoid having 
to refactor (or risk duplicating) that logic later.

-DW



Re: [OAUTH-WG] Call for Adoption - OAuth Proof of Possession Tokens with HTTP Message Signature

2021-10-08 Thread David Waite
I do not support adopting this work as proposed, with the caveat that I am a 
co-editor of the DPoP work.

We unfortunately do not have a single approach for PoP which works for all 
scenarios and deployments, which is why we have had several proposals and 
standards such as Token Binding, mutual TLS, and DPoP. There have been other less generalized 
approaches as well, such as forming signed request and response objects on the 
channel when one needs end-to-end message integrity or confidentiality.

Each of these has its own capabilities and trade-offs, and their 
applicability to scenarios where the others falter is why multiple approaches 
are justified.

The preferred solution for HTTPS resource server access is to leverage MTLS. 
However, browsers have both poor/nonexistent API to manage ephemeral client 
keys and poor UX around mutual TLS in general.

DPoP was proposed to attempt a “lightest lift” to provide cryptographic 
evidence of the sender being involved, so that browsers could protect their 
tokens from exfiltration by non-exportable, ephemeral keys. In that way, we 
keep from having to define a completely separate security posture for 
resource-constraining browser apps.

The motivations for the HTTPSig specification don’t clearly state why it is 
essential to have another promoted PoP approach. I would expect more 
prescriptive text about the use case that this is proposed for. In particular, 
I would love to see an additional use case, outside of DPoP, not solved by MTLS 
but solved by this proposal.

If it turns out the targets of HTTP Message Signatures and DPoP overlap 
completely, I suspect we would have the issue of two competing adopted drafts 
in the working group. I personally do not know the ramifications of such an 
event. I do not believe there would be consensus on eliminating one, nor would 
there be a significant reduction in complexity by combining them.

Deferring until HTTPSig is interoperably implemented in the industry gives us 
concrete motivation in the future to support both.

-DW


Re: [OAUTH-WG] DPoP, how will it be implemented in a browser?

2021-10-11 Thread David Waite
Public clients can create their own ephemeral key (say, non-exportable keys 
made with WebCrypto) to have bound to the access and refresh tokens at issuance 
time. DPoP is independent of the client authentication to the AS.
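A rough sketch of the proof such a client would construct (the JWK coordinate values are placeholders, the token endpoint URL is illustrative, and the ES256 signing step with the ephemeral private key is elided):

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    # Base64url without padding, as used for JWS segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Hypothetical ephemeral public key; in a browser this would come from a
# non-exportable ES256 key pair generated with WebCrypto.
public_jwk = {"kty": "EC", "crv": "P-256", "x": "<x-coord>", "y": "<y-coord>"}

header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": public_jwk}
payload = {
    "jti": str(uuid.uuid4()),               # unique per proof
    "htm": "POST",                          # HTTP method of the request
    "htu": "https://as.example.com/token",  # target URI (illustrative)
    "iat": int(time.time()),
}

# This is the value the ephemeral private key would sign; the actual
# ES256 signature step is elided in this sketch.
signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
```

The AS then binds the issued tokens to the key carried in the proof header, with no client secret involved.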

-DW

> On Oct 11, 2021, at 11:40 AM, Nikos Fotiou  wrote:
> 
> Hi,
> How do you believe DPoP will be implemented in a browser? In particular, how 
> the browser will retrieve client's private key and generate the appropriate 
> signature? Do you imagine interoperability with a specification such as 
> WebAuthn? Something else (e.g., DPoP-enabled "wallets")? 
> 
> Best,
> Nikos
> --
> Nikos Fotiou - http://pages.cs.aueb.gr/~fotiou
> Researcher - Mobile Multimedia Laboratory
> Athens University of Economics and Business
> https://mm.aueb.gr
> 
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth



Re: [OAUTH-WG] convert to credentialed client... ( was OAuth2.1 credentialed client )

2021-10-11 Thread David Waite

> On Oct 11, 2021, at 11:52 AM, Dick Hardt  wrote:
> 
> 
> Thanks for the feedback Brian. We have struggled in how to concisely describe 
> credentialed clients.
> 
> "identifying a client" can be interpreted a number of ways.
> 
> The intent is that the AS knows a credentialed client is the same client it 
> previously interacted with, but that the AS can not assume any other 
> attributes of the client, for example that it is a client from a given 
> developer, or has a specific name.

It sounds like the goal is to distinguish authenticating the client from trust 
of the client pedigree, e.g. the only authenticity of a public client might be 
that it can catch the redirect_uri, and the only authenticity of a dynamically 
registered client is what you required and verified up to that point. 

Some of that trust may be on confidentiality of data, prior reputation, 
safeguards to prevent token exfiltration or unauthorized token use locally, etc.

A credentialed client is not more trusted than a confidential client - it is 
just more uniquely identifiable. A public client does not have a mechanism 
(within OAuth today) to prove its trustworthiness on request because it is not 
authenticated as the party with that trust.  You instead would need to e.g. do 
client registration with a software statement. 

It may help to know what actions are MUST NOT or SHOULD NOT for credentialed 
clients vs confidential clients. Without that, the distinction seems like it 
should be self-contained in 2.1 like the client profiles, and maybe the term 
“confidential client” should be explained as a misnomer, with a broader 
explanation that confidential vs public client is _not_ meant to be described 
as a trust distinction.

-DW


Re: [OAUTH-WG] Authorization code reuse and OAuth 2.1

2021-10-13 Thread David Waite
I agree that PKCE (with a non-plain operational mode) protects the code from 
attacker use by the security BCP model (but not necessarily stronger models)

Would the prevalence of ASs which cannot enforce an atomic code grant warrant 
further language against plain PKCE?
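For reference, the S256 check from RFC 7636 that provides this protection can be sketched as follows (endpoint plumbing omitted):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # Client side: random verifier, challenge = BASE64URL(SHA256(verifier)).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode("ascii")).digest()).rstrip(b"=").decode()
    return verifier, challenge

def verify_s256(verifier, stored_challenge):
    # AS side at the token endpoint: recompute the challenge from the
    # presented verifier and compare in constant time.
    computed = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode("ascii")).digest()).rstrip(b"=").decode()
    return secrets.compare_digest(computed, stored_challenge)
```

With "plain", the verifier equals the challenge, so anyone who saw the authorization request can also redeem the code — which is the weakness the question above is about.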

-DW

> On Oct 13, 2021, at 11:16 AM, Mike Jones 
>  wrote:
> 
> During today’s call, it was asked whether we should drop the OAuth 2.0 
> language that:
>  The client MUST NOT use the authorization code
>  more than once.  If an authorization code is used more than
>  once, the authorization server MUST deny the request and SHOULD
>  revoke (when possible) all tokens previously issued based on
>  that authorization code.”
>  
> The rationale given was that enforcing one-time use is impractical in 
> distributed authorization server deployments.
>  
> Thinking about this some more, at most, we should relax this to:
>  The client MUST NOT use the authorization code
>  more than once.  If an authorization code is used more than
>  once, the authorization server SHOULD deny the request and SHOULD
>  revoke (when possible) all tokens previously issued based on
>  that authorization code.”
>  
> In short, it should remain illegal for the client to try to reuse the 
> authorization code.  We can relax the MUST to SHOULD in the server 
> requirements in recognition of the difficulty of enforcing the MUST.
>  
> Code reuse is part of some attack scenarios.  We must not sanction it.
>  
>   -- Mike
>  
> ___
> OAuth mailing list
> OAuth@ietf.org 
> https://www.ietf.org/mailman/listinfo/oauth 
> 


Re: [OAUTH-WG] [UNVERIFIED SENDER] Call for Adoption - OAuth Proof of Possession Tokens with HTTP Message Signature

2021-10-13 Thread David Waite



> On Oct 13, 2021, at 12:26 PM, Richard Backman, Annabelle 
>  wrote:
> 
> Those issues that could be addressed without completely redesigning DPoP have 
> been discussed within the Working Group multiple times. (See quotes and 
> meeting notes references in my previous message) The authors have pushed back 
> on extending DPoP to cover additional use cases due to a desire to keep 
> DPoP simple and lightweight. I don't begrudge them that. I think it's 
> reasonable to have a "dirt simple" solution, particularly for SPAs given the 
> relative limitations of the browser environment.
> 
> Other issues are inherent to fundamental design choices, such as the use of 
> JWS to prove possession of the key. E.g., you cannot avoid the data 
> duplication issue since a JWS signature only covers a specific serialization 
> of the JWT header and body.

Agreed with keeping DPoP simple, which was why I was asking if the proposal 
could indicate it was targeting some of these other use cases. The current 
draft being proposed for adoption I believe is fixed to the same HTTP 
properties that DPoP leverages, and thus appears to be targeting the same use 
cases with a different proof expression.

The duplication within the token is also a trade-off: it allows an 
implementation to have a white list of acceptable internal values, if say the 
host and path are rewritten by reverse proxies. It also allows an 
implementation to give richer diagnostic information when receiving 
unacceptable DPoP tokens, which may very well come at runtime from an 
independently-operating portion of an organization reconfiguring intermediaries.
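The rewrite-tolerant check described above might be sketched as follows (URLs are hypothetical; a real RS would also check jti uniqueness and iat freshness):

```python
# External URL plus internal forms produced by reverse proxies (hypothetical).
ACCEPTABLE_HTU = {
    "https://api.example.com/token",
    "https://internal-gw.example.internal/token",
}

def check_dpop_http_binding(proof_claims, request_method):
    """Validate the htm/htu claims of a DPoP proof against an allow-list,
    returning a diagnostic message rather than a bare pass/fail."""
    htm = proof_claims.get("htm")
    if htm != request_method:
        return False, "htm mismatch: proof says %r, request was %r" % (htm, request_method)
    htu = proof_claims.get("htu")
    if htu not in ACCEPTABLE_HTU:
        return False, "htu %r not among acceptable values" % htu
    return True, "ok"
```

The duplicated claims make the second diagnostic possible: the RS can report exactly which rewritten URL the client signed.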

-DW


Re: [OAUTH-WG] Call for Adoption - OAuth Proof of Possession Tokens with HTTP Message Signature

2021-10-13 Thread David Waite
inutes-105-oauth-201907261000/>:
> MTLS is good but not great for browser. TOKBIND has no current browser 
> support. Need solution for browser apps.
> 
> [Daniel Fett]: DPOP is hopefully a simple and concise mechanism.
> 
> [Brian Campbell]: DPOP came out of a desire for a simplified concise public 
> key mechanism at both the authz and resource server….there isn’t the overhead 
> for symmetric keys.
> 
> [Annabelle Backman]: We too find [DPoP] limiting without symmetric as 
> asymmetric can be just too slow.
> 
> [John Bradley]: The origin of [DPoP] came from the security workshop 
> specifically focused on applications to do PoP should token binding not come 
> to fruition. We could use web-crypto and create a non-exportable key in the 
> browser. This is why there is no support for symmetric key.
> 
> [Mike Jones]: Want to use different POP keys for AT and RT.
> 
> [Justin Richer]: I really like this approach. But I agree with Hannes that 
> having a server provided symmetric key is useful.
> 
> Roman [Danyliw]: Strongly urge the equities of other groups and surface them.
> 
> IETF 106 
> <https://datatracker.ietf.org/meeting/106/materials/minutes-106-oauth-03.pdf>:
> Annabelle [Backman]: Would you consider using a HTTP signing solution and not 
> do this
> John [Bradley]: ...[DPoP] has limited aspirations than the http signing.
> 
> Some discussions on symmetric vs asymmetric encryption and Annabelle is 
> concerned about the scaling and crypto costs. So some folks want both types, 
> this would increase the scope of the effort [for DPoP].
> 
> The scope [of DPoP] was to be able to use something with sender constraint 
> for SPA, this is not for broader usage, so this is limited scope not doing 
> what HTTP Signing would be used for. So this needs to be presented as a very 
> focused effort.
> 
> Mike [Jones]: The usage of TLS for sender constraint is not deployable
> 
> OAuth WG Interim Meeting – 2021-03-15 
> <https://datatracker.ietf.org/doc/minutes-interim-2021-oauth-01-202103151200/>:
> Francis [Pouatcha]: DPoP should be by no way a replacement for HTTP signing.
> 
> —
> Annabelle Backman (she/her)
> richa...@amazon.com <mailto:richa...@amazon.com>
> 
> 
> 
> 
>> On Oct 8, 2021, at 5:38 PM, David Waite 
>> > <mailto:david=40alkaline-solutions@dmarc.ietf.org>> wrote:
>> 
>> CAUTION: This email originated from outside of the organization. Do not 
>> click links or open attachments unless you can confirm the sender and know 
>> the content is safe.
>> 
>> 
>> 
>> I do not support adopting this work as proposed, with the caveat that I am a 
>> co-editor of the DPoP work.
>> 
>> We unfortunately do not have a single approach for PoP which works for all 
>> scenarios and deployments, which is why we have had several proposals and 
>> standards such as Token Binding, mutual TLS, and DPoP. There have been other less 
>> generalized approaches as well, such as forming signed request and response 
>> objects on the channel when one needs end-to-end message integrity or 
>> confidentiality.
>> 
>> Each of these has its own capabilities and trade-offs, and their 
>> applicability to scenarios where the others falter is why multiple 
>> approaches are justified.
>> 
>> The preferred solution for HTTPS resource server access is to leverage MTLS. 
>> However, browsers have both poor/nonexistent API to manage ephemeral client 
>> keys and poor UX around mutual TLS in general.
>> 
>> DPoP was proposed to attempt a “lightest lift” to provide cryptographic 
>> evidence of the sender being involved, so that browsers could protect their 
>> tokens from exfiltration by non-exportable, ephemeral keys. In that way, we 
>> keep from having to define a completely separate security posture for 
>> resource-constraining browser apps.
>> 
>> The motivations for the HTTPSig specification don’t clearly state why it is 
>> essential to have another promoted PoP approach. I would expect more 
>> prescriptive text about the use case that this is proposed for. In 
>> particular, I would love to see an additional use case, outside of DPoP, not 
>> solved by MTLS but solved by this proposal.
>> 
>> If it turns out the targets of HTTP Message Signatures and DPoP overlap 
>> completely, I suspect we would have the issue of two competing 
>> adopted drafts in the working group. I personally do not know the 
>> ramifications of such an event. I do not believe there would be consensus on 
>> eliminating one, nor would there be a significant reduction in complexity by 
>> combining them.
>> 
>> Deferring until HTTPSig is interoperably implemented in the industry gives 
>> us concrete motivation in the future to support both.
>> 
>> -DW
>> ___
>> OAuth mailing list
>> OAuth@ietf.org <mailto:OAuth@ietf.org>
>> https://www.ietf.org/mailman/listinfo/oauth 
>> <https://www.ietf.org/mailman/listinfo/oauth>
> 



Re: [OAUTH-WG] Authorization code reuse and OAuth 2.1

2021-10-15 Thread David Waite

> On Oct 15, 2021, at 7:23 PM, Ash Narayanan  wrote:
> ...
> 
>> As I see it, the retry in case of network failures should happen by 
>> performing a new authorization request – not by trying to reuse an 
>> authorization code – which is indistinguishable from an attack.
> This offers no additional security and creates a poor user experience. With 
> idempotent tokens, there is no incentive for the attacker to replay than to 
> just use the already generated tokens, which I assume the attacker would have 
> access to given they have access to the code/verifier.

Note that only some OAuth authorization code grants can be idempotent without 
being a synchronous process, as the code is not the only input. For instance, 
PoP schemes will provide the public or symmetric key portion on the back 
channel. 

I’m of the mind that multiple use SHOULD be a failure, but that does not mean one 
or both parties SHOULD fail to get an access token. For example, the condition 
can be found afterward and trigger revocation of the refresh tokens.
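One way an AS that can enforce atomicity might implement that detect-and-revoke behavior (a single-process sketch with illustrative token formats; a distributed AS would need an atomic shared store):

```python
class AuthorizationCodeStore:
    """Single-use codes; reuse revokes tokens issued at the first redemption."""

    def __init__(self):
        self._pending = {}  # code -> grant details
        self._issued = {}   # code -> tokens minted at first redemption

    def create(self, code, grant):
        self._pending[code] = grant

    def redeem(self, code):
        if code in self._issued:
            # Reuse detected: revoke everything minted from the first use.
            revoked = self._issued.pop(code)
            return "reuse_detected", revoked
        grant = self._pending.pop(code, None)
        if grant is None:
            return "invalid_grant", None
        tokens = {"access_token": "at-for-" + code, "refresh_token": "rt-for-" + code}
        self._issued[code] = tokens
        return "ok", tokens
```

Here the second presentation fails, but also surfaces exactly which previously issued tokens to revoke.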

-DW
> 
>> Let’s not use OAuth 2.1 as an opportunity to sanction behaviors that we 
>> can’t distinguish from attacks.
> I would argue that we should also take the opportunity to make Oauth2.1 more 
> meaningful/easier to implement, by reducing unnecessary requirements that 
> offer no additional security.
> 
> Daniel did also mention this bit, which I agree with:
>> And ideally, the code SHOULD also be invalidated if the PKCE verifier does 
>> not match, not sure if that is in the current text or not.
>  
> 
>> On Sat, Oct 16, 2021 at 10:42 AM Mike Jones 
>>  wrote:
>> As I see it, the retry in case of network failures should happen by 
>> performing a new authorization request – not by trying to reuse an 
>> authorization code – which is indistinguishable from an attack.
>> 
>>  
>> 
>> Let’s not use OAuth 2.1 as an opportunity to sanction behaviors that we 
>> can’t distinguish from attacks.
>> 
>>  
>> 
>> The prohibition on clients reusing an authorization code needs to remain.
>> 
>>  
>> 
>>   -- Mike
>> 
>>  
>> 
>> From: Vittorio Bertocci  
>> Sent: Friday, October 15, 2021 4:19 PM
>> To: Richard Backman, Annabelle 
>> Cc: Mike Jones ; oauth@ietf.org
>> Subject: [EXTERNAL] Re: [OAUTH-WG] Authorization code reuse and OAuth 2.1
>> 
>>  
>> 
>> I am a fan of this approach. It feels pretty empty to cast people out of 
>> compliance just because they are handling a realistic circumstance, such as 
>> network failures, that we know about beforehand. 
>> 
>> In addition, this gives us a chance to provide guidance on how to handle the 
>> situation, instead of leaving AS implementers to their own device.
>> 
>>  
>> 
>> On Fri, Oct 15, 2021 at 11:32 AM Richard Backman, Annabelle 
>>  wrote:
>> 
>> The client MUST NOT use the authorization code more than once.
>> 
>>  
>> 
>> This language makes it impossible to build a fault tolerant, spec compliant 
>> client, as it prohibits retries. We could discuss whether a retry really 
>> constitutes a separate "use", but ultimately it doesn't matter; multiple 
>> presentations of the same code look the same to the AS, whether they are the 
>> result of retries, the client attempting to get multiple sets of tokens, or 
>> an unauthorized party trying to replay the code.
>> 
>>  
>> 
>> I think we can have a fault tolerant, replay-proof implementation, but it 
>> takes some effort:
>> 
>>  
>> 
>> The AS can prevent the authorized client from using one code to get a bunch 
>> of independent refresh and access token pairs by either re-issuing the same 
>> token (effectively making the token request idempotent) or invalidating 
>> previously issued tokens for that code. (Almost but not quite 
>> idempotent…idempotent-adjacent?)
>> The AS can prevent unauthorized parties from replaying snooped codes+PKCE by 
>> requiring stronger client authentication: implement dynamic client 
>> registration and require a replay-resistant client authentication method 
>> like `jwt-bearer`. The AS can enforce one-time use of the client credential 
>> token without breaking fault tolerance, as the client can easily mint a new 
>> one for each retry to the token endpoint.
>>  
>> 
>> Yes, I know, this is way more complex than just a credential-less public 
>> client doing PKCE. Perhaps we can have our cake and eat it too with language 
>> like:
>> 
>>  
>> 
>> The client MUST NOT use the authorization code more than once, unless 
>> retrying a token request that failed for reasons beyond the scope of this 
>> protocol. (e.g., network interruption, server outage) Refer to [Fault 
>> Tolerant Replay Prevention] for guidance.
>> 
>>  
>> 
>> …where Fault Tolerant Replay Prevention is a subsection under Security 
>> Considerations. I don't think this wording is quite right, as the guidance 
>> is really going to be for the AS, not the client, but hopefully it's enough 
>> to get the idea across.
>> 
>>  
>> 
>

Re: [OAUTH-WG] [EXTERNAL] Rotating RTs and grace periods

2021-11-02 Thread David Waite
My perspective is that the rotation of refresh tokens is an AS mechanism to 
push for one-time-usage and break idempotency. This is specifically employed to 
reduce the impact in scenarios where the refresh token can be leaked and used 
by a third party attacker. A leaked refresh token can only be used in 
environments where the client credentials required for a refresh grant can also 
be leaked.

So,

1. Refresh token rotation is just one mechanism an AS could use to attempt to 
make reuse of leaked refresh tokens harder. Therefore it is entirely acceptable 
for an AS to have a policy around a grace period, as well as other policies - 
the goal is to try to make attacker’s jobs harder while letting through as many 
legitimate requests as possible

2. In scenarios where the client is expected to have non-exfiltratable 
credentials, the need for these sorts of protections goes down possibly to 
zero. Of course, this has a component of the AS trust of the client, e.g. to 
not be leveraging exportable keys.
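To make the grace-period policy concrete, here is a rough sketch where a retry inside the window re-issues the same successor token rather than minting a new one, and reuse outside the window is treated as theft (the 60-second window and token format are illustrative):

```python
import secrets
import time

GRACE_SECONDS = 60  # illustrative policy knob

class RefreshTokenStore:
    def __init__(self):
        # token -> {"rotated_at": timestamp or None, "successor": token or None}
        self._records = {}

    def issue(self):
        token = secrets.token_urlsafe(16)
        self._records[token] = {"rotated_at": None, "successor": None}
        return token

    def refresh(self, token, now=None):
        now = time.time() if now is None else now
        rec = self._records.get(token)
        if rec is None:
            return None  # unknown token: reject (a real AS might also alarm)
        if rec["rotated_at"] is not None:
            if now - rec["rotated_at"] <= GRACE_SECONDS:
                return rec["successor"]  # retry within grace: same successor
            return None  # reuse outside grace: treat as theft, revoke family
        successor = self.issue()
        rec["rotated_at"] = now
        rec["successor"] = successor
        return successor
```

Returning the same successor keeps the retry from forking the token family, which is what makes later reuse detection meaningful.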

-DW

> On Nov 2, 2021, at 3:20 PM, Neil Madden  wrote:
> 
> The grace period is to allow the client to retry if it fails to receive the 
> new RT for any reason. For example, the client performs a successful refresh 
> flow but loses mobile network signal before receiving the response. The grace 
> period allows the client to simply retry the request, whereas without a grace 
> period the first request would have invalidated the old RT leaving the client 
> with no option but to perform a full authorization flow again to get a new 
> one. 
> 
> I’m generally against allowing a grace period at all, but given that it’s a 
> common request and some implementations are already allowing this, I’m hoping 
> we can find some wording we can all agree on. 
> 
> I agree that a grace period is more acceptable if the RT is 
> sender-constrained by something like DPoP, but then in that case does RT 
> rotation add anything anyway? The current BCP lists these two as either/or 
> rather than defence in depth. 
> 
> — Neil
> 
>> On 2 Nov 2021, at 14:09, Pieter Kasselman  
>> wrote:
>> 
>> 
>> Neil
>>  
>> Is the goal to accommodate network latency or clock drift? It would be 
>> helpful to include reasons for why a grace period should be considered if it 
>> is allowed.
>>  
>> Without knowing the reasons for the grace period it is not clear why a grace 
>> period is a better solution than just extending the expiry time by a set 
>> time (60 seconds in your example) and having the client present the token a 
>> little earlier.
>>  
>> If grace periods are allowed, it may be worth considering adding additional 
>> mitigations against replay. For example, a grace period may be allowed if 
>> the refresh token is sender constrained with DPoP so there is at least some 
>> assurances that the request is originating from the sender (especially if 
>> the nonce option is used with DPoP).
>>  
>> I would worry about adding more complexity and less predictability by adding 
>> grace periods though (e.g. by looking at a refresh token, will you be able 
>> to tell if it can still be used or not), but your point that implementors 
>> may solve for it in other less predictable ways raises a valid point.
>>  
>> Cheers
>>  
>> Pieter
>>  
>> From: OAuth  On Behalf Of Neil Madden
>> Sent: Tuesday 2 November 2021 10:29
>> To: oauth 
>> Subject: [EXTERNAL] [OAUTH-WG] Rotating RTs and grace periods
>>  
>> Hi all,
>>  
>> There was a previous discussion on whether to allow a grace period during 
>> refresh token rotation, allowing the client to retry a refresh if the 
>> response fails to be received due to some transient network issue/timeout 
>> [1]. Vittorio mentioned that Auth0 already implement such a grace period. We 
>> (ForgeRock) currently do not, but we do periodically receive requests to 
>> support this. The current security BCP draft is silent on whether 
>> implementing such a grace period is a good idea, but I think we should add 
>> some guidance here one way or another.
>>  
>> My own opinion is that a grace period is not a good idea, and if it is to be 
>> supported as an option then it should be kept as short as possible. The 
>> reason (as I mentioned in the previous thread) is that it is quite easy for 
>> an attacker to observe when a legitimate client performs a refresh flow and 
>> so can easily sneak in their own request afterwards within the grace period. 
>> There are several reasons why it is easy for an attacker to observe this:
>>  
>> - RT rotation is primarily intended for public clients, such as mobile apps 
>> and SPAs. These clients are geographically distributed across the internet, 
>> and so there is a good chance that the attacker is able to observe the 
>> network traffic of at least some of these client instances.
>> - The refresh flow is typically the only request that the client makes 
>> directly to the AS after initial authorization, so despite the traffic being 
>> encrypted it is very easy f

Re: [OAUTH-WG] JWK Thumbprint URI Specification

2021-11-24 Thread David Waite
I would investigate whether there are advantages of having this be a URN vs a 
URI in a new base scheme (e.g. jkt:bTz_1…). I haven’t seen much URN namespacing 
of dynamic values (e.g. values not being maintained by the entity responsible 
for the namespace or sub-spaces), and a new scheme is a terser form. 

Also, do you foresee any reason to support other hashing algorithms, since 
thumbprints themselves do not dictate a hashing algorithm? An optional hash 
algorithm parameter seems simple enough to add, except I don’t know of a hash 
algorithm registry to reference.

-DW

Sent from my iPhone

> On Nov 24, 2021, at 4:18 PM, Mike Jones 
>  wrote:
> 
> The JWK Thumbprint is typically used as a key identifier. Yes, the key needs 
> to be known by other means if you’re going to use it.  Some protocols work 
> that way, which is what this spec is intended to enable.  For instance, the 
> Self-Issued OpenID Provider (SIOP) v1 and v2 protocols send the public key 
> separately in a “sub_jwk” claim.  In other use cases, it may already be known 
> to the receiving party – for instance, from a prior discovery step.
>  
> It would be fine to separately also define a URI representation communicating 
> an entire JWK, but that would be for different use cases, and not the goal of 
> this (intentionally narrowly scoped) specification.
>  
>Cheers,
>-- Mike
>  
> From: OAuth  On Behalf Of David Chadwick
> Sent: Wednesday, November 24, 2021 12:36 PM
> To: oauth@ietf.org
> Subject: Re: [OAUTH-WG] JWK Thumbprint URI Specification
>  
> On 24/11/2021 20:07, Mike Jones wrote:
> The JSON Web Key (JWK) Thumbprint specification [RFC 7638] defines a method 
> for computing a hash value over a JSON Web Key (JWK) [RFC 7517] and encoding 
> that hash in a URL-safe manner. Kristina Yasuda and I have just created the 
> JWK Thumbprint URI specification, which defines how to represent JWK 
> Thumbprints as URIs. This enables JWK Thumbprints to be communicated in 
> contexts requiring URIs, including in specific JSON Web Token (JWT) [RFC 
> 7519] claims.
>  
> My immediate observation is why are you sending the thumbprint of the JSON 
> Web Key and not sending the actual key value in the URI?
> 
> Sending the thumbprint means the recipient still has to have some other way 
> of obtaining the actual public key, whereas sending the public key as a URI 
> means that no other way is needed.
> 
> Kind regards
> 
> David
> 
>  
> 
> Use cases for this specification were developed in the OpenID Connect Working 
> Group of the OpenID Foundation. Specifically, its use is planned in future 
> versions of the Self-Issued OpenID Provider v2 specification.
>  
> The specification is available at:
> 
> 1.   
> https://www.ietf.org/archive/id/draft-jones-oauth-jwk-thumbprint-uri-00.html
>  
>-- Mike
>  
> P.S.  This note was also published at https://self-issued.info/?p=2211 and as 
> @selfissued.
>  
> 
> 
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth
>  
> 
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Proposed changes to RFC 8705 (oauth-mtls)

2021-12-09 Thread David Waite


> On Dec 9, 2021, at 2:35 PM, Neil Madden  wrote:
> 
> On 9 Dec 2021, at 20:36, Justin Richer  > wrote:
>> 
>> I disagree with this take. If there are confirmation methods at all, it’s no 
>> longer a Bearer token, and pretending that it is doesn’t help anyone. I 
>> think combining confirmation methods is interesting, but then you get into a 
>> weird space of how to define the combinations, and what to do if one is 
>> missing, etc. It opens up a weird space for interop problems. It’s not 
>> insurmountable, but I don’t think it’s a trivial as it might look at first.
>> 
>> Plus, the “backwards compatible” argument is what led to the existing RFC 
>> using Bearer again. In my view, this actually opens up the possibility of 
>> downgrade attacks against the RS, where a lazy RS doesn’t check the binding 
>> because it sees “Bearer” and calls it a day.
> 
> I think this actually strongly argues the opposite - it is precisely because 
> the scheme is under attacker control that enables such downgrade attacks. So 
> relying on the scheme to tell you what kind of PoP checks to apply makes 
> these kinds of attacks more likely, not less. I’m suggesting instead that the 
> RS decides what checks to enforce based on the “cnf” content in the token - 
> which is either signed by the AS or retrieved directly from the AS through 
> introspection. On the other hand, the token type is not even defined in the 
> recent RFC 9068 for JWT-based ATs. So an attacker could easily change the 
> scheme from MTLS to Bearer to see if the RS stops performing checks, but they 
> can’t remove a “cnf” claim from the token itself.
> 
> In hindsight, “Bearer” might have been better named “AccessToken” or similar, 
> but I don’t think the name matters as much as the semantics.

While I can’t speak for those involved, I suspect there was a desire to carry 
over OAuth 1 behavior with message signatures at the authorization level. That 
is to say, I suspect the name Bearer was to distinguish against say a PoP or 
HttpSig scheme.

In that light, I suspect the separation was not necessarily one trying to 
capture security semantics, but in understanding the format of the 
authorization header itself. A PoP scheme might include a signed 
challenge-response as a mandatory second parameter, or via wrapping the access 
token into a security structure such as a JWT. Neither of these would be valid 
for the Bearer authorization header, which is meant to convey an access token 
and provides for no additional parameters.

MTLS did not change the semantics of the bearer authorization header, since the 
format/meaning and validation of an access token has always been 
implementation-defined. Thus, a “MTLS” authentication scheme does not provide 
meaningful distinction, even ignoring the issues such distinction gives under 
an attacker model.

-DW


Re: [OAUTH-WG] can a resource server provide indications about expected access tokens?

2021-12-11 Thread David Waite

> On Dec 11, 2021, at 3:35 AM, Nikos Fotiou  wrote:
> 
> Hi,
> 
> I have a use case where a resource server is protected  and can only be 
> accessed if a JWT is presented. Is there any way for the server to "indicate" 
> the "expected" format of the JWT. For example,  respond to unauthorized 
> requests with something that would be translated into "I expect tokens form 
> iss X with claims [A,B,C]"

Normally, the scope of the token is part of the contract between the resource 
server and client (what sort of authorization is needed for the resource 
server), but other aspects of the relationship - such as format, or required 
information, or additional verification steps the user needs to take - are part 
of the contract between the AS and resource server.

The ways to work with indicating that these requirements exist at token 
issuance include:

1. Scopes - wrap requirements up into scopes, such as having an “admin” scope 
require additional user authentication, or a “purchasing” scope require the 
user’s shipping address be embedded as a claim

2. Resources - require the client to use the `resource` parameter to indicate 
which resource server the token is meant for, and use AS policy to say which 
RSs get what sort of tokens or have what sort of issuance policy.
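As a sketch of the second option, a client might bind its token request to a 
particular RS with the RFC 8707 `resource` parameter; the endpoint and values 
below are hypothetical placeholders:

```python
from urllib.parse import urlencode

def token_request_body(code: str, resource: str) -> str:
    """Build an authorization_code token request body that names the
    intended resource server via the RFC 8707 `resource` parameter.
    The redirect URI here is a placeholder for illustration."""
    return urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": "https://client.example/cb",
        "resource": resource,
    })
```

The AS can then key per-RS policy (token format, embedded claims, issuance 
requirements) off the requested resource value.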

-DW


Re: [OAUTH-WG] [EXTERNAL] Re: OAuth Redirection Attacks

2021-12-17 Thread David Waite



> On Dec 17, 2021, at 2:44 PM, Brian Campbell 
>  wrote:
> 
> Relax how aggressively OAuth demands that the AS automatically redirect in 
> error conditions. And either respond with a 400 directly (which just stops 
> things at that point) or provide a meaningful interstitial page to the user 
> before redirecting them (which at least helps users see something is amiss). 
> I do think OAuth is a bit overzealous in automatically returning the user's 
> browser context to the client in error conditions. There are some situations 
> (like prompt=none) that rely on the behavior but in most cases it isn't 
> necessary or helpful and can be problematic. 

The problem is that if prompt=none still requires redirection without prompt or 
interstitial, someone registering malicious sites as dynamic clients will 
simply start using prompt=none. Likewise, a site could still attempt to 
manipulate the user into releasing information by imitating an extension to the 
authentication process, such as an "expired password change" prompt.

I agree with Nov Matake's comment - phishing link email filters should treat 
all OAuth URLs as suspect, as OAuth has several security-recommended features 
like state and PKCE which do not work as expected/reliably with email. Filters 
integrated into the browser (such as based on the unsafe site list in Chrome) 
should not need changes, as they will warn on redirect to the known malicious 
site.

We should also continue to push as an industry for authentication technologies 
like WebAuthn (as well as mutual TLS and Kerberos) which are phishing 
resistant. We are really talking about failure of a single phishing mitigation 
for _known_ malicious sites - the opportunity to use any unknown malicious site 
or a compromised legitimate site remains open even if we do suggest changes to 
error behavior.

-DW



Re: [OAUTH-WG] Implicit Grant Flow for authentication on both Client UI and Back End by OIDC id_token verification

2022-01-18 Thread David Waite


Sent from my iPhone

> On Jan 18, 2022, at 3:54 PM, Sergey Ponomarev  wrote:

> 
> The Implicit grant flow was intended for authorising clients which
> can't store the `client_secret` like SPA.

It is orthogonal to that; you can do the code flow without client secrets as 
well. OAuth was released at a time of much simpler JavaScript development, and 
before standardization of CORS. Implicit was more of an 
implementation-simplicity trade-off for JavaScript clients. 

> OIDC added `id_token` which is a signed JWT (JWS) that contains user info.

The id_token is a message from the OP/AS to the client about the end user - 
their subject identifier and other authentication information. Other user 
claims are sometimes bundled in by implementations, such as when the client is 
not asking for an access token and thus will not be able to hit the user info 
endpoint. 

> If we just need for authentication it's now possible to request the
> only `response_type=id_token` i.e. we aren't interested in getting the
> `access_token`.

Yes. 

> Anybody can verify that the token was issued by the Auth Server and it
> wasn't changed.

If the client has a confidential cryptographic key the id_token may be 
encrypted. But 99.9% of the time, yes. 

> We may also ask to include our own `nonce` into the `id_token` and
> thus we may protect from reusing the `id_token` twice.

The nonce claim is required for implicit, and really should be mandatory for 
all cases where the id_token is in a front channel. The reuse restriction is 
really one of having the authentication protocol be interactive. 

> This gives us an ability to use the `id_token` for server validation.

The goal is for server validation of id_tokens. A JavaScript client has limited 
power in making security decisions, e.g. restricting user access to data in the 
local browser IndexedDB isn’t really possible. 

Browser consumption of id_tokens is really a demo-level construct. You could 
potentially use it to pull values out and personalize the page AKA “Welcome, 
” or attempt to lock down the presentation layer to prevent casual 
drive-by information leaking, but there’s no reason for the server to trust an 
assertion from a JavaScript client about the user. 

> To explain the flow let's take for example a Google:


These are correct. For implicit, the state parameter is used to prevent some 
cross browser issues such as XSRF attacks. Guidance for code flow is to use 
PKCE, so state can actually be used “just” for application state, such as what 
the user was trying to do before authentication was required. 
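For reference, the PKCE relationship mentioned above is a simple hash 
commitment; a minimal sketch (not a full client implementation):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier and its S256 code_challenge.
    The challenge goes in the authorization request and the verifier in
    the token request, which frees `state` for application use."""
    verifier = base64.urlsafe_b64encode(
        secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```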

> The key advantage of the flow is that the Client Server doesn't have
> to perform a side channel request to the Auth Server as it needs in
> the Authorization Code flow.
> This not only improves performance but also allows to decouple Client
> Server from Auth Service.

There are ramifications of this approach, such as losing access to other OAuth 
extensions which are only defined for code flow, and potentially having PII 
flow through the browser. You also lose the ability to potentially get new 
id_tokens (extending the session) through refresh, since there’s no chance for 
a refresh token. Finally, it propagates implicit flow, which has an 
interoperability impact. 

Generally my advice is “use code unless you can’t.”

> For example the Client Server can't connect to the Auth Service
> because of connectivity problems.
> Or if the AS is blocked in the Client Server country (e.g. Yandex and
> VK.com in Ukraine, Google in China, Twitter in Nigeria etc.).

Generally these sorts of connectivity issues and clocks will impact the 
implicit channel as well - they won’t block Google’s auth endpoint, they’ll 
block Google. 

> Another reason if the Client Server wants to hide its IP from the Auth
> Service e.g. this a Tor Hidden Service with .onion domain.

The call for the code grant can also be made through the same VPN-style 
interfaces. 

It is more of an issue when the client has different networking access, such as 
an employee using an on-prem OIDC OP to get access to a hosted RP product. The 
JavaScript can interact within the firewall, but operations has not exposed the 
token endpoint properly. 

> Now it's possible to block any outgoing connections from the Client
> Server that significantly improves safety.

It does, at the loss of other functionality and a change in security 
requirements and properties between code and implicit clients.

-DW


Re: [OAUTH-WG] WGLC for JWK Thumbprint URI document

2022-02-04 Thread David Waite
> On Feb 4, 2022, at 6:32 PM, Mike Jones 
>  wrote:
> 
> Kristina and I spoke about it today and we agreed that it makes sense to make 
> the hash algorithm explicit.  So for instance, we’d propose that the example
> urn:ietf:params:oauth:jwk-thumbprint:NzbLsXh8uDCcd-6MNwXF4W_7noWXFZAfHkxZsRGC9Xs
> become
> urn:ietf:params:oauth:jwk-thumbprint:S256:NzbLsXh8uDCcd-6MNwXF4W_7noWXFZAfHkxZsRGC9Xs
> when using the SHA-256 hash function.
>  
> Similarly, we’d propose to also define S384, S512, and possibly also S3-256, 
> S3-384, and S3-512 (for the SHA-3 hash functions).

My ideal would be making the algorithm explicit in the name, while deferring 
establishing a registry of other algorithms until a technical need is 
established.

While it is not necessary that a URN namespace define a unique name for a 
resource, it is a useful property that would be lost with multiple hashing 
schemes. Use of a hashing scheme not supported by a piece of software would 
also mean that there is no way to verify the name corresponds to a given 
resource.

For this reason, if we do support multiple algorithms I would expect dependent 
specs and systems to mandate a specific algorithm or a specific set. For 
example, they may exclude the Keccak variants (SHA3, SHAKE) as there are no 
other algorithms registered for JOSE which depend upon them.
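A sketch of parsing the proposed explicit-algorithm form (the S256 label and 
URN prefix are taken from the example in this thread; the eventually 
registered names may differ):

```python
_PREFIX = "urn:ietf:params:oauth:jwk-thumbprint:"

def parse_thumbprint_uri(uri: str) -> tuple[str, str]:
    """Split a JWK Thumbprint URI of the form <prefix><alg>:<thumbprint>,
    rejecting the older form that lacks an explicit hash algorithm."""
    if not uri.startswith(_PREFIX):
        raise ValueError("not a jwk-thumbprint URI")
    alg, sep, thumbprint = uri[len(_PREFIX):].partition(":")
    if not sep or not thumbprint:
        raise ValueError("expected an explicit hash algorithm and thumbprint")
    return alg, thumbprint
```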

>  
> For extra credit, if there’s already an IANA registry with string names for 
> these hash functions, we’d consider using it.  I looked for one and 
> surprisingly didn’t find it.  Or we could create one.
>  

The COSE algorithms are declared with both numbers and names, and include 
hashes as algorithms.

https://www.iana.org/assignments/cose/cose.xhtml#algorithms

-DW


Re: [OAUTH-WG] DPoP and OpenID Connect

2022-02-17 Thread David Waite
Hello Dmitry!

AFAIK there isn’t any accepted work within OpenID Foundation defining an 
interaction between id_tokens and DPoP.  Like most of the OAuth additions which 
have come out after OpenID Connect 1.0, one could expect that it would layer on 
top transparently as something which pertains to access tokens, not id_tokens.

Similar to how OpenID Connect extends OAuth 2.0, I would expect that the use of 
confirmed id_tokens to be an extension on top of OpenID Connect (in this case, 
by Solid). I would expect that to be the most appropriate community to engage 
with.

Within the traditional OpenID Connect model, there is no reason for a client to 
share its id_token to other parties, or a sense of what sort of decisions 
another party could make if they receive an id_token which was not meant for 
them. It is not immediately apparent to me what an id_token with proof of 
possession would mean to a party which receives it, what additional 
requirements such an id_token might have, what relationship that other party 
would have with the client or OP, or how such an id_token would be sent.

I could see several independent efforts profiling different meanings and 
usages, which consequently would have different security considerations. In the 
absence of a proposal for a specification from OIDF, it may be better for these 
efforts to declare their own new tokens with the semantics they require rather 
than overloading id_tokens.

-DW


> On Feb 16, 2022, at 4:03 PM, Dmitry Telegin 
>  wrote:
> 
> Could we somehow clarify the relationship between DPoP and OIDC? (sorry if 
> this is the wrong ML)
> 
> For example, it's relatively obvious that the OIDC UserInfo should support 
> DPoP, as it is an OAuth 2.0 protected resource. What's not obvious is that 
> the WWW-Authenticate challenge (in case of 401) will most likely contain 
> multiple challenges (Bearer and DPoP), and it could be a bit tricky from the 
> browser compatibility PoV.
> 
> Another non-obvious thing is that ID tokens could be DPoP-bound as well. Some 
> technologies even rely on it, Solid-OIDC being a notable example: 
> https://solid.github.io/solid-oidc/#tokens-id 
> 
> 
> Dmitry
> Backbase / Keycloak
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth



Re: [OAUTH-WG] Second WGLC for JWK Thumbprint URI document

2022-02-23 Thread David Waite
I support publication as well.

-DW


> On Feb 23, 2022, at 5:53 AM, Tim Cappalli 
>  wrote:
> 
> +1 in support of publication!
>  
>  
>  
> Tim Cappalli | @timcappalli 
> did:ion:EiBgPHSLu66o1hQWT7ejtsV73PfrzeKphDXpgbLchRi32w
>  
> 
>  
>  
> From: OAuth  on behalf of Rifaat Shekh-Yusef 
> 
> Date: Monday, February 21, 2022 at 08:13
> To: oauth 
> Subject: [OAUTH-WG] Second WGLC for JWK Thumbprint URI document
> 
> All,
>  
> Mike and Kristina made the necessary changes to address all the great 
> comments received during the initial WGLC.
> 
> This is a second WG Last Call for this document to make sure that the WG has 
> a chance to review these changes:
> https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-00.html 
> 
> 
> Please, provide your feedback on the mailing list by March 7th.
> 
> Regards,
>  Rifaat & Hannes
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth



Re: [OAUTH-WG] OAuth: The frustrating lack of good libraries

2022-03-04 Thread David Waite

> On Mar 1, 2022, at 10:18 AM, Daniel Fett  wrote:
> 
>  * The core of OAuth is easy to implement. The need to create or use a 
> library might not be obvious to developers. Of course, if you want a proper 
> implementation with correct error handling, observing all the security 
> recommendations, etc., the effort is huge. But just getting OAuth to work for 
> one specific use case is relatively easy.

I’d argue this point - it is not easy to implement. It is far easier to 
describe.

The separation between codes, refresh and access tokens means that you have 
logic from a library being integrated at multiple levels, from API access to 
persistence to user presentation. There are also complexities that arise - any 
API call could require changes to persistence or new user interaction.

Because of the variability in the kinds of applications which could be 
supported, many libraries wind up looking like simple message object builders, 
with complex state and processing mechanisms underneath that do not necessarily 
map at all into the application architecture. On top of this you have to extend 
your own app with the communication and asynchronicity required.

>   * OAuth is traditionally hard to configure: authorization and token 
> endpoint URLs, client id and secret, supported scopes (and claims for OIDC), 
> supported response types and modes, and required security features are just 
> some of the things a developer has to figure out - often from the API's 
> documentation

I find the OAuth Server Metadata response to be a good format for the server 
configuration (even if not hosted via well-known, or if it is client-specific), 
and the client metadata from RFC 7591 to be a good start.



> What can we do about this?



>  * The OpenID Foundation has a great set of conformance tests for OIDC, FAPI 
> and other stuff. Creating conformance tests for OAuth would be harder, given 
> that the framework leaves many options for implementers to choose from. I’m 
> not sure if running a conformance programme would be in the scope of IETF, 
> but it can be worthwhile to think about if we could support such an endeavor.

I would suspect it would mean adding more constraints to profile behavior 
(beyond what we have done already in, say, the Security BCP) and then having 
tooling and conformity assessments based on that profile. My suspicion is that 
such tooling and testing would be out of scope for the IETF.



>  * The single most important thing to do would, in my opinion, be to set a 
> goal: Tell library developers and language maintainers what can be expected 
> from a good, modern, and universal OAuth library. Such a recommendation would 
> shine a light on the most important extensions for OAuth like PKCE and might 
> even be a prerequisite for conformance tests. It may turn out to be OAuth 2.1 
> or something else. For me, this would in any case include AS Metadata, as 
> that is the single most valuable building block we have to address 
> configuration complexity. 

The only wrinkle I would add is that pre-established clients may have per 
client AS metadata, but the AS metadata format itself (e.g. JSON with specific 
keys) is still useful for representing that in a consistent manner as a format 
(rather than an endpoint). I have seen some slight extensions there, such as a 
parameter to provide JWK information inline.

Client metadata is harder, as there may be information in both the request and 
response that needs to be understood, as well as local configuration and 
secrets (such as private keys). There is also a chance for duplication, as well 
as uncaught differences, when supporting multiple distinct ASes as a client.

-DW


Re: [OAUTH-WG] WGLC for DPoP Document

2022-03-28 Thread David Waite

> On Mar 28, 2022, at 8:28 AM, Denis  wrote:


>The primary aim of DPoP is to bind a token to a public key upon 
> issuance and requiring that the client proves possession 
>of the corresponding private key when using the token.  This does not 
> demonstrate that the client presenting the token is 
>necessarily the legitimate client. In the case of non-collaborating 
> clients, DPoP prevents unauthorized or illegitimate parties 
>from using leaked or stolen access tokens. In the case of 
> collaborating clients, the security of DPoP is ineffective 
>(see section 11.X).
> 


> If a client agrees to collaborate with another client, the 
> security of DPoP is no longer effective.  When two clients agree to 
> collaborate, 
> these results of the cryptographic computations performed by one 
> client may be communicated to another client. 
> 
> 
If a system has shared its tokens and/or credentials with another system, they 
are both operating as part of a single client. Neither DPoP nor OAuth define 
how two clients can share access, such as by applying scopes issued against the 
client with identifier “foo” to the client with id “bar”. 

From an AS or user perspective, multiple parties could collaborate beyond the 
expectations and limitations they intended the client to have. However, sharing 
across parties or underlying systems could be entirely within expectations - 
such as multiple services which together use information from the resource 
server to fulfill a request.

One could have text such as:

DPoP does not prevent sharing of data or access by a client with additional 
parties which are not authorized by the AS. In particular, a client may 
voluntarily share either private keys or constructed DPoP proofs.

But this is somewhat matter-of-factly stating that the AS should continue to 
have the same evaluation process of what parties should be given access as 
clients - that DPoP is not a DRM or DLP scheme.


> Even if the private key used for DPoP is stored in such a way 
> that it cannot be exported, e.g., in a hardware or software security module, 
> the client can perform all the cryptographic computations needed 
> by the other client to create DPoP proofs. 
> 
This seems unneeded with the text above. In addition, DPoP does not define a 
way for an AS to ensure it only issues access tokens against PoP keys which are 
non-exportable.

-DW


Re: [OAUTH-WG] WGLC for DPoP Document

2022-03-29 Thread David Waite
I also support publication of this specification

-DW

> On Mar 29, 2022, at 3:12 PM, Mike Jones 
>  wrote:
> 
> I support publication of the specification.
>  
>-- Mike
>  
> From: OAuth  On Behalf Of Rifaat Shekh-Yusef
> Sent: Monday, March 28, 2022 5:01 AM
> To: oauth 
> Subject: [OAUTH-WG] WGLC for DPoP Document
>  
> All,
> 
> As discussed during the IETF meeting in Vienna last week, this is a WG Last 
> Call for the DPoP document:
> https://datatracker.ietf.org/doc/draft-ietf-oauth-dpop/ 
> 
> 
> Please, provide your feedback on the mailing list by April 11th.
> 
> Regards,
>  Rifaat & Hannes
>  
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth



Re: [OAUTH-WG] Listing OAuth Access Token Metadata

2022-04-02 Thread David Waite

> On Apr 1, 2022, at 3:24 AM, Dhaura Pathirana  
> wrote:
> 
> I would like to know if anyone has seen this (listing token metadata) as a 
> common use case in OAuth2 and a standard way of doing it had been proposed 
> before? 

OAuth Token Introspection (RFC 7662) defines a way to query for active state 
and meta-info.

However, its use is defined only for protected resources, and not the resource 
owner or the client the token was issued to. 

-DW


Re: [OAUTH-WG] Call for adoption - Step-up Authentication

2022-04-26 Thread David Waite
I support the working group adopting this work.

> On Apr 26, 2022, at 3:46 AM, Rifaat Shekh-Yusef  
> wrote:
> 
> This is a call for adoption for the Step-up Authentication document
> https://datatracker.ietf.org/doc/draft-bertocci-oauth-step-up-authn-challenge/
>  
> 
> 
> Please, provide your feedback on the mailing list by May 10th.
> 
> Regards,
>  Rifaat & Hannes
> 
> ___
> OAuth mailing list
> OAuth@ietf.org
> https://www.ietf.org/mailman/listinfo/oauth



Re: [OAUTH-WG] Last Call: (JWK Thumbprint URI) to Proposed Standard

2022-05-11 Thread David Waite
RFC 7517 does define an "application/jwk+json" media type which could be used 
with the ct= query parameter of an ni-scheme URI. The resulting ni-scheme URI 
could then be used to refer to a specific generated JWK document.

However, I do not believe this would be a sufficient way to indicate that this 
is the pre-hash minimized, canonicalized form required for thumbprint 
generation in RFC 7638 (e.g. non-required members removed, JSON documents in 
lexicographical key order represented as UTF-8).

The information dropping of the canonicalization in JWK thumbprints results in 
a few important properties - in particular, a local JWK document representing a 
private key and the shared JWK document representing the corresponding public 
key will have the same thumbprint. This enables the JWK Thumbprint to serve as 
an algorithmic key identifier for all participating parties.
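A sketch of that property, assuming a hypothetical Ed25519 key pair (the RFC 
7638 canonicalization keeps only the required members - crv, kty, and x for 
OKP keys - so the private "d" member and any optional members drop out):

```python
import base64
import hashlib
import json

# Required members per key type, per RFC 7638 section 3.2.
REQUIRED = {"OKP": ("crv", "kty", "x"), "EC": ("crv", "kty", "x", "y"),
            "RSA": ("e", "kty", "n"), "oct": ("k", "kty")}

def jwk_thumbprint(jwk: dict) -> str:
    """RFC 7638: keep only the required members for the key type, serialize
    with lexicographically ordered keys and no whitespace, then SHA-256."""
    subset = {name: jwk[name] for name in REQUIRED[jwk["kty"]]}
    canonical = json.dumps(subset, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Hypothetical key pair: the public JWK and the private JWK (extra "d" and
# "kid" members) canonicalize to the same bytes, so one thumbprint names both.
public_jwk = {"kty": "OKP", "crv": "Ed25519",
              "x": "11qYAYKxCrfVS_7TyWQHOg7hcvPapiMlrwIaaPcHURo"}
private_jwk = dict(public_jwk, d="nWGxne_9WmC6hEr0kuwsxERJxWl7MmkZcDusAxyuf2A",
                   kid="2022-05")
```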

This creates the issue with using the ni scheme - a NI URI could be used to 
refer to a single JWK document. However, the semantics when interpreting a 
thumbprint are that it references potentially multiple data forms with 
different binary representations, and that a software ‘lookup’ operation taking 
a JWK thumbprint may result in data which does not have the specified hash 
value. My interpretation would be that these behaviors go against the spirit of 
RFC 6920.

-DW

> On May 6, 2022, at 6:27 AM, Rifaat Shekh-Yusef  
> wrote:
> 
> Mike,
> 
> RFC6920 defines an optional query parameter, in section 3:
> https://www.rfc-editor.org/rfc/rfc6920.html#section-3 
> 
> 
> I guess you could have added a query parameter to add that specificity.
> 
> Regards,
>  Rifaat



Re: [OAUTH-WG] Last Call: (JWK Thumbprint URI) to Proposed Standard

2022-05-11 Thread David Waite
On May 11, 2022, at 6:45 AM, Rifaat Shekh-Yusef  wrote:
> On Wed, May 11, 2022 at 4:53 AM David Waite  <mailto:da...@alkaline-solutions.com>> wrote:
> The information dropping of the canonicalization in JWK thumbprints results 
> in a few important properties - in particular, a local JWK document 
> representing a private key and the shared JWK document representing the 
> corresponding public key will have the same thumbprint.
> 
> Can you elaborate on this? how would two different documents produce the same 
> hash?

Sure. In terms of named information, we could refer to an instance of a JWK 
that has had the thumbprint canonicalization rules partially applied to it, 
such that the URI would refer to a valid JWK whose hash is identical to the JWK 
thumbprint.

However, a named information URI is meant to refer to a specific resource with 
a specific series of bytes. This ni-scheme URI would refer only to the resource 
that is that specific canonicalized JWK, with certain information stripped. 

In the semantics of a JWK thumbprint however, the hash refers to an infinite 
set of documents that might have different JSON serialization order, 
whitespace, and/or additional optional JWK fields (including potentially the 
private key information.) The JWK thumbprint is meant to serve as a common 
identifying value by all parties to a particular logical key. JWK Thumbprint 
URI simply provide a common scheme for expressing that identifier as a URI.

So, resolving a named information URI is defined to give you a particular 
binary entity representation, while resolving a thumbprint will give you one of 
possibly many object representations that share certain cryptographic 
properties. Thus I don’t feel it is appropriate to use the same URI scheme to 
represent both, even though they are potentially isomorphic data forms - we 
would be using the ni-scheme URI as a hash value container rather than for its 
defined named information semantics.

-DW


Re: [OAUTH-WG] Call for adoption - SD-JWT

2022-07-29 Thread David Waite


> On Jul 29, 2022, at 5:35 AM, Warren Parad  
> wrote:
> 
> I too do not support adoption.
> 
> Something is "off" for me, I don't quite get the expectation on the secure 
> flow, in this draft, doesn't the issuer have to know the claims that could be 
> requested up front? If the goal is to not have the issuer contain any of this 
> data, but let the holder "add in their claims in a verifiable way", the 
> simple solution is just to share the access token with the actual data. I 
> think I would really want to see a concrete expectation about how this would 
> be used.

To give an example of the equivalent digital representation of a physical 
document (such as a driver’s license), the issuing authority (e.g. motor 
vehicle division) would issue the SD-JWT along with equivalents for every value 
on the license as a subject claim, by providing them as hashed values.

Later, the party it was issued to would release this JWT, along with 
selectively releasing the data which correctly hashes to those values.

The trust framework itself determines which subject claims (and security 
claims) are required to be present or are optional. In the driver’s license 
context, there are standards for international licenses as well as additional 
country-specific information that may extend that.

This is done because network availability and privacy concerns may separate the 
act of acquiring the SD-JWT of a license from the issuing authority, and 
presenting it (such as days later during a traffic stop on a mountain road).

> The other part is I want to challenge that it will actually have the benefit 
> that we want it to have (above and beyond JWEs).
> 
> For example, let's take the cornerstone argument from the draft:
>> However, when a signed JWT is
>>intended to be multi-use, it needs to contain the superset of all
>>claims the user might want to release to verifiers at some point.
>>The ability to selectively disclose a subset of these claims
>>depending on the verifier becomes crucial to ensure minimum
>>disclosure and prevent verifiers from obtaining claims irrelevant for
>>the transaction at hand.
> 
> We already have a parallel today with scopes. Normally, we expect that there 
> can be progressive scope increases, via new interactions with the user agent 
> and the AS. However, in practice, Resource Servers ask User Agents to approve 
> all scopes up front, and worse still AS don't allow the user agent to select 
> which scopes they want to grant. In practice, theory and practice are not the 
> same.

Right. The desire to get more information (or in the scopes case, permissions) 
up-front is a consequence of trying to maintain flexibility. This is usually 
counterbalanced by AS policy.

Selective disclosure of claims means I can acquire a SD-JWT with all relevant 
information, but only disclose what is needed.

The equivalent for a SD-JWT based access token would require clients to have 
semantic knowledge of the access token and to be sender-vouched, but would let 
them selectively tune which previously granted scopes should apply to the 
request.

> Selective disclosure is only a small subset of the problem posed by scopes, 
> because scopes actually convey permissions. If we are going to improve 
> anything, it should be restricting any and all data in not just the id_token 
> but also the access_token. And the solution could be this draft's 
> implementation, or maybe it is something similar to macaroons 
> . I don't think 
> this draft get's us closer to that unfortunately.
My understanding is that macaroons are somewhat different. A macaroon would 
have you choose to add constraints, such as saying ‘but don’t share this data 
with others’ or ‘ignore that write access scope’.

SD-JWT lets you effectively remove information, such as ‘I’m not sharing this 
personal data with this party’ or ’this resource doesn’t see that I have write 
access’.

> Second, I challenge the perspective of multi-use. While I completely agree 
> tokens are multi-use, they tend to be multi-use inside of an opaque 
> "platform", the user-agent interacts with RSs in the platform in an 
> indistinguishable way, so meaningfully, RS will request all the scopes they 
> know about all the time, even if they don't need them. The platform will 
> still request everything, and the user-agent will be forced to share the 
> SD-JWT-R for all the claims.
> If there are multiple RS or clients involved, then the process would be to 
> generate multiple tokens, one for each client interaction, as we do today. 
> The only way out of this I can see, is like macaroons you can selectively 
> restrict further information for the next hop. But that's based on delegation 
> and legal trust, not security.
My expectation is that for subject claims, both the end user consent and trust 
framework policy enforcement would indeed be the limiting factors of such 
overreach.

Re: [OAUTH-WG] Call for adoption - SD-JWT

2022-08-05 Thread David Waite
I can’t speak to what group or charter the JWP work would eventually be under, 
but the JWT specification is one of several examples of a specification that 
heavily leveraged the JOSE work but which was started here at OAUTH, outside of 
the (at the time active) JOSE WG.

Without perusing old email archives across two groups, I speculate that this is 
because JWTs are at a different layer than JOSE - the JOSE specifications 
defined algorithms and serializations for cryptographic messages, while JWT 
defined (generalized) application-layer semantics. JWS and JWE allows for 
arbitrary binary payloads, while JWT mandates a specific document format - a 
JSON object of higher-layer security and subject claims. 

Likewise, other groups and individuals outside of OAUTH have further defined 
how to process JWTs for their own specific application space, just like OAUTH 
produced a specification recommending how to use JWTs for access tokens.

To have SD-JWT under the JOSE group (or another JWP group) would mean that it 
was chartered to define such application-layer semantics, in addition to the 
lower layer work of specifying algorithms and serialization of cryptographic 
data.

SD-JWT is an incremental addition on top of JWTs; while it does need a new 
compact representation to express additional information, the idea is that it 
can be implemented as incremental logic on top of existing JWT processing 
libraries. This is core to the current design, and why there are already 
multiple implementations of the draft.

JWP does envision a different approach from SD-JWT, where you have the concept 
of multiple (potentially binary) payloads as part of the core data model and 
serializations. A JWT/SD-JWT equivalent (such as JPTs -  
https://www.ietf.org/id/draft-jmiller-jose-json-proof-token-00.html ) would 
define how claims are mapped to particular payloads in a JWP.

To avoid going too deeply into my own biases here, I think the approach defined 
by JWP has many benefits. However, that work is just beginning. 

Similar to how the work on TLS 1.3 didn’t stop people from specifying new 
capabilities for TLS 1.2, I don’t think the eventual goals of JWP+JPT should 
detract from specifying SD-JWT.

-DW

> On Aug 5, 2022, at 3:28 AM, Warren Parad  
> wrote:
> 
> Maybe they have a good reason for not wanting it, and then we shouldn't be 
> the WG that backdoor's it in. Also: "other people have already implemented 
> it" is a cognitive fallacy, so let's not use that as a justification we have 
> to make the standard.
> 
> We should get a concrete reason why a WG that seems like the appropriate one, 
> thinks it wouldn't make sense. If it is just a matter of timing, then 
> whatever. But if there are concrete recommendations from that group, I would 
> love to hear them.
> 
> On Fri, Aug 5, 2022 at 10:26 AM Daniel Fett 
> mailto:40danielfett...@dmarc.ietf.org>> 
> wrote:
>> Am 05.08.22 um 10:22 schrieb Warren Parad:
>>> > and nobody involved in the JWP effort thinks that SD-JWT should be in 
>>> > that WG once created
>>> 
>>> Why?
>> For the reasons listed, I guess?
>> 
>> Also, mind the "As far as I am aware" part, but I don't remember any 
>> discussions in that direction at IETF114.
>> 
>> -Daniel
>> 
>> 
>> ___
>> OAuth mailing list
>> OAuth@ietf.org 
>> https://www.ietf.org/mailman/listinfo/oauth
> 



Re: [OAUTH-WG] DPoP - IPR Disclosure

2022-08-10 Thread David Waite
I also am unaware of any IPR.

-DW

> On Aug 10, 2022, at 3:37 PM, Rifaat Shekh-Yusef  
> wrote:
> 
> Daniel, Brian, John, Torsten, Mike, and David,
> 
> As part of the shepherd write-up for the DPoP document, there is a need for 
> an IPR disclosure from the authors.
> https://datatracker.ietf.org/doc/draft-ietf-oauth-dpop/
> 
> Are you aware of any IPRs associated with this document?
> 
> Regards,
>  Rifaat & Hannes



Re: [OAUTH-WG] oauth with command line clients

2017-06-12 Thread David Waite
FYI, A few years ago I did a demonstration on OpenID Connect at Cloud Identity 
Summit using a collection of bash scripts and command-line utilities (nc, jq). 
I used the macOS system command ‘open’ to launch a browser, and netcat to field 
the response as a poor man’s HTTP endpoint.  The code for that presentation is 
at 
https://github.com/dwaite/Presentation-Code-OpenID-Connect-Dynamic-Client-Registration

A few options for the user challenge/consent portion of the authentication are:
- pop up the system browser (you can use window.close() to dismiss on redirect 
back to your client) - that's the one I used.
- device flow
- use a console browser like lynx or ELinks (which has rudimentary ECMAScript 
support at a fairly big cost)
- use non-HTML request/response API (around some custom MIME type) to drive a 
user agent through the authentication/scope approval/etc stages of your AS
- punt and use resource owner credentials grant.
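The first option above (system browser plus a loopback listener) can be sketched in Python. This is a minimal illustration with hypothetical endpoint and client values; a real client would then exchange the returned code at the token endpoint:

```python
import secrets
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical values - substitute your AS's real endpoint and client id.
AUTHZ_ENDPOINT = "https://as.example.com/authorize"
CLIENT_ID = "cli-demo"
REDIRECT_URI = "http://127.0.0.1:8400/cb"  # loopback endpoint (e.g. `nc -l 8400`)

def build_authorization_url(scope: str) -> tuple[str, str]:
    """Return (url, state); open `url` in the system browser."""
    state = secrets.token_urlsafe(16)
    query = urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": scope,
        "state": state,
    })
    return f"{AUTHZ_ENDPOINT}?{query}", state

def parse_callback(request_line: str, expected_state: str) -> str:
    """Extract the authorization code from the redirect captured on loopback."""
    # request_line looks like: GET /cb?code=abc&state=xyz HTTP/1.1
    path = request_line.split(" ")[1]
    params = parse_qs(urlparse(path).query)
    if params.get("state", [None])[0] != expected_state:
        raise ValueError("state mismatch - possible CSRF")
    return params["code"][0]
```

On macOS, `open "$URL"` launches the default browser, and something as simple as `nc -l 8400` can field the single redirect request, as in the scripts linked above.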

-DW
 
> On Jun 12, 2017, at 7:29 AM, Hollenbeck, Scott  
> wrote:
> 
> From: OAuth [mailto:oauth-boun...@ietf.org ] 
> On Behalf Of Bill Burke
> Sent: Monday, June 12, 2017 9:23 AM
> To: Aaron Parecki mailto:aa...@parecki.com>>
> Cc: OAuth WG mailto:oauth@ietf.org>>
> Subject: [EXTERNAL] Re: [OAUTH-WG] oauth with command line clients
>  
> I've read about these techniques, but, its just not a good user experience.  
> I'm thinking more of something where the command line console is the sole 
> user agent and the auth server drives a plain text based interaction much 
> like an HTTP Server drives interaction with HTML and the browser.  
> 
> This isn't anything complex.  It should be a simple protocol, but I'd like to 
> piggy back on existing solutions to build some consensus around what I think 
> is a common issue with using OAuth.  If there isn't anything going on here in 
> the OAuth group surrounding this, would be willing to draw up a Draft if 
> there is interest.
> 
> [SAH] I’m certainly interested! I have a use case for federated client 
> authentication and authorization for the Registration Data Access Protocol 
> (RDAP) that has the same need for command line web service clients like wget 
> and curl.
>  
> Scott



Re: [OAUTH-WG] What Does Logout Mean?

2018-03-28 Thread David Waite


> On Mar 28, 2018, at 11:40 AM, Richard Backman, Annabelle 
>  wrote:
> 
> I'm reminded of this session from IIW 21 
> . ☺ I 
> look forward to reading the document distilling the various competing use 
> cases and requirements into some semblance of sanity.

I was just thinking how much I’d like to discuss this at an IIW. While 
developing the DTVA submission I wound up taking IMHO a different stance on 
sessions and logout, both technically and conceptually.

>  
> > If the framework has no way of invalidating a session across the cluster…
>  
> Is this a common deficiency in application frameworks? It seems to me that 
> much of the value of a server-side session record is lost if its state isn’t 
> synchronized across the fleet.

Most application frameworks are relatively simple - they initiate a session and 
maintain it locally. They don’t have a single session record that is maintained 
across all applications in a domain. Even frameworks with native support for 
federation protocols or form-based SSO wind up using this authentication to 
create an application-specific session.

Many also attempt to maintain the session information in a cookie that is 
ideally integrity-protected, time-limited, and so on - similar to an access 
token - rather than having a database within their application for 
synchronizing the session state. In that case you wind up needing an 
additional state mechanism to record invalidated sessions/tokens, which 
frameworks typically do not provide.

This was one of the primary focuses of my DTVA submission - a REST API where 
you could submit the `sid` of a token in order to find out if it had been 
invalidated. If you were using some cookie-based storage mechanism, tossing the 
`sid` in lets you make this API call after discarding the id_token - hopefully 
allowing application developers to add checks for an invalidated session as 
part of their global pipeline.
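As a sketch of that pattern (the names here are mine, not from the DTVA submission; the status lookup is injected so that in practice it could be the REST call described above):

```python
from typing import Callable

# Hypothetical: a lookup that asks the IdP whether a session id is still valid.
# In the DTVA sketch this would be an HTTP GET against the IdP's status API;
# here it is injected so the middleware logic stays testable offline.
SessionCheck = Callable[[str], bool]

def session_still_valid(session: dict, check: SessionCheck) -> bool:
    """Global-pipeline check: the app keeps only the `sid` from the id_token
    in its (cookie-based) session, then re-validates it on each request."""
    sid = session.get("sid")
    if sid is None:
        return False  # no federated session id recorded - treat as logged out
    return check(sid)

# Example: an in-memory stand-in for the IdP's invalidation list.
invalidated = {"sid-123"}
check = lambda sid: sid not in invalidated
```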

-DW


Re: [OAUTH-WG] is updated guidance needed for JS/SPA apps?

2018-05-18 Thread David Waite
I have written some guidance already (in non-RFC format) on preferring code for 
single page apps, and other security practices (CORS, CSP). From the AS point 
of view, it aligns well with the native apps BCP. There are benefits of 
thinking about native and SPA apps just as ‘public clients’ from a 
policy/properties point of view. It also greatly simplifies OAuth/OIDC support 
on both the AS administrator and client developer side when converting web 
properties into native apps using technologies like Electron or Cordova.

For the later requirements in the list around token policy, I am not sure these 
are requirements for single page apps per se. I don’t believe the need for a 
policy using short-lived refresh tokens, revoking at signout, or use of the 
revocation endpoint are different from browser and native applications. Rather 
they seem to be a function of usage patterns that an AS may need to support, 
and we happen to sometimes associate those usage patterns with typical usage of 
native apps vs of browser apps. For example, browser login on a borrowed device 
can easily leak over to being app authorization - the 
authentication/authorization are web-based processes to achieve SSO.

I have been working on some guidance here around token lifetimes and policies, 
but I don’t know whether that brings in too much AS/OP business logic (and, 
likely implied product/deployment features) to be industry practices.

-DW

> On May 17, 2018, at 10:23 AM, Hannes Tschofenig  
> wrote:
> 
> Hi Brock,
>  
> there have been several attempts to start writing some guidance but so far we 
> haven’t gotten too far.
> IMHO it would be great to have a document.
>  
> Ciao
> Hannes
>  
> From: OAuth [mailto:oauth-boun...@ietf.org ] 
> On Behalf Of Brock Allen
> Sent: 17 May 2018 14:57
> To: oauth@ietf.org 
> Subject: [OAUTH-WG] is updated guidance needed for JS/SPA apps?
>  
> Much like updated guidance was provided with the "OAuth2 for native apps" 
> RFC, should there be one for "browser-based client-side JS apps"? I ask 
> because google is actively discouraging the use of implicit flow:
>  
> https://github.com/openid/AppAuth-JS/issues/59#issuecomment-389639290 
> 
>  
> From what I can tell, the complaints with implicit are:
> * access token in URL
> * access token in browser history
> * iframe complexity when using prompt=none to "refresh" access tokens
>  
> But this requires:
> * AS/OP to support PKCE
> * AS/OP to support CORS 
> * user-agent must support CORS
> * AS/OP to maintain short-lived refresh tokens 
> * AS/OP must aggressively revoke refresh tokens at user signout (which is not 
> something OAuth2 "knows" about)
> * if the above point can't work, then client must proactively use revocation 
> endpoint if/when user triggers logout
>  
> Any use in discussing this?
>  
> -Brock
>  
> IMPORTANT NOTICE: The contents of this email and any attachments are 
> confidential and may also be privileged. If you are not the intended 
> recipient, please notify the sender immediately and do not disclose the 
> contents to any other person, use it for any purpose, or store or copy the 
> information in any medium. Thank you. 



Re: [OAUTH-WG] is updated guidance needed for JS/SPA apps?

2018-05-18 Thread David Waite
I don’t believe code flow today with an equivalent token policy as you have 
with implicit causes any new security issues, and it does correct _some_ 
problems. The problem is that you immediately want to change token policy to 
get around hidden iframes and special parameters.

Once you start trying to alter token policy (such as adding refresh tokens), 
you start to have new security considerations because of the execution 
environment of javascript in the browser. Specifically token exfiltration from 
the browser origin, which can be mitigated via token binding or service workers.

You don’t need to exfiltrate a token for a third party to use the associated 
access; they can inject behavior onto the page via XSS or a browser extension. 
This is not related to token lifetime policy, or the use of implicit vs code. 
This is the more immediate area where I see guidance being important - 
especially considering that token exfiltration becomes closer to a theoretical 
attack if the behavior of my app is controlled.

-DW

> On May 18, 2018, at 10:47 AM, John Bradley  wrote:
> 
> There are lots of issues with the current implicit flow around fragment 
> encoding as well.
>  
> However moving the token used for refresh from being a HTTP only cookie to a 
> refresh token available in the DOM makes me uncomfortable without having 
> sufficient mitigations against XSS.
>  
> The current flow is vulnerable to  XSS for the AT, however if that is short 
> lived it restricts the damage.
>  
> The better solution is token binding the AT and perhaps a RT. 
>  
> We need to start talking about it.  There are issues around potentially using 
> service workers etc as well.
>  
> So we should start but I am not sure of what the correct answer is yet.
>  
> John B.
>  
> Sent from Mail <https://go.microsoft.com/fwlink/?LinkId=550986> for Windows 10
>  
> From: Brock Allen <mailto:brockal...@gmail.com>
> Sent: Friday, May 18, 2018 6:36 PM
> To: John Bradley <mailto:ve7...@ve7jtb.com>; David Waite 
> <mailto:da...@alkaline-solutions.com>; Hannes Tschofenig 
> <mailto:hannes.tschofe...@arm.com>
> Cc: oauth@ietf.org <mailto:oauth@ietf.org>
> Subject: Re: [OAUTH-WG] is updated guidance needed for JS/SPA apps?
>  
> It sounds to me as if you're hesitant to recommend code flow (at least for 
> now) then for browser-based JS apps.
>  
> -Brock
>  


Re: [OAUTH-WG] is updated guidance needed for JS/SPA apps?

2018-05-18 Thread David Waite


> On May 18, 2018, at 11:55 AM, Brock Allen  wrote:
> 
> > I don’t believe code flow today with an equivalent token policy as you have 
> > with implicit causes any new security issues, and it does correct _some_ 
> > problems. The problem is that you immediately want to change token policy 
> > to get around hidden iframes and special parameters.
> 
> Hidden frames and special params -- are those really the main concerns with 
> implicit?

They aren’t the only issues, no. The point was that you can use code flow 
instead of implicit, keep a 10 minute access token lifetime and no refresh 
token, and it doesn’t add new security concerns. The security concerns are 
around changing token policy once you are doing code flow, due to the execution 
environment of the browser.

The main initial motivation around implicit was client simplicity (plus it was 
rather early for CORS). Once you are implementing a second iframe-based 
approach to discreetly retrieve updated access tokens, the simplicity argument 
doesn’t hold.

It is also an additional security consideration for the AS - ideally I want to 
reject my user authentication/consent content from being loaded in frames as a 
static policy, but now I need to allow it when prompt=none is set. This isn’t a 
policy recommended anyplace, just something the developers may have to argue 
with against the security people so that their app can have a halfway decent 
experience.

-DW

> I thought the access token being sent in the URL is a bigger concern, and 
> that's why code+PKCE is a better approach.

> 
> -Brock



Re: [OAUTH-WG] Dynamic Scopes

2018-06-18 Thread David Waite
One of the reasons I hear for people wanting parameterized scopes is to deal 
with transactions. I’d love to hear thoughts from the group on if/how OAuth 
should be used to authorize a transaction, vs authorize access to 
information/actions for a period of time. This approach for instance sounds 
like it is trying to scope down access to a single resource representing a 
transaction to be performed?

I also hear people wanting dynamic scopes to support a finer-grained access 
control, for instance not ‘allow moderation of chat rooms’ but rather the list 
of *specific* rooms. There is sometimes a case to be made that this would be 
better served as local state in the resource, or as the result of an API call, 
but there is value in some use cases to represent this as a finer-grained 
consent to the user.

I’ve seen parameterized scopes take the form of colon-delimited name:param, 
as a function name(param), or as a URL https://nameurl?param=value. The latter 
is recommended sometimes in specs like OpenID Connect as a way to prevent 
conflicting vendor extensions.
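For illustration, the three shapes might be recognized like this (the grammar here is my own, not from any spec):

```python
import re
from urllib.parse import urlparse, parse_qs

def parse_scope_token(scope: str):
    """Classify one scope string as URL-style, function-style, or
    colon-delimited, returning (base, params) - or (scope, {}) for a
    plain static scope."""
    if scope.startswith("https://"):
        url = urlparse(scope)
        base = f"{url.scheme}://{url.netloc}{url.path}"
        return base, {k: v[0] for k, v in parse_qs(url.query).items()}
    m = re.fullmatch(r"([\w.-]+)\((.*)\)", scope)   # e.g. sign(hash123)
    if m:
        return m.group(1), {"param": m.group(2)}
    if ":" in scope:                                # e.g. payments:acct-42
        base, _, param = scope.partition(":")
        return base, {"param": param}
    return scope, {}
```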

In terms of requesting a parameterized scope - I prefer overloading scope (or 
perhaps claims when using Connect) vs adding a new authorization request 
parameter - for one, use of authorization request parameters limits your grant 
type options unless you also define them as token request parameters for the 
other grant types. Second, the consent/business logic for determining which 
scopes a client gets should already be a customization point for a reusable AS.

-DW

> On Jun 18, 2018, at 9:34 AM, Torsten Lodderstedt  
> wrote:
> 
> Hi all,
> 
> I have been working lately on use cases where OAuth is used to authorize 
> transactions in the financial sector and electronic signing. What I learned 
> is there is always the need to pass resource ids (e.g. account numbers) or 
> transaction-specific values (e.g. amount or hash to be signed) to the OAuth 
> authorization process to further qualify the scope of the requested access 
> token. 
> 
> It is obvious a static scope value, such as „payment“or „sign“, won’t do the 
> job. For example in case of electronic signing, one must bind the 
> authorization/access token to a particular document, typically represented by 
> its hash.
> 
> I would like to get your feedback on what you consider a good practice to 
> cope with that challenge. As a starting point for a discussion, I have 
> assembled a list of patterns I have seen in the wild (feel free to extend). 
> 
> (1) Parameter is part of the scope value, e.g. „sign:“ or 
> "payments:" - I think this is an obvious way to 
> represent such parameters in OAuth, as it extends the scope parameter, which 
> is intended to represent the requested scope of an access token. I used this 
> pattern in the OAuth SCA mode in Berlin Group's PSD API. 
> 
> (2) One could also use additional query parameter to add further details re 
> the requested authorization, e.g. 
> 
> GET /authorize?
> 
> &scope=sign
> 
> &hash_to_be_signed=
> 
> It seems to be robust (easier to implement?) but means the scope only 
> represents the static part of the action. The AS needs to look into a further 
> parameter to fully understand the requested authorization. 
> 
> (3) Open Banking UK utilizes the (OpenID Connect) „claims“ parameter to carry 
> additional data. 
> 
> Example:  
> 
> "claims": {
>"id_token": {
>"acr": {
>"essential": true,
>"value": "..."
>  },
>"hash_to_be_signed": {
>"essential": true,
>"value": ""
>}
>},
>"userinfo": {
>"hash_to_be_signed": {
>"essential": true,
>"value": ""
>}
>}
>  }
> 
> I‘m looking forward for your feedback. Please also indicated whether you 
> think we should flush out a BCP on that topic. 
> 
> kind regards,
> Torsten.



Re: [OAUTH-WG] Mail regarding draft-ietf-oauth-discovery

2018-07-10 Thread David Waite


> On Jul 10, 2018, at 12:19 PM, Andres Torres  wrote:

> In terms of API design the final result is confusing. The resource 
> /.well-known/oauth-authorization-server becomes a collection of resources 
> where issuer is a subresource. However, 
> /.well-known/oauth-authorization-server should be a subresource of the 
> issuer/tenant. It is my understanding that .well-known is a prefix for known 
> resources in a given service. Multiple instances of a service (ie: tenants) 
> can be hosted using the same hostname in the form 
> {issuer|tenant-identifier}/.well-known/{known-resource}. This way a proper 
> resource hierarchy can be maintained in the URI namespace and heterogeneous 
> services can be deployed under the same hostname.

This is/was actually how it was done within OpenID Connect. However, the only 
structured URL components allowed within IETF specifications are underneath a 
/.well-known root. Since a multi-tenant application may have a different issuer 
per tenant all within one origin, this transformation was created such that 
each can have their own metadata.
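Concretely, RFC 8414 forms the metadata URL by inserting the well-known segment between the host and the issuer's path, so per-tenant issuers on a shared origin get distinct metadata documents. A sketch:

```python
from urllib.parse import urlparse

def metadata_url(issuer: str) -> str:
    """Build the RFC 8414 metadata URL for an issuer by inserting the
    well-known segment between the host and the issuer's path component."""
    u = urlparse(issuer)
    path = u.path.rstrip("/")  # issuer identifiers have no trailing slash
    return f"{u.scheme}://{u.netloc}/.well-known/oauth-authorization-server{path}"
```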

Another option would have been to have the issuer URL be the discovery URL, but 
this would require an issuer of https://example.com  to 
modify the root of their service to respond to requests for metadata (such as 
in response to the requirements of an Accept header).

A third option might be to define an ‘issuer’ parameter and behavior on the 
metadata endpoint, such that servers which host only one issuer could ignore 
it, but a server with multiple issuers could require and act on this parameter.

-DW


Re: [OAUTH-WG] updated Distributed OAuth ID

2018-07-19 Thread David Waite
Four comments.

First: What is the rationale for including the parameters as Link headers 
rather than part of the WWW-Authenticate challenge, e.g.:

WWW-Authenticate: Bearer realm="example_realm",
  scope="example_scope",
  error="invalid_token",
  resource_uri="https://api.example.com/resource",
  oauth_server_metadata_uris="https://as.example.com/.well-known/oauth-authorization-server https://different-as.example.com/.well-known/oauth-authorization-server"


My understanding is that the RFC6750 auth-params are extensible (but not 
currently part of any managed registry.)

One reason to go with this vs Link headers is CORS policy - exposing Link 
headers to a remote client must be done all or nothing as part of the CORS 
policy, and can’t be limited by the kind of link.

Second: (retaining link format) Is there a reason the metadata location is 
specified the way it is? It seems like

<https://as.example.com/.well-known/oauth-authorization-server>; rel="oauth_server_metadata_uri"

should be

<https://as.example.com>; rel="oauth_issuer"

OAuth defines the location of metadata under the .well-known endpoint as a 
MUST. An extension of OAuth may choose from at least three different methods 
for handling extensions beyond this:
1. Add additional keys to the oauth-authorization-server metadata
2. Add additional resources to .well-known for clients supporting their 
extensions to attempt to resolve, treating ‘regular’ OAuth as a fallback.
3. Define their own parameter, such as rel=“specialauth_issuer”, potentially 
using a different mechanism for specifying metadata

Given all this, it seems appropriate to only support specifying of 
metadata-supplying issuers, not metadata uris.

Third: I have doubts of the usefulness of resource_uri in parallel to both the 
client requested URI and the Authorization “realm” parameter.

As currently defined, the client would be the one enforcing (or possibly 
ignoring) static policy around resource URIs - that a resource URI is arbitrary 
except in that it must match the host (and scheme/port?) of the requested URI. 
The information on the requested URI by the client is not shared between the 
client and AS for it to do its own policy verification. It would seem better to 
send the client origin to the AS for it to do (potential) policy verification, 
rather than relying on clients to implement it for them based on static spec 
policy.

The name “resource URI” is also confusing, as I would expect it to be the URI 
that was requested by the client. The purpose of this parameter is a bit 
confusing, as it is only defined as “An OAuth 2.0 Resource Endpoint specified 
in [RFC6750] section 3.2” - where the term doesn’t appear in 6750 and there does 
not appear to be a section 3.2 ;-)

It seems that:
a. If the resource_uri is a direct match for the URI requested for the client, 
it is redundant and should be dropped
(If the resource URI is *not* a direct match with the URI of the resource 
requested by the client, it might need a better name).
b. If the resource URI is meant to provide a certain arbitrary grouping for 
applying tokens within the origin of the resource server, what is its value 
over the preexisting “realm” parameter?
c. If the resource URI is meant to provide a way for an AS to allow resources 
to be independent, identified entities on a single origin - I’m unsure how a 
client would understand it is meant to treat these resource URIs as independent 
entities (e.g. be sure not to send bearer tokens minted for resource /entity1 
to /entity2, or for that matter prevent /entity1 from requesting tokens for 
/entity2).

Admittedly based on not fully understanding the purpose of this parameter, it 
seems to me there exists a simpler flow which better leverages the existing 
Authentication mechanism of HTTP. 

A request would fail due to an invalid or missing token for the realm at the 
origin, and the client would make a request to the issuer including the 
origin of the requested resource as a parameter. Any other included parameters 
specified by the WWW-Authenticate challenge understood by the client (such as 
“scope”) would also be applied to this request.

Normal authorization grant flow would then happen (as this is the only grant 
type supported by RFC6750). Once the access token is acquired by the client, 
the client associates that token with the “realm” parameter given in the 
initial challenge by the resource server origin. Likewise, the ‘aud’ of the 
token as returned by a token introspection process would be the origin of the 
resource server.

It seems any more complicated protected resource groupings on a resource server 
would need a client to understand the structure of the resource server, and 
thus fetch some sort of resource server metadata.

Fourth and finally: Is the intention to eventually recommend token binding 
here? Token binding as a requirement across clients, resources

Re: [OAUTH-WG] updated Distributed OAuth ID

2018-07-20 Thread David Waite


> On Jul 20, 2018, at 2:33 AM, n-sakimura  wrote:
> 
> I did not quite follow why CORS is relevant here. We are just talking about 
> the client to server communication, and there are no embedded resources in 
> the response. Could you kindly elaborate a little, please? 

Sure!  It is effectively an additional (complex) restriction on 
implementation/capabilities of CORS and the design of the resources.

There are five possible access results for a resource that come to mind:
1. Client does not have authorization but gets a (possibly limited) entity 
response
2. Client does not have authorization and is challenged
3. Client has authorization and gets a (possibly customized) entity response
4. Client has insufficient authorization and is challenged (e.g. for a new 
access token, possibly with more scopes)
5. Client has insufficient authorization and is refused access

The CORS policies returned for 1 and 3 may be different than 2 and 4, may be 
different for 5, and may come from different infrastructure (such as an 
authenticating reverse proxy “gateway”). Note also that cases 1 and 3 may have 
a WWW-Authenticate header on the response, indicating that providing 
authorization may return a different entity response.

One way to handle remote access for all of these cases with commonality would 
be to expose the WWW-Authenticate header via the CORS policy.
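Since WWW-Authenticate is not on the CORS response-header safelist, the resource would need to expose it explicitly on each of those responses, e.g. (illustrative client origin):

```http
Access-Control-Allow-Origin: https://client.example.net
Access-Control-Expose-Headers: WWW-Authenticate
```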

With the Link header used as well, you would need to also expose the Link 
header in all of these cases (or at least 1-4). However, the Link header can 
return many relations beyond this one authorization use case, and you would be 
exposing those all-or-nothing. 

You effectively lose the ability to regulate visibility of the Link header via 
CORS, and must resort to selective disclosure of headers as your mechanism of 
control (or serialize those links in another way such as within the content 
body, when an available option)

-DW

>  
> For the second point, since it was discussed in the WG meeting yesterday, I 
> will defer to that discussion.
>  
> Best, 
>  
> Nat Sakimura
>  
>  

Re: [OAUTH-WG] updated Distributed OAuth ID

2018-07-20 Thread David Waite
There is also existing software that expects to be able to act/respond based 
only on the WWW-Authenticate header. See

https://hc.apache.org/httpcomponents-client-4.5.x/httpclient/apidocs/org/apache/http/auth/AuthScheme.html#processChallenge(org.apache.http.Header)

-DW

> On Jul 20, 2018, at 2:33 AM, n-sakimura  wrote:
> 
> Hi David, 
>  
> Thanks for the comment, and sorry for the delay in my reply.
>  
> Doing it with Web Linking [RFC8288]  has several advantages.
>  
> It is standard based ☺ It is just a matter of registering link relations to 
> the IANA Link Relation Type Registry, and it is quite widely used.
> No or very little coding needed: Other than adding some HTTP server 
> configuration, the rest stays the same as RFC6750.
> Standard interface: this kind of metadata is applicable not only for letting 
> the client find the appropriate authorization server but for other metadata 
> as well. Also, other endpoints as long as it is supporting the direct 
> communication with the client, can provide relevant metadata with it without 
> going through the client authorization.
>  
> I did not quite follow why CORS is relevant here. We are just talking about 
> the client to server communication, and there are no embedded resources in 
> the response. Could you kindly elaborate a little, please? 
>  
> For the second point, since it was discussed in the WG meeting yesterday, I 
> will defer to that discussion.
>  
> Best, 
>  
> Nat Sakimura
>  
>  
> From: OAuth [mailto:oauth-boun...@ietf.org] On Behalf Of David Waite
> Sent: Thursday, July 19, 2018 4:55 PM
> To: Dick Hardt 
> Cc: oauth@ietf.org
> Subject: Re: [OAUTH-WG] updated Distributed OAuth ID
>  
> Four comments.
>  
> First: What is the rationale for including the parameters as Link headers 
> rather than part of the WWW-Authenticate challenge, e.g.:
>  
> WWW-Authenticate: Bearer realm="example_realm",
>  scope="example_scope",
>  error="invalid_token",
> resource_uri="https://api.example.com/resource",
> oauth_server_metadata_uris="https://as.example.com/.well-known/oauth-authorization-server
>  https://different-as.example.com/.well-known/oauth-authorization-server"
>  
> 
> My understanding is that the RFC6750 auth-params are extensible (but not 
> currently part of any managed registry.)
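As a rough sketch (in Python, with no particular HTTP library assumed), a client could pull those auth-params out of such a challenge; note that the multi-valued oauth_server_metadata_uris parameter is the hypothetical one from the example above, not a registered parameter:

```python
import re

def parse_bearer_challenge(header: str) -> dict:
    """Parse key="value" auth-params from a WWW-Authenticate Bearer challenge."""
    if not header.startswith("Bearer "):
        raise ValueError("not a Bearer challenge")
    params = dict(re.findall(r'(\w+)="([^"]*)"', header[len("Bearer "):]))
    # The hypothetical multi-valued parameter is space-delimited within one string.
    if "oauth_server_metadata_uris" in params:
        params["oauth_server_metadata_uris"] = params["oauth_server_metadata_uris"].split()
    return params

challenge = ('Bearer realm="example_realm", scope="example_scope", '
             'error="invalid_token", '
             'oauth_server_metadata_uris='
             '"https://as.example.com/.well-known/oauth-authorization-server '
             'https://different-as.example.com/.well-known/oauth-authorization-server"')
print(parse_bearer_challenge(challenge)["realm"])  # example_realm
```

This naive parser does not handle escaped quotes or token68 forms; a production client would use a full RFC 7235 challenge parser.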
>  
> One reason to go with this vs Link headers is CORS policy - exposing Link 
> headers to a remote client must be done all or nothing as part of the CORS 
> policy, and can’t be limited by the kind of link.
>  
> Second: (retaining link format) Is there a reason the metadata location is 
> specified the way it is? It seems like
>  
> <https://as.example.com/.well-known/oauth-authorization-server>;
> rel="oauth_server_metadata_uri"
>  
> should be
>  
> <https://as.example.com>; rel="oauth_issuer"
>  
> OAuth defines the location of metadata under the .well-known endpoint as a 
> MUST. An extension of OAuth may choose from at least three different methods 
> for handling extensions beyond this:
> 1. Add additional keys to the oauth-authorization-server metadata
> 2. Add additional resources to .well-known for clients to supporting their 
> extensions to attempt to resolve, treating ‘regular’ OAuth as a fallback.
> 3. Define their own parameter, such as rel=“specialauth_issuer”, potentially 
> using a different mechanism for specifying metadata
>  
> Given all this, it seems appropriate to only support specifying of 
> metadata-supplying issuers, not metadata uris.
>  
> Third: I have doubts of the usefulness of resource_uri in parallel to both 
> the client requested URI and the Authorization “realm” parameter.
>  
> As currently defined, the client would be the one enforcing (or possibly 
> ignoring) static policy around resource URIs - that a resource URI is 
> arbitrary except in that it must match the host (and scheme/port?) of the 
> requested URI. The information on the requested URI by the client is not 
> shared between the client and AS for it to do its own policy verification. It 
> would seem better to send the client origin to the AS for it to do 
> (potential) policy verification.

Re: [OAUTH-WG] Mobile Native apps and renewing access tokens

2018-09-05 Thread David Waite
The offline_access scope is defined as part of OpenID Connect, not as part of 
OAuth. There is no requirement that offline_access scope be the only way to 
have a refresh token issued, although some implementations have chosen to do 
this. My interpretation is that offline_access is a partial misnomer - the 
user is only ‘offline’ in the sense of whether they currently have an 
authenticated session with the IDP.

The reason many OIDC implementations try to not return refresh tokens is that 
they want to have all authentication decisions flow through the user agent as 
potentially interactive. You can track user authentication within a refresh 
token, but this adds complications such as requiring persistent state within 
the application.

The question is whether your app cares about current user authentication at the 
IDP. If your application either:
- Uses IDP authentication more as a registration/account recovery to a local 
application account
- Doesn’t care about IDP authentication but is just using the token for 
authorization purposes

Then there isn’t a reason to go back through the web-based authentication 
process. Just do what you need to get a refresh token within the TOS of the IDP 
and go with that.

IMO, the value of knowing active IDP authentication is reduced for mobile use 
cases, because (rightly or wrongly) users are expected to control access to the 
native applications through screen locks, passcodes, and biometrics. The 
primary user authentication is local and implicit in being able to open the 
app. The UX expectation is that further authentication challenges by remote 
services are to be done only as needed for their own CYA.

If however you do want to rely on the IDP authentication, you’ll need to play 
within the process of the IDP chosen. Don’t request offline access, hope they 
give you a refresh token to use, and if not you’ll be popping up a browser pane 
(with a user consent on iOS 11+) every time the access token expires so the IDP 
can determine if the authentication still holds.

And hope that in the future, in the absence of offline_access, the IDP gives a 
refresh token tracking the user authentication session, in order to reduce the 
frequency with which users are sent through that browser pane.
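For comparison, the web-based silent check mentioned in Ron's message below uses an OIDC authorization request with prompt=none; a minimal sketch of constructing one follows, where the endpoint, client_id, and redirect URI are hypothetical:

```python
from urllib.parse import urlencode

# Hypothetical values; real ones come from client registration and AS metadata.
auth_endpoint = "https://idp.example.com/authorize"
params = {
    "response_type": "code",
    "client_id": "example-client",
    "redirect_uri": "https://app.example.com/cb",
    "scope": "openid profile",
    "prompt": "none",        # return an error instead of showing any login UI
    "state": "af0ifjsldkj",  # illustrative opaque value
}
silent_auth_url = auth_endpoint + "?" + urlencode(params)
print(silent_auth_url)
```

If the IDP session is gone, the response carries an error such as login_required rather than a code, so the client knows interactive authentication is needed.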

-DW

> On Sep 5, 2018, at 9:29 AM, Ron Alleva  wrote:
> 
> Hi all,
> 
> I was looking around for guidance around how to refresh access tokens on 
> native mobile experiences. 
> 
> Suppose we’re using a normal OAuth auth code flow with a mobile app (Chrome 
> custom tabs/ASWebAuthenticationSession and all). Also, want to reduce the 
> interruptions to the end user. 
> 
> In general, it seems like refresh token is not the tool for the job, as it 
> generally implies offline_access, when the user is not there. So if the user 
> signed out of their identity provider, I would not want them to be able to 
> get a new access token.
> 
> Via normal web flow, the “prompt=none” ability is available (called Silent 
> Authentication by Auth0), so the refresh can be done in the background, 
> without ever bothering the user at all if they are still logged in. I don’t 
> think this is possible with a chrome custom tab or iOS equivalent, even if 
> the user never needs to enter their password.
> 
> Is there some type of similar flow for native mobile applications? It seems 
> like most of the suggestions out there refer to just using the refresh 
> tokens. Also, another note, SMART on FHIR seems to introduce the concept of 
> “online_access”, which seems to indicate a refresh token that is tied to an 
> authentication session. That also seems interesting to me.
> 
> So anyway, is there any general guidance? Is everyone just using refresh 
> tokens? Some combination of long access tokens and longer web authentication 
> sessions?
> 
> Thanks in advance,
> 
> Ron
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] draft-hardt-oauth-distributed suggestions

2018-10-25 Thread David Waite


> On Sep 28, 2018, at 9:10 PM, Evert Pot  wrote:
> 

> 
> One thing I've missed from using OAuth2-powered services over HTTP Basic
> / Digest, is the ability for a browser to handle login. The idea that a
> browser can potentially do all the steps required, means that a user
> could potentially hit a resource server directly and browse it
> interactively without requiring a non-browser client. I think this
> concept is really powerful, and Distributed OAuth is a good step in that
> direction.
> 
> The piece that's missing though is that using the current OAuth2
> framework, a generic client would still need to have a client_id.
> 
> I don't fully understand the security implications of this, but could
> this specification potentially be expanded so that the WWW-Authenticate
> challenge can optionally also include a client_id?

I assume this is the WWW-Authenticate Bearer challenge at the resource.

I lean toward reducing the coupling between the resources and the AS to 
understanding the access token - how to directly verify it or introspect it, 
the meaning of data contained within such as scopes, and so on. This sort of 
dynamic specification expands that to needing to state the issuers (hopefully 
as issuer identifiers, not raw metadata URLs) it wishes to receive 
authorization from - a pretty minimal (and logical) expansion on the 
requirements of the resource server for what is being attempted.

Having the resource server also maintain metadata on how anonymous clients 
should work seems like an unnecessary expansion on resource server 
responsibilities. I would say instead:
- Expand Dynamic Client Registration (7591) as necessary to meet new 
requirements.
- Give an anonymous client identifier as part of the AS server metadata (8414)

Having a unique client identifier (or token) per browser may help with managing 
security constraints the AS may want to place on anonymous clients, such as SSO 
or persistent consent.
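If an AS did advertise an anonymous client identifier, its RFC 8414 metadata document might carry it as an extra member; the anonymous_client_id field here is invented for illustration and is not a registered metadata parameter:

```python
import json

# Sketch of RFC 8414 AS metadata with a hypothetical extension member.
metadata = {
    "issuer": "https://as.example.com",
    "authorization_endpoint": "https://as.example.com/authorize",
    "token_endpoint": "https://as.example.com/token",
    # Invented here: identifier anonymous/public browser clients would use.
    "anonymous_client_id": "anonymous",
}
print(json.dumps(metadata, indent=2))
```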

> The way I see this work is that a browser could grab it and attempt
> using it with the implicit flow.

For built-in browser support (especially with a static client_id) it isn’t 
clear what the recommended redirect URI would be. In the interest of reducing 
the requirements placed upon a resource server, I’d prefer this not be a 
‘dummy resource’ the browser consumes rather than routes through.

> I did some experimentation with this, and I believe that this feature
> could actually be built as a web extension, but it will only work in
> Firefox as Chrome does not give web extensions access to the
> Authorization header.

The other approach I’ve played with for this is using service workers, although 
I haven’t gotten far enough to figure out the UX for user authentication and 
consent. Is there any feasibility to having an extension inject a service 
worker? I suppose this could conflict with CSP.

-DW


Re: [OAUTH-WG] AS Discovery in Distributed Draft

2018-11-05 Thread David Waite
Is there a need for a client to understand the identity of an authorization 
server?

This would seem to mean that the token or authorization endpoint would need to 
be that identity, rather than the issuer (since now the metadata might not be 
from an authoritative location)

-DW

> On Nov 5, 2018, at 10:19 PM, Justin P Richer  wrote:
> 
> In the meeting tonight I brought up a response to the question of whether to 
> have full URL or plain issuer for the auth server in the RS response’s 
> header. My suggestion was that we have two different parameters to the header 
> to represent the AS: one of them being the full URL (as_uri) and one of them 
> being the issuer to be constructed somehow (as_issuer). I ran into a similar 
> problem on a system that I built last year where all of our servers had 
> discovery documents but not all of them were easily constructed from an 
> issuer style URL (using OIDC patterns anyway). So we solved it by having two 
> different variables. If the full URL was set, we used that; if it wasn’t, we 
> tried the issuer; if neither was set we didn’t do any discovery.
> 
> I’m sensitive to Torsten’s concerns about complexity, but I think this is a 
> simple and deterministic solution that sidesteps much of the issue. No pun 
> intended.
> 
> — Justin
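The two-parameter fallback Justin describes above could be sketched as follows; the parameter names are the ones proposed, and this assumes an issuer without a path component (RFC 8414 inserts the well-known segment before any path):

```python
WELL_KNOWN = "/.well-known/oauth-authorization-server"  # per RFC 8414

def metadata_url(as_uri=None, as_issuer=None):
    """Resolve where to fetch AS metadata: full URL wins, then the issuer,
    else no discovery is performed at all."""
    if as_uri:
        return as_uri
    if as_issuer:
        # Simplified: correct only for issuers with no path component.
        return as_issuer.rstrip("/") + WELL_KNOWN
    return None

print(metadata_url(as_issuer="https://as.example.com"))
# https://as.example.com/.well-known/oauth-authorization-server
```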
> 


Re: [OAUTH-WG] draft-parecki-oauth-browser-based-apps-00

2018-11-07 Thread David Waite
> On Nov 7, 2018, at 9:08 AM, Simon Moffatt  wrote:
> 
> It's an interesting topic.  I think there is definitely a set of options 
> and considerations for this.  Namely operational.  For example, hugely 
> popular mobile apps (multi-million downloads across different OS's) using 
> dynamic reg with per-instance private creds requires the AS to be able to 
> store and index those client profiles easily.  Smaller scale custom built 
> authorization servers are not necessarily going to be able to handle that - 
> hence the popularity of assuming everything is generic and public coupled 
> with PKCE.
> 
Having unique client identifiers does provide some niceties. As examples: it 
gives a user a chance to administer/revoke those clients, and it gives the AS 
an opportunity to do behavioral analysis with a per-client rather than per-user 
granularity.

It also allows you to track user-granted consent per client. There are very 
limited options (really, just id_token_hint from OIDC) to indicate when hitting 
the authorization endpoint that you have prior consent.

In any case, the ability to work with public clients or the need to do dynamic 
client registration is AS policy, not something clients typically have the 
power to decide.

-DW


Re: [OAUTH-WG] draft-parecki-oauth-browser-based-apps-00

2018-11-08 Thread David Waite


> On Nov 8, 2018, at 4:19 AM, Tomek Stojecki 
>  wrote:
> 
> Thanks for putting this together Aaron. 
> 
> Having read through the document, I am not as convinced that there is enough 
> of a benefit of Authorization Code + PKCE vs Implict Flow for SPAs.
> 
> In section 7.8. the document outlines the Implicit flow disadvantages as 
> following:
> 
> "- OAuth 2.0 provides no mechanism for a client to verify that an access 
> token was issued to it, which could lead to misuse and possible impersonation 
> attacks if a malicious party hands off an access token it retrieved through 
> some other means to the client."
> 
> If you use Code + PKCE with no client secret (public client) as it is being 
> advocated, you can't verify the client either. PKCE is not for authenticating 
> the client, it is there to provide a mechanism to verify inter-app 
> communication, which occurs between a browser and a native app. There is no 
> inter-app communication in implicit (everything stays in the browser), so no 
> need for PKCE.

Use of a fixed set of uniquely resolvable redirect URIs (e.g. not localhost, 
not arbitrarily registrable custom URI schemes) does provide an addressable 
channel to the public client for codes and implicit tokens. I should not be 
able to make a client that reuses the public identifier as a third party, 
because I cannot catch the response URL from the authorization endpoint.

The difference is with implicit, the first party client cannot verify the 
tokens were meant for the client (or really even that they came from the AS). 
With code, you are contacting the AS and doing an exchange based on the code 
and your public client identifier.
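That code-plus-verifier exchange is standard RFC 7636 PKCE; a sketch of the derivation a public client performs, followed by the shape of the token request (the client_id, redirect URI, and code placeholder are illustrative):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # RFC 7636: verifier is a high-entropy URL-safe string;
    # the S256 challenge is the base64url-encoded SHA-256 of the verifier.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()

# Later, the token request carries the code, the public client_id, and the verifier:
token_request = {
    "grant_type": "authorization_code",
    "code": "<code from redirect>",             # placeholder
    "client_id": "example-public-client",       # hypothetical
    "redirect_uri": "https://app.example.com/cb",
    "code_verifier": verifier,
}
```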

In the sense of pure OAuth, where no authentication or additional authorization 
decisions are made based on the access token, the issuer and audience client 
could be checked via an introspection endpoint - but if you are using pure 
OAuth and not relying on things like CORS restrictions for your security model, 
it shouldn’t matter.

For OpenID Connect and cases where you are going beyond a pure authorization 
model (perhaps by having API access restricted by CORS), this sort of check is 
important. You could trust clients to explicitly check by looking in an 
id_token (when available) or calling an introspection endpoint, or you could 
have this check happen as part of normal flow by using code flow.

PKCE does not resolve any known code injection attacks for SPA public clients. 
Recommending administrators require PKCE does allow them to start to make a 
single coherent policy for public clients.

A consistent policy also helps in SPA/native app crossover cases. For examples, 
a javascript app could be published as a native app via wrapping in a 
Cordova-style toolset, or by sharing a significant amount of code using a 
React-native style toolset. Both the SPA and native app could then share 
handling links depending on native app installation due to universal/app link 
features of the operating systems. For this reason, there was a definite effort 
to propose best practices that overlapped with the native app BCP.

It is also worth noting that since the SPA and native app could share the same 
client identifier and redirect handling (via universal links), you *could* have 
code injection, but it would be between multiple first-party apps.

> "- Supporting the implicit flow requires additional code, more upkeep and 
> understanding of the related security considerations, while limiting the 
> authorization server to just the authorization code flow simplifies the 
> implementation."
> 
> This is subjective and can be easily argued the other way. I think one of the 
> main selling points behind implicit was its simplicity. It is hard to argue 
> (putting libraries aside) that making one redirect (implicit) requires more 
> code, work, etc.. than making a redirect and at least two additional calls in 
> order to get AT (plus CORS on AS side).

The implicit flow is only simpler for clients until they have to get a new 
access token. Typically then you need a different set of OAuth code to request 
a new token in the background via a hidden iframe, failing back to temporarily 
leaving the app via top-level redirect. There are also cases where the Safari 
browser detects this cross-domain iframe bouncing as a tracker and segments any 
AS cookies/storage per client site.

It is more work for the AS to support both. You may also wind up having 
different security models for the two different public client flows. In 
particular, you may find yourself as an AS policy setter wanting to have longer 
lived sessions for implicit clients which do not have refresh tokens to extend 
access token validity in the background. For this reason, you may also decide 
implicit clients cannot get access to the same scopes that a code client can. 

There are also gotchas people are not used to in implicit, such as fragment 
identifiers being preserved on redirects.

For Open

Re: [OAUTH-WG] draft-parecki-oauth-browser-based-apps-00

2018-11-09 Thread David Waite
Hi Hans, I hope it is acceptable to reply to your message on-list.

Others could correct me if I am wrong, but I believe the purpose of this 
document is to recommend uses of other OAuth/OIDC specifications, not to 
include its own technologies.

In terms of being another spec to be referenced, I think it would be useful but 
I wonder hypothetically how to best write that specification. This method seems 
to be relying on standards-defined tokens and converting them to an application 
server session, which isn’t defined by behavior other than HTTP cookies. The 
session info hook then lets you use those proprietary session tokens to 
retrieve the access/id token.

As such, it is closer to an architecture for implementing a client - as a 
confidential client backend with an associated SPA frontend that needs to make 
OAuth-protected calls. It is not describing the communication between existing 
OAuth roles, such as between the client and AS.

There’s obvious value here, and it's one of several architectures for 
browser-based apps using a confidential client rather than a public one 
(another example being a reverse proxy which maps remote OAuth endpoints into 
local, session-token-protected ones). I personally am just not sure how you 
would define the communication between back-end and front-end portions of the 
client in these architectures as a standard within OAuth.

-DW

> On Nov 6, 2018, at 3:03 AM, Hans Zandbelt  wrote:
> 
> Hi Aaron, DW,
> 
> About draft-parecki-oauth-browser-based-apps:
> would you consider including a recommendation about and the standardization 
> of a "session info" endpoint (I'm open for better naming ;-)) as described in:
> https://hanszandbelt.wordpress.com/2017/02/24/openid-connect-for-single-page-applications/
>  
> 
> 
> this approach is not just something that I cooked up; it is based on real 
> world requests & deployments at Netflix and Oath.
> 
> Let me know what you think,
> 
> Hans.
> 
> On Tue, Nov 6, 2018 at 10:55 AM Hannes Tschofenig wrote:
> Hi all,
> 
> Today we were not able to talk about 
> draft-parecki-oauth-browser-based-apps-00, which describes  "OAuth 2.0 for 
> Browser-Based Apps".
> 
> Aaron put a few slides together, which can be found here:
> https://datatracker.ietf.org/meeting/103/materials/slides-103-oauth-sessa-oauth-2-for-browser-based-apps-00.pdf
>  
> 
> 
> Your review of this new draft is highly appreciated.
> 
> Ciao
> Hannes
> 
> 
> 
> -- 
> hans.zandb...@zmartzone.eu 
> ZmartZone IAM - www.zmartzone.eu 



Re: [OAUTH-WG] I-D Action: draft-ietf-oauth-security-topics-10.txt

2018-11-20 Thread David Waite


> On Nov 20, 2018, at 1:37 PM, Aaron Parecki  wrote:
> 
> The new section on refresh tokens is great! I have a couple 
> questions/comments about some of the details.
> 
> Authorization servers may revoke refresh tokens automatically in case
> of a security event, such as:
> o  logout at the authorization server
>  
> This doesn't sound like the desired behavior for mobile apps, where the 
> user's expectation of how long they are logged in to the mobile app is not 
> tied to their web session where they authorized the app. However this does 
> likely match a user's expectation when authorizing a browser-based app. 
> Should this be clarified that it should not apply to the mobile app case, or 
> only apply to browser-based apps?

There is also the model where web sessions are perpetual; where you are 
evaluating access against a confidence that it is the legitimate user against 
known threats. In that model, you require authentication (perhaps by 
invalidating a client’s access and refresh tokens) as needed to rebuild that 
confidence.

This still is considered an online model; the offline model would be 
distinguished by evaluating the confidence that a client is still trusted and 
acting in the user’s interests.

In terms of user-initiated logout - logout is an interesting action, with both 
broad and miscommunicated purpose. I’ve found three different verbalizations of 
why a user hits this button:

1. It must be here for some reason, so I think I’m supposed to hit it when I’m 
done (aka 'security hygiene')
2. I suspect I might not be the next person who interacts with this device, so 
you should ask me who I am before allowing future access (aka ‘is this still 
you’)
3. I want software on this device to stop being able to access my accounts, and 
to destroy any cached information (aka ‘kill switch’)

All could be expected to stop access by online clients, while someone operating 
under expectation #3 could extend even to stopping offline access.

Likewise, it could be said that users with expectation #2 would resume access 
with previous scopes after an authentication, while expectation #3 would imply 
a need to reestablish consent to resume.

-DW


Re: [OAUTH-WG] draft-parecki-oauth-browser-based-apps-00

2018-11-21 Thread David Waite

> On Nov 21, 2018, at 12:08 AM, Hans Zandbelt  
> wrote:
> 
> I think of this as somewhat similar to:
> a)  a grant type where a cookie grant is exchanged at an "RP token endpoint" 
> for an associated access and refresh token; the protocol between SPA and the 
> API to do so would benefit from standardization e.g. into SDKs and server 
> side frameworks
> b) an "RP token introspection endpoint" where the cookie is introspected at 
> the RP and associated tokens are returned
> 
> if anyone comes up with a better name for this model and endpoint (and 
> probably less overloading of AS endpoint names...) and/or is willing to help 
> writing this down, please come forward and we'll pick it up on a new 
> thread/doc 

Hand waving follows :-)

This sounds like the RP environment as two pieces, a javascript application and 
back-end infrastructure. The RP infrastructure maintains local tokens which it 
derives from remote tokens issued by a single upstream AS/IdP, which it 
interacts with as a confidential client.

This RP infrastructure separately manages authentication/authorization for a 
javascript application. In this use case, this infrastructure allows the 
javascript application to get the access token issued by the upstream AS, so 
that the javascript application may then act as the client to interact with 
protected resources associated with that AS. (For protected resources within 
the RP environment, a separate local token is used for authorization; possibly 
a non-OAuth token such as the cookie)

The first requirement of access token exposure sounds like a fit for token 
exchange, with the RP exposing its own authorization service token endpoint for 
this purpose, and the javascript acting as a public client to the RP and not to 
the upstream OAuth AS. The “cookie token” would have a specific token type for 
this use case. Multiple exchanges would potentially return the same upstream 
access token, or could silently use the refresh token if needed to acquire and 
return a fresh access token.
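One way to express that exchange is an RFC 8693 token exchange request; in this sketch the RP token endpoint and the cookie-token type URN are hypothetical, while the grant-type and requested-token-type URNs are the registered RFC 8693 values:

```python
from urllib.parse import urlencode

# Hypothetical RP-local token endpoint exposed to the javascript frontend.
rp_token_endpoint = "https://rp.example.com/token"

form = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "<session cookie value>",  # placeholder
    # Invented token-type URN for the RP's proprietary session cookie:
    "subject_token_type": "urn:example:params:oauth:token-type:session-cookie",
    "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
}
body = urlencode(form)  # would be POSTed as application/x-www-form-urlencoded
```

The response would then carry the upstream access token (or a freshly refreshed one) as the issued token.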

In this scenario I would not expose the refresh token, as the javascript 
application should not have a direct relationship with the upstream AS, nor 
should it have credentials to perform the refresh.  Likewise, the id_token was 
addressed to the RP infrastructure and not the javascript application - I would 
expect authentication interactions to be between the RP infrastructure and the 
javascript application, indirectly based on the RP infrastructure’s 
relationship upstream service.

Once the javascript app has the access token, it should be able to use it to 
interact with a user info endpoint. This might be a RP user info endpoint, or 
the upstream user info endpoint, depending on RP requirements.

FWIW, if there are multiple upstream AS’s, I would expect the local RP 
environment to be a ‘full fledged’ AS issuing its own local access tokens, and 
to provide its own local protected resources which then may dispatch to the 
protected resources of the various upstream AS’s as needed. Everything above 
could be reused in this scenario, although you might just decide to have the 
local protected resources accept the cookie directly in addition to the local 
RP environment access tokens.

-DW


Re: [OAUTH-WG] draft-parecki-oauth-browser-based-apps-00

2018-12-02 Thread David Waite
Agreed, if the BCP is meant to describe javascript behavior for best practices 
as respect to being an OAuth client, I’m unsure what would belong in this 
document for javascript which is instead interacting over non-standard 
mechanisms with an OAuth client. 

Instead, it would be generalized browser javascript security practices. It 
would be sure to overlap with some of the recommendations made to a client 
managing OAuth in the browser, but such a spec wouldn’t be under the umbrella 
of the OAuth WG - would it? We would be talking about general non-OAuth browser 
security practices around important cookies.

If (as Hans proposed) there was a mechanism for javascript to get access tokens 
to interact with protected resources in lieu of the client, there could be BCP 
around managing that (which would likely also overlap with a genuine 
javascript-in-browser client), but unfortunately there aren’t technical specs 
to support that sort of architecture yet.

-DW

> On Dec 2, 2018, at 4:43 PM, Aaron Parecki  wrote:
> 
> In this type of deployment, as far as OAuth is concerned, isn't the backend 
> web server a confidential client? Is there even anything unique to this 
> situation as far as OAuth security goes? 
> 
> I wouldn't have expected an Angular app that talks to its own server backend 
> that's managing OAuth credentials to fall under the umbrella of this BCP.
> 
> 
> Aaron Parecki
> aaronparecki.com 
> 
> 
> 
> On Sat, Dec 1, 2018 at 11:31 PM Torsten Lodderstedt wrote:
> the UI is rendered in the frontend, and UI control flow is in the frontend - 
> just a different cut through the web app’s layering. 
> 
> All Angular apps I have seen so far work that way. And it makes a lot of 
> sense to me. The backend can aggregate and optimize access to the underlying 
> services without the need to fully expose them.
> 
> On 02.12.2018 at 00:44, John Bradley wrote:
> 
>> How is that different from a regular server client with a web interface if 
>> the backend is doing the API calls to the RS?
>> 
>> 
>> 
>> On 12/1/2018 12:25 PM, Torsten Lodderstedt wrote:
>>> I forgot to mention another (architectural) option: split an application 
>>> into frontend provided by JS in the browser and a backend, which takes care 
>>> of the business logic and handles tokens and API access. Replay detection 
>>> at the interface between SPA and backend can utilize standard web 
>>> techniques (see OWASP). The backend in turn can use mTLS for sender 
>>> constraining.
>>> 
>>> On 01.12.2018 at 15:34, Torsten Lodderstedt wrote:
>>> 
 IMHO the best mechanism at hand currently to cope with token 
 leakage/replay in SPAs is to use refresh tokens (rotating w/ replay 
 detection) and issue short living and privilege restricted access tokens.
 
 Sender constrained access tokens in SPAs need adoption of token binding or 
 alternative mechanism. mtls could potentially work in deployments with 
 automated cert rollout but browser UX and interplay with fetch needs some 
 work. We potentially must consider to warm up application level PoP 
 mechanisms in conjunction with web crypto. Another path to be evaluated 
 could be web auth.
 
 On 01.12.2018 at 10:15, Hannes Tschofenig wrote:
 
> I share the concern Brian has, which is also the conclusion I came up 
> with in my other email sent a few minutes ago.
> 
>  
> 
> From: OAuth <oauth-boun...@ietf.org> On Behalf Of Brian Campbell
> Sent: Friday, November 30, 2018 11:43 PM
> To: Torsten Lodderstedt <tors...@lodderstedt.net>
> Cc: oauth <oauth@ietf.org>
> Subject: Re: [OAUTH-WG] draft-parecki-oauth-browser-based-apps-00
> 
>  
> 
>  
> 
> On Sat, Nov 17, 2018 at 4:07 AM Torsten Lodderstedt <tors...@lodderstedt.net> wrote:
> 
> > On 15.11.2018 at 23:01, Brock Allen wrote:
> > 
> > So you mean at the resource server ensuring the token was really issued 
> > to the client? Isn't that an inherent limitation of all bearer tokens 
> > (modulo HTTP token binding, which is still some time off)?
> 
> Sure. That’s why the Security BCP recommends use of TLS-based methods for 
> sender constraining access tokens 
> (https://tools.ietf.org/html/draft-ietf-oauth-security-topics-09#section-2...2).
> Token Binding for OAuth 
> (https://tools.ietf.org/html/draft-ietf-oauth-token-binding-08) as well 
> as Mutual TLS for OAuth 
> (https://tools.ietf.org/html/draft-ietf-oauth-mtls-12
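The rotating-refresh-token replay detection Torsten mentions above can be sketched as an in-memory model; this is purely illustrative, not any server's actual implementation:

```python
import secrets

class RefreshTokenFamily:
    """One 'family' per grant: each refresh rotates the token, and
    replay of any superseded token revokes the whole family."""

    def __init__(self):
        self.current = secrets.token_urlsafe(32)
        self.seen = {self.current}
        self.revoked = False

    def refresh(self, presented: str) -> str:
        if self.revoked or presented not in self.seen:
            raise PermissionError("token not usable")
        if presented != self.current:
            # A superseded token was replayed: assume theft, kill the family.
            self.revoked = True
            raise PermissionError("replay detected; family revoked")
        self.current = secrets.token_urlsafe(32)
        self.seen.add(self.current)
        return self.current
```

On replay of an older token, both the attacker and the legitimate client lose access, forcing re-authentication and limiting the window of a leaked token.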

Re: [OAUTH-WG] draft-parecki-oauth-browser-based-apps-00

2018-12-03 Thread David Waite


> On Dec 3, 2018, at 1:25 AM, Torsten Lodderstedt  
> wrote:
> 
> I think the BCP needs to point out there are solutions beyond an app directly 
> interacting with AS and RSs from the browser. Otherwise people get the wrong 
> impression this is the only way to go. To the contrary and as I pointed out, 
> there are a lot of SPAs working as UI of a larger application. 

My feeling is different - these applications all _could_ be 
expressed/implemented in terms of OAuth 2/OpenID Connect. Instead of 
authorization being done via opaque access tokens, the non-OAuth application 
has authorization tracked via opaque cookies.

I think we can state this, and that many of the rules given could be used
> 
> Any multi user app needs a database. Will this database be directly exposed 
> to the frontend? I don’t think so. There will be a backend, exposing relevant 
> capabilities to the SPA.

Sure, but this doesn’t change the interface being exposed around the database 
as being a protected resource - just one protected by a token acquired in a 
different, non-OAuth manner.

> And if this app also uses external services, where do you want to store the 
> respective refresh tokens? In the browser’s local storage? I don’t believe 
> so. They will go into the same backend & database.

> And there are other reasons: No one will ever be able to implement a PSD2 TPP 
> as a stand-alone SPA, obviously because it requires a confidential client but 
> there are more aspects. 

You could have your AS also be responsible for fetching/maintaining remote 
tokens, and issue local environment tokens. It could expose either local 
protected resources which use these remote resources, or provide a reverse 
proxy that translates the calls directly, including applying the remote access 
token. This also looks very similar whether you are talking about the 
javascript being OAuth or using a proprietary cookie-based system.

> Moreover, some security objectives can only be achieved if a backend is used. 
> That’s how the discussion started (token binding and the like).

Cookies have browser-level support, so they can have browser-level protections 
asked for (SameSite, HttpOnly, Secure, separate path/domain limiting). IMHO, 
the other differences are apples-to-oranges comparing different protected 
resources, not access-vs-other-tokens.

Is there value in defining “official” recommendations around access tokens 
within cookies?

> IMHO omitting this option significantly reduces the relevance of the BCP.
> I’m not saying we shall describe the interaction between frontend and backend 
> in detail. I advocate for pointing out this option and its benefits. That’s 
> it.

Again, I think a significant portion of recommendations would have value for 
non-oauth-client javascript. But I think we should focus on defining solely in 
terms of OAuth clients. I agree we should point out the option in the sense 
that it will help people understand that it doesn’t significantly affect the 
security requirements. The rest seem points around protected resources and 
cases for a local AS to house business logic.

A lot of the above might be recommendations around protected resources and 
multi-level authorization (for example: having clients interact with a local 
environment which may behind-the-scenes be using OAuth itself with remote 
services). I’m unsure how you would rein in scope on something like this, 
though.

-DW
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] OAuth Security Topics -- Recommend authorization code instead of implicit

2018-12-05 Thread David Waite


> On Dec 5, 2018, at 5:16 AM, Torsten Lodderstedt  
> wrote:
> 
> Hi Tomek, 
> 
>> On 04.12.2018 at 19:03, Tomek Stojecki wrote:
>> 
>> Thanks Torsten!
>> So if I am putting myself in the shoes of somebody who sets out to do that - 
>> switch an existing SPA client (no backend)
> 
> I would like to ask you a question: how many SPAs w/o a backend have you seen 
> in your projects?

Pivoting to apps with local domain business logic (aka a backend):

Setup - the developer is building a browser-targeted app and at least one 
mobile app - their backend would likely be identical across all three. 

In that case, would they want client access to that backend to be secured with 
access tokens? Or should that backend to be the client to the AS, and 
communication from the javascript to the backend be secured with some non-OAuth 
method like cookies or API keys? 

I push for OAuth in most of these cases, unless their strategy for mobile apps 
is to “wrap” the browser code and content into a native app - in which case 
more flexible access to that backend can be deferred if desired until there is 
stronger business need.

-DW



Re: [OAUTH-WG] OAuth Security Topics -- Recommend authorization code instead of implicit

2018-12-06 Thread David Waite
One benefit of moving to code flow is that the refresh token can be used to 
check the validity of the user session (or rather, it allows the AS another 
avenue to force authentication events if the AS considers the user session to 
be expired or has a drop in confidence).

There are indeed several ASs which, possibly because of an interpretation of 
OIDC, assume refresh tokens mean offline access and are mutually exclusive with 
public clients.

The ability to have refresh tokens track a user session is an AS implementation 
detail, and something that these ASs could choose to change to over time. In 
the meantime, there shouldn’t be anything preventing a client from doing the 
iframe and prompt=none step that they are doing today with implicit. Even if 
the AS is implemented in terms of stateless sessions, such functionality can be 
implemented by encoding user session information into the “code token”.
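The iframe-plus-prompt=none step mentioned above amounts to loading an authorization request like the following in a hidden iframe; the AS answers immediately with a code or an error such as login_required instead of showing UI. This is only a sketch, and the endpoint, client_id, and redirect_uri values are illustrative, not from any real deployment:

```python
from urllib.parse import urlencode

def silent_auth_url(authorization_endpoint: str, client_id: str,
                    redirect_uri: str, state: str) -> str:
    """Build an authorization request a client can load in a hidden iframe
    to probe the AS session: prompt=none tells the AS to respond without
    any user interaction (either a code, or an error like login_required).
    All parameter values passed in are assumed/illustrative."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,
        "prompt": "none",
    }
    return authorization_endpoint + "?" + urlencode(params)
```

The client then inspects the redirect back to redirect_uri to learn whether the session is still alive.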

-DW

> On Dec 6, 2018, at 11:51 AM, Phil Hunt  wrote:
> 
> While I generally agree with justin that moving everything to the back 
> channel is good, I worry that checking user login state may be more 
> important. 
> 
> What if upon refresh of a javascript client the AS would like to check the 
> validity of the current user session?
> 
> Many implementers solve the user experience issue by using prompt none in the 
> oidc authentication case. I seem to remember some oauth providers never 
> implemented refresh and they were able to create a good experience. 
> 
> Phil


> On Dec 6, 2018, at 7:47 AM, Justin Richer  wrote:
> 
> I support the move away from the implicit flow toward using the authorization 
> code flow (with PKCE and CORS)  for JavaScript applications. The limitations 
> and assumptions that surrounded the design of the implicit flow back when we 
> started no longer apply today. It was an optimization for a different time. 
> Technology and platforms have moved forward, and our advice should move them 
> forward as well. Furthermore, the ease of using the implicit flow 
> incorrectly, and the damage that doing so can cause, has driven me to tell 
> people to stop using it. 
> 
> There are a number of hacks that can patch the implicit flow to be slightly 
> better in various ways — if you tack on the “hybrid” flow from OIDC or JARM 
> plus post messages and a bunch of other stuff, then it can be better. But if 
> you’re doing all of that, I think you really need to ask yourself: why? What 
> do you gain from jumping through all of those hoops when a viable alternative 
> sits there? Is it pride? I don’t see why we cling to it. To me, it’s like 
> saying “hey sure my leg gets cut off when I do this, but I can stitch it back 
> on!”, when you could simply avoid cutting your leg off in the first place. 
> The best cure is prevention, and what’s being argued here is prevention.
> 
> So many of OAuth’s problems in the wild come from over-use of the front 
> channel, and any place we can move away from that is a good move. 
> 
> — Justin


Re: [OAUTH-WG] OAuth Security Topics -- Recommend authorization code instead of implicit

2018-12-06 Thread David Waite
For systems with stateful sessions, you could reference that via the refresh 
token. If the access tokens are opaque to protected resources and meant to be 
used via introspection, you could also reference the state there as well.

For systems with stateless sessions (e.g. JWT cookies), you have fewer 
options. A non-exhaustive list:
1. The refresh token can be modeled after a static user session policy, e.g. 
refresh will fail in four hours
2. Refresh tokens may have a sliding window within that policy, e.g. this 
refresh token is good for 30 minutes, but on refresh one will be issued good 
for another 30 minutes or the end of the four hour window, whichever is sooner
3. You can have a stateful system just for token revocations. This could 
reference a single session, or all sessions for the user (possibly under a 
specific client) generated before a particular time. The refresh (and possibly 
access) tokens would also have the same information in them for lookup. Logout 
could add an entry to this revocation list.
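Option 2 above (a sliding refresh-token window capped by an overall session window) can be sketched as follows; the four-hour and 30-minute figures come from the example, everything else is illustrative:

```python
SESSION_WINDOW = 4 * 60 * 60   # overall session policy: four hours
SLIDING_WINDOW = 30 * 60       # each refresh token good for 30 minutes

def next_refresh_token_expiry(session_start: float, now: float) -> float:
    """Expiry for a newly issued refresh token: 30 minutes from now,
    capped at the end of the four-hour session window."""
    session_end = session_start + SESSION_WINDOW
    return min(now + SLIDING_WINDOW, session_end)

def refresh_allowed(session_start: float, token_expiry: float, now: float) -> bool:
    # Refresh fails once either the token's own sliding window
    # or the overall session window has passed.
    return now < token_expiry and now < session_start + SESSION_WINDOW
```

Near the end of the four-hour window the newly issued token is good for less than 30 minutes, which is exactly the "whichever is sooner" behavior described.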

An aside:

This is kinda/sorta a similar line of thinking that led to DTVA 
(https://bitbucket.org/openid/connect/raw/f76ffe99c47d4698bc2995c32dc7a402dd6e8c47/distributed-token-validity-api.txt).
The “Distributed” part was about pushing the ability to validate access and 
id_tokens close to the protected resources/RPs. The goal was also to build an 
API that supported this sort of token validation by otherwise stateless apps.

It wasn’t expected that refresh tokens were based on this system - we 
envisioned most AS/IDP instances to be built for authentication, and therefore 
already have requirements and business processes that would require more 
complex / stateful sessions.

-DW

> On Dec 6, 2018, at 1:53 PM, Phil Hunt  wrote:
> 
> How would the token endpoint detect login status of the user?
> 
> Phil
> 
> Oracle Corporation, Cloud Security and Identity Architect
> @independentid
> www.independentid.com  phil.h...@oracle.com
> 
>> On Dec 6, 2018, at 12:24 PM, David Waite <da...@alkaline-solutions.com> wrote:
>> 
>> One benefit of moving to code flow is that the refresh token can be used to 
>> check the validity of the user session (or rather, it allows the AS another 
>> avenue to force authentication events if the AS considers the user session 
>> to be expired or has a drop in confidence).
>> 
>> There are indeed several ASs which, possibly because of an interpretation of 
>> OIDC, assume refresh tokens mean offline access and are mutually exclusive 
>> with public clients.
>> 
>> The ability to have refresh tokens track a user session is an AS 
>> implementation detail, and something that these ASs could choose to change 
>> to over time. In the meantime, there shouldn’t be anything preventing a 
>> client from doing the iframe and prompt=none step that they are doing today 
>> with implicit. Even if the AS is implemented in terms of stateless sessions, 
>> such functionality can be implemented by encoding user session information 
>> into the “code token”.
>> 
>> -DW
>> 
>>> On Dec 6, 2018, at 11:51 AM, Phil Hunt <phil.h...@oracle.com> wrote:
>>> 
>>> While I generally agree with justin that moving everything to the back 
>>> channel is good, I worry that checking user login state may be more 
>>> important. 
>>> 
>>> What if upon refresh of a javascript client the AS would like to check the 
>>> validity of the current user session?
>>> 
>>> Many implementers solve the user experience issue by using prompt none in 
>>> the oidc authentication case. I seem to remember some oauth providers never 
>>> implemented refresh and they were able to create a good experience. 
>>> 
>>> Phil
>> 
>> 
>>> On Dec 6, 2018, at 7:47 AM, Justin Richer <jric...@mit.edu> wrote:
>>> 
>>> I support the move away from the implicit flow toward using the 
>>> authorization code flow (with PKCE and CORS)  for JavaScript applications. 
>>> The limitations and assumptions that surrounded the design of the implicit 
>>> flow back when we started no longer apply today. It was an optimization for 
>>> a different time. Technology and platforms have moved forward, and our 
>>> advice should move them forward as well. Furthermore, the ease of using the 
>>> implicit flow incorrectly, and the damage that doing so can cause, has 
>>> driven me to tell

Re: [OAUTH-WG] OAuth Security Topics -- Recommend authorization code instead of implicit

2018-12-07 Thread David Waite

> On Dec 7, 2018, at 5:50 AM, Jim Manico  wrote:


> I still encourage developers who are not XSS gurus to stick to cookie-based 
> sessions or stateless artifacts to talk to the back end and keep OAuth tokens 
> only flying intra-server. It’s an unpopular opinion, but even moderately good 
> XSS defense is equally unpopular

Is this a matter of saying they should have an API for these clients which 
exposes less of the risky activities? That cookies provide a defense against 
XSS exfiltration? And/or other?

-DW




Re: [OAUTH-WG] draft-parecki-oauth-browser-based-apps and response_type/fragment

2018-12-09 Thread David Waite
I assume the original post meant response_mode, not response_type.

Fragments have their own data leakage problems; in particular, they are 
preserved on 3xx redirects (per 
https://tools.ietf.org/html/rfc7231#section-7.1.2). The form_post mode is the 
safest, but unfortunately it was not defined in the original specifications, so 
it doesn’t have as widespread support.

In the absence of a response_mode RFC, I would typically suggest both killing 
the code in the referrer as part of processing and setting a server-wide 
Referrer Policy of never or origin (as those have reasonably broad support), 
since server-wide response headers are easier to operationally audit.
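As a sketch of the server-wide approach, assuming a Python WSGI deployment (all names here are hypothetical), a small middleware can stamp a Referrer-Policy header on every response; "no-referrer" is the modern spelling of the older "never" value:

```python
def add_referrer_policy(app, policy: str = "no-referrer"):
    """Wrap a WSGI app so every response carries a Referrer-Policy header.
    "no-referrer" is the current name for what older drafts called "never";
    "origin" would be the other broadly supported choice mentioned above."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            # Drop any pre-existing Referrer-Policy, then set ours.
            headers = [(k, v) for k, v in headers
                       if k.lower() != "referrer-policy"]
            headers.append(("Referrer-Policy", policy))
            return start_response(status, headers, exc_info)
        return app(environ, sr)
    return wrapped
```

Applying it at the outermost layer makes the policy easy to audit in one place, which is the operational point being made above.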

-DW

> On Dec 8, 2018, at 3:53 PM, Brock Allen  wrote:
> 
> Not pure OAuth. This only came up as a question while I was implementing code 
> flow/pkce for oidc-client-js.
> 
> I can appreciate not expanding the current OAuth2 behavior in the BCP, so 
> that's fair. I only wanted to mention it in case it had not been considered.
> 
> Having said that, I think I will implement an optional response_type in my 
> code flow/pkce to allow fragment, but default to query (as that's the default 
> for pure code flow).
> 
> -Brock



Re: [OAUTH-WG] OAuth Security Topics -- Recommend authorization code instead of implicit

2018-12-09 Thread David Waite


> On Dec 8, 2018, at 8:27 PM, Vittorio Bertocci 
>  wrote:
> 
> > Can you give a concrete example? To me it feels like you are explaining 
> > scenarios where OAuth is used for login.  
> 
> That's one of the scenarios of interest here. We can debate on whether that's 
> proper or not, but the practical consequence is that if I have two (or N) 
> apps that can call APIs via tokens obtained with the implicit flow, 
> eliminating the AS session cookie will prevent them from getting new tokens 
> automatically, without the developer having to write any code for "signout".
> The moment in which apps switch to code and hold on to RTs, the sheer fact 
> that the AS session cookie is gone will NOT stop individual apps from being 
> able to get new access tokens and call API.
> That would be an unintended consequence of the switch to code, and regardless 
> of whether it's a consequence of people abusing the protocol or not, I think 
> this scenario should be documented and people should be warned against it.

The AS is ultimately responsible for the security policy, though - if the AS 
policy isn’t supposed to allow my application access after the user hits log 
out, it should either:
1. Tie my application refresh tokens to be revoked at the logout event
2. Not give out refresh tokens to my application

Note that the session cookie is fulfilling the role of the refresh token in the 
second case. Also note that telling a browser to discard the cookie is not as 
good as supporting revoking it - if there is no revocation mechanism, a third 
party who gets the cookie/refresh token can use it for as long as policy allows.
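Option 1 above, tying an application's refresh tokens to the session so that logout revokes them, might look like the following toy registry. This is purely illustrative of the policy, not any AS's actual API:

```python
class SessionBoundTokens:
    """Toy AS-side registry tying refresh tokens to a user session,
    so that ending the session revokes every token issued under it."""
    def __init__(self):
        self._by_session = {}   # session_id -> set of refresh tokens
        self._revoked = set()

    def issue(self, session_id: str, refresh_token: str) -> None:
        self._by_session.setdefault(session_id, set()).add(refresh_token)

    def logout(self, session_id: str) -> None:
        # Revoke every refresh token issued under this session.
        self._revoked |= self._by_session.pop(session_id, set())

    def is_valid(self, refresh_token: str) -> bool:
        return refresh_token not in self._revoked
```

The key property is that revocation is enforced at the AS, rather than relying on clients to discard tokens on their own.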

I don’t expect application developers to use libraries that locally enforce 
more restrictive policy just because the operators of the AS aren’t doing their 
job setting appropriate policy for their clients. So this is really more of 
something that the AS needs to understand about their own policy.

-DW


Re: [OAUTH-WG] expires_in

2018-12-18 Thread David Waite
My understanding was that this parameter was advisory to the client - it 
neither mandates that the client discard the token after the expires_in time, 
nor requires that the token no longer be honored by protected resources at 
that point in time (vs earlier or later).

Is there meaning that others assign to this value? The only use I’ve found is 
to schedule proactive refreshes to hopefully reduce latency by reducing the 
need to refresh in-line with user requests.
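That advisory reading can be sketched as follows: the client schedules a proactive refresh at some fraction of the advertised lifetime rather than treating expires_in as a hard deadline. The 0.8 fraction is an arbitrary assumption, not anything specified:

```python
def schedule_refresh_at(issued_at: float, expires_in: int,
                        fraction: float = 0.8) -> float:
    """Treat expires_in as advisory: plan a proactive refresh at some
    fraction of the advertised lifetime, so user requests rarely have
    to wait on an in-line refresh."""
    return issued_at + expires_in * fraction
```

A token issued at t=1000 with expires_in=3600 would be refreshed around t=3880, well before the advertised expiry.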

-DW

> On Dec 18, 2018, at 3:55 AM, Hannes Tschofenig  
> wrote:
> 
> Hi all,
> 
> In a recent email conversation on the IETF ACE mailing list Ludwig Seitz 
> suggested that the expires_in claim in an access token should actually be 
> mandatory.
> Intuitively it feels like access tokens shouldn't have an unrestricted 
> lifetime. I am curious whether recommendations would be useful here.
> 
> RFC 6819 talks about the expires_in claim and says:
> 
> 3.1.2.  Limited Access Token Lifetime
> 
>   The protocol parameter "expires_in" allows an authorization server
>   (based on its policies or on behalf of the end user) to limit the
>   lifetime of an access token and to pass this information to the
>   client.  This mechanism can be used to issue short-lived tokens to
>   OAuth clients that the authorization server deems less secure, or
>   where sending tokens over non-secure channels.
> 
> draft-ietf-oauth-security-topics-10 only talks about refresh token expiry.
> 
> In OpenID Connect the expires_in claim is also optional.
> 
> Ciao
> Hannes
> 
> IMPORTANT NOTICE: The contents of this email and any attachments are 
> confidential and may also be privileged. If you are not the intended 
> recipient, please notify the sender immediately and do not disclose the 
> contents to any other person, use it for any purpose, or store or copy the 
> information in any medium. Thank you.
> 



Re: [OAUTH-WG] MTLS and in-browser clients using the token endpoint

2019-01-08 Thread David Waite


> On Dec 28, 2018, at 3:55 PM, Brian Campbell 
>  wrote:
> 
> I spent some time this holiday season futzing around with a few different 
> browsers to see what kind of UI, if any, they present to the user when seeing 
> different variations of the server requesting a client certificate during the 
> handshake. 
> 
> In a non-exhaustive and unscientific look at the browsers I had easily at my 
> disposal (FF, Chrome, and Safari on Mac OS), it seems they all behave 
> basically the same. If the browser is configured with, or has access to, one 
> or more client certificates that match the criteria of the CertificateRequest 
> message from the server (basically if issued by one of the CAs in the 
> certificate_authorities of the CertificateRequest), a certificate selection 
> UI prompt will be presented to the user. Otherwise, a certificate selection 
> UI prompt is not presented all. When the CertificateRequest message has an 
> empty certificate_authorities list (likely the case with a optional_no_ca 
> type config), the browsers look for client certificates with any issuer 
> rather than narrowing it down. 

Was your testing via XHR/fetch?

FWIW,

Firefox behavior is determined by a global pick automatically / prompt every 
time flag. Details at https://wiki.mozilla.org/PSM:CertPrompt 


Safari on macOS relies on the keychain, where a record is created called an 
Identity Preference. This is a URL (https or email) to preferred certificate 
mapping. Previously, it would create this record the first time a user selected 
a certificate, then never prompt again.

Chrome seems to delegate to the underlying OS for certificate management, so on 
the Mac it has this behavior as well. This means however that other platforms 
may have different behaviors.

Safari on iOS used to automatically select a single certificate match, if the 
query was for a single client CA. I didn’t try with other small numbers (2, 3, 
etc) but when exposing the list of all available CAs as valid client CAs, it 
would prompt. This may not be the heuristic anymore, as knowing the name of a 
client CA (such as one issued as part of a cloud EMM deployment) would allow 
certificates to be used for tracking.

IE (pre-edge) would allow the behavior to use an automatic cert or prompt to be 
configured per-zone, which would allow policy to send a device/user 
identification certificate to a particular set of sites by default. I have no 
experience with configuring Edge, unfortunately.

-DW


Re: [OAUTH-WG] MTLS and in-browser clients using the token endpoint

2019-01-08 Thread David Waite


> On Dec 28, 2018, at 3:55 PM, Brian Campbell 
>  wrote:
> 


> All of that is meant as an explanation of sorts to say that I think that 
> things are actually okay enough as is and that I'd like to retract the 
> proposal I'd previously made about the MTLS draft introducing a new AS 
> metadata parameter. It is admittedly interesting (ironic?) that Neil sent a 
> message in support of the proposal as I was writing this. It did give me 
> pause but ultimately didn't change my opinion that it's not worth it to add 
> this new AS metadata parameter.


Note that the AS could make a decision based on the token endpoint request - 
such as a policy associated with the “client_id”, or via a parameter in the ilk 
of “client_assertion_type” indicating MTLS was desired by this public client 
installation. The AS could then do TLS 1.2 renegotiation, TLS 1.3 post-handshake 
client authentication, or even use 307 temporary redirects to another token 
endpoint to perform mutual authentication.

Both the separate metadata url and a “client_assertion_type”-like indicator 
imply that the client has multiple forms of authentication and is choosing to 
use MTLS. The URL in particular I’m reluctant to add support for, because I see 
it more likely a client would use MTLS without knowing it (via a device-level 
policy being applied to a public web or native app) than the reverse, where a 
single client (represented by a single client_id) is dynamically picking 
between forms of authentication.

-DW


Re: [OAUTH-WG] MTLS and in-browser clients using the token endpoint

2019-01-11 Thread David Waite


> On Jan 11, 2019, at 3:32 AM, Neil Madden  wrote:
> 
> On 9 Jan 2019, at 05:54, David Waite  wrote:
>> 
>>> On Dec 28, 2018, at 3:55 PM, Brian Campbell 
>>>  wrote:
>>> 
>> 
>> 
>>> All of that is meant as an explanation of sorts to say that I think that 
>>> things are actually okay enough as is and that I'd like to retract the 
>>> proposal I'd previously made about the MTLS draft introducing a new AS 
>>> metadata parameter. It is admittedly interesting (ironic?) that Neil sent a 
>>> message in support of the proposal as I was writing this. It did give me 
>>> pause but ultimately didn't change my opinion that it's not worth it to add 
>>> this new AS metadata parameter.
>> 
>> Note that the AS could make a decision based on the token endpoint request - 
>> such as a policy associated with the “client_id”, or via a parameter in the 
>> ilk of “client_assertion_type” indicating MTLS was desired by this public 
>> client installation. The AS could then do TLS 1.2 renegotiation, TLS 1.3 
>> post-handshake client authentication, or even use 307 temporary redirects to 
>> another token endpoint to perform mutual authentication.
> 
> Renegotiation is an intriguing option, but it has some practical 
> difficulties. Our AS product runs in a Java servlet container, where it is 
> pretty much impossible to dynamically trigger renegotiation without accessing 
> private internal APIs of the container. I also don’t know how you could 
> coordinate this in the common scenario where TLS is terminated at a load 
> balancer/reverse proxy?
> 
> A 307 redirect could work though as the server will know if the client either 
> uses mTLS for client authentication or has indicated that it wants 
> certificate-bound access tokens, so it can redirect to a mTLS-specific 
> endpoint in those cases.

Agreed. There are trade-offs for both. As you say, I don’t know of a way to 
have, say, a custom error code or WWW-Authenticate challenge trigger 
renegotiation on the reverse proxy - usually this is just a static, 
location-based directive.

> 
>> Both the separate metadata url and a “client_assertion_type”-like indicator 
>> imply that the client has multiple forms of authentication and is choosing 
>> to use MTLS. The URL in particular I’m reluctant to add support for, because 
>> I see it more likely a client would use MTLS without knowing it (via a 
>> device-level policy being applied to a public web or native app) than the 
>> reverse, where a single client (represented by a single client_id) is 
>> dynamically picking between forms of authentication.
> 
> That’s an interesting observation. Can you elaborate on the sorts of device 
> policy you are talking about? I am aware of e.g. mobile device management 
> being used to push client certificates to iOS devices, but I think these are 
> only available in Safari.

The primary use is to set policy to rely on device level management in 
controlled environments like enterprises when available. So an AS may try to 
detect a client certificate as an indicator of a managed device, use that to 
assume a device with certain device-level authentication, single user usage, 
remote wipe, etc. characteristics, and decide that it can reduce user 
authentication requirements and/or expose additional scopes.

On more thought, this is typically done as part of the user agent hitting the 
authorization endpoint, as a separate native application may be interacting 
with the token endpoint, and in some operating systems the application’s 
network connections do not utilize (and may not have access to) the system 
certificate store.

In terms of user agents, I believe you can achieve similar behavior (managed 
systems using client certificates on user agents transparently) on macOS, 
Windows, Chrome, and Android devices; Chrome (outside iOS) typically inherits 
device-level policy. On desktop, I assume Firefox can do this in limited 
fashion as well.

-DW


Re: [OAUTH-WG] [UNVERIFIED SENDER] Re: MTLS and in-browser clients using the token endpoint

2019-02-04 Thread David Waite
My understanding is that a permanent redirect would be telling the client (and 
any other clients getting cached results from an intermediary) to now stop 
using the original endpoint in perpetuity for all cases. I don’t think that is 
appropriate (in the general case) for an endpoint with request processing 
business logic behind it, since that logic may change over time.

-DW

> On Feb 4, 2019, at 6:28 AM, Brian Campbell 
>  wrote:
> 
> Yeah, probably. 
> 
> On Sat, Feb 2, 2019 at 12:39 AM Neil Madden wrote:
> If we go down the 307 route, shouldn’t it rather be a 308 (permanent) 
> redirect? It seems unnecessary for the client to keep trying the original 
> endpoint or have to remember cache-control/expires timeouts. 
> 
> — Neil



Re: [OAUTH-WG] Correct error code for rate limiting?

2019-02-21 Thread David Waite
I don’t believe that any of the currently registered error codes are 
appropriate for indicating that the password request is invalid, let alone a 
more specific behavior like rate limiting.

It is also my opinion that 400 Bad Request shouldn’t be used for known 
transient errors, but rather for malformed requests - the request could very 
well be correct (and have the correct password), but it is being rejected due 
to temporal limits placed on the client or network address/domain.

So I would propose different statuses, such as 401 to indicate the 
username/password were invalid, and either 429 (Too Many Requests) or 403 
(Forbidden) when rate limited or denied due to too many attempts. That’s not to 
say that the body of the response can’t be an OAuth-format JSON error, possibly 
with a standardized code - but again I don’t think the currently registered 
codes would be appropriate for conveying that.
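As a sketch, such a rate-limited token-endpoint response could look like the following. The "slow_down" error code is borrowed from the device grant (RFC 8628) and is not registered for this use, so treat it as a placeholder:

```python
import json

def rate_limited_response(retry_after_seconds: int):
    """Build a 429 token-endpoint response whose body is an
    OAuth-format JSON error. The "slow_down" code is a placeholder
    borrowed from RFC 8628, not a code registered for this purpose."""
    body = json.dumps({
        "error": "slow_down",
        "error_description": "Too many token requests; try again later.",
    })
    headers = {
        "Content-Type": "application/json",
        "Cache-Control": "no-store",
        "Retry-After": str(retry_after_seconds),
    }
    return 429, headers, body
```

The Retry-After header gives well-behaved clients a hint about when to retry, independent of the JSON body.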

That said, I don’t know what interest there would be in standardizing such 
codes, considering the existing recommendations against using this grant type.

-DW

> On Feb 21, 2019, at 10:57 PM, Aaron Parecki  wrote:
> 
> The OAuth password grant section mentions taking appropriate measures to rate 
> limit password requests at the token endpoint. However the error responses 
> section (
> https://tools.ietf.org/html/rfc6749#section-5.2) doesn't mention an error 
> code to use if the request is being rate limited. What's the recommended 
> practice here? Thanks!
> 
> Aaron
> 
> -- 
> 
> Aaron Parecki
> aaronparecki.com 
> @aaronpk 
> 



Re: [OAUTH-WG] popular apps that use appauth?

2019-02-24 Thread David Waite
Offhand, Google Apps on iOS. Also the Facebook SDK uses a similar pattern. 

I believe third party apps which use Google for SSO are mandated to use it as 
well; Slack and Pokémon Go are examples. 

A few apps will also use it or a similar pattern (for SAML) once they have 
determined it is an enterprise account. Some businesses are pushing hard for 
the others to change - a lot of EMM solutions and other authentication methods 
(like mutual TLS) don’t work properly with embedded browser views. 

-DW

> On Feb 24, 2019, at 1:26 AM, Dominick Baier  wrote:
> 
> The Uber app uses it for their OAuth flow to PayPal e.g.
> 
> ———
> Dominick
> 
>> On 23. February 2019 at 18:05:33, Brock Allen (brockal...@gmail.com) wrote:
>> 
>> I often have push back from customers (mainly the marketing department/UX 
>> folks) when suggesting AppAuth for native/mobile apps (IOW RFC8252). They 
>> ask for examples of any other popular or well known apps that follow this 
>> practice. Does anyone on this list have examples?
>> 
>> TIA
>> 
>> -Brock
>> 


Re: [OAUTH-WG] popular apps that use appauth?

2019-02-24 Thread David Waite

> On Feb 24, 2019, at 10:43 AM, William Denniss 
>  wrote:
> 
> For 1P sign-in, there are several good reasons to go with 
> ASWebAuthenticationSession, like syncing the signed-in session with Safari 
> and using it if it already exists.

With enterprise 3P, you’ll have to use some web agent for authentication pretty 
much no matter what, and you’ll almost certainly get pressure to use 
ASWebAuthenticationSession, and/or potentially lose deals to competitors during 
product evaluations. It is simply what is required for robust integration into 
a corporate infrastructure.

For 1P on iOS, it depends on the complexity of authentication for first party. 
If you are just doing password and maybe SMS-based challenges, there is decent 
enough native app integration for password sharing and SMS keyboard for that to 
keep conversions high, even with having to authenticate twice.

However, if you want to authenticate the device (even pseudonymously with 
session cookies) or use other factors, the authentication is simpler with 
ASWebAuthenticationSession. This means that if you have more complex 
authentication requirements anywhere on your roadmap, your life will be easier 
if you just start off using ASWebAuthenticationSession.

It is likely that future authentication technologies like WebAuthn will not 
work with an embedded web view. The ability to arbitrarily inject JavaScript 
means that apps could phish WebAuthn responses for domains via embedded web 
views.

-DW



Re: [OAUTH-WG] popular apps that use appauth?

2019-02-25 Thread David Waite


> On Feb 25, 2019, at 4:56 AM, Vittorio Bertocci  wrote:
> 
> The callbacks do avoid the loopback, which is great, but the usability 
> remains harder than mobile and the embedded case: the auth tab appears among 
> others, the modal windows remain a possibility, etc - the level of 
> sophistication of the target audience of the github app can definitely 
> (hopefully?) navigate those challenges, but for consumer grade apps they can 
> be blockers. When decision makers are presented with concrete support costs 
> from customer calls vs possible security issues, it's often hard to make a 
> case for the latter.

True, but these were all a reality when AppAuth first came about as well - the 
fall-back was custom URL schemes through the system browser, which meant an 
application switch, a new tab, a possible modal prompt to get the user back to 
the application, etc.

It is a harder problem on desktop operating systems because it is more 
challenging to decide if “external user-agent” always means “system browser” or 
“user default web browser”, and if the latter that means a testing matrix to 
understand the UX and limitations. Hypothetically, in some enterprises external 
user-agent might even mean “this security product we bought”.

However, we will see more mandatory sandboxing and hard-to-obtain entitlements 
necessary to talk to the resources we want for authentication. If you are only 
doing 1P authentication you have a longer runway than a company who wants to 
leverage third party or enterprise-deployed authentication. And to optimize the 
UX, those applications may have a period where they decide to include both 
AppAuth and non-AppAuth flows.

-DW


Re: [OAUTH-WG] draft-bertocci-oauth-access-token-jwt-00

2019-04-01 Thread David Waite
Do we know if there is a justifying use case for auth_time, acr, and amr to be 
available in OAuth JWT access tokens? These are meant to be messages about the 
client, either directly (in the case of client credentials) or about its 
delegated authorization of the user.

Embedding attributes about the user (such as group membership and roles) can be 
used by the resource to make finer-grained decisions than scopes allow, but 
normally I would expect, say, acr limitations enforced by a resource to instead 
be controlled by the AS requiring a higher-quality authentication to release 
certain scopes.

That's of course not to say extensions to OAuth such as OIDC can't provide these 
values, just that they might be better defined by those extensions.
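As a minimal sketch of what inspecting such claims looks like (stdlib only; all 
claim values below are hypothetical, and the decode skips signature 
verification, so it is for inspection only, never for trust decisions):

```python
import base64
import json
import time

def b64url_decode(seg: str) -> bytes:
    # JWT segments are base64url-encoded without padding; restore it first.
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def b64url_encode(obj: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

def peek_claims(jwt: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature - inspection only."""
    _header, payload, _sig = jwt.split(".")
    return json.loads(b64url_decode(payload))

now = int(time.time())
# Hypothetical access token payload carrying auth_time next to the OAuth claims.
token = ".".join([
    b64url_encode({"alg": "none"}),
    b64url_encode({
        "iss": "https://as.example.com",
        "sub": "user123",
        "aud": "https://rs.example.com",
        "exp": now + 600,
        "auth_time": now - 3600,   # when the end user actually authenticated
        "scope": "read write",
    }),
    "",                            # unsigned, for the sketch only
])

claims = peek_claims(token)
print(claims["auth_time"] < claims["exp"])  # True
```

The point of the question above is whether a resource should ever branch on 
claims like auth_time at all, or whether the AS should gate scope issuance 
instead.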

-DW

> On Apr 1, 2019, at 9:12 AM, George Fletcher wrote:
> 
> Thanks for writing this up. One comment on auth_time...
> 
>auth_time  OPTIONAL - as defined in section 2 of [OpenID.Core].
>   Important: as this claim represents the time at which the end user
>   authenticated, its value will remain the same for all the JWT
>   access tokens issued within that session.  For example: all the
>   JWT access tokens obtained with a given refresh token will all
>   have the same value of auth_time, corresponding to the instant in
>   which the user first authenticated to obtain the refresh token.
> 
> During a current session a user can be challenged for additional credentials 
> or required to re-authenticate due to a number of different reasons. For 
> example, OIDC prompt=login or max_age=NNN. In this context, I'd assume that 
> the auth_time value should be updated to the latest time at which the user 
> authenticated. 
> 
> If we need a timestamp for when the "session" started, then there could be a 
> session_start_time claim.
> 
> Thanks,
> George
> 
> On 3/24/19 7:29 PM, Vittorio Bertocci wrote:
>> Dear all,
>> I just submitted a draft describing a JWT profile for OAuth 2.0 access 
>> tokens. You can find it in 
>> https://datatracker.ietf.org/doc/draft-bertocci-oauth-access-token-jwt/
>> I have a slot to discuss this tomorrow at IETF 104 (I'll be presenting 
>> remotely). I look forward for your comments!
>> 
>> Here's just a bit of backstory, in case you are interested in how this doc 
>> came to be. The trajectory it followed is somewhat unusual.
>> Despite OAuth2 not requiring any specific format for ATs, through the years 
>> I have come across multiple proprietary solution using JWT for their access 
>> token. The intent and scenarios addressed by those solutions are mostly the 
>> same across vendors, but the syntax and interpretations in the 
>> implementations are different enough to prevent developers from reusing code 
>> and skills when moving from product to product.
>> I asked several individuals from key products and services to share with me 
>> concrete examples of their JWT access tokens (THANK YOU Dominick Baier 
>> (IdentityServer), Brian Campbell (PingIdentity), Daniel Dobalian 
>> (Microsoft), Karl Guinness (Okta) for the tokens and explanations!). 
>> I studied and compared all those instances, identifying commonalities and 
>> differences. 
>> I put together a presentation summarizing my findings and suggesting a rough 
>> interoperable profile (slides:
>> https://sec.uni-stuttgart.de/_media/events/osw2019/slides/bertocci_-_a_jwt_profile_for_ats.pptx
>> ) - got early feedback from Filip Skokan on it. Thx Filip!
>> The presentation was followed up by 1.5 hours of unconference discussion, 
>> which was incredibly valuable to get tight-loop feedback and incorporate new 
>> ideas. John Bradley, Brian Campbell Vladimir Dzhuvinov, Torsten Lodderstedt, 
>> Nat Sakimura, Hannes Tschofenig were all there and contributed generously to 
>> the discussion. Thank you!!!
>> Note: if you were at OSW2019, participated in the discussion and didn't get 
>> credited in the draft, my apologies: please send me a note and I'll make 
>> things right at the next update.
>> On my flight back I did my best to incorporate all the ideas and feedback in 
>> a draft, which will be discussed at IETF104 tomorrow. Rifaat, Hannes and 
>> above all Brian were all super helpful in negotiating the mysterious syntax 
>> of the RFC format and submission process.
>> I was blown away by the availability, involvement and willingness to invest 
>> time to get things right that everyone demonstrated in the process. This is 
>> an amazing community. 
>> V.
>> 
>> 

Re: [OAUTH-WG] feedback on draft-ietf-oauth-browser-based-apps-00

2019-04-03 Thread David Waite
Multiple concepts often get tacked onto a particular term, which both aids and 
hinders communication.

From RFC 6749, a public client is defined as:
 "Clients incapable of maintaining the confidentiality of their
  credentials (e.g., clients executing on the device used by the
  resource owner, such as an installed native application or a web
  browser-based application), and incapable of secure client
  authentication via any other means.”

RFC 6749 also defines a user-agent-based application:
   " A user-agent-based application is a public client in which the
  client code is downloaded from a web server and executes within a
  user-agent (e.g., web browser) on the device used by the resource
  owner.  Protocol data and credentials are easily accessible (and
  often visible) to the resource owner.  Since such applications
  reside within the user-agent, they can make seamless use of the
  user-agent capabilities when requesting authorization.”

These have over time been conflated. So when people speak of public clients, 
they may mean a client which has some subset of the following aspects:
- A client which is also a user agent (I personally consider native 
applications to also be a user agent, but that’s neither here nor there)
- A client that cannot keep a secret and thus cannot be issued a secret
- A lack of guarantee that the client represents a particular agent (that it is 
unmodified 1st or 3rd party code)
- A client that cannot keep access tokens and traffic confidential (even if 
that is from a creative resource owner)

The ability to keep a secret is perhaps the least meaningful part of this list. 
The secret serves a purpose to identify the client, and thus “know” access 
tokens are being requested by that client. It isn’t that public clients cannot 
hold a secret that matters - it is that they cannot be reliably identified or 
assumed authentic.

In the confidential client case, the client is expected to represent particular 
business goals and possibly a particular organizational relationship. The 
access token is confidential, the communication with the resources is 
confidential - driven by business logic which is meant to be fixed to represent 
those business goals.

In a public client, the security model can try to defend against malicious 
third parties like web attackers and malicious parties acting as other clients, 
authorization servers, or protected resources - but you can’t defend against a 
compromised platform or a sufficiently motivated end user. 

Since a hybrid sharing of tokens between the two is not defined by any 
specification, we can only assume things like:
- either the backend gives the front-end an access token, another token which 
acts equivalent to the access token, or sets a cookie value which when applied 
to resources acts equivalent to an access token
- this means while the backend can keep a secret, it is meaningless in that the 
access token rights gained from that secret are not confidential
- since the frontend initiates the protocol traffic and has this new 
credential, the protocol data and which actions are performed are not secured

Now if this backend exposes its own reduced resource server and its own tokens, 
then this is different - but then I would argue either that:
- conceptually the backend is now the new AS and resource server for my 
frontend, which is still a public client (but perhaps no longer a public client 
in the eyes of the original AS)
- or that this is no longer OAuth

-DW

> On Apr 2, 2019, at 9:52 AM, George Fletcher  wrote:
> 
> Hi,
> 
> In section 6.2 the following statement is made...
> 
>In this scenario, the backend component may be a confidential client
>which is issued its own client secret.  Despite this, there are still
>some ways in which this application is effectively a public client,
>as the end result is the application's code is still running in the
>browser and visible to the user.
> 
> I'm curious as to how this model is different from many existing resource 
> server deployments acting as confidential clients. While the application code 
> is running in the browser, only the access token is exposed to the browser as 
> is the case for many RS deployments where the RS returns the access token to 
> the browser after the authorization flow completes. My interpretation of 
> "confidential client" does not include whether the client's code is "visible" 
> to externals or not, but rather whether the client can protect the secret.
> 
> In that sense I don't believe this deployment model is "effectively a public 
> client". A hybrid model description is fine, and I don't disagree that some 
> authorization servers may want to treat these clients in a different way.
> 
> Thanks,
> George



Re: [OAUTH-WG] draft-fett-oauth-dpop-00

2019-04-09 Thread David Waite
My understanding:

The proof-of-possession needs to have a limited destination to prevent replay 
against other resources. Similar to resource indicators and to distributed 
OAuth, the client is expected to use a resource URL view of the world rather 
than an access-token-specific audience or scoped view of the world. (And 
method, because that's cheap to do.)
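A rough sketch of that resource-URL view of the proof payload, using the 
http_method/http_uri claim names quoted from the draft elsewhere in this thread 
(the signing step is omitted - a real proof is a JWS over these claims signed 
with the client's private key - and a jti-style unique identifier is assumed):

```python
import time
import uuid
from urllib.parse import urlsplit, urlunsplit

def request_uri_for_proof(url: str) -> str:
    # The draft pins the request URI "without query and fragment parts".
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

def dpop_proof_claims(method: str, url: str) -> dict:
    """Payload of a DPoP proof JWT (unsigned sketch only)."""
    return {
        "jti": str(uuid.uuid4()),          # unique id, enables one-time-use checks
        "http_method": method.upper(),     # upper case ASCII, per the draft
        "http_uri": request_uri_for_proof(url),
        "exp": int(time.time()) + 60,      # short proof lifetime (assumed)
    }

claims = dpop_proof_claims("post", "https://server.example.com/token?x=1#frag")
print(claims["http_uri"])  # https://server.example.com/token
```

Note how the query and fragment are dropped, so the proof is bound to the 
resource URL rather than to an access-token-specific audience.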

HTTP request signing has a high degree of complexity, and has had several 
iterations each with their own strengths and weaknesses (which I know you are 
intimately familiar with!)

There is nothing currently to prevent other specification(s) adding extra 
key/values corresponding to a header set and hash, query hash, body hash, and 
so on. If that holds true in the final specification, then an environment could 
require those keys to be present, and then leverage DPoP for both 
proof-of-possession and non-repudiation. 

-DW

> On Apr 9, 2019, at 8:36 PM, Justin Richer  wrote:
> 
> Then why include the request at all? Simpler to just sign a nonce and send 
> those, then.
> 
> — Justin
> 
>> On Apr 9, 2019, at 7:05 PM, Brian Campbell wrote:
>> 
>> The thought/intent is that it's really about proof-of-possession rather than 
>> protecting the request. So the signature is over a minimal set of 
>> information.
>> 
>> On Mon, Apr 8, 2019 at 5:41 PM Justin Richer wrote:
>> Corollary to this, are there thoughts of header protection under this 
>> method, and the associated issue of header modification?
>> 
>> — Justin
>> 
>>> On Apr 8, 2019, at 7:23 PM, Phil Hunt wrote:
>>> 
>>> Question. One of the issues that Justin Richer’s signing draft tried to 
>>> address was url modification by tls terminators/load balencers/proxies/api 
>>> gateways etc. 
>>> 
>>> How do you see this issue in dpop? Is it a problem? 
>>> 
>>> Phil
>>> 
>>> On Apr 3, 2019, at 9:01 AM, George Fletcher wrote:
>>> 
 Perfect! Thank you! A couple comments on version 01...
 
POST /token HTTP/1.1
Host: server.example.com 
Content-Type: application/x-www-form-urlencoded;charset=UTF-8
DPoP-Binding: eyJhbGciOiJSU0ExXzUi ...
 
grant_type=authorization_code
&code=SplxlOBeZQQYbYS6WxSbIA
&redirect_uri=https%3A%2F%2Fclient%2Eexample%2Ecom%2Fcb
(remainder of JWK omitted for brevity)
 
 I believe the "(remainder of JWK..." should be moved to the DPoP-Binding 
 header...
 
 Also, there is no discussion of the DPoP-Binding header outside of the 
 token request, but I suspect that is the desired way to communicate the 
 DPoP-Proof to the RS.
 
 Possibly an example in the session for presenting the token to the RS 
 would help.
 
 Thanks,
 George
 
 On 4/3/19 11:39 AM, Daniel Fett wrote:
> This is fixed in -01:
> 
> https://tools.ietf.org/html/draft-fett-oauth-dpop-01 
> 
> 
> -Daniel
> 
> Am 03.04.19 um 17:28 schrieb George Fletcher:
>> A quick question regarding...
>> 
>>o  "http_uri": The HTTP URI used for the request, without query and
>>   fragment parts (REQUIRED).
>> 
>> Is 'without' supposed to be 'with' ? The example shows the http_uri 
>> *with* the query parameters :)
>> 
>> On 3/28/19 6:17 AM, Daniel Fett wrote:
>>> Hi all,
>>> 
>>> I published the first version of the DPoP draft at 
>>> https://tools.ietf.org/html/draft-fett-oauth-dpop-00 
>>> 
>>> Abstract
>>> 
>>>This document defines a sender-constraint mechanism for OAuth 2.0
>>>access tokens and refresh tokens utilizing an application-level
>>>proof-of-possession mechanism based on public/private key pairs.
>>> 
>>> Thanks for the feedback I received so far from John, Mike, Torsten, and 
>>> others during today's session or before!
>>> 
>>> If you find any errors I would welcome if you open an issue in the 
>>> GitHub repository at https://github.com/webhamster/draft-dpop 
>>> 

Re: [OAUTH-WG] draft-fett-oauth-dpop-01 implementation feedback

2019-05-04 Thread David Waite


> On May 2, 2019, at 12:32 AM, Paul Querna  wrote:
> Jumping into specific items:
> 
> cnf in DPoP-Proof
>>  o  "cnf": Confirmation claim as per [RFC7800] containing a member
>> "dpop+jwk", representing the public key chosen by the client in
>> JWK format (REQUIRED for DPoP Binding JWTs, OPTIONAL for DPoP
>> Proof JWTs).
> 
> Is there a use case for this being present in the DPoP-Proof JWT?  As
> I've implemented DPoP, I didn't see how it was helpful to be sent as a
> `cnf` claim of the Proof?

Discussions with some of the authors makes me think the next draft will change 
this so that there is no longer a distinction between binding and proof, 
instead having the key sent outside the DPoP-Proof JWT while binding (possibly 
as another header). This also allows for more potential flexibility for where 
the client public key comes from.

Ignoring that, cnf in a proof JWT would allow debugging/errors to indicate a 
key mismatch. I think the current language may be a bit ambiguous as to whether 
a JWK set is allowed as "cnf", as it merely says "JWK format"; if sets are 
allowed, then cnf in the proof would be used for key selection.

> Request Headers vs Parameters
>> 5.  Token Request (Binding Tokens to a Public Key)
> 
> Placing the DPoP Binding JWT in the HTTP Header `DPoP-Binding` is
> different than most other OAuth extensions that I am familiar with.
> It is easy in the Go OAuth2 library to add URL / /body params to the
> `/token` endpoint, but it is impossible to add an HTTP Header.  Is
> there a reason that the binding can't be sent as an OAuth Parameter in
> the token request body?

My understanding is that the goal is to have the proof be applied the same for 
accessing the token endpoint as when accessing the resources. 

RFC 6750 describes sending the access token as a query parameter or as an 
application/x-www-form-urlencoded form parameter, but recommends them only for 
legacy/last-resort use, with resource support for them being a MAY. I believe 
the default position (absent strong evidence) is to not carry these forward by 
defining additional query/form parameter alternatives for DPoP.
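To illustrate the header-versus-parameter point, a token request carrying the 
binding JWT in a header might be assembled as below (a sketch only: the 
endpoint, authorization code, and truncated JWT are placeholders taken from the 
example quoted earlier in the archive, and the request is built but never sent):

```python
from urllib.parse import urlencode
from urllib.request import Request

# Ordinary token request body - easy to extend in most OAuth client libraries.
body = urlencode({
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",
    "redirect_uri": "https://client.example.com/cb",
}).encode()

req = Request("https://server.example.com/token", data=body, method="POST")
req.add_header("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8")
# The binding JWT travels as a header, mirroring how proofs are later
# presented to resource servers (placeholder value, request not sent).
req.add_header("DPoP-Binding", "eyJhbGciOi...")

# urllib normalizes stored header names to capitalized form.
print(req.get_header("Dpop-binding") is not None)  # True
```

The friction Paul describes is exactly this: body parameters are trivial to add 
in stock OAuth client libraries, while custom headers on the token request 
often are not.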

> 
> HTTP Request Signing
>>  o  "http_method": The HTTP method for the request to which the JWT is
>> attached, in upper case ASCII characters, as defined in [RFC7231]
>> (REQUIRED).
>> 
>>  o  "http_uri": The HTTP URI used for the request, without query and
>> fragment parts (REQUIRED).
> 
> HTTP Request signing may be a quagmire that this spec wishes to avoid,
> but it seems very hard to avoid "fixing" it for at-scale adoption.
> With the Okta-hat on, I think this is one area we would like to see
> some iteration on.  I think it would be ideal to not require multiple
> client sign() and server validate() PKI operations per request, so
> this is where combining DPoP-Proof and a Request Signature start
> making sense to me.

> 
> Keeping it simple, there are two approaches for DPoP for adding
> attesting about the HTTP Request:
> 
>  a) Adding parts of the HTTP Request as claims
>  b) Adding a hash of an HTTP Request as a claim
> 
> For option (b), it seems like part of this could live in a separate spec:
> 1) request canonicalization
> 2) request hashing
> 
> draft-cavage-http-signatures-11 does
> cover request canonicalization, but the hashing is part of the
> specific signature scheme.  From an implementors POV, layering
> draft-cavage-http-signatures-11 in addition to DPoP is annoying since
> it would take two sign or verify key operations per request.


I believe the position at the moment is to keep this close to the bare minimum 
necessary for operation as proof of possession - the method, origin, and path 
(which may be used for routing), plus a unique identifier so a resource server 
can enforce one-time use.

The intersection of HTTP components that intermediaries should or must leave 
unmodified per spec, and that are useful for HTTP request signing, is basically 
empty. Even method and origin/path are often modified by gateways - in such 
environments, either the intermediary should be responsible for verifying DPoP, 
or the resource will need knowledge of how to verify the supplied method, 
origin, and path (such as via static knowledge or a header added by the 
intermediary).
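A sketch of the resource-side check implied here, under the assumption that a 
rewriting gateway forwards the externally visible URL (claim names follow the 
draft; all endpoint values are hypothetical):

```python
from urllib.parse import urlsplit

def verify_dpop_binding(claims: dict, method: str, external_url: str) -> bool:
    """Check the proof's http_method/http_uri against the request as the
    client saw it. Behind a rewriting gateway, external_url must be the
    original URL (e.g. reconstructed from a header the gateway adds)."""
    parts = urlsplit(external_url)
    expected = f"{parts.scheme}://{parts.netloc}{parts.path}"  # no query/fragment
    return claims.get("http_method") == method.upper() and \
           claims.get("http_uri") == expected

claims = {"http_method": "GET",
          "http_uri": "https://api.example.com/v1/accounts"}
# The gateway may rewrite the path internally, but forwards the external URL:
ok = verify_dpop_binding(claims, "get", "https://api.example.com/v1/accounts?page=2")
print(ok)  # True
```

One-time-use tracking (by jti) and proof freshness would sit alongside this 
check; they are omitted here to keep the sketch minimal.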

The proof can be extended with additional parameters, which the AS and resource 
servers in an environment can require. In this manner, DPoP can be extended to 
meet HTTP request signing use cases. HTTP request signing will always be 
environment-specific however, and is a big additional ask for a proof of 
possession specification.

Cavage request verification, for example, could be mapped into a DPoP extension 
by exposing requirements as extensions against the Bearer authorization 
request, and embedding the cavage headers and a hash (not digest or signature) 
of the output into the proof token. Or perhaps the requirements are exposed as 
AS metadata to cover all resources.

-DW

Re: [OAUTH-WG] Recommendations for OAuth 2.0 with Browser-Based Apps

2019-05-06 Thread David Waite
On May 6, 2019, at 1:42 PM, Emond Papegaaij  wrote:
> 
> Hi all,
> 
> For a browser-based app, we try to follow the recommendations set in draft-
> ietf-oauth-browser-based-apps-01. This does allow us to create a secure OAuth 
> 2.0 browser-based application, but at the moment it comes at a cost wrt. user 
> experience when the access token expires. Our current solution forces us to 
> redirect the user to the authorization server for a new authorization code. 
> This will destroy most state the browser-based app has, causing the user to 
> lose data. We are looking for a way to get a new access token in a secure way
> without disrupting the user.
> As a refresh token is not issued to the app (as it should be), the application
> is forced to do a front-channel re-authentication for an authorization code. 
> We are thinking of letting this front-channel communication happen in a hidden
> iframe. Naturally, this can only be done if no user interaction is required, 
> hence we want to use the OIDC prompt=none. Is this a viable way of doing this 
> re-authentication? Can it hurt to open up our authorization server for non-
> interactive authorization requests inside an iframe? At the moment we do not 
> allow iframes at all.

Some AS implementations will block authentication in an iframe, but will allow 
you to use the OIDC prompt=none. This is already used quite often today by 
implicit apps. It is possible that AS implementations may allow iframes in the 
future, by detecting the frame is not covered with any buttons, and having the 
authentication be based on phishing-resistant authentication methods like W3C 
Web Authentication.

You could also trigger re-authorization with a user click, thus allowing 
opening the AS in a new window or tab. Once back on the site via callback, the 
temporary/pop-up window can do things like exchange the code for an access 
token, persist it, postMessage the original window, do window.close, etc.
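As a sketch of the silent renewal described above (all endpoint and client 
values are hypothetical), the hidden-iframe request is just an ordinary 
authorization request with prompt=none added - the AS must then either redirect 
back immediately with a new code or return an error such as login_required, 
never show UI:

```python
import secrets
from urllib.parse import urlencode

def silent_auth_url(authz_endpoint: str, client_id: str, redirect_uri: str) -> str:
    """Build a non-interactive OIDC authorization request for iframe renewal."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",
        "prompt": "none",                      # fail rather than interact
        "state": secrets.token_urlsafe(16),    # correlate the iframe callback
    }
    return authz_endpoint + "?" + urlencode(params)

url = silent_auth_url("https://as.example.com/authorize",
                      "spa-client", "https://app.example.com/cb")
print("prompt=none" in url)  # True
```

The pop-up variant uses the same request without prompt=none, relying on the 
user's click to open the window.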

The iframe and pop-up methods together can be used in lieu of persisting state 
across a redirect to the IdP. Many apps, after reaching a sufficient level of 
complexity, just wind up persisting the page state in some combination of local 
and remote storage, however. JavaScript state is very brittle and will be 
broken by things as simple as a page refresh.

Native apps which opened the system browser were at least exposed to this 
problem as well - the application could be unloaded from memory or quit between 
when authentication started and ended.

On the other hand, refresh tokens IMHO are given quite a bit more fear in 
browser apps than warranted. It really depends on the AS - whether it can tie 
refresh tokens to the user’s authentication, or if they are tied to a long-term 
/ persisted / "offline” authorization independent of an active user 
authentication. Currently, the latter is more common in implementations, and 
doesn’t make sense for browser applications. This doesn’t mean refresh tokens 
are automatically discounted for all environments.

Given the choice between an 8 hour access token, or a 10 minute access token 
and a refresh token that will expire at a maximum of 8 hours, the second 
provides quite a few more options to be more secure. (eg. checking backing user 
session and revocation, checking for updates to client blacklist, the rotation 
of the access token, rotating refresh tokens to prevent use by more than one 
client, expiring access on inactivity based on lag in refreshing, and so on).
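A rough sketch of that trade-off as a refresh policy, using the 10-minute/ 
8-hour figures from the example plus an assumed 30-minute inactivity window 
(all numbers illustrative, not a recommendation):

```python
import time

class TokenPolicy:
    """Sliding-window refresh policy: short access tokens, refresh allowed
    only while the user stays active and within an absolute session cap."""
    ACCESS_TTL = 10 * 60        # short-lived access token
    ABSOLUTE_TTL = 8 * 3600     # absolute session lifetime
    IDLE_TTL = 30 * 60          # assumed inactivity window

    def __init__(self, now=None):
        self.session_start = time.time() if now is None else now
        self.last_refresh = self.session_start

    def can_refresh(self, now=None) -> bool:
        now = time.time() if now is None else now
        within_absolute = now - self.session_start < self.ABSOLUTE_TTL
        within_idle = now - self.last_refresh < self.IDLE_TTL
        return within_absolute and within_idle

p = TokenPolicy(now=0.0)
print(p.can_refresh(now=9 * 60))     # True: still within the idle window
print(p.can_refresh(now=2 * 3600))   # False: idle too long, access lapses
p.last_refresh = 2 * 3600 - 60       # (suppose the app had kept refreshing)
print(p.can_refresh(now=9 * 3600))   # False: past the absolute lifetime
```

An AS would additionally check session revocation and rotate the refresh token 
on each use, per the list above.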

If the refresh token is tied to the AS concept of user session, then it mostly 
replaces the ‘hidden iframe’ use above - you’ll only have your refresh token 
expire when the AS is asking for user presence on the front channel, presumably 
for interaction. Although, I suppose in some environments there could be a 
non-interactive reauthentication/factor as well (such as kerberos, MTLS, or 
re-verifying user location via geoip) where a hidden iframe might still provide 
UX benefit.

Browser based apps are significantly more vulnerable to code injection attacks 
than native apps (although don’t believe native apps are immune), so it may 
make sense for an AS to have a stricter default policy for browser-based 
applications than they would have for a native app. It also could make sense to 
allow for more scopes or longer-lived tokens for an audited, first-party 
browser-based application. Restrictions may be opened up even more for 
applications/browsers which also use PoP methods to prevent key exfiltration.

OAuth is nice in that the AS consolidates those responsibilities; the flip side 
is that a client developer is really dependent on the AS to provide a 
combination of features juggling good security and user experience.

> Maybe anybody knows a different way of achieving this? As I cannot believe we 
> are the only ones facing this issue, maybe a recommendation can be put in the 
> spec?

I think so far it has been an omission since this

Re: [OAUTH-WG] MTLS vs. DPOP

2019-05-07 Thread David Waite


> On May 7, 2019, at 8:12 AM, George Fletcher wrote:
> 
> To compromise an MTLS bound token the attacker has to compromise the private 
> key. To compromise a DPOP bound token, depending on what HTTP request 
> elements are signed, and whether the DPOP is managed as one-time-use etc, 
> there are additional attacks. (Ducks head and waits for all the real security 
> experts to prove me wrong:)

Both should wind up supporting either longer-term, issued keys or ephemeral 
keys - and either exportable or not.

Off the top of my head, if your application is compromised I can’t think of a 
difference in the kinds of abuse that could be performed with equivalent 
policies and key protections.

-DW


Re: [OAUTH-WG] MTLS and Native apps Best practices

2019-05-08 Thread David Waite


> On May 7, 2019, at 8:02 AM, John Bradley  wrote:
> 

> I believe that for a native app to use mtls via a chrome custom tab or Safari 
> view controller you need to provision a certificate and private key to the 
> system keystore.  It is not something that can happen dynamically from the 
> app.
> 
> That in practice is generally done by proprietary EMM (Enterprise Mobility 
> Management) systems like mobile Iron etc. 

On iOS you can load a PKCS12 file or use SCEP. You can do so with static 
policies, but nobody does it that way - they use an EMM system. This really 
limits things to enterprise usage, or to value-added features for small 
businesses that use EMM integrated into other products like GSuite.

> I think there are also some issues with the app using the same key, it may 
> need to be separately provisioned to the app as well.  

On iOS, such certificates will be used by the system browser, but will not be 
used by an embedded web view or otherwise made available to applications. So, 
code flow and resource access MTLS using a client certificate at the system 
level is right out, unless some app-specific mechanism to negotiate a client 
key pair is used. Mobile apps on iOS will need to use ephemeral keys.

The client certificate may be used by the system browser to identify the 
device, so that the user authentication process can also verify that they are 
accessing from a device that meets corporate policy. So there’s precedent for a 
MTLS negotiation with the front channel being used for a different, non-PoP 
purpose. Not to say that enterprises wouldn’t prefer access be tied to a 
certificate they know was installed on the device and was requested to be 
non-exportable - there is just no standard way to do that today. Well, I 
suppose Kerberos.

-DW


Re: [OAUTH-WG] Recommendations for OAuth 2.0 with Browser-Based Apps

2019-05-08 Thread David Waite


> On May 8, 2019, at 1:37 AM, Emond Papegaaij  wrote:
> 
> On Monday 6 May 2019 22:42:09 CEST David Waite wrote:
>> On May 6, 2019, at 1:42 PM, Emond Papegaaij  
> 
>> You could also trigger re-authorization with a user click, thus allowing
>> opening the AS in a new window or tab. Once back on the site via callback,
>> the temporary/pop-up window can do things like exchange the code for an
>> access token, persist it, postMessage the original window, do window.close,
>> etc.
> 
> This would work, but would really be a nuisance to the user. Especially with a
> token timeout of just one hour. Also, most of the times there would be no 
> interaction, the user would just have to click a button. As a user I wouldn't 
> understand why I have to do that all the time.

You do have this for native apps as well, however: app-auth-sso-ios-11-blog.png 
<https://www.pingidentity.com/content/dam/pic/images/managed/app-auth-sso-ios-11-blog.png>

> 
>> On the other hand, refresh tokens IMHO are given quite a bit more fear in
>> browser apps than warranted. It really depends on the AS - whether it can
>> tie refresh tokens to the user’s authentication, or if they are tied to a
>> long-term / persisted / "offline” authorization independent of an active
>> user authentication. Currently, the latter is more common in
>> implementations, and doesn’t make sense for browser applications. This
>> doesn’t mean refresh tokens are automatically discounted for all
>> environments.
> 
> How would you tie a refresh token to a user session on the AS? This would 
> depend on the user visiting the AS on a regular basis and using a logout 
> button when done. Most people simply close their browser when they're done, 
> leaving the session open. Also, when making a call to the token endpoint to 
> refresh the access token, there is no way of knowing that this call is 
> actually initiated by the user, because it's done on a back channel. Perhaps 
> this can be solved with DPOP with a keypair per browser, but this would really
> complicate the whole solution.

Yes, there are still no standard solutions for session keep-alive. There is 
also not, AFAIK, a clear solution from the browsers for doing an implicit 
logout on browser close, now that browsers may persist session cookies.

I did pitch something about session keep-alive two years ago as part of DTVA 
(see 
https://bitbucket.org/openid/connect/src/f76ffe99c47d4698bc2995c32dc7a402dd6e8c47/distributed-token-validity-api.txt
), which unfortunately didn't go anyplace. For pure API apps participating in a 
session keep-alive system, a separate "user activity present" API to poke 
periodically is probably the best way to go. For managed devices running 
enterprise applications, you can just have a screen lock rather than tracking 
session activity at all.

For handling browsers in shared-user environments which lack non-persistent 
cookies, you typically have to rely on session keep-alive/inactivity timeouts 
and explicit logout (which in OAuth would map to a token revocation).
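
A sketch of what such an inactivity-plus-absolute-lifetime policy could look 
like on the AS side (the limits and the function are made up for illustration, 
not taken from any spec):

```python
# Illustrative session policy: a refresh is honored only while both
# the absolute session lifetime and the inactivity window are open.
# The 15-minute and 8-hour limits are arbitrary example values.
IDLE_LIMIT = 15 * 60          # seconds of allowed inactivity
ABSOLUTE_LIMIT = 8 * 3600     # maximum session lifetime in seconds

def refresh_allowed(now, session_start, last_activity):
    within_session = (now - session_start) < ABSOLUTE_LIMIT
    recently_active = (now - last_activity) < IDLE_LIMIT
    return within_session and recently_active

print(refresh_allowed(1000, 0, 900))                # recently active -> True
print(refresh_allowed(2000, 0, 0))                  # idle too long -> False
print(refresh_allowed(8 * 3600, 0, 8 * 3600 - 10))  # session expired -> False
```

The point is that the token endpoint can enforce the same idle/absolute limits 
a session cookie would, so "logout" and inactivity both cut off refresh.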
 
>> Given the choice between an 8 hour access token, or a 10 minute access token
>> and a refresh token that will expire at a maximum of 8 hours, the second
>> provides quite a few more options to be more secure. (eg. checking backing
>> user session and revocation, checking for updates to client blacklist, the
>> rotation of the access token, rotating refresh tokens to prevent use by
>> more than one client, expiring access on inactivity based on lag in
>> refreshing, and so on).
> 
> I agree with you on this, but the spec currently states clearly that the AS 
> should not issue refresh tokens to an SPA. Do you think this should be 
> changed 
> to something like: Authorization servers SHOULD NOT issue *offline* refresh 
> tokens to browser-based applications. A refresh token should be tied to a 
> user 
> session on the AS.

I would like this language changed as well. It is complex because there is so 
little existing token lifetime/policy guidance to reference. Previous 
conversations went a bit circular, IMHO, because of a lack of ground rules.

-DW
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Recommended OpenId Connect Flow for SPA with Microservices

2019-06-11 Thread David Waite
On Jun 10, 2019, at 2:06 AM, David Sautter  wrote:

> I understood the following: Using a backend service for doing the exchange of 
> the auth code for the token with the IdP is considered more secure, because 
> one cannot trust the browser to store the tokens securely. The drawback is 
> that you will have to create your own session state between your backend and 
> your frontend SPA (e.g. using a cookie).

Security-wise your risks include both token exfiltration and arbitrary code 
execution. A proxying backend that holds the token might prevent XSS from 
exfiltrating the token, but by default does not limit the impact of someone 
driving the user’s browser directly instead.

There can certainly be other implementation/architectural benefits of this 
approach, however. You can limit actions taken by the SPA, provide a bridge to 
a remote domain for an API that doesn’t advertise CORS support, enforce rate 
limiting of APIs which may be charged upstream, and so on. You could also use 
this to enforce a common processing layer across user-facing components.
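
As a rough illustration of that kind of limiting proxy layer (the session 
store, allowed paths, and token values are all hypothetical, and a real 
backend would do this inside a web framework):

```python
# Minimal sketch of the token-holding backend ("backend for frontend")
# pattern: the browser presents only a session cookie; the backend maps
# it to the stored access token and attaches it to the upstream call.
SESSIONS = {"sess-123": {"access_token": "at-abc"}}

# Limit which actions the SPA can take through the proxy.
ALLOWED_PATHS = {"/api/profile", "/api/orders"}

def proxy_headers(session_id, path):
    session = SESSIONS.get(session_id)
    if session is None or path not in ALLOWED_PATHS:
        return None  # reject: no session, or action not permitted
    return {"Authorization": f"Bearer {session['access_token']}"}

print(proxy_headers("sess-123", "/api/profile"))  # token attached
print(proxy_headers("sess-123", "/admin"))        # None: path not allowed
```

The token never reaches the browser, but as noted above, XSS can still drive 
the proxy within whatever `ALLOWED_PATHS`-style limits you impose.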

> I am in a scenario where I do not have "the one backend", but a bunch of 
> microservices running on Kubernetes, so they could die and respawn at any 
> given time. Do I need a API-Gateway for dealing with the Authorization Code 
> Flow? Which ones would be recommended (standard compliant)?

The standards mostly focus on the protocols between client, AS/IDP, and 
protected resource. The products that, say, help you implement just the client 
don't necessarily have standards to leverage between your code and theirs. 
Leveraging OIDC for web access is mostly pain-free for traditional 
applications, but SPA applications are more like API clients and lose some of 
the passive browser behaviors that you would leverage here (such as potentially 
redirecting for authentication on a new page request).

> Or is the alternative of handling the complete Authorization Code Flow + PKCE 
> in the Browser considered a safe scenario?

This is what we are recommending by default for 
https://tools.ietf.org/html/draft-ietf-oauth-browser-based-apps 

Content Security Policy is recommended for preventing XSS in that document. 
Subresource integrity isn’t explicitly called out, but is also invaluable.
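
For concreteness, a restrictive policy might be assembled something like this 
(the directive values are assumptions for a hypothetical deployment, not 
normative guidance from the draft):

```python
# Illustrative Content-Security-Policy for an OAuth browser-based app.
directives = {
    "default-src": "'self'",
    "script-src": "'self'",                          # no inline/third-party scripts
    "connect-src": "'self' https://as.example.com",  # assumed AS origin
    "frame-ancestors": "'none'",                     # clickjacking protection
}
csp = "; ".join(f"{name} {value}" for name, value in directives.items())
print(csp)
```

Pair this with Subresource Integrity attributes on any scripts you must load 
from elsewhere.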

To prevent exfiltration, the options are limited. 
- Token Binding will work, but currently only has support in Edge.
- Mutual TLS will work, but has a poor experience unless you are deploying 
alongside group policy.
- DPoP was written specifically for the browser use case (such as letting you 
use WebCrypto for non-exportable tokens). It is an early draft but has some 
initial implementations already.

You can also run risk and fraud analysis against both. XSS would need to be 
detected by usage behavior, while exfiltration could use environmental 
detection like address and user-agent changes.
> 
> I have been doing a lot of research but having trouble to clarify this. 
> Thanks for your help!

Hope this helps!

-DW


Re: [OAUTH-WG] Recommended OpenId Connect Flow for SPA with Microservices

2019-07-04 Thread David Waite


> On Jul 3, 2019, at 1:44 AM, da...@davidsautter.de wrote:



> I understood, that you could also secure this variant of the Authorization 
> Code Flow with PKCE in order to protect the redirect steps. I noticed, that 
> this is rarely discussed "in public" (e.g. blogs, Stackoverflow etc) because 
> some people say PKCE is considered to only be beneficial in native 
> applications. I know it was invented for protecting the IPC steps of those, 
> but why isn't it beneficial to also protect the browser redirect steps?

The PKCE guidance is slowly changing. Previously, PKCE was used mostly for 
native apps as the code could be intercepted in between the front channel and 
back channel calls. This is because operating systems do not restrict 
registration of a custom URI scheme to a single application, so the redirect 
back into the application (with the code) could actually go to another app.

The security-topics draft (and the browser apps draft, which refers to it) 
expands this to all uses of the code flow. The reasoning is that this replaces 
some of the security mitigations pushed onto the state parameter (such as CSRF 
prevention) - PKCE is a more obvious place for them, and a client which does 
not implement PKCE correctly is detectable by the AS.

-DW


Re: [OAUTH-WG] Refresh tokens

2019-07-08 Thread David Waite

> On Jul 8, 2019, at 7:10 PM, Leo Tohill  wrote:
> Re 8. Refresh Tokens
> 
>"For public clients, the risk of a leaked refresh token is much
>greater than leaked access tokens, since an attacker can potentially
>continue using the stolen refresh token to obtain new access without
>being detectable by the authorization server.  "
> 
> (first, note the typo "stoken".)
> 
> Is it always "higher risk"?  I could even argue that leakage of a refresh 
> token is lower risk. As a bearer document, a leaked access token allows 
> access to resources until it expires.  A leaked refresh token, to be useful,  
> requires an exchange with the AS, and the AS would have the opportunity to 
> check whether the refresh token is still valid (has not been revoked).  (of 
> course revocation might NOT have happened, but then again, it might have.) 

I agree (with caveats, of course).

Access tokens and refresh tokens may or may not be attached (by policy) to an 
authentication session lifetime. It is far easier to picture refresh tokens 
which are not attached to an authentication session (sometimes called ‘offline’ 
access) being inappropriate for a browser-based app, which is nearly always a 
client that the resource owner is interacting with.

Variants that may want offline tokens are less easy to imagine - perhaps 
browser extensions?

I believe the language currently there is due to AS implementations 
predominantly treating refresh tokens as being for offline access, and access 
token lifetime being short enough to not outlast an authentication session.

> Furthermore, since the access token is transmitted to other servers, the risk 
> of exposure is greater, due to possible vulnerabilities in those called 
> systems (e.g., logs).  Isn't this the reason that we have refresh tokens? 
> Don't refresh tokens exist because access tokens should have short TTL, 
> because they are widely distributed?

Yes. Once you acknowledge the existence of ‘online’ refresh tokens, they become 
a strong security component:

- Refresh tokens let you shorten the access token lifetime
- A shorter access token lifetime lets you have centralized policy to 
invalidate access without needing to resort to token introspection/revocation
- Token refresh can theoretically be used to represent other policy changes by 
both the client (creating tokens targeting a new resource server or with 
reduced scopes) and server (changing entitlements and attributes/claims 
embedded within a structured token)
- Refresh tokens can be one-time-use, as recommended by the security BCP. An 
exfiltrated refresh token will result in either the attacker or the user losing 
access on the next refresh, and a double refresh is a detectable security event 
for the AS.
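
A toy model of that one-time-use rotation behavior (the data model and names 
are illustrative, not from any spec):

```python
# Sketch of AS-side refresh token rotation: each refresh invalidates
# the presented token and issues a new one. Presenting an already-
# rotated token signals replay and revokes the whole grant.
class Grant:
    def __init__(self):
        self.active_rt = "rt-1"
        self.revoked = False
        self.counter = 1

    def refresh(self, presented_rt):
        if self.revoked:
            return None
        if presented_rt != self.active_rt:
            # Replay: either the attacker or the legitimate client
            # lost the race -- treat it as a security event.
            self.revoked = True
            return None
        self.counter += 1
        self.active_rt = f"rt-{self.counter}"
        return self.active_rt

g = Grant()
new_rt = g.refresh("rt-1")   # legitimate refresh -> "rt-2"
stolen = g.refresh("rt-1")   # replay of the old token -> None, grant revoked
print(new_rt, stolen, g.revoked)  # rt-2 None True
```

After revocation even the newest token is dead, which is what makes the double 
refresh detectable rather than silently exploitable.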

> "Additionally, browser-based applications provide many attack vectors by 
> which a refresh token can be leaked."
> 
> The risks of leaking a refresh token from the browser are identical to the 
> risks of leaking an access token, right?  This sentence could be changed to 
> "... by which a token can be leaked."
> 
> A refresh token is "higher risk" because its TTL is usually greater than the 
> access token's TTL.  But if our advice here leads to people using 
> longer-lived access tokens (because of the problems with getting a new access 
> token without involving the user), then the advice will be counter 
> productive.   The longer life gives more time for the usefulness of a 
> browser-side theft, and more time for the usefulness of a server-side theft.  
> 
> Which scenario is safer?
> A) using an access token with a 10 minute TTL, accompanied by a refresh token 
> with a 1 hour TTL
> B) using an access token with a 1 hour TTL, and no refresh token. 


Given tokens that track authentication lifetime, it is hard to make a case that 
refresh tokens which last for the authentication session are a greater security 
risk than opaque access tokens (requiring token introspection) that will last 
the same time. 

Typically an AS (or OP) would issue a structured access token with a lifetime 
expected to expire before the authentication session, with new tokens issued 
via requests made in an embedded, hidden iframe (prompt=none). There may be 
benefits here of user cookies (or perhaps managed-device information) against 
an authorization endpoint being used to make decisions that could not be made 
by a refresh against the token endpoint.
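
A silent-renewal authorization request from such an iframe might be built like 
this (the endpoint, client_id, and redirect_uri are placeholders, not values 
from the thread):

```python
from urllib.parse import urlencode

# prompt=none tells the AS to either issue a code using the existing
# session cookie or fail immediately -- never to show login UI inside
# the hidden iframe.
params = {
    "response_type": "code",
    "client_id": "spa-client",
    "redirect_uri": "https://app.example.com/silent-renew",
    "scope": "openid profile",
    "prompt": "none",
}
url = "https://as.example.com/authorize?" + urlencode(params)
print(url)
```

The iframe approach depends on the AS cookie being sent in a third-party 
context, which is exactly what the ITP-style browser changes mentioned later 
in this thread break.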

I’d be interested in hearing how strong of an implementation issue this might 
be for deployments - I could see a non-security argument that the BCP should 
only have one recommended approach here, and that there are deployments needing 
the iframe approach.

-DW



Re: [OAUTH-WG] Refresh tokens

2019-07-08 Thread David Waite


> On Jul 8, 2019, at 8:39 PM, Aaron Parecki  wrote:
> 
> These are all very good points! I think the challenge here is figuring out 
> where this kind of guidance is most appropriate.
> 
> It does seem like some of these issues are unique to a browser environment 
> (particularly where the browser itself is managing the access and refresh 
> tokens), so maybe it makes the most sense to include this guidance in the 
> browser based app BCP?

Yes, the location is a challenge - the “offline” distinction is defined 
(arguably under-defined) by OpenID Connect. OAuth (on the other hand) does not 
take a stand on user authentication sessions, since the tokens are for 
delegated access.

For confidential clients, both online and offline options make sense. For 
native apps, the push is usually for long-term access or for a session separate 
from the external user agent. But for browser apps, you typically want to 
mirror user authentication.
 
> If there are situations in which this advice is applicable in other scenarios 
> in addition to browser apps, then I think it would make more sense to include 
> it in the general OAuth security BCP.
> 
> The Security BCP already has some language around refresh tokens, but I 
> haven't reviewed it in a while to see if all of these points might be already 
> covered there.
> 
> If folks think the Browser BCP is the best place for this kind of thing I am 
> definitely open to it, and I can work with David on the specific language to 
> add.
> 
> - Aaron

-DW


Re: [OAUTH-WG] Refresh tokens

2019-07-19 Thread David Waite
>> …which then may need to be able to get a 
>> new access token, which is effectively offline access.
>> 
>> 
>> Aaron Parecki
>> aaronparecki.com <http://aaronparecki.com/>
>> @aaronpk <http://twitter.com/aaronpk>
>> 
>> 
>> 
>> On Tue, Jul 9, 2019 at 9:16 AM George Fletcher wrote:
>> I'll just add a couple more thoughts around refresh_tokens.
>> 
>> 1. I agree with David that refresh_tokens are valuable in an "online" 
>> scenario and should be used there.
>> 
>> 2. To use a refresh token at the /token endpoint, client authentication is 
>> required. This is where it gets difficult for default SPAs because they are 
>> public clients and the only mechanism to authenticate them is the client_id 
>> which is itself public. For me, this is the real risk of exposing the 
>> refresh_token in the browser. 
>> 
>> 3. If the AS supports rotation of refresh_tokens and an attacker steals one 
>> and uses it, then the SPA will get an error on it's next attempt because 
>> it's refresh_token will now be invalid. If the refresh_tokens are bound to 
>> the user's authentication session, then the user can logout to lockout the 
>> attacker. However, that is a lot of "ifs" and still provides the attacker 
>> with time to leverage access via the compromised refresh_token.
>> 
>> In principle, I agree with the recommendation that SPAs shouldn't have 
>> refresh_tokens in the browser. If it's not possible to easily refresh the 
>> access token via a hidden iframe (becoming more difficult with all the 
>> browser/privacy cookie changes. e.g. ITP2.X) then I'd recommend to use a 
>> simple server component such that the backend for the SPA can use 
>> authorization_code flow and protect a client_secret.
>> 
>> Thanks,
>> George
>> 
>> On 7/8/19 11:17 PM, David Waite wrote:
>>> [snip - quoted text of the 2019-07-08 message, which appears in full above]

Re: [OAUTH-WG] Refresh tokens

2019-07-20 Thread David Waite


> On Jul 20, 2019, at 12:38 PM, Leo Tohill  wrote:
> 
> Access tokens and refresh tokens, stored browser-side, are equally vulnerable 
> to theft, because the storage options are identical. 
> 
> We are more concerned about the theft of the refresh token, because it 
> (commonly) has a longer usable lifetime than the access token. 
> 
> Still , its a matter of degree. Since we accept the risk of access token 
> theft,  why can't we accept the risk of refresh token theft?  We ameliorate 
> the access token risk by using short lifetimes, but there is no standard for 
> that value: it is situational.  Why doesn't the same reasoning apply to 
> refresh tokens? 
> 
> This reasoning assumes that refresh tokens also have a limited lifetime.  I 
> am unsure that this is always the case.  When one uses a refresh token to 
> acquire a new access token, AND that operation issues a new refresh token, 
> does the new refresh token get a new lifetime?  If so, then a refresh token 
> can be used to retain access infinitely (or until it is revoked server-side). 
>  In this scenario, the risks associated with browser-side storage of refresh 
> token are much higher. 
> 
> In summary, I'd say that IF the lifetime of a refresh token can be limited, 
> then refresh tokens pose identical risk as access tokens, and so the same 
> considerations apply. 

Agreed

I think there is an unwritten framework for evaluating the security of all 
tokens, regardless of type. For example: access tokens are shared with 
resources, so their security protections in the Security BCP include limiting 
replay against other resources, and optionally against new requests to the 
same resource.

Because it is complex and unwritten, it is hard to do a true evaluation. My 
impression was always that refresh tokens were more ‘risky’ for public clients 
because “offline” refresh tokens may be good for an indeterminate period of 
time, and because lack of credentials means theft of the token is sufficient.

In addition, a public client which does not use its refresh token in an 
“offline” manner may have theft go unnoticed for a considerable period of time, 
and the operational model of public clients usually puts detection of local 
token theft in the hands of the end user and client software, not an 
administrator or organizational security staff.

But those concerns are mostly mitigated if the OP can set policy to control 
refresh token usage in concert with the authentication session.

-DW

