Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-19 Thread Filip Skokan
I don't share the same sentiment about at_hash being a pain; we already have 
the tools on the server. And on the browser side it's a matter of ~15 LOC using 
the WebCrypto API, since the JWS algorithm support there is limited to the 
simple ones ending with the needed bit size anyway. 

Nevertheless, if we're saying SHA-256 of the key thumbprint is fine, I don't 
see why we wouldn't be able to do the same for a new AT hash property (no 
longer called at_hash, I assume).

But if XSS is game over, let's not bother with trying to patch one particular 
scenario with a hash.

- Filip

Sent from my iPhone

> On 19. 12. 2020 at 7:00, Vladimir Dzhuvinov wrote:
> 
> 
> Thank you Justin for this honest account of your experience with DPoP.
> 
> To at_hash or not is maybe not solved yet, but at least it's clear there's 
> little enthusiasm about the OIDC style at_hash :)
> 
> Vladimir
> 
> On 15/12/2020 18:40, Justin Richer wrote:
>> I went and implemented this proposal of including a token hash in both an AS 
>> (java) and client (javascript) on a system that was already using DPoP and 
>> OpenID Connect. What I did there was just use the existing code we had on 
>> the AS-side to calculate the “at_hash” in the ID Token from OIDC, which I 
>> also used to verify on the token-accepting portions. I had to implement the 
>> function on the client side, but that was only a couple lines using a crypto 
>> library to do the heavy hash lift.
>> 
>> The most annoying part is dealing with the hash variability in the OIDC 
>> method. As Brian points out, this isn’t particularly robust, and it depends 
>> on the wrapper being JOSE. That’s not a huge deal because DPoP uses JOSE for 
>> its wrapper, but it’s still extra code to deal with — to the point where I 
>> just hardcoded the hash algorithm in my test so that I didn’t have to put 
>> together the switch case over the algorithm. 
>> 
>> So in at least my own experience, the addition is minimal on both client and 
>> server, and whatever we would decide for the hash algorithm would be simple 
>> enough to manage. I have a slight preference for just picking something like 
>> SHA256 and calling it a day (and defining other hashes in the future when 
>> SHA256 is broken), but that’s not a hill I care to die on.
>> 
>>  — Justin
>> 
>>> On Dec 14, 2020, at 4:27 PM, Brian Campbell wrote:
>>> 
>>> 
>>> 
>>> On Sat, Dec 12, 2020 at 1:22 AM Vladimir Dzhuvinov wrote:
 If the current DPoP has code complexity "X", the relative additional 
 complexity to include access token hashes doesn't seem like very much. An 
 app choosing DPoP means accepting the code complexity that comes with 
 dealing with keys, composing the signing inputs for the proofs, signing, 
 the necessary changes to the token and RS requests. On the other hand, for 
 some people that additional access token hash may become the straw that 
 breaks the camel's back, causing them to quit their jobs developing web 
 apps and never look back :)
 
>>> Yeah, the relative additional complexity to include an access token hash 
>>> maybe isn't too much but it's also not nothing. It's a different kind 
>>> of operation than the other things you listed (yes, I know there's a hash 
>>> as part of the signing but it's abstracted away from the developer in most 
>>> cases) and something that can be quite difficult to troubleshoot when 
>>> different parties arrive at different hash values. Hence my lack of 
>>> conviction on this one way or the other. 
>>>  
 
 Have you thought about letting deployments decide about the access token 
 hash? To say look, there is also the option to bind an access token to the 
 DPoP proof, the security benefits can be such and such, and this is how it 
 can be done.
 
 What I don't like about that proposal: 
 
 - It will complicate the spec
 
 - The current spec doesn't require implementers / deployments to make any 
 decisions, apart from adopt / not DPoP (okay, also choose a JWS alg) - 
 which is actually a great feature to have
>>> 
>>> I also don't like it for basically the same reasons. I've definitely aimed 
>>> to keep it simple from that perspective of not having a lot of optionality 
>>> or switches. It is a nice feature to have, when possible. 
>>> 
>>>  
 Vladimir
 
 
 
 On 12/12/2020 01:58, Brian Campbell wrote:
> Any type of client could use DPoP and (presumably) benefit from 
> sender-constrained access tokens. So yeah, adding complexity specifically 
> for browser-based applications (that only mitigates one variation of the 
> attacks possible with XSS anyway)  has 'cost' impact to those clients as 
> well. And should be considered in the cost/benefit. Including the AT hash 
> isn't terribly complicated but it's not trivial either. I'm honestly 
> still unsure but am leaning towards it not being worth adding. 
> 
> On Fri, Dec 11, 2020 at 2:14 AM 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-18 Thread Vladimir Dzhuvinov
Thank you Justin for this honest account of your experience with DPoP.

To at_hash or not is maybe not solved yet, but at least it's clear
there's little enthusiasm about the OIDC style at_hash :)

Vladimir

On 15/12/2020 18:40, Justin Richer wrote:
> I went and implemented this proposal of including a token hash in both
> an AS (java) and client (javascript) on a system that was already
> using DPoP and OpenID Connect. What I did there was just use the
> existing code we had on the AS-side to calculate the “at_hash” in the
> ID Token from OIDC, which I also used to verify on the token-accepting
> portions. I had to implement the function on the client side, but that
> was only a couple lines using a crypto library to do the heavy hash lift.
>
> The most annoying part is dealing with the hash variability in the
> OIDC method. As Brian points out, this isn’t particularly robust, and
> it depends on the wrapper being JOSE. That’s not a huge deal because
> DPoP uses JOSE for its wrapper, but it’s still extra code to deal with
> — to the point where I just hardcoded the hash algorithm in my test so
> that I didn’t have to put together the switch case over the algorithm. 
>
> So in at least my own experience, the addition is minimal on both
> client and server, and whatever we would decide for the hash algorithm
> would be simple enough to manage. I have a slight preference for just
> picking something like SHA256 and calling it a day (and defining other
> hashes in the future when SHA256 is broken), but that’s not a hill I
> care to die on.
>
>  — Justin
>
>> On Dec 14, 2020, at 4:27 PM, Brian Campbell wrote:
>>
>>
>>
>> On Sat, Dec 12, 2020 at 1:22 AM Vladimir Dzhuvinov wrote:
>>
>> If the current DPoP has code complexity "X", the relative
>> additional complexity to include access token hashes doesn't seem
>> like very much. An app choosing DPoP means accepting the code
>> complexity that comes with dealing with keys, composing the
>> signing inputs for the proofs, signing, the necessary changes to
>> the token and RS requests. On the other hand, for some people
>> that additional access token hash may become the straw that
>> breaks the camel's back, causing them to quit their jobs
>> developing web apps and never look back :)
>>
>> Yeah, the relative additional complexity to include an access token
>> hash maybe isn't too much but it's also not nothing. It's a
>> different kind of operation than the other things you listed (yes, I
>> know there's a hash as part of the signing but it's abstracted away
>> from the developer in most cases) and something that can be quite
>> difficult to troubleshoot when different parties arrive at different
>> hash values. Hence my lack of conviction on this one way or the other. 
>>  
>>
>>
>> Have you thought about letting deployments decide about the
>> access token hash? To say look, there is also the option to bind
>> an access token to the DPoP proof, the security benefits can be
>> such and such, and this is how it can be done.
>>
>> What I don't like about that proposal: 
>>
>>   * It will complicate the spec
>>
>>   * The current spec doesn't require implementers / deployments
>> to make any decisions, apart from adopt / not DPoP (okay,
>> also choose a JWS alg) - which is actually a great feature to
>> have
>>
>>
>> I also don't like it for basically the same reasons. I've definitely
>> aimed to keep it simple from that perspective of not having a lot of
>> optionality or switches. It is a nice feature to have, when possible. 
>>
>>  
>>
>> Vladimir
>>
>>
>> On 12/12/2020 01:58, Brian Campbell wrote:
>>> Any type of client could use DPoP and (presumably) benefit from
>>> sender-constrained access tokens. So yeah, adding complexity
>>> specifically for browser-based applications (that only mitigates
>>> one variation of the attacks possible with XSS anyway)  has
>>> 'cost' impact to those clients as well. And should be considered
>>> in the cost/benefit. Including the AT hash isn't terribly
>>> complicated but it's not trivial either. I'm honestly still
>>> unsure but am leaning towards it not being worth adding. 
>>>
>>> On Fri, Dec 11, 2020 at 2:14 AM Philippe De Ryck wrote:
>>>
>>> The scenario you describe here is realistic in browser-based
>>> apps with XSS vulnerabilities, but it is pretty complex.
>>> Since there are worse problems when XSS happens, it’s hard
>>> to say whether DPoP should mitigate this. 
>>>
>>> I’m wondering what other types of clients would benefit from
>>> using DPoP for access tokens? Mobile apps? Clients using a
>>> Client Credentials grant?
>>>
>>> How are they impacted by any change made 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-15 Thread Justin Richer
I went and implemented this proposal of including a token hash in both an AS 
(java) and client (javascript) on a system that was already using DPoP and 
OpenID Connect. What I did there was just use the existing code we had on the 
AS-side to calculate the “at_hash” in the ID Token from OIDC, which I also used 
to verify on the token-accepting portions. I had to implement the function on 
the client side, but that was only a couple lines using a crypto library to do 
the heavy hash lift.

The most annoying part is dealing with the hash variability in the OIDC method. 
As Brian points out, this isn’t particularly robust, and it depends on the 
wrapper being JOSE. That’s not a huge deal because DPoP uses JOSE for its 
wrapper, but it’s still extra code to deal with — to the point where I just 
hardcoded the hash algorithm in my test so that I didn’t have to put together 
the switch case over the algorithm. 

So in at least my own experience, the addition is minimal on both client and 
server, and whatever we would decide for the hash algorithm would be simple 
enough to manage. I have a slight preference for just picking something like 
SHA256 and calling it a day (and defining other hashes in the future when 
SHA256 is broken), but that’s not a hill I care to die on.

 — Justin

> On Dec 14, 2020, at 4:27 PM, Brian Campbell wrote:
> 
> 
> 
> On Sat, Dec 12, 2020 at 1:22 AM Vladimir Dzhuvinov wrote:
> If the current DPoP has code complexity "X", the relative additional 
> complexity to include access token hashes doesn't seem like very much. An app 
> choosing DPoP means accepting the code complexity that comes with dealing 
> with keys, composing the signing inputs for the proofs, signing, the 
> necessary changes to the token and RS requests. On the other hand, for some 
> people that additional access token hash may become the straw that breaks the 
> camel's back, causing them to quit their jobs developing web apps and never 
> look back :)
> 
> Yeah, the relative additional complexity to include an access token hash 
> maybe isn't too much but it's also not nothing. It's a different kind of 
> operation than the other things you listed (yes, I know there's a hash as 
> part of the signing but it's abstracted away from the developer in most 
> cases) and something that can be quite difficult to troubleshoot when 
> different parties arrive at different hash values. Hence my lack of 
> conviction on this one way or the other. 
>  
> 
> Have you thought about letting deployments decide about the access token 
> hash? To say look, there is also the option to bind an access token to the 
> DPoP proof, the security benefits can be such and such, and this is how it can 
> be done.
> 
> What I don't like about that proposal: 
> 
> - It will complicate the spec
> 
> - The current spec doesn't require implementers / deployments to make any 
> decisions, apart from adopt / not DPoP (okay, also choose a JWS alg) - which 
> is actually a great feature to have
> 
> I also don't like it for basically the same reasons. I've definitely aimed to 
> keep it simple from that perspective of not having a lot of optionality or 
> switches. It is a nice feature to have, when possible. 
> 
>  
> Vladimir
> 
> 
> 
> On 12/12/2020 01:58, Brian Campbell wrote:
>> Any type of client could use DPoP and (presumably) benefit from 
>> sender-constrained access tokens. So yeah, adding complexity specifically 
>> for browser-based applications (that only mitigates one variation of the 
>> attacks possible with XSS anyway)  has 'cost' impact to those clients as 
>> well. And should be considered in the cost/benefit. Including the AT hash 
>> isn't terribly complicated but it's not trivial either. I'm honestly still 
>> unsure but am leaning towards it not being worth adding. 
>> 
>> On Fri, Dec 11, 2020 at 2:14 AM Philippe De Ryck wrote:
>> The scenario you describe here is realistic in browser-based apps with XSS 
>> vulnerabilities, but it is pretty complex. Since there are worse problems 
>> when XSS happens, it’s hard to say whether DPoP should mitigate this. 
>> 
>> I’m wondering what other types of clients would benefit from using DPoP for 
>> access tokens? Mobile apps? Clients using a Client Credentials grant?
>> 
>> How are they impacted by any change made specifically for browser-based 
>> applications?
>> 
>> Philippe
>> 
>> 
>>> On 9 Dec 2020, at 23:57, Brian Campbell wrote:
>>> 
>>> Thanks Philippe, I very much concur with your line of reasoning and the 
>>> important considerations. The scenario I was thinking of is: browser based 
>>> client where XSS is used to exfiltrate the refresh token along with 
>>> pre-computed proofs that would allow for the RT to be exchanged for new 
>>> access tokens and also pre-computed proofs that would work with those 
>>> access tokens for resource access. With 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-14 Thread Brian Campbell
On Sat, Dec 12, 2020 at 1:22 AM Vladimir Dzhuvinov 
wrote:

> If the current DPoP has code complexity "X", the relative additional
> complexity to include access token hashes doesn't seem like very much. An
> app choosing DPoP means accepting the code complexity that comes with
> dealing with keys, composing the signing inputs for the proofs, signing,
> the necessary changes to the token and RS requests. On the other hand, for
> some people that additional access token hash may become the straw that
> breaks the camel's back, causing them to quit their jobs developing web
> apps and never look back :)
>
Yeah, the relative additional complexity to include an access token hash
maybe isn't too much but it's also not nothing. It's a different kind
of operation than the other things you listed (yes, I know there's a hash
as part of the signing but it's abstracted away from the developer in most
cases) and something that can be quite difficult to troubleshoot when
different parties arrive at different hash values. Hence my lack of
conviction on this one way or the other.


> Have you thought about letting deployments decide about the access token
> hash? To say look, there is also the option to bind an access token to the
> DPoP proof, the security benefits can be such and such, and this is how it
> can be done.
>
> What I don't like about that proposal:
>
>- It will complicate the spec
>
>- The current spec doesn't require implementers / deployments to make
>any decisions, apart from adopt / not DPoP (okay, also choose a JWS alg) -
>which is actually a great feature to have
>
>
I also don't like it for basically the same reasons. I've definitely aimed
to keep it simple from that perspective of not having a lot of optionality
or switches. It is a nice feature to have, when possible.



> Vladimir
>
>
> On 12/12/2020 01:58, Brian Campbell wrote:
>
> Any type of client could use DPoP and (presumably) benefit from
> sender-constrained access tokens. So yeah, adding complexity specifically
> for browser-based applications (that only mitigates one variation of the
> attacks possible with XSS anyway)  has 'cost' impact to those clients as
> well. And should be considered in the cost/benefit. Including the AT hash
> isn't terribly complicated but it's not trivial either. I'm honestly still
> unsure but am leaning towards it not being worth adding.
>
> On Fri, Dec 11, 2020 at 2:14 AM Philippe De Ryck <
> phili...@pragmaticwebsecurity.com> wrote:
>
>> The scenario you describe here is realistic in browser-based apps with
>> XSS vulnerabilities, but it is pretty complex. Since there are worse
>> problems when XSS happens, it’s hard to say whether DPoP should mitigate
>> this.
>>
>> I’m wondering what other types of clients would benefit from using DPoP
>> for access tokens? Mobile apps? Clients using a Client Credentials grant?
>>
>> How are they impacted by any change made specifically for browser-based
>> applications?
>>
>> Philippe
>>
>>
>> On 9 Dec 2020, at 23:57, Brian Campbell 
>> wrote:
>>
>> Thanks Philippe, I very much concur with your line of reasoning and the
>> important considerations. The scenario I was thinking of is: browser based
>> client where XSS is used to exfiltrate the refresh token along with
>> pre-computed proofs that would allow for the RT to be exchanged for new
>> access tokens and also pre-computed proofs that would work with those
>> access tokens for resource access. With the pre-computed proofs that would
>> allow prolonged (as long as the RT is valid) access to protected resources
>> even when the victim is offline. Is that a concrete attack scenario? I
>> mean, kind of. It's pretty convoluted/complex. And while an access token
>> hash would rein it in somewhat (ATs obtained from the stolen RT wouldn't
>> be usable) it's hard to say if the cost is worth the benefit.
>>
>>
>>
>> On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck <
>> phili...@pragmaticwebsecurity.com> wrote:
>>
>>> Yeah, browser-based apps are pure fun, aren’t they? :)
>>>
>>> The reason I covered a couple of (pessimistic) XSS scenarios is that the
>>> discussion started with an assumption that the attacker already
>>> successfully exploited an XSS vulnerability. I pointed out how, at that
>>> point, finetuning DPoP proof contents will have little to no effect to stop
>>> an attack. I believe it is important to make this very clear, to avoid
>>> people turning to DPoP as a security mechanism for browser-based
>>> applications.
>>>
>>>
>>> Specifically to your question on including the hash in the proof, I
>>> think these considerations are important:
>>>
>>> 1. Does the inclusion of the AT hash stop a concrete attack scenario?
>>> 2. Is the “cost” (implementation, getting it right, …) worth the
>>> benefits?
>>>
>>>
>>> Here’s my view on these considerations (*specifically for browser-based
>>> apps, not for other types of applications*):
>>>
>>> 1. The proof precomputation attack is already quite 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-13 Thread Jim Manico

Brian,

I just focus on web security and understand the risk of XSS well. It 
seems to me that many of the designers of OAuth 2 do not have a web 
security background and keep trying to address XSS with add-ons without 
success.


- Jim

On 12/11/20 2:01 PM, Brian Campbell wrote:

I think that puts Jim in the XSS Nihilism camp :)

Implicit type flows are being deprecated/discouraged. But keeping 
tokens out of browsers doesn't seem likely. There is some mention of 
CSP in 
https://tools.ietf.org/html/draft-ietf-oauth-browser-based-apps-07#section-9.7


On Wed, Dec 9, 2020 at 4:10 PM Jim Manico wrote:


The basic theme from the web attacker community is:

1) XSS is a game over event to web clients. XSS can steal or abuse
(request forgery) tokens, and more.

2) Even if you prevent stolen tokens from being used outside of a
web client, XSS still allows the attacker to force a user to make
any request in a fraudulent way, abusing browser based tokens as a
form of request forgery.

3) There are advanced measures to stop a token from being stolen
from a web client, like HttpOnly cookies and, to a lesser degree,
JS closures and Web Workers.

4) However, these measures to protect cookies are mostly moot.
Attackers can just force clients to make fraudulent requests.

5) Many recommend the BFF pattern to hide tokens on the back end,
but still, request forgery via XSS allows all kinds of abuse.

XSS is game over no matter how you slice it.

Crypto solutions do not help. Perhaps the world of OAuth can start
suggesting that web clients use CSP 3.0 in specific ways, if you
still plan to support Implicit type flows or tokens in browsers?

Respectfully,

- Jim


On 12/9/20 12:57 PM, Brian Campbell wrote:

Thanks Philippe, I very much concur with your line of reasoning
and the important considerations. The scenario I was thinking of
is: browser based client where XSS is used to exfiltrate the
refresh token along with pre-computed proofs that would allow for
the RT to be exchanged for new access tokens and also
pre-computed proofs that would work with those access tokens for
resource access. With the pre-computed proofs that would allow
prolonged (as long as the RT is valid) access to protected
resources even when the victim is offline. Is that a concrete
attack scenario? I mean, kind of. It's pretty convoluted/complex.
And while an access token hash would rein it in somewhat (ATs
obtained from the stolen RT wouldn't be usable) it's hard to say
if the cost is worth the benefit.



On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck wrote:

Yeah, browser-based apps are pure fun, aren’t they? :)

The reason I covered a couple of (pessimistic) XSS scenarios
is that the discussion started with an assumption that the
attacker already successfully exploited an XSS vulnerability.
I pointed out how, at that point, finetuning DPoP proof
contents will have little to no effect to stop an attack. I
believe it is important to make this very clear, to avoid
people turning to DPoP as a security mechanism for
browser-based applications.


Specifically to your question on including the hash in the
proof, I think these considerations are important:

1. Does the inclusion of the AT hash stop a concrete attack
scenario?
2. Is the “cost” (implementation, getting it right, …) worth
the benefits?


Here’s my view on these considerations (*/specifically for
browser-based apps, not for other types of applications/*):

1. The proof precomputation attack is already quite complex,
and short access token lifetimes already reduce the window of
attack. If the attacker can steal a future AT, they could
also precompute new proofs then.
2. For browser-based apps, it seems that doing this
complicates the implementation, without adding much benefit.
Of course, libraries could handle this, which significantly
reduces the cost.


Note that these comments are specifically to complicating the
spec and implementation. DPoP’s capabilities of using
sender-constrained access tokens are still useful to counter
various other scenarios (e.g., middleboxes or APIs abusing
access tokens). If other applications would significantly
benefit from having the hash in the proof, I’m all for it.

On a final note, I would be happy to help clear up the
details on web-based threats and defenses if necessary.

—
*Pragmatic Web Security*
/Security for developers/
https://pragmaticwebsecurity.com/
   

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-13 Thread Neil Madden

> On 13 Dec 2020, at 09:11, Torsten Lodderstedt  wrote:
> [...]
>> 
>>> - generating (self contained) or using (handles) per URL access tokens 
>>> might be rather expensive. Can you sketch out how you wanna cope with that 
>>> challenge?
>> 
>> A decent HMAC implementation takes about 1-2 microseconds for typical size 
>> of token we’re talking about. 
> 
> The generation of a self contained access token typically requires querying 
> claim values from at least a single data source. That might take more time. 
> For handle based tokens/token introspection, one needs to add the time it 
> takes to obtain the token data, which requires a HTTPS communication. That 
> could be even more time consuming.

This is typically true of identity-based tokens, where access to a resource is 
based on who is accessing it. But in a capability-based model this is not the 
case and the capability itself grants access and is not (usually) tied to an 
individual identity. 

Where you do want to include claims in a token, or tie capabilities to an 
identity, then there are more efficient strategies than looking up those claims 
every time you create a new capability token. For example, in my book I 
implement a variant in which simple capability URIs are used for access but 
these are bound to a traditional identity-based session cookie that can be used 
to look up identity attributes as required. This provides security benefits to 
both the cookie (CSRF protection) and the capability URIs (linking them to an 
HttpOnly cookie makes them harder to steal). 

If you use macaroons then typically you’d mint a single token with the claims 
in it and then derive lots of individual tokens from it by appending caveats. 
For example, when generating a directory listing in a Dropbox-like app you’d 
mint a single token with details of the user etc and then derive individual 
tokens to access each file by appending a caveat like “file = 
/path/to/specific/file”. 

— Neil
-- 
ForgeRock values your Privacy 
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-13 Thread Torsten Lodderstedt
Hi Neil,

thanks for your comprehensive answers. Please find my comments inline.

best regards,
Torsten.

> Am 12.12.2020 um 21:11 schrieb Neil Madden :
> 
> 
> Good questions! Answers inline:
> 
>>> On 12 Dec 2020, at 10:07, Torsten Lodderstedt  
>>> wrote:
>>> 
>> 
>> Thanks for sharing, Neil!
>> 
>> I‘ve got some questions:
>> Note: I assume the tokens you are referring in your article are OAuth access 
>> tokens.
> 
> No, probably not. Just auth tokens more generically. 
> 
>> - carrying tokens in URLs was considered bad practice by the Security BCP 
>> and OAuth 2.1 due to leakage via referrer headers and so on. Why isn’t this 
>> an issue with your approach?
> 
> This is generally safe advice, but it is often over-cautious for three 
> reasons:
> 
> 1. Referer headers (and window.referrer) apply when embedding/linking 
> resources in HTML. But when we’re talking about browser-based apps (eg SPAs), 
> that usually means JavaScript calling some backend API that returns JSON or 
> some other data format. These data formats don’t have links or embedded 
> resources (as far as the browser is concerned), so they don’t leak Referer 
> headers in the same way. When the app loads a resource from a URI in a JSON 
> response the Referer header will contain the URI of the app itself (most 
> likely a generic HTML template page), not the capability URI from which the 
> JSON was loaded. Similar arguments apply to browser history and other typical 
> ways that URIs leak. 
> 
> 2. You can now use the Referrer-Policy header [1] and rel=“noopener 
> noreferrer” to opt out of this leakage, and browsers are moving to doing this 
> by default for cross-origin requests/embeds. (This is already enabled by 
> default in Safari). 
> 
> 3. When you do want to use capability URIs for top-level navigation, there 
> are places in the URI you can put a token that aren’t ever included in 
> Referer headers or window.referrer or ever sent to the server at all - such 
> as the fragment. JavaScript can then extract the token from the fragment (and 
> then wipe it) and send it to the server in an Authorization header or 
> whatever. See [2] for more details and alternatives. 
> 
> [1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
> [2]: 
> https://neilmadden.blog/2019/01/16/can-you-ever-safely-include-credentials-in-a-url/
> 
>> - generating (self contained) or using (handles) per URL access tokens might 
>> be rather expensive. Can you sketch out how you wanna cope with that 
>> challenge?
> 
> A decent HMAC implementation takes about 1-2 microseconds for typical size of 
> token we’re talking about. 

The generation of a self contained access token typically requires querying 
claim values from at least a single data source. That might take more time. For 
handle based tokens/token introspection, one needs to add the time it takes to 
obtain the token data, which requires a HTTPS communication. That could be even 
more time consuming.

> 
>> - Per-URL access tokens are a very rigorous form of audience restriction. 
>> How do you wanna signal the audience to the AS?
> 
> As I said, this isn’t OAuth, but for example you can already do this with the 
> macaroon access tokens in ForgeRock AM 7.0 - issue a single access token and 
> then make copies with specific audience restrictions added as caveats, as 
> discussed in [3]. Such audience restrictions are then returned in the token 
> introspection response and the RS can enforce them. 

> 
> My comment in the article about ideas for future OAuth is really just that 
> the token endpoint should be able to issue multiple fine-grained access 
> tokens in one go, each associated with a particular endpoint (or endpoints). 
> You could either return these as separate items like:
> 
> “access_tokens”: [
> { “token”: “abc...”, 
>“aud”: “https://api.example.com/foo” },
> { “token”: “def...”,
>“aud”: “https://api.example.com/bar” }
> ]

I like the idea (and have liked it for a long time  
https://mailarchive.ietf.org/arch/msg/oauth/JcKGhoKy2S_2gAQ2ilMxCPWbgPw/).

Resource indicators or authorization_details (with locations) could basically 
be used for that purpose, but OAuth2 lacks multiple-token support in the token 
endpoint.

> 
> Or just go ahead and combine those into capability URIs. (I think I already 
> mentioned this a long time ago when GNAP was first being discussed). 
> 
> Speaking even more wishfully, what I would really love to see is a new URL 
> scheme for these, something like:
> 
>   bearer://@api.example.com/foo
> 
> Which is equivalent to a HTTPS link, but the browser knows about this format 
> and when clicking on/accessing such a URI it sends the token as an 
> Authorization: Bearer header automatically. Ideally the browser would also 
> not allow the token to be accessible from the DOM. 

Interesting. That would allow elevating browser support to the level of HTTP Basic.

> 
> Even without browser support I think 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-12 Thread Neil Madden
Good questions! Answers inline:

> On 12 Dec 2020, at 10:07, Torsten Lodderstedt  wrote:
> 
> Thanks for sharing, Neil!
> 
> I‘ve got some questions:
> Note: I assume the tokens you are referring in your article are OAuth access 
> tokens.

No, probably not. Just auth tokens more generically. 

> - carrying tokens in URLs was considered bad practice by the Security BCP and 
> OAuth 2.1 due to leakage via Referer headers and so on. Why isn’t this an 
> issue with your approach?

This is generally safe advice, but it is often over-cautious for three reasons:

1. Referer headers (and document.referrer) apply when embedding/linking resources 
in HTML. But when we’re talking about browser-based apps (eg SPAs), that 
usually means JavaScript calling some backend API that returns JSON or some 
other data format. These data formats don’t have links or embedded resources 
(as far as the browser is concerned), so they don’t leak Referer headers in the 
same way. When the app loads a resource from a URI in a JSON response the 
Referer header will contain the URI of the app itself (most likely a generic 
HTML template page), not the capability URI from which the JSON was loaded. 
Similar arguments apply to browser history and other typical ways that URIs 
leak. 

2. You can now use the Referrer-Policy header [1] and rel=“noopener noreferrer” 
to opt out of this leakage, and browsers are moving to doing this by default 
for cross-origin requests/embeds. (This is already enabled by default in 
Safari). 

3. When you do want to use capability URIs for top-level navigation, there are 
places in the URI you can put a token that aren’t ever included in Referer 
headers or document.referrer or ever sent to the server at all - such as the 
fragment. JavaScript can then extract the token from the fragment (and then 
wipe it) and send it to the server in an Authorization header or whatever. See 
[2] for more details and alternatives. 

[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
[2]: 
https://neilmadden.blog/2019/01/16/can-you-ever-safely-include-credentials-in-a-url/
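Point 3 can be sketched in a few lines of client-side JavaScript. The `#token=` fragment convention and the helper names are illustrative assumptions, not part of any specification:

```javascript
// Pure helper: parse a token out of a fragment string ("#token=...&...")
function tokenFromFragment(hash) {
  return new URLSearchParams(hash.replace(/^#/, "")).get("token");
}

// Browser-side: read the token, then wipe the fragment so it never
// reaches the server, the Referer header, or browser history.
function consumeFragmentToken() {
  const token = tokenFromFragment(window.location.hash);
  if (token !== null) {
    history.replaceState(null, "", window.location.pathname + window.location.search);
  }
  return token;
}

// The token is then sent explicitly, e.g.:
// fetch("/api/data", { headers: { Authorization: `Bearer ${token}` } });
```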

> - generating (self contained) or using (handles) per URL access tokens might 
> be rather expensive. Can you sketch out how you wanna cope with that 
> challenge?

A decent HMAC implementation takes about 1-2 microseconds for the typical size 
of token we’re talking about. 

> - per-URL access tokens are a very rigorous form of audience restriction. 
> How do you wanna signal the audience to the AS?

As I said, this isn’t OAuth, but for example you can already do this with the 
macaroon access tokens in ForgeRock AM 7.0 - issue a single access token and 
then make copies with specific audience restrictions added as caveats, as 
discussed in [3]. Such audience restrictions are then returned in the token 
introspection response and the RS can enforce them. 

My comment in the article about ideas for future OAuth is really just that the 
token endpoint should be able to issue multiple fine-grained access tokens in 
one go, each associated with a particular endpoint (or endpoints). You could 
either return these as separate items like:

“access_tokens”: [
{ “token”: “abc...”, 
   “aud”: “https://api.example.com/foo” },
{ “token”: “def...”,
   “aud”: “https://api.example.com/bar” }
]

Or just go ahead and combine those into capability URIs. (I think I already 
mentioned this a long time ago when GNAP was first being discussed). 

Speaking even more wishfully, what I would really love to see is a new URL 
scheme for these, something like:

  bearer://@api.example.com/foo

Which is equivalent to a HTTPS link, but the browser knows about this format 
and when clicking on/accessing such a URI it sends the token as an 
Authorization: Bearer header automatically. Ideally the browser would also not 
allow the token to be accessible from the DOM. 

Even without browser support I think such a URI scheme would be useful to allow 
GitHub and others to more easily recognise capability URIs checked into public 
git repos and perhaps provide a way to automatically revoke them 
(.well-known/token-revocation perhaps).

[3]: 
https://neilmadden.blog/2020/07/29/least-privilege-with-less-effort-macaroon-access-tokens-in-am-7-0/

— Neil

> 
> best regards,
> Torsten.
> 
>> On 12.12.2020 at 08:26, Neil Madden wrote:
>> 
>> Not directly related to DPoP or OAuth, but I wrote some notes to help 
>> recovering XSS Nihilists: 
>> https://neilmadden.blog/2020/12/10/xss-doesnt-have-to-be-game-over/
>> 
>> — Neil
>> 
>>> On 12 Dec 2020, at 00:02, Brian Campbell 
>>>  wrote:
>>> 
>>> I think that puts Jim in the XSS Nihilism camp :) 
>>> 
>>> Implicit type flows are being deprecated/discouraged. But keeping tokens 
>>> out of browsers doesn't seem likely. There is some mention of CSP in 
>>> https://tools.ietf.org/html/draft-ietf-oauth-browser-based-apps-07#section-9.7
>>>  
>>> 
>>> On Wed, Dec 9, 2020 at 4:10 PM Jim Manico  wrote:
 The 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-12 Thread Torsten Lodderstedt
Thanks for sharing, Neil!

I‘ve got some questions:
Note: I assume the tokens you are referring in your article are OAuth access 
tokens.
- carrying tokens in URLs was considered bad practice by the Security BCP and 
OAuth 2.1 due to leakage via Referer headers and so on. Why isn’t this an 
issue with your approach?
- generating (self contained) or using (handles) per URL access tokens might be 
rather expensive. Can you sketch out how you wanna cope with that challenge?
- per-URL access tokens are a very rigorous form of audience restriction. How 
do you wanna signal the audience to the AS?

best regards,
Torsten.

> On 12.12.2020 at 08:26, Neil Madden wrote:
> 
> 
> Not directly related to DPoP or OAuth, but I wrote some notes to help 
> recovering XSS Nihilists: 
> https://neilmadden.blog/2020/12/10/xss-doesnt-have-to-be-game-over/
> 
> — Neil
> 
>>> On 12 Dec 2020, at 00:02, Brian Campbell 
>>>  wrote:
>>> 
>> 
>> I think that puts Jim in the XSS Nihilism camp :) 
>> 
>> Implicit type flows are being deprecated/discouraged. But keeping tokens out 
>> of browsers doesn't seem likely. There is some mention of CSP in 
>> https://tools.ietf.org/html/draft-ietf-oauth-browser-based-apps-07#section-9.7
>>  
>> 
>>> On Wed, Dec 9, 2020 at 4:10 PM Jim Manico  wrote:
>>> The basic theme from the web attacker community is:
>>> 
>>> 1) XSS is a game over event to web clients. XSS can steal or abuse (request 
>>> forgery) tokens, and more.
>>> 
>>> 2) Even if you prevent stolen tokens from being used outside of a web 
>>> client, XSS still allows the attacker to force a user to make any request 
>>> in a fraudulent way, abusing browser based tokens as a form of request 
>>> forgery.
>>> 
>>> 3) There are advanced measures to stop a token from being stolen from a web 
>>> client, like HttpOnly cookies and, to a lesser degree, JS Closures and 
>>> Webworkers. 
>>> 
>>> 4) However, these measures to protect cookies are mostly moot. Attackers 
>>> can just force clients to make fraudulent requests.
>>> 
>>> 5) Many recommend the BFF pattern to hide tokens on the back end, but 
>>> still, request forgery via XSS allows all kinds of abuse.
>>> 
>>> XSS is game over no matter how you slice it.
>>> 
>>> Crypto solutions do not help. Perhaps the world of OAuth can start 
>>> suggesting that web clients use CSP 3.0 in specific ways, if you still plan 
>>> to support Implicit type flows or tokens in browsers?
>>> 
>>> Respectfully,
>>> 
>>> - Jim
>>> 
>>> 
>>> 
>>> On 12/9/20 12:57 PM, Brian Campbell wrote:
 Thanks Philippe, I very much concur with your line of reasoning and the 
 important considerations. The scenario I was thinking of is: browser based 
 client where XSS is used to exfiltrate the refresh token along with 
 pre-computed proofs that would allow for the RT to be exchanged for new 
 access tokens and also pre-computed proofs that would work with those 
 access tokens for resource access. With the pre-computed proofs that would 
 allow prolonged (as long as the RT is valid) access to protected resources 
 even when the victim is offline. Is that a concrete attack scenario? I 
 mean, kind of. It's pretty convoluted/complex. And while an access token 
 hash would rein it in somewhat (ATs obtained from the stolen RT wouldn't 
 be usable) it's hard to say if the cost is worth the benefit.
 
 
 
 On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck 
  wrote:
> Yeah, browser-based apps are pure fun, aren’t they? :)
> 
> The reason I covered a couple of (pessimistic) XSS scenarios is that the 
> discussion started with an assumption that the attacker already 
> successfully exploited an XSS vulnerability. I pointed out how, at that 
> point, finetuning DPoP proof contents will have little to no effect to 
> stop an attack. I believe it is important to make this very clear, to 
> avoid people turning to DPoP as a security mechanism for browser-based 
> applications.
> 
> 
> Specifically to your question on including the hash in the proof, I think 
> these considerations are important:
> 
> 1. Does the inclusion of the AT hash stop a concrete attack scenario?
> 2. Is the “cost” (implementation, getting it right, …) worth the benefits?
> 
> 
> Here’s my view on these considerations (specifically for browser-based 
> apps, not for other types of applications):
> 
> 1. The proof precomputation attack is already quite complex, and short 
> access token lifetimes already reduce the window of attack. If the 
> attacker can steal a future AT, they could also precompute new proofs 
> then. 
> 2. For browser-based apps, it seems that doing this complicates the 
> implementation, without adding much benefit. Of course, libraries could 
> handle this, which significantly reduces the cost. 
> 
> 
> Note that these comments are specifically to 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-12 Thread Vladimir Dzhuvinov
If the current DPoP has code complexity "X", the relative additional
complexity to include access token hashes doesn't seem like very much.
An app choosing DPoP means accepting the code complexity that comes with
dealing with keys, composing the signing inputs for the proofs, signing,
the necessary changes to the token and RS requests. On the other hand,
for some people that additional access token hash may become the straw
that breaks the camel's back, causing them to quit their jobs developing
web apps and never look back :)

Have you thought about letting deployments decide about the access token
hash? To say look, there is also the option to bind an access token to
the DPoP proof, the security benefits can be such and such, and this is
how it can be done.

What I don't like about that proposal:

  * It will complicate the spec

  * The current spec doesn't require implementers / deployments to make
any decisions, apart from adopt / not DPoP (okay, also choose a JWS
alg) - which is actually a great feature to have


Vladimir


On 12/12/2020 01:58, Brian Campbell wrote:
> Any type of client could use DPoP and (presumably) benefit from
> sender-constrained access tokens. So yeah, adding complexity
> specifically for browser-based applications (that only mitigates one
> variation of the attacks possible with XSS anyway)  has 'cost' impact
> to those clients as well. And should be considered in the
> cost/benefit. Including the AT hash isn't terribly complicated but
> it's not trivial either. I'm honestly still unsure but am leaning
> towards it not being worth adding.
>
> On Fri, Dec 11, 2020 at 2:14 AM Philippe De Ryck
>  > wrote:
>
> The scenario you describe here is realistic in browser-based apps
> with XSS vulnerabilities, but it is pretty complex. Since there
> are worse problems when XSS happens, it’s hard to say whether DPoP
> should mitigate this. 
>
> I’m wondering what other types of clients would benefit from using
> DPoP for access tokens? Mobile apps? Clients using a Client
> Credentials grant?
>
> How are they impacted by any change made specifically for
> browser-based applications?
>
> Philippe
>
>
>> On 9 Dec 2020, at 23:57, Brian Campbell
>> mailto:bcampb...@pingidentity.com>>
>> wrote:
>>
>> Thanks Philippe, I very much concur with your line of reasoning
>> and the important considerations. The scenario I was thinking of
>> is: browser based client where XSS is used to exfiltrate the
>> refresh token along with pre-computed proofs that would allow for
>> the RT to be exchanged for new access tokens and also
>> pre-computed proofs that would work with those access tokens for
>> resource access. With the pre-computed proofs that would allow
>> prolonged (as long as the RT is valid) access to protected
>> resources even when the victim is offline. Is that a concrete
>> attack scenario? I mean, kind of. It's pretty convoluted/complex.
>> And while an access token hash would rein it in somewhat (ATs
>> obtained from the stolen RT wouldn't be usable) it's hard to say
>> if the cost is worth the benefit.
>>
>>
>>
>> On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck
>> > > wrote:
>>
>> Yeah, browser-based apps are pure fun, aren’t they? :)
>>
>> The reason I covered a couple of (pessimistic) XSS scenarios
>> is that the discussion started with an assumption that the
>> attacker already successfully exploited an XSS vulnerability.
>> I pointed out how, at that point, finetuning DPoP proof
>> contents will have little to no effect to stop an attack. I
>> believe it is important to make this very clear, to avoid
>> people turning to DPoP as a security mechanism for
>> browser-based applications.
>>
>>
>> Specifically to your question on including the hash in the
>> proof, I think these considerations are important:
>>
>> 1. Does the inclusion of the AT hash stop a concrete attack
>> scenario?
>> 2. Is the “cost” (implementation, getting it right, …) worth
>> the benefits?
>>
>>
>> Here’s my view on these considerations (specifically for
>> browser-based apps, not for other types of applications):
>>
>> 1. The proof precomputation attack is already quite complex,
>> and short access token lifetimes already reduce the window of
>> attack. If the attacker can steal a future AT, they could
>> also precompute new proofs then. 
>> 2. For browser-based apps, it seems that doing this
>> complicates the implementation, without adding much benefit.
>> Of course, libraries could handle this, which significantly
>> reduces the cost. 
>>
>>
>> Note that these comments are specifically to 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-11 Thread Neil Madden
Not directly related to DPoP or OAuth, but I wrote some notes to help 
recovering XSS Nihilists: 
https://neilmadden.blog/2020/12/10/xss-doesnt-have-to-be-game-over/

— Neil

> On 12 Dec 2020, at 00:02, Brian Campbell 
>  wrote:
> 
> 
> I think that puts Jim in the XSS Nihilism camp :) 
> 
> Implicit type flows are being deprecated/discouraged. But keeping tokens out 
> of browsers doesn't seem likely. There is some mention of CSP in 
> https://tools.ietf.org/html/draft-ietf-oauth-browser-based-apps-07#section-9.7
>  
> 
>> On Wed, Dec 9, 2020 at 4:10 PM Jim Manico  wrote:
>> The basic theme from the web attacker community is:
>> 
>> 1) XSS is a game over event to web clients. XSS can steal or abuse (request 
>> forgery) tokens, and more.
>> 
>> 2) Even if you prevent stolen tokens from being used outside of a web 
>> client, XSS still allows the attacker to force a user to make any request in 
>> a fraudulent way, abusing browser based tokens as a form of request forgery.
>> 
>> 3) There are advanced measures to stop a token from being stolen from a web 
>> client, like HttpOnly cookies and, to a lesser degree, JS Closures and 
>> Webworkers. 
>> 
>> 4) However, these measures to protect cookies are mostly moot. Attackers can 
>> just force clients to make fraudulent requests.
>> 
>> 5) Many recommend the BFF pattern to hide tokens on the back end, but still, 
>> request forgery via XSS allows all kinds of abuse.
>> 
>> XSS is game over no matter how you slice it.
>> 
>> Crypto solutions do not help. Perhaps the world of OAuth can start 
>> suggesting that web clients use CSP 3.0 in specific ways, if you still plan 
>> to support Implicit type flows or tokens in browsers?
>> 
>> Respectfully,
>> 
>> - Jim
>> 
>> 
>> 
>> On 12/9/20 12:57 PM, Brian Campbell wrote:
>>> Thanks Philippe, I very much concur with your line of reasoning and the 
>>> important considerations. The scenario I was thinking of is: browser based 
>>> client where XSS is used to exfiltrate the refresh token along with 
>>> pre-computed proofs that would allow for the RT to be exchanged for new 
>>> access tokens and also pre-computed proofs that would work with those 
>>> access tokens for resource access. With the pre-computed proofs that would 
>>> allow prolonged (as long as the RT is valid) access to protected resources 
>>> even when the victim is offline. Is that a concrete attack scenario? I 
>>> mean, kind of. It's pretty convoluted/complex. And while an access token 
>>> hash would rein it in somewhat (ATs obtained from the stolen RT wouldn't 
>>> be usable) it's hard to say if the cost is worth the benefit.
>>> 
>>> 
>>> 
>>> On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck 
>>>  wrote:
 Yeah, browser-based apps are pure fun, aren’t they? :)
 
 The reason I covered a couple of (pessimistic) XSS scenarios is that the 
 discussion started with an assumption that the attacker already 
 successfully exploited an XSS vulnerability. I pointed out how, at that 
 point, finetuning DPoP proof contents will have little to no effect to 
 stop an attack. I believe it is important to make this very clear, to 
 avoid people turning to DPoP as a security mechanism for browser-based 
 applications.
 
 
 Specifically to your question on including the hash in the proof, I think 
 these considerations are important:
 
 1. Does the inclusion of the AT hash stop a concrete attack scenario?
 2. Is the “cost” (implementation, getting it right, …) worth the benefits?
 
 
 Here’s my view on these considerations (specifically for browser-based 
 apps, not for other types of applications):
 
 1. The proof precomputation attack is already quite complex, and short 
 access token lifetimes already reduce the window of attack. If the 
 attacker can steal a future AT, they could also precompute new proofs 
 then. 
 2. For browser-based apps, it seems that doing this complicates the 
 implementation, without adding much benefit. Of course, libraries could 
 handle this, which significantly reduces the cost. 
 
 
 Note that these comments are specifically to complicating the spec and 
 implementation. DPoP’s capabilities of using sender-constrained access 
 tokens are still useful to counter various other scenarios (e.g., 
 middleboxes or APIs abusing access tokens). If other applications would 
 significantly benefit from having the hash in the proof, I’m all for it.
 
 On a final note, I would be happy to help clear up the details on 
 web-based threats and defenses if necessary.
 
 —
 Pragmatic Web Security
 Security for developers
 https://pragmaticwebsecurity.com/
 
 
> On 8 Dec 2020, at 22:47, Brian Campbell  
> wrote:
> 
> Daniel recently added some text to the working copy of the draft with 
> https://github.com/danielfett/draft-dpop/commit/f4b42058 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-11 Thread Brian Campbell
I think that puts Jim in the XSS Nihilism camp :)

Implicit type flows are being deprecated/discouraged. But keeping tokens
out of browsers doesn't seem likely. There is some mention of CSP in
https://tools.ietf.org/html/draft-ietf-oauth-browser-based-apps-07#section-9.7

On Wed, Dec 9, 2020 at 4:10 PM Jim Manico  wrote:

> The basic theme from the web attacker community is:
>
> 1) XSS is a game over event to web clients. XSS can steal or abuse
> (request forgery) tokens, and more.
>
> 2) Even if you prevent stolen tokens from being used outside of a web
> client, XSS still allows the attacker to force a user to make any request
> in a fraudulent way, abusing browser based tokens as a form of request
> forgery.
>
> 3) There are advanced measures to stop a token from being stolen from a
> web client, like HttpOnly cookies and, to a lesser degree, JS Closures and
> Webworkers.
>
> 4) However, these measures to protect cookies are mostly moot. Attackers
> can just force clients to make fraudulent requests.
>
> 5) Many recommend the BFF pattern to hide tokens on the back end, but
> still, request forgery via XSS allows all kinds of abuse.
>
> XSS is game over no matter how you slice it.
>
> Crypto solutions do not help. Perhaps the world of OAuth can start
> suggesting that web clients use CSP 3.0 in specific ways, if you still plan
> to support Implicit type flows or tokens in browsers?
>
> Respectfully,
>
> - Jim
>
>
> On 12/9/20 12:57 PM, Brian Campbell wrote:
>
> Thanks Philippe, I very much concur with your line of reasoning and the
> important considerations. The scenario I was thinking of is: browser based
> client where XSS is used to exfiltrate the refresh token along with
> pre-computed proofs that would allow for the RT to be exchanged for new
> access tokens and also pre-computed proofs that would work with those
> access tokens for resource access. With the pre-computed proofs that would
> allow prolonged (as long as the RT is valid) access to protected resources
> even when the victim is offline. Is that a concrete attack scenario? I
> mean, kind of. It's pretty convoluted/complex. And while an access token
> hash would rein it in somewhat (ATs obtained from the stolen RT wouldn't
> be usable) it's hard to say if the cost is worth the benefit.
>
>
>
> On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck <
> phili...@pragmaticwebsecurity.com> wrote:
>
>> Yeah, browser-based apps are pure fun, aren’t they? :)
>>
>> The reason I covered a couple of (pessimistic) XSS scenarios is that the
>> discussion started with an assumption that the attacker already
>> successfully exploited an XSS vulnerability. I pointed out how, at that
>> point, finetuning DPoP proof contents will have little to no effect to stop
>> an attack. I believe it is important to make this very clear, to avoid
>> people turning to DPoP as a security mechanism for browser-based
>> applications.
>>
>>
>> Specifically to your question on including the hash in the proof, I think
>> these considerations are important:
>>
>> 1. Does the inclusion of the AT hash stop a concrete attack scenario?
>> 2. Is the “cost” (implementation, getting it right, …) worth the benefits?
>>
>>
>> Here’s my view on these considerations (*specifically for browser-based
>> apps, not for other types of applications*):
>>
>> 1. The proof precomputation attack is already quite complex, and short
>> access token lifetimes already reduce the window of attack. If the attacker
>> can steal a future AT, they could also precompute new proofs then.
>> 2. For browser-based apps, it seems that doing this complicates the
>> implementation, without adding much benefit. Of course, libraries could
>> handle this, which significantly reduces the cost.
>>
>>
>> Note that these comments are specifically to complicating the spec and
>> implementation. DPoP’s capabilities of using sender-constrained access
>> tokens are still useful to counter various other scenarios (e.g.,
>> middleboxes or APIs abusing access tokens). If other applications would
>> significantly benefit from having the hash in the proof, I’m all for it.
>>
>> On a final note, I would be happy to help clear up the details on
>> web-based threats and defenses if necessary.
>>
>> —
>> *Pragmatic Web Security*
>> *Security for developers*
>> https://pragmaticwebsecurity.com/
>>
>>
>> On 8 Dec 2020, at 22:47, Brian Campbell 
>> wrote:
>>
>> Daniel recently added some text to the working copy of the draft with
>> https://github.com/danielfett/draft-dpop/commit/f4b42058 that I think
>> aims to better convey the "nutshell: XSS = Game over" sentiment and maybe
>> dissuade folks from looking to DPoP as a cure-all for browser based
>> applications. Admittedly a lot of the initial impetus behind producing the
>> draft in the first place was born out of discussions around browser based
>> apps. But it's neither specific to browser based apps nor a panacea for
>> them. I hope the language in the document and how it's recently 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-11 Thread Brian Campbell
Any type of client could use DPoP and (presumably) benefit from
sender-constrained access tokens. So yeah, adding complexity specifically
for browser-based applications (that only mitigates one variation of the
attacks possible with XSS anyway)  has 'cost' impact to those clients as
well. And should be considered in the cost/benefit. Including the AT hash
isn't terribly complicated but it's not trivial either. I'm honestly still
unsure but am leaning towards it not being worth adding.

On Fri, Dec 11, 2020 at 2:14 AM Philippe De Ryck <
phili...@pragmaticwebsecurity.com> wrote:

> The scenario you describe here is realistic in browser-based apps with XSS
> vulnerabilities, but it is pretty complex. Since there are worse problems
> when XSS happens, it’s hard to say whether DPoP should mitigate this.
>
> I’m wondering what other types of clients would benefit from using DPoP
> for access tokens? Mobile apps? Clients using a Client Credentials grant?
>
> How are they impacted by any change made specifically for browser-based
> applications?
>
> Philippe
>
>
> On 9 Dec 2020, at 23:57, Brian Campbell 
> wrote:
>
> Thanks Philippe, I very much concur with your line of reasoning and the
> important considerations. The scenario I was thinking of is: browser based
> client where XSS is used to exfiltrate the refresh token along with
> pre-computed proofs that would allow for the RT to be exchanged for new
> access tokens and also pre-computed proofs that would work with those
> access tokens for resource access. With the pre-computed proofs that would
> allow prolonged (as long as the RT is valid) access to protected resources
> even when the victim is offline. Is that a concrete attack scenario? I
> mean, kind of. It's pretty convoluted/complex. And while an access token
> hash would rein it in somewhat (ATs obtained from the stolen RT wouldn't
> be usable) it's hard to say if the cost is worth the benefit.
>
>
>
> On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck <
> phili...@pragmaticwebsecurity.com> wrote:
>
>> Yeah, browser-based apps are pure fun, aren’t they? :)
>>
>> The reason I covered a couple of (pessimistic) XSS scenarios is that the
>> discussion started with an assumption that the attacker already
>> successfully exploited an XSS vulnerability. I pointed out how, at that
>> point, finetuning DPoP proof contents will have little to no effect to stop
>> an attack. I believe it is important to make this very clear, to avoid
>> people turning to DPoP as a security mechanism for browser-based
>> applications.
>>
>>
>> Specifically to your question on including the hash in the proof, I think
>> these considerations are important:
>>
>> 1. Does the inclusion of the AT hash stop a concrete attack scenario?
>> 2. Is the “cost” (implementation, getting it right, …) worth the benefits?
>>
>>
>> Here’s my view on these considerations (*specifically for browser-based
>> apps, not for other types of applications*):
>>
>> 1. The proof precomputation attack is already quite complex, and short
>> access token lifetimes already reduce the window of attack. If the attacker
>> can steal a future AT, they could also precompute new proofs then.
>> 2. For browser-based apps, it seems that doing this complicates the
>> implementation, without adding much benefit. Of course, libraries could
>> handle this, which significantly reduces the cost.
>>
>>
>> Note that these comments are specifically to complicating the spec and
>> implementation. DPoP’s capabilities of using sender-constrained access
>> tokens are still useful to counter various other scenarios (e.g.,
>> middleboxes or APIs abusing access tokens). If other applications would
>> significantly benefit from having the hash in the proof, I’m all for it.
>>
>> On a final note, I would be happy to help clear up the details on
>> web-based threats and defenses if necessary.
>>
>> —
>> *Pragmatic Web Security*
>> *Security for developers*
>> https://pragmaticwebsecurity.com/
>>
>>
>> On 8 Dec 2020, at 22:47, Brian Campbell 
>> wrote:
>>
>> Daniel recently added some text to the working copy of the draft with
>> https://github.com/danielfett/draft-dpop/commit/f4b42058 that I think
>> aims to better convey the "nutshell: XSS = Game over" sentiment and maybe
>> dissuade folks from looking to DPoP as a cure-all for browser based
>> applications. Admittedly a lot of the initial impetus behind producing the
>> draft in the first place was born out of discussions around browser based
>> apps. But it's neither specific to browser based apps nor a panacea for
>> them. I hope the language in the document and how it's recently been
>> presented is reflective of that reality.
>>
>> The more specific discussions/recommendations around in-browser apps are
>> valuable (if somewhat over my head) but might be more appropriate in the
>> OAuth 2.0 for Browser-Based Apps draft.
>>
>> With respect to the 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-11 Thread Philippe De Ryck
The scenario you describe here is realistic in browser-based apps with XSS 
vulnerabilities, but it is pretty complex. Since there are worse problems when 
XSS happens, it’s hard to say whether DPoP should mitigate this. 

I’m wondering what other types of clients would benefit from using DPoP for 
access tokens? Mobile apps? Clients using a Client Credentials grant?

How are they impacted by any change made specifically for browser-based 
applications?

Philippe


> On 9 Dec 2020, at 23:57, Brian Campbell  wrote:
> 
> Thanks Philippe, I very much concur with your line of reasoning and the 
> important considerations. The scenario I was thinking of is: browser based 
> client where XSS is used to exfiltrate the refresh token along with 
> pre-computed proofs that would allow for the RT to be exchanged for new 
> access tokens and also pre-computed proofs that would work with those access 
> tokens for resource access. With the pre-computed proofs that would allow 
> prolonged (as long as the RT is valid) access to protected resources even 
> when the victim is offline. Is that a concrete attack scenario? I mean, kind 
> of. It's pretty convoluted/complex. And while an access token hash would 
> rein it in somewhat (ATs obtained from the stolen RT wouldn't be usable) 
> it's hard to say if the cost is worth the benefit.
> 
> 
> 
> On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck 
>  > wrote:
> Yeah, browser-based apps are pure fun, aren’t they? :)
> 
> The reason I covered a couple of (pessimistic) XSS scenarios is that the 
> discussion started with an assumption that the attacker already successfully 
> exploited an XSS vulnerability. I pointed out how, at that point, finetuning 
> DPoP proof contents will have little to no effect to stop an attack. I 
> believe it is important to make this very clear, to avoid people turning to 
> DPoP as a security mechanism for browser-based applications.
> 
> 
> Specifically to your question on including the hash in the proof, I think 
> these considerations are important:
> 
> 1. Does the inclusion of the AT hash stop a concrete attack scenario?
> 2. Is the “cost” (implementation, getting it right, …) worth the benefits?
> 
> 
> Here’s my view on these considerations (specifically for browser-based apps, 
> not for other types of applications):
> 
> 1. The proof precomputation attack is already quite complex, and short access 
> token lifetimes already reduce the window of attack. If the attacker can 
> steal a future AT, they could also precompute new proofs then. 
> 2. For browser-based apps, it seems that doing this complicates the 
> implementation, without adding much benefit. Of course, libraries could 
> handle this, which significantly reduces the cost. 
> 
> 
> Note that these comments specifically concern complicating the spec and 
> implementation. DPoP’s capabilities of using sender-constrained access tokens 
> are still useful to counter various other scenarios (e.g., middleboxes or 
> APIs abusing access tokens). If other applications would significantly 
> benefit from having the hash in the proof, I’m all for it.
> 
> On a final note, I would be happy to help clear up the details on web-based 
> threats and defenses if necessary.
> 
> —
> Pragmatic Web Security
> Security for developers
> https://pragmaticwebsecurity.com/ 
> 
> 
>> On 8 Dec 2020, at 22:47, Brian Campbell <bcampb...@pingidentity.com> wrote:
>> 
>> Daniel recently added some text to the working copy of the draft with 
>> https://github.com/danielfett/draft-dpop/commit/f4b42058 
>>  that I think aims 
>> to better convey the "nutshell: XSS = Game over" sentiment and maybe 
>> dissuade folks from looking to DPoP as a cure-all for browser based 
>> applications. Admittedly a lot of the initial impetus behind producing the 
>> draft in the first place was born out of discussions around browser based 
>> apps. But it's neither specific to browser based apps nor a panacea for 
>> them. I hope the language in the document and how it's recently been 
>> presented is reflective of that reality. 
>> 
>> The more specific discussions/recommendations around in-browser apps are 
>> valuable (if somewhat over my head) but might be more appropriate in the 
>> OAuth 2.0 for Browser-Based Apps draft.
>> 
>> With respect to the contents of the DPoP draft, I am still keen to try and 
>> flesh out some consensus around the question posed at the start of this 
>> thread, which is effectively whether or not to include a hash of the access 
>> token in the proof.  Acknowledging that "XSS = Game over" does sort of evoke 
>> a tendency to not even bother with such incremental protections (what I've 
>> tried to humorously coin as "XSS Nihilism" with no success). And as such, I 
>> do think 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-09 Thread Jim Manico

The basic theme from the web attacker community is:

1) XSS is a game over event to web clients. XSS can steal or abuse 
(request forgery) tokens, and more.


2) Even if you prevent stolen tokens from being used outside of a web 
client, XSS still allows the attacker to force a user to make any 
request in a fraudulent way, abusing browser based tokens as a form of 
request forgery.


3) There are advanced measures to stop a token from being stolen from a 
web client, like HttpOnly cookies and, to a lesser degree, JS closures 
and Web Workers.


4) However, these measures to protect cookies are mostly moot. Attackers 
can just force clients to make fraudulent requests.


5) Many recommend the BFF pattern to hide tokens on the back end, but 
still, request forgery via XSS allows all kinds of abuse.


XSS is game over no matter how you slice it.

Crypto solutions do not help. Perhaps the world of OAuth can start 
suggesting that web clients use CSP 3.0 in specific ways, if you still 
plan to support Implicit type flows or tokens in browsers?


Respectfully,

- Jim


On 12/9/20 12:57 PM, Brian Campbell wrote:
Thanks Philippe, I very much concur with your line of reasoning and 
the important considerations. The scenario I was thinking of is: 
browser based client where XSS is used to exfiltrate the refresh token 
along with pre-computed proofs that would allow for the RT to be 
exchanged for new access tokens and also pre-computed proofs that 
would work with those access tokens for resource access. With the 
pre-computed proofs that would allow prolonged (as long as the RT is 
valid) access to protected resources even when the victim is offline. 
Is that a concrete attack scenario? I mean, kind of. It's pretty 
convoluted/complex. And while an access token hash would rein it in 
somewhat (ATs obtained from the stolen RT wouldn't be usable) it's 
hard to say if the cost is worth the benefit.




On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck <phili...@pragmaticwebsecurity.com> wrote:


Yeah, browser-based apps are pure fun, aren’t they? :)

The reason I covered a couple of (pessimistic) XSS scenarios is
that the discussion started with an assumption that the attacker
already successfully exploited an XSS vulnerability. I pointed out
how, at that point, fine-tuning DPoP proof contents will have
little to no effect to stop an attack. I believe it is important
to make this very clear, to avoid people turning to DPoP as a
security mechanism for browser-based applications.


Specifically to your question on including the hash in the proof,
I think these considerations are important:

1. Does the inclusion of the AT hash stop a concrete attack scenario?
2. Is the “cost” (implementation, getting it right, …) worth the
benefits?


Here’s my view on these considerations (specifically for
browser-based apps, not for other types of applications):

1. The proof precomputation attack is already quite complex, and
short access token lifetimes already reduce the window of attack.
If the attacker can steal a future AT, they could also precompute
new proofs then.
2. For browser-based apps, it seems that doing this complicates
the implementation, without adding much benefit. Of course,
libraries could handle this, which significantly reduces the cost.


Note that these comments specifically concern complicating the spec
and implementation. DPoP’s capabilities of using
sender-constrained access tokens are still useful to counter
various other scenarios (e.g., middleboxes or APIs abusing access
tokens). If other applications would significantly benefit from
having the hash in the proof, I’m all for it.

On a final note, I would be happy to help clear up the details on
web-based threats and defenses if necessary.

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com/ 



On 8 Dec 2020, at 22:47, Brian Campbell <bcampb...@pingidentity.com> wrote:

Daniel recently added some text to the working copy of the draft
with https://github.com/danielfett/draft-dpop/commit/f4b42058
 that I
think aims to better convey the "nutshell: XSS = Game over"
sentiment and maybe dissuade folks from looking to DPoP as a
cure-all for browser based applications. Admittedly a lot of the
initial impetus behind producing the draft in the first place was
born out of discussions around browser based apps. But it's
neither specific to browser based apps nor a panacea for them. I
hope the language in the document and how it's recently been
presented is reflective of that reality.

The more specific discussions/recommendations around in-browser
apps are valuable (if somewhat over my head) but might be more

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-09 Thread Brian Campbell
Thanks Philippe, I very much concur with your line of reasoning and the
important considerations. The scenario I was thinking of is: browser based
client where XSS is used to exfiltrate the refresh token along with
pre-computed proofs that would allow for the RT to be exchanged for new
access tokens and also pre-computed proofs that would work with those
access tokens for resource access. With the pre-computed proofs that would
allow prolonged (as long as the RT is valid) access to protected resources
even when the victim is offline. Is that a concrete attack scenario? I
mean, kind of. It's pretty convoluted/complex. And while an access token
hash would rein it in somewhat (ATs obtained from the stolen RT wouldn't
be usable) it's hard to say if the cost is worth the benefit.



On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck <
phili...@pragmaticwebsecurity.com> wrote:

> Yeah, browser-based apps are pure fun, aren’t they? :)
>
> The reason I covered a couple of (pessimistic) XSS scenarios is that the
> discussion started with an assumption that the attacker already
> successfully exploited an XSS vulnerability. I pointed out how, at that
> point, fine-tuning DPoP proof contents will have little to no effect to stop
> an attack. I believe it is important to make this very clear, to avoid
> people turning to DPoP as a security mechanism for browser-based
> applications.
>
>
> Specifically to your question on including the hash in the proof, I think
> these considerations are important:
>
> 1. Does the inclusion of the AT hash stop a concrete attack scenario?
> 2. Is the “cost” (implementation, getting it right, …) worth the benefits?
>
>
> Here’s my view on these considerations (specifically for browser-based
> apps, not for other types of applications):
>
> 1. The proof precomputation attack is already quite complex, and short
> access token lifetimes already reduce the window of attack. If the attacker
> can steal a future AT, they could also precompute new proofs then.
> 2. For browser-based apps, it seems that doing this complicates the
> implementation, without adding much benefit. Of course, libraries could
> handle this, which significantly reduces the cost.
>
>
> Note that these comments specifically concern complicating the spec and
> implementation. DPoP’s capabilities of using sender-constrained access
> tokens are still useful to counter various other scenarios (e.g.,
> middleboxes or APIs abusing access tokens). If other applications would
> significantly benefit from having the hash in the proof, I’m all for it.
>
> On a final note, I would be happy to help clear up the details on
> web-based threats and defenses if necessary.
>
> —
> Pragmatic Web Security
> Security for developers
> https://pragmaticwebsecurity.com/
>
>
> On 8 Dec 2020, at 22:47, Brian Campbell 
> wrote:
>
> Daniel recently added some text to the working copy of the draft with
> https://github.com/danielfett/draft-dpop/commit/f4b42058 that I think
> aims to better convey the "nutshell: XSS = Game over" sentiment and maybe
> dissuade folks from looking to DPoP as a cure-all for browser based
> applications. Admittedly a lot of the initial impetus behind producing the
> draft in the first place was born out of discussions around browser based
> apps. But it's neither specific to browser based apps nor a panacea for
> them. I hope the language in the document and how it's recently been
> presented is reflective of that reality.
>
> The more specific discussions/recommendations around in-browser apps are
> valuable (if somewhat over my head) but might be more appropriate in the OAuth
> 2.0 for Browser-Based Apps draft.
>
> With respect to the contents of the DPoP draft, I am still keen to try and
> flesh out some consensus around the question posed at the start of this
> thread, which is effectively whether or not to include a hash of the access
> token in the proof.  Acknowledging that "XSS = Game over" does sort of
> evoke a tendency to not even bother with such incremental protections (what
> I've tried to humorously coin as "XSS Nihilism" with no success). And as
> such, I do think that leaving it how it is (no AT hash in the proof) is not
> unreasonable. But, as Filip previously articulated, including the AT hash
> in the proof would prevent potentially prolonged access to protected
> resources even when the victim is offline. And that seems maybe worthwhile
> to have in the protocol, given that it's not a huge change to the spec. But
> it's a trade-off either way and I'm personally on the fence about it.
>
> Including an RT hash in the proof seems more niche. Best I can tell, it
> would guard against prolonged offline access to protected resources when
> access tokens are bearer and the RT was DPoP-bound and also gets rotated.
> The trade-off there seems less worth it (I think an RT hash would be more
> awkward in the protocol too).
>
>
>
>
>

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-09 Thread Vladimir Dzhuvinov
Do we have deployments in the field and client-side developers giving
feedback / comments about the current DPoP, implementing it, and perhaps
those concerns about the access token?

Vladimir

On 08/12/2020 23:47, Brian Campbell wrote:
> Daniel recently added some text to the working copy of the draft with
> https://github.com/danielfett/draft-dpop/commit/f4b42058 that I think
> aims to better convey the "nutshell: XSS = Game over" sentiment and
> maybe dissuade folks from looking to DPoP as a cure-all for browser
> based applications. Admittedly a lot of the initial impetus behind
> producing the draft in the first place was born out of discussions
> around browser based apps. But it's neither specific to browser based
> apps nor a panacea for them. I hope the language in the document and
> how it's recently been presented is reflective of that reality.
>
> The more specific discussions/recommendations around in-browser apps
> are valuable (if somewhat over my head) but might be more appropriate
> in the OAuth 2.0 for Browser-Based Apps draft.
>
> With respect to the contents of the DPoP draft, I am still keen to try
> and flesh out some consensus around the question posed at the start of
> this thread, which is effectively whether or not to include a hash of
> the access token in the proof.  Acknowledging that "XSS = Game over"
> does sort of evoke a tendency to not even bother with such incremental
> protections (what I've tried to humorously coin as "XSS Nihilism" with
> no success). And as such, I do think that leaving it how it is (no AT
> hash in the proof) is not unreasonable. But, as Filip previously
> articulated, including the AT hash in the proof would prevent
> potentially prolonged access to protected resources even when the
> victim is offline. And that seems maybe worthwhile to have in the
> protocol, given that it's not a huge change to the spec. But it's a
> trade-off either way and I'm personally on the fence about it.
>
> Including an RT hash in the proof seems more niche. Best I can tell,
> it would guard against prolonged offline access to protected resources
> when access tokens are bearer and the RT was DPoP-bound and also gets
> rotated. The trade-off there seems less worth it (I think an RT hash
> would be more awkward in the protocol too).
>
>
>
>
>
>
>
> On Fri, Dec 4, 2020 at 5:40 AM Philippe De Ryck <phili...@pragmaticwebsecurity.com> wrote:
>
>
>> The suggestion to use a web worker to ensure that proofs cannot
>> be pre-computed is a good one I think. (You could also use a
>> sandboxed iframe for a separate sub/sibling-domain -
>> dpop.example.com).
>
> An iframe with a different origin would also work (not really
> sandboxing, as that implies the use of the sandbox attribute to
> enforce behavioral restrictions). The downside of an iframe is the
> need to host additional HTML, vs a script file for the worker, but
> the effect is indeed the same.
>
>> For scenario 4, I think this only works if the attacker can
>> trick/spoof the AS into using their redirect_uri? Otherwise the
>> AC will go to the legitimate app which will reject it due to
>> mismatched state/PKCE. Or are you thinking of XSS on the
>> redirect_uri itself? I think probably a good practice is that the
>> target of a redirect_uri should be a very minimal and locked down
>> page to avoid this kind of possibility. (Again, using a separate
>> sub-domain to handle tokens and DPoP seems like a good idea).
>
> My original thought was to use a silent flow with Web Messaging.
> The scenario would go as follows:
>
> 1. Set up a Web Messaging listener to receive the incoming code
> 2. Create a hidden iframe with the DOM APIs
> 3. Create an authorization request such as
> 
> “/authorize?response_type=code&client_id=...&redirect_uri=https%3A%2F%2Fexample.com&state=...&code_challenge=7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk&code_challenge_method=S256&prompt=none&response_mode=web_message”
> 4. Load this URL in the iframe, and wait for the result
> 5. Retrieve code in the listener, and use PKCE (+ DPoP if needed)
> to exchange it for tokens
>
> This puts the attacker in full control over every aspect of the
> flow, so no need to manipulate any of the parameters.
>
>
> After your comment, I also believe an attacker can run the same
> scenario without the “response_mode=web_message”. This would go
> as follows:
>
> 1. Create a hidden iframe with the DOM APIs
> 2. Set up polling to read the URL (this will be possible for
> same-origin pages, not for cross-origin pages)
> 3. Create an authorization request such as
> 
> “/authorize?response_type=code&client_id=...&redirect_uri=https%3A%2F%2Fexample.com&state=...&code_challenge=7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk&code_challenge_method=S256”

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-08 Thread Philippe De Ryck
Yeah, browser-based apps are pure fun, aren’t they? :)

The reason I covered a couple of (pessimistic) XSS scenarios is that the 
discussion started with an assumption that the attacker already successfully 
exploited an XSS vulnerability. I pointed out how, at that point, fine-tuning 
DPoP proof contents will have little to no effect to stop an attack. I believe 
it is important to make this very clear, to avoid people turning to DPoP as a 
security mechanism for browser-based applications.


Specifically to your question on including the hash in the proof, I think these 
considerations are important:

1. Does the inclusion of the AT hash stop a concrete attack scenario?
2. Is the “cost” (implementation, getting it right, …) worth the benefits?


Here’s my view on these considerations (specifically for browser-based apps, 
not for other types of applications):

1. The proof precomputation attack is already quite complex, and short access 
token lifetimes already reduce the window of attack. If the attacker can steal 
a future AT, they could also precompute new proofs then. 
2. For browser-based apps, it seems that doing this complicates the 
implementation, without adding much benefit. Of course, libraries could handle 
this, which significantly reduces the cost. 


Note that these comments specifically concern complicating the spec and 
implementation. DPoP’s capabilities of using sender-constrained access tokens 
are still useful to counter various other scenarios (e.g., middleboxes or APIs 
abusing access tokens). If other applications would significantly benefit from 
having the hash in the proof, I’m all for it.

On a final note, I would be happy to help clear up the details on web-based 
threats and defenses if necessary.

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com/


> On 8 Dec 2020, at 22:47, Brian Campbell  wrote:
> 
> Daniel recently added some text to the working copy of the draft with 
> https://github.com/danielfett/draft-dpop/commit/f4b42058 
>  that I think aims 
> to better convey the "nutshell: XSS = Game over" sentiment and maybe dissuade 
> folks from looking to DPoP as a cure-all for browser based applications. 
> Admittedly a lot of the initial impetus behind producing the draft in the 
> first place was born out of discussions around browser based apps. But it's 
> neither specific to browser based apps nor a panacea for them. I hope the 
> language in the document and how it's recently been presented is reflective 
> of that reality. 
> 
> The more specific discussions/recommendations around in-browser apps are 
> valuable (if somewhat over my head) but might be more appropriate in the 
> OAuth 2.0 for Browser-Based Apps draft.
> 
> With respect to the contents of the DPoP draft, I am still keen to try and 
> flesh out some consensus around the question posed at the start of this 
> thread, which is effectively whether or not to include a hash of the access 
> token in the proof.  Acknowledging that "XSS = Game over" does sort of evoke 
> a tendency to not even bother with such incremental protections (what I've 
> tried to humorously coin as "XSS Nihilism" with no success). And as such, I 
> do think that leaving it how it is (no AT hash in the proof) is not 
> unreasonable. But, as Filip previously articulated, including the AT hash in 
> the proof would prevent potentially prolonged access to protected resources 
> even when the victim is offline. And that seems maybe worthwhile to have in 
> the protocol, given that it's not a huge change to the spec. But it's a 
> trade-off either way and I'm personally on the fence about it.
> 
> Including an RT hash in the proof seems more niche. Best I can tell, it would 
> guard against prolonged offline access to protected resources when access 
> tokens are bearer and the RT was DPoP-bound and also gets rotated. The 
> trade-off there seems less worth it (I think an RT hash would be more awkward 
> in the protocol too). 
> 
> 
> 
> 
> 
> 
> 
> On Fri, Dec 4, 2020 at 5:40 AM Philippe De Ryck <phili...@pragmaticwebsecurity.com> wrote:
> 
>> The suggestion to use a web worker to ensure that proofs cannot be 
>> pre-computed is a good one I think. (You could also use a sandboxed iframe 
>> for a separate sub/sibling-domain - dpop.example.com).
> 
> An iframe with a different origin would also work (not really sandboxing, as 
> that implies the use of the sandbox attribute to enforce behavioral 
> restrictions). The downside of an iframe is the need to host additional HTML, 
> vs a script file for the worker, but the effect is indeed the same.
> 
>> For scenario 4, I think this only works if the attacker can trick/spoof the 
>> AS into using their redirect_uri? Otherwise the AC will go to the legitimate 
>> app which will reject it 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-08 Thread Brian Campbell
Daniel recently added some text to the working copy of the draft with
https://github.com/danielfett/draft-dpop/commit/f4b42058 that I think aims
to better convey the "nutshell: XSS = Game over" sentiment and maybe
dissuade folks from looking to DPoP as a cure-all for browser based
applications. Admittedly a lot of the initial impetus behind producing the
draft in the first place was born out of discussions around browser based
apps. But it's neither specific to browser based apps nor a panacea for
them. I hope the language in the document and how it's recently been
presented is reflective of that reality.

The more specific discussions/recommendations around in-browser apps are
valuable (if somewhat over my head) but might be more appropriate in the OAuth
2.0 for Browser-Based Apps draft.

With respect to the contents of the DPoP draft, I am still keen to try and
flesh out some consensus around the question posed at the start of this
thread, which is effectively whether or not to include a hash of the access
token in the proof.  Acknowledging that "XSS = Game over" does sort of
evoke a tendency to not even bother with such incremental protections (what
I've tried to humorously coin as "XSS Nihilism" with no success). And as
such, I do think that leaving it how it is (no AT hash in the proof) is not
unreasonable. But, as Filip previously articulated, including the AT hash
in the proof would prevent potentially prolonged access to protected
resources even when the victim is offline. And that seems maybe worthwhile
to have in the protocol, given that it's not a huge change to the spec. But
it's a trade-off either way and I'm personally on the fence about it.

Including an RT hash in the proof seems more niche. Best I can tell, it
would guard against prolonged offline access to protected resources when
access tokens are bearer and the RT was DPoP-bound and also gets rotated.
The trade-off there seems less worth it (I think an RT hash would be more
awkward in the protocol too).







On Fri, Dec 4, 2020 at 5:40 AM Philippe De Ryck <
phili...@pragmaticwebsecurity.com> wrote:

>
> The suggestion to use a web worker to ensure that proofs cannot be
> pre-computed is a good one I think. (You could also use a sandboxed iframe
> for a separate sub/sibling-domain - dpop.example.com).
>
>
> An iframe with a different origin would also work (not really sandboxing,
> as that implies the use of the sandbox attribute to enforce behavioral
> restrictions). The downside of an iframe is the need to host additional
> HTML, vs a script file for the worker, but the effect is indeed the same.
>
> For scenario 4, I think this only works if the attacker can trick/spoof
> the AS into using their redirect_uri? Otherwise the AC will go to the
> legitimate app which will reject it due to mismatched state/PKCE. Or are
> you thinking of XSS on the redirect_uri itself? I think probably a good
> practice is that the target of a redirect_uri should be a very minimal and
> locked down page to avoid this kind of possibility. (Again, using a
> separate sub-domain to handle tokens and DPoP seems like a good idea).
>
>
> My original thought was to use a silent flow with Web Messaging. The
> scenario would go as follows:
>
> 1. Set up a Web Messaging listener to receive the incoming code
> 2. Create a hidden iframe with the DOM APIs
> 3. Create an authorization request such as 
> “/authorize?response_type=code&client_id=...&redirect_uri=https%3A%2F%2Fexample.com&state=...&code_challenge=7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk&code_challenge_method=S256&prompt=none&response_mode=web_message”
> 4. Load this URL in the iframe, and wait for the result
> 5. Retrieve code in the listener, and use PKCE (+ DPoP if needed) to
> exchange it for tokens
>
> This puts the attacker in full control over every aspect of the flow, so
> no need to manipulate any of the parameters.
>
>
> After your comment, I also believe an attacker can run the same scenario
> without the “response_mode=web_message”. This would go as follows:
>
> 1. Create a hidden iframe with the DOM APIs
> 2. Set up polling to read the URL (this will be possible for same-origin
> pages, not for cross-origin pages)
> 3. Create an authorization request such as 
> “/authorize?response_type=code&client_id=...&redirect_uri=https%3A%2F%2Fexample.com&state=...&code_challenge=7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk&code_challenge_method=S256”
> 4. Load this URL in the iframe, and keep polling
> 5. Detect the redirect back to the application with the code in the URL,
> retrieve code, and use PKCE (+ DPoP if needed) to exchange it for tokens
>
> In step 5, the application is likely to also try to exchange the code.
> This will fail due to a mismatching PKCE verifier. While noisy, I don’t
> think it affects the scenario.
>
>
> IMO, the online attack scenario (i.e., proxying malicious requests through
> the victim’s browser) is quite appealing to an 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-04 Thread Philippe De Ryck

> The suggestion to use a web worker to ensure that proofs cannot be 
> pre-computed is a good one I think. (You could also use a sandboxed iframe 
> for a separate sub/sibling-domain - dpop.example.com).

An iframe with a different origin would also work (not really sandboxing, as 
that implies the use of the sandbox attribute to enforce behavioral 
restrictions). The downside of an iframe is the need to host additional HTML, 
vs a script file for the worker, but the effect is indeed the same.

> For scenario 4, I think this only works if the attacker can trick/spoof the 
> AS into using their redirect_uri? Otherwise the AC will go to the legitimate 
> app which will reject it due to mismatched state/PKCE. Or are you thinking of 
> XSS on the redirect_uri itself? I think probably a good practice is that the 
> target of a redirect_uri should be a very minimal and locked down page to 
> avoid this kind of possibility. (Again, using a separate sub-domain to handle 
> tokens and DPoP seems like a good idea).

My original thought was to use a silent flow with Web Messaging. The scenario 
would go as follows:

1. Set up a Web Messaging listener to receive the incoming code
2. Create a hidden iframe with the DOM APIs
3. Create an authorization request such as 
“/authorize?response_type=code&client_id=...&redirect_uri=https%3A%2F%2Fexample.com&state=...&code_challenge=7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk&code_challenge_method=S256&prompt=none&response_mode=web_message”
4. Load this URL in the iframe, and wait for the result
5. Retrieve code in the listener, and use PKCE (+ DPoP if needed) to exchange 
it for tokens

This puts the attacker in full control over every aspect of the flow, so no 
need to manipulate any of the parameters.
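The authorization request built in step 3 can be sketched as follows; the helper name, AS origin, client ID, and state value are illustrative placeholders, not part of any spec:

```javascript
// Sketch: constructing the silent authorization request described above.
function buildSilentAuthzUrl(asOrigin, clientId, codeChallenge) {
  const params = new URLSearchParams({
    response_type: 'code',
    client_id: clientId,
    redirect_uri: 'https://example.com',
    state: 'af0ifjsldkj',            // placeholder state value
    code_challenge: codeChallenge,   // base64url(SHA-256(verifier)), per PKCE
    code_challenge_method: 'S256',
    prompt: 'none',                  // silent flow: no user interaction
    response_mode: 'web_message',
  });
  return `${asOrigin}/authorize?${params.toString()}`;
}

const url = buildSilentAuthzUrl(
  'https://as.example',
  'client123',
  '7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk'
);
console.log(url);
```

For the second variant (polling the iframe URL), the same construction applies minus the `prompt` and `response_mode` parameters.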


After your comment, I also believe an attacker can run the same scenario 
without the “response_mode=web_message”. This would go as follows:

1. Create a hidden iframe with the DOM APIs
2. Set up polling to read the URL (this will be possible for same-origin pages, 
not for cross-origin pages)
3. Create an authorization request such as 
“/authorize?response_type=code&client_id=...&redirect_uri=https%3A%2F%2Fexample.com&state=...&code_challenge=7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk&code_challenge_method=S256”
4. Load this URL in the iframe, and keep polling
5. Detect the redirect back to the application with the code in the URL, 
retrieve code, and use PKCE (+ DPoP if needed) to exchange it for tokens

In step 5, the application is likely to also try to exchange the code. This 
will fail due to a mismatching PKCE verifier. While noisy, I don’t think it 
affects the scenario. 


> IMO, the online attack scenario (i.e., proxying malicious requests through 
> the victim’s browser) is quite appealing to an attacker, despite the apparent 
> inconvenience:
> 
>  - the victim’s browser may be inside a corporate firewall or VPN, allowing 
> the attacker to effectively bypass these restrictions
>  - the attacker’s traffic is mixed in with the user’s own requests, making 
> them harder to distinguish or to block
> 
> Overall, DPoP can only protect against XSS to the same level as HttpOnly 
> cookies. This is not nothing, but it means it only prevents relatively naive 
> attacks. Given the association of public key signatures with strong 
> authentication, people may have overinflated expectations if DPoP is pitched 
> as an XSS defence.

Yes, in the cookie world this is known as “Session Riding”. Having the worker 
for token isolation would make it possible to enforce a coarse-grained policy 
on outgoing requests to prevent total abuse of the AT.
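A minimal sketch of such a coarse-grained policy, as it might run inside the token-holding worker; the allowlist contents, origins, and message shape are assumptions for illustration only:

```javascript
// Sketch: coarse-grained outgoing-request policy in a token-holding worker.
// Only requests matching the allowlist get the AT / DPoP proof attached.
const ALLOWED = [
  { origin: 'https://api.example.com', methods: ['GET', 'POST'] },
];

function requestAllowed(url, method) {
  const { origin } = new URL(url);
  return ALLOWED.some(
    (rule) =>
      rule.origin === origin && rule.methods.includes(method.toUpperCase())
  );
}

// Inside the worker, the check gates every proxied call, e.g.:
// self.onmessage = async ({ data: { url, method } }) => {
//   if (!requestAllowed(url, method)) {
//     self.postMessage({ error: 'blocked by policy' });
//     return;
//   }
//   // attach Authorization / DPoP headers here, then fetch(url, ...)
// };
```

This does not stop session riding within the allowed origin and methods, but it bounds what an XSS payload can do with the token.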

My main concern here is the effort of doing DPoP in a browser versus the 
limited gains. It may also give a false sense of security. 



With all this said, I believe that the AS can lock down its configuration to 
reduce these attack vectors. A few initial ideas:

1. Disable silent flows for SPAs using RT rotation
2. Use the sec-fetch headers to detect and reject non-silent iframe-based flows

For example,  an OAuth 2.0 flow in an iframe in Brave/Chrome carries these 
headers:
sec-fetch-dest: iframe
sec-fetch-mode: navigate
sec-fetch-site: cross-site
sec-fetch-user: ?1
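Based on those headers, an AS-side rejection of iframe-based authorization requests could be sketched as follows. The Sec-Fetch-* names are standard Fetch Metadata request headers; the specific policy and handler shape are illustrative assumptions:

```javascript
// Sketch: detect an iframe-based authorization request from Fetch Metadata.
// Assumes a Node-style lowercase header map (e.g., req.headers in Express).
function looksLikeIframeFlow(headers) {
  return (
    headers['sec-fetch-dest'] === 'iframe' &&
    headers['sec-fetch-mode'] === 'navigate'
  );
}

// e.g., in an Express-style authorization endpoint handler:
// if (looksLikeIframeFlow(req.headers)) {
//   return res.status(403).send('iframe-based authorization not allowed');
// }
```

Older browsers omit these headers entirely, so a real deployment would need to decide whether absent headers are treated as allowed or denied.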


Philippe

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-04 Thread Neil Madden
Thanks Philippe, this is a good analysis.

The suggestion to use a web worker to ensure that proofs cannot be pre-computed 
is a good one I think. (You could also use a sandboxed iframe for a separate 
sub/sibling-domain - dpop.example.com).

For scenario 4, I think this only works if the attacker can trick/spoof the AS 
into using their redirect_uri? Otherwise the AC will go to the legitimate app 
which will reject it due to mismatched state/PKCE. Or are you thinking of XSS 
on the redirect_uri itself? I think probably a good practice is that the target 
of a redirect_uri should be a very minimal and locked down page to avoid this 
kind of possibility. (Again, using a separate sub-domain to handle tokens and 
DPoP seems like a good idea).

IMO, the online attack scenario (i.e., proxying malicious requests through the 
victim’s browser) is quite appealing to an attacker, despite the apparent 
inconvenience:

 - the victim’s browser may be inside a corporate firewall or VPN, allowing the 
attacker to effectively bypass these restrictions
 - the attacker’s traffic is mixed in with the user’s own requests, making them 
harder to distinguish or to block

Overall, DPoP can only protect against XSS to the same level as HttpOnly 
cookies. This is not nothing, but it means it only prevents relatively naive 
attacks. Given the association of public key signatures with strong 
authentication, people may have overinflated expectations if DPoP is pitched as 
an XSS defence.

— Neil

> On 4 Dec 2020, at 09:22, Philippe De Ryck  
> wrote:
> [...]

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-04 Thread Philippe De Ryck
Hi all,

This is a very useful discussion, and there are some merits to using DPoP in 
this way. However, the attacker's capabilities are stronger than often assumed, 
so it may not matter in the end. I've been wanting to write this out for a 
while now, so I've added a couple of scenarios below. Note that I just came up 
with the scenario names on the fly, so these may not be the best ones for 
future use ...

(This got a lot longer than I expected, so here's a TOC)
- Attack assumption
- Scenario 1: offline XSS against existing tokens
- Scenario 2: passive online XSS against existing tokens
- Scenario 3: active online XSS against existing tokens
- Scenario 4 (!): obtaining fresh tokens
- Mitigation: DPoP in a Web Worker
- Conclusion (TL;DR)

I hope this all makes sense!

Philippe




Assumption

The attacker has the ability to execute JS code in the application's context 
(e.g., through XSS, a malicious ad, ...). For simplicity, I'll just refer to 
the attack as "XSS".



Scenario 1: offline XSS against existing tokens

In this scenario, the malicious code executes and immediately performs a 
malicious action. The attacker is not necessarily present or actively 
participating in the attack (i.e., abuse of stolen tokens is done at a later 
time). 

A common example would be stealing tokens from localStorage and sending them to 
an attacker-controlled server for later abuse. Existing mitigations include 
short AT lifetimes and RT rotation.

The attacker could determine that DPoP is being used, and also extract 
precomputed proofs for any of these tokens. The use of DPoP makes token abuse a 
bit harder (a larger abuse window requires more pre-computed proofs), but does 
not really strengthen the defense beyond the existing mitigations (short AT 
lifetimes and RT rotation). 



Scenario 2: passive online XSS against existing tokens

In this scenario, the malicious code executes and sets up a long-term attack. 
The attacker (i.e., a malicious application running on a server) is passive 
until certain criteria are met. 

An attack could be to manipulate the JS execution context so that the attacker 
can detect new tokens being obtained by the client (e.g., by overriding a 
listener or changing core function prototypes). Each time new tokens are issued 
(AT + RT), the attacker sends them to the malicious server. The moment the 
attacker detects that the user closes the application, the malicious server 
continues the RT rotation chain. Since the application is no longer active, the 
AS will not detect this. The attacker now has access for as long as the RT 
chain can be kept alive.

When DPoP is used, the attacker will need proofs to present to the AS when 
running a refresh token flow. If the proofs are independent of the RT being 
used, these can be precomputed. When the RT is part of the proof, as per 
Filip's suggestion, the attacker can only run a RT flow once (with the stolen 
RT + proof). This attack scenario is fairly well mitigated when DPoP proofs 
include the RT (hash).



Scenario 3: active online XSS against existing tokens

In this scenario, the malicious code executes and sets up a long-term attack. 
The attacker is actively controlling the behavior of the malicious code. 

The attack vectors are the same as scenario 2. Once in control, the attacker 
can use the same mechanism as the application does to send requests to any 
endpoint. There is no need to obtain an RT (which may not even be possible), 
since the attacker can just abuse the AT directly.

When DPoP is used, little changes here. The attacker can use the application's 
DPoP mechanism to obtain legitimate proofs. DPoP does nothing to mitigate this 
type of attack (as already stated in Daniel's threat model: 
https://danielfett.de/2020/05/04/dpop-attacker-model/).



Scenario 4: obtaining fresh tokens

In this scenario, the malicious code executes and immediately launches the 
attack: it loads a hidden iframe in the application's DOM. In that iframe, the 
attacker starts a silent flow with the AS to obtain an authorization code (AC). 
If the user has an active session, this will succeed (existing cookie + all 
origins match). The attacker extracts this AC and exchanges it for tokens with 
the AS. 

At this point, the attacker has a fresh set of tokens that grant access to 
resources in the name of the user. Short AT lifetimes and RT rotation are 
useless, since the attacker is in full control of the tokens.

Using DPoP in this scenario does not help at all. The attacker can use their 
own private key to generate the necessary DPoP proofs, starting with the code 
exchange.

One solution is to turn off silent flows for SPAs, since they have become quite 
unreliable with third-party cookie blocking restrictions.



Mitigation: DPoP in a Web Worker

Isolating sensitive features from malicious JS is virtually impossible when the 
application's legitimate JS code needs access to them. One solution that can 
work is the use of a Web Worker. 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-03 Thread Torsten Lodderstedt
I understand. Thanks! 

I think RT rotation + RT hash in the proof would also stop the attack.  

> Am 03.12.2020 um 13:19 schrieb Filip Skokan :
> [...]

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-03 Thread Filip Skokan
>
> I'm failing to understand why binding the proof to the access token
> ensures freshness of the proof.


Because when access tokens issued to public, browser-based clients have a
short duration, you need continued access to the private key to issue new
proofs. If I exfiltrate the RT and pre-generate tons of proofs while the
user is active on the page (through XSS), I can then use the RT and my
prepared proofs to talk to the AS, keep refreshing the AT, and use it
against the RS. When the value of the token is part of the proof, I cannot
pre-generate proofs for future-issued access tokens. Short `iat`-based
windows don't help here.

S pozdravem,
*Filip Skokan*


On Thu, 3 Dec 2020 at 12:59, Torsten Lodderstedt 
wrote:

> [...]

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-03 Thread Torsten Lodderstedt
Hi, 

I'm failing to understand why binding the proof to the access token ensures 
freshness of the proof. I would rather think that if the client is forced to 
create proofs with a reasonably short lifetime, the chances for replay could be 
reduced. 

Besides that, as far as I remember, the primary replay countermeasure is the 
inclusion of the endpoint URL and HTTP method in the proof, since it reduces 
the attack surface to a particular URL. So in the context of freshness, we are 
talking about using the same proof with the same URL again. 

best regards,
Torsten. 

> Am 03.12.2020 um 10:20 schrieb Filip Skokan :
> 
> Hi Brian, everyone,
> 
> While the attack vector allows direct use, there is the option where a 
> smarter attacker will not abuse the gained artifacts straight away. Think 
> public client browser scenario with the non-extractable private key stored in 
> IndexedDB (the only place to persist them really), they wouldn't use the 
> tokens but instead, exfiltrate them, together with a bunch of pre-generated 
> DPoP proofs. They'll get the refresh token and a bunch of DPoP proofs for 
> both the RS and AS. With those they'll be able to get a fresh AT and use it 
> with pre-generated Proofs after the end-user leaves the site. No available 
> protection (e.g. RT already rotated) will be able to kick in until the 
> end-user opens the page again.
> 
> OTOH with a hash of the AT in the Proof only direct use remains.
> 
> If what I describe above is something we don't want to deal with because of 
> direct use already allowing access to protected resources, it's sufficiently 
> okay as is (option #1). However, if this scenario, one allowing prolonged 
> access to protected resources, is not acceptable, it's option #2.
> 
> Ad #2a vs #2b vs #2c. My pre-emptive answer is #2a, simply because we already 
> have the tools needed to generate and validate these hashes. But on further 
> thought, it would feel awkward if this JWS-algorithm-driven at_hash digest 
> selection weren't stretched to the confirmations as well. When these are 
> placed in a JWT access token, cool - we can do that, but when they are put in 
> a basic token introspection response it's unfortunately not an option. So, 
> #2b (just use SHA-256, like the confirmations do).
> 
> Best,
> Filip
> 
> 
> On Wed, 2 Dec 2020 at 21:50, Brian Campbell 
>  wrote:
> There were a few items discussed somewhat during the recent interim that I 
> committed to bringing back to the list. The slide below (also available as 
> slide #17 from the interim presentation) is the first one of them, which is 
> difficult to summarize but kinda boils down to how much assurance there is 
> that the DPoP proof was 'freshly' created and that can dovetail into the 
> question of whether the token is covered by the signature of the proof. 
> There are many directions a "resolution" here could go but my sense of the 
> room during the meeting was that the contending options were:
>   •  It's sufficiently okay as it is
>   •  Include a hash of the access token in the DPoP proof (when an access 
> token is present)
> 
> Going with #2 would mean the draft would also have to define how the hashing 
> is done and deal with or at least speak to algorithm agility. Options (that I 
> can think of) include:
>   • 2a) Use the at_hash claim defined in OIDC core 
> https://openid.net/specs/openid-connect-core-1_0.html#CodeIDToken. Using 
> something that already exists is appealing. But its hash alg selection 
> routine can be a bit of a pain. And the algorithm agility based on the 
> signature that it's supposed to provide hasn't worked out as well as hoped in 
> practice for "new" JWS signatures 
> https://bitbucket.org/openid/connect/issues/1125/_hash-algorithm-for-eddsa-id-tokens
>   • 2b) Define a new claim ("ah", "ath", "atd", "ad" or something like 
> that maybe) and just use SHA-256. Explain why it's good enough for now and 
> the foreseeable future. Also include some text about introducing a new claim 
> in the future if/when SHA-256 proves to be insufficient. Note that this is 
> effectively the same as how the confirmation claim value is currently defined 
> in this document and in RFC8705.
>   • 2c) Define a new claim with its own hash algorithm agility scheme 
> (likely similar to how the Digest header value or Subresource Integrity 
> string is done).
> 
> I'm requesting that interested WG participants indicate their preference for 
> #1 or #2. And among a, b, and c, if the latter. 
> 
> I also acknowledge that an ECDH approach could/would ameliorate the issues in 
> a fundamentally different way. But that would be a distinct protocol. If 
> there's interest in pursuing the ECDH idea, I'm certainly open to it and even 
> willing to work on it. But as a separate effort and not at the expense of 
> derailing DPoP in its general current form. 
> 
> 
> 