Re: [OAUTH-WG] OAuth for Browser-Based Apps

2024-03-24 Thread Philippe De Ryck
Hi Justin,

Thank you for your detailed review. 

> §9+ this draft should add privacy considerations, particularly for BFF 
> pattern's proxy architecture.

I wanted to ask for a bit more context on this comment. I understand that 
having a proxy as a separate entity would expose all requests/responses to this 
entity. However, in the context of a BFF, the frontend and the BFF belong 
together (i.e., they are one application deployed as two components). The 
frontend and BFF are deployed and operated by the same party, so I’m not sure 
if this comment effectively applies. 

Looking forward to hearing from you.

Philippe
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] Cookies & headers in OAuth 2.0 Security Best Current Practice?

2023-11-06 Thread Philippe De Ryck

Answers inline to add the proper nuance. 

> Would you not agree that a "good" CSP config is a good line of defense 
> against XSS attacks?

A good CSP policy can indeed help to stop the exploitation of an XSS 
vulnerability, should one exist in the application. I do not agree with the 
wording “a good line of defense”, as it can imply that a good CSP policy can be 
used as the only defense. XSS vulnerabilities are first and foremost avoided by 
adopting secure coding guidelines. CSP acts as a second line of defense, in 
case an XSS vulnerability still slips through. 


> I agree that the OAuth BCP should not provide details on CSP config. I do 
> think we should call out that having a considered CSP config is a best practice.
> 
> I differentiate between headers and cookies and SQL injection etc. in that 
> the headers and cookies are part of the HTTP requests, which is the protocol 
> OAuth is built on, so weaknesses there weaken the protocol. 

Just because CSP can be configured as a header (CSP can also be set with a 
<meta> tag) does not make it part of the protocol OAuth is built on. CSP is 
100% an application-level feature. And as useful as CSP can be, I would not 
consider “not having CSP” as a protocol weakness.

I’m not strongly against mentioning CSP; I’m just wondering how much good it 
does given the complexity of configuring CSP correctly. It suffices to 
whitelist a CDN that hosts a version of AngularJS 1.x to end up with a 
bypassable (and thereby mostly useless) CSP policy. 
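To make the AngularJS/CDN point concrete, here is a small sketch (my own illustration, not from the original message; the CDN URL and library version are only examples of the well-known bypass pattern, and the actual expression payload is omitted):

```javascript
// Hedged illustration: a whitelist-style CSP that trusts a public CDN.
// Because such CDNs also host AngularJS 1.x, injected markup can load
// Angular and abuse its expression evaluation without violating the
// policy, making the policy bypassable.
const weakPolicy = "script-src 'self' https://cdnjs.cloudflare.com";

// The injected markup needs no inline script, only whitelisted sources
// (the classic AngularJS-based CSP bypass; the payload itself is omitted):
const injectedMarkup =
  '<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.4.6/angular.js"></script>' +
  '<div ng-app ng-csp>{{ attackerControlledExpression }}</div>';
```

The point is that the policy is satisfied even though the attacker fully controls script execution via the whitelisted library.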
 
Philippe



Re: [OAUTH-WG] Cookies & headers in OAuth 2.0 Security Best Current Practice?

2023-11-06 Thread Philippe De Ryck
I went back to the Security BCP and combed through the fine details, and there 
is indeed some guidance on CSP. But your initial remark that this is "vague" is 
definitely true, and this section is actually a good illustration of what I was 
trying to say. Let me unpack the details a bit …

In section 4.16, the security BCP talks about how to restrict framing to avoid 
clickjacking/UI redressing attacks. Defending against such attacks cannot be 
done with secure coding, but must be done with specific framing restrictions. 
The best mechanism to achieve this is by setting security headers: the legacy 
X-Frame-Options header or the more modern CSP frame-ancestors directive. Given 
that this security requirement is closely linked to OAuth and is not something 
that “happens naturally”, but must be explicitly added, I totally agree that 
this should be part of the security BCP. 

Now, in paragraph 5 of the section, things get somewhat confusing (included 
below for reference). So far, every mention of "CSP" was used as a synonym for 
the "frame-ancestors" directive to restrict framing. However, all the way at 
the end of that paragraph, the text suddenly recommends using the "script-src" 
directive to restrict sources of JS that can execute on the page. The paragraph 
then points to a sample header, with the configuration of "script-src 'self'". 

Using CSP allows authorization servers to specify multiple origins in a single 
response header field and to constrain these using flexible patterns (see 
[W3C.CSP-2 <https://www.w3.org/TR/CSP2>] for details). Level 2 of this standard 
provides a robust mechanism for protecting against clickjacking by using 
policies that restrict the origin of frames (using frame-ancestors) together 
with those that restrict the sources of scripts allowed to execute on an HTML 
page (by using script-src).

Unfortunately, this advice is too simplistic to be useful, as it prevents the 
loading of JS from any other origin, including CDNs, or third-party services. 
Additionally, it violates modern best practices for CSP, which recommend the 
use of hashes, nonces, and trust propagation (with nonce propagation or 
'strict-dynamic'). If you’re interested in the details, I’ve done a few guest 
blog posts about CSP for Auth0 that cover this: 
https://auth0.com/blog/authors/philippe-de-rick/


What I'm trying to say here is that a detailed CSP config (apart from the 
"frame-ancestors" directive) is not essential for a secure OAuth implementation 
or deployment. It can and should act as a second line of defense against 
content injection attacks, but not having such a CSP config does not 
automatically create a vulnerability. Therefore, my recommendation is to focus 
on the details directly relevant to OAuth security.

For security guidelines for configuring cookies, I believe this would be more 
directly related and more useful, as I mentioned before.

Finally, I can totally see that the community could benefit from more in-depth 
security best practices that go beyond OAuth-specific risks. Apart from CSP, 
there's a whole bunch more response headers that can be configured (as you and 
others have mentioned). On top of that, modern browsers send a lot of metadata 
in a request (e.g., the Sec-Fetch Metadata headers) that could be used by the 
AS to reject illegitimate requests. However, given the rapid development of 
these features and lack of widespread support, I would envision such 
recommendations to live in a more "dynamic" document than an RFC.
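The Fetch Metadata idea above can be sketched roughly as follows (my own illustration, not from the email; the header names are the real Sec-Fetch-* request headers, but the policy shown is a minimal example, not a complete resource isolation policy, and Node-style lowercase header keys are assumed):

```javascript
// Hedged sketch: letting an AS use Fetch Metadata request headers to
// reject clearly illegitimate requests.
function isAllowedRequest(headers) {
  const site = headers['sec-fetch-site'];
  // Older browsers do not send Fetch Metadata headers; fail open for
  // compatibility.
  if (!site) return true;
  // Allow same-origin/same-site traffic and direct navigations ('none').
  if (site === 'same-origin' || site === 'same-site' || site === 'none') {
    return true;
  }
  // For cross-site requests, only allow top-level navigations
  // (e.g., the redirect to the authorization endpoint).
  const mode = headers['sec-fetch-mode'];
  return mode === 'navigate';
}
```

A cross-site fetch() from injected script would then be rejected, while a regular top-level redirect to the AS still works.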

Philippe

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com

> On 6 Nov 2023, at 18:07, Dick Hardt  wrote:
> 
> That's a surprising response Philippe. The BCP already has 
> Content-Security-Policy and Referrer-Policy headers recommendations. The core 
> of my feedback is to add Cookie and Header best practices to Section 2, and 
> point to one or more living documents. 
> 
> On Mon, Nov 6, 2023 at 8:45 AM Philippe De Ryck wrote:
>> While I understand the idea of pointing to additional security resources, 
>> I’m not sure if it is the role of the security BCP (or other specs) to take 
>> on the responsibility to address these issues. In my point of view, the 
>> security BCP should focus on OAuth aspects, and discuss security topics that 
>> are directly relevant to that purpose. 
>> 
>> Concretely for the security mechanisms discussed here, I can see how cookie 
>> configurations could be relevant (the session with the AS is inherent to 
>> OAuth), but I don’t see defenses such as CSP as relevant in that scope. If 
>> these are in scope, should we then also provide advice or pointers on 
>> avoiding server-side implementation vulnerabilities, such as SQL injection 
>> or SSRF?
>

Re: [OAUTH-WG] Cookies & headers in OAuth 2.0 Security Best Current Practice?

2023-11-06 Thread Philippe De Ryck
While I understand the idea of pointing to additional security resources, I’m 
not sure if it is the role of the security BCP (or other specs) to take on the 
responsibility to address these issues. In my point of view, the security BCP 
should focus on OAuth aspects, and discuss security topics that are directly 
relevant to that purpose. 

Concretely for the security mechanisms discussed here, I can see how cookie 
configurations could be relevant (the session with the AS is inherent to 
OAuth), but I don’t see defenses such as CSP as relevant in that scope. If 
these are in scope, should we then also provide advice or pointers on avoiding 
server-side implementation vulnerabilities, such as SQL injection or SSRF?
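Since cookie configuration is acknowledged above as potentially in scope, here is a hedged example of what such guidance might boil down to (my example; the attribute names are standard Set-Cookie attributes, while the __Host- prefix choice and the values are illustrative):

```javascript
// Minimal sketch: a hardened session cookie for the session with the AS.
// The __Host- prefix makes browsers enforce Secure, Path=/, and the
// absence of a Domain attribute; HttpOnly keeps the cookie away from JS.
function buildSessionCookie(name, value) {
  return `__Host-${name}=${value}; Path=/; Secure; HttpOnly; SameSite=Lax`;
}
```

The server would send this as a Set-Cookie header when establishing the AS session.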

Additionally, many of these security mechanisms are quite complex and 
non-trivial to deploy. For example, adding a generic pointer stating “you 
should add CSP” does not say much, as CSP can control more than a dozen 
features. 

To summarize, I would keep the scope of these specs as narrow as possible and 
avoid aiming to address security concerns that are beyond the realm of OAuth.

Philippe

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com

> On 6 Nov 2023, at 15:39, Dick Hardt  wrote:
> 
> +1 to referring to calling out that cookies / headers should follow best 
> security practice, and pointing to living documents
> 
> On Mon, Nov 6, 2023 at 6:21 AM Giuseppe De Marco wrote:
>> Hi,
>> 
>> Every time I have implemented SAML2, OAuth 2.0, or OpenID Connect, for 
>> different projects and orgs, I have included secured web cookies in the 
>> recipe. I did the same for the implementation profile of OpenID4VP, where the 
>> Secure, HttpOnly cookie is used in particular as a basic security requirement 
>> for the cross-device flow [1].
>> 
>> Even though I fully understand Daniel's and Aaron's editorial strategy, and I 
>> agree with it, I think that Dick's proposal, and your confirmation of it, 
>> Neil, is something to take into account to raise developer awareness during 
>> the implementation phases. A reference to living OWASP documents, alongside 
>> generic references to user-agent security, should be in the specs even if 
>> this topic is out of scope.
>> 
>> [1] 
>> https://italia.github.io/eudi-wallet-it-docs/versione-corrente/en/relying-party-solution.html#remote-protocol-flow
>> 
>> On Mon, 6 Nov 2023 at 15:11, Neil Madden wrote:
>>> Although I think we could add some basic advice, the list of security 
>>> headers to use is still evolving. For example, there were several headers 
>>> added after Spectre to limit cross-site interactions. And then there’s 
>>> things like the “X-XSS-Protection” header, which was best practice to add 
>>> to responses not too long ago but has now been universally removed from 
>>> browsers as it enabled certain content disclosure attacks.
>>> 
>>> Cookie security attributes are perhaps a bit more stable, but in general we 
>>> probably just want to point people at “living” guidance like OWASP.
>>> 
>>> — Neil
>>> 
 On 5 Nov 2023, at 19:28, Dick Hardt wrote:
 
 The cookie and header recommendations I am thinking of would be for the AS 
 as well as the client. 
 
 A number of XSS attacks can be thwarted by a modern browser and the right 
 HTTP headers.
 
 My question is: Did the authors consider adding cookie and header 
 recommendations, and decided it was too general? 
 
 Cookie and header best security practices have been around for years -- 
 I'm not suggesting we make anything up -- I'm suggesting we raise 
 awareness. 
 
 I consider myself to be fairly security aware, and I was not aware of some 
 of the HTTP headers that are best security practice. 
 
 /Dick
 
 
 On Sun, Nov 5, 2023 at 11:19 AM Aaron Parecki wrote:
> I don't think it's necessary to say "do the right things with cookies" in 
> the Security BCP. The Browser Apps BCP has a much deeper discussion of 
> how different browser-based architectures work with cookies so that seems 
> like a better place to actually have a real discussion about it.
> 
> Also +1 to what Daniel said about not continuing to add little things. 
> Plus I think it's too late anyway, publication has already been requested 
> for the Security BCP.
> 
> Aaron
> 
> On Sun, Nov 5, 2023 at 11:14 AM Daniel Fett wrote:
>> I agree with Aaron! 
>> 
>> Also we should be very careful about any additions to the Security BCP 
>> at this point. It is very easy to re-start the "one more thing" loop 
>> we've been stuck in for the last years. There may be more useful things 
>> to say, but we should put them on the list for a future second 

Re: [OAUTH-WG] WGLC for Browser-based Apps

2023-08-28 Thread Philippe De Ryck
e once the FRAME becomes same-origin, 
hence the aggressive timer. 
- The moment the URL becomes same-origin, it becomes readable, and the ATTACK 
code receives a string value that looks like this: 
https://example.com/callback?code=ABC123 (you can see it coming, right?)
- Next, the attacker stops the frame from loading. This means that the browser 
will either not send a request to https://example.com/callback?code=ABC123 OR 
the browser will abort the ongoing request. Either way, it has two very 
important consequences: (1) the page from APP does not load, and no new SW will 
be registered; (2) the page from APP does not load, and the authorization code 
will not be exchanged by APP.
- The attacker is now free to extract the authorization code and exchange it 
for tokens.

I have demonstrated step 9 at OSW2023. The code for doing this is ridiculously 
simple, which further highlights the dangerous false sense of security that 
this approach presents. For completeness, you can find the demo code I used 
below. Note that this is a quick and dirty PoC, which can certainly be improved.


function stealCode(frame) {
  let counter = 0;

  function doTheStealing() {
    try {
      const url = frame.contentWindow.location.href;
      if (url.includes("code")) {
        // Stop the frame before the callback page loads or exchanges the code
        frame.contentWindow.stop();
        clearInterval(interval);

        const code = url.split("code=")[1].split("&")[0];
        forwardAuthorizationCode(code);
      }

      // Fallback to avoid eternal loops
      if (counter++ > 2) {
        clearInterval(interval);
      }
    } catch (e) {
      // Ignore errors while the frame is still cross-origin
    }
  }

  const interval = setInterval(doTheStealing, 1);
}


To summarize, this chronological story illustrates that the single-threaded 
nature of JS is absolutely irrelevant. It also shows that “preventing secrets 
from being leaked” sounds cool, but is a non-sensical statement. After all, how 
would a SW prevent the attacker’s code running in APP to inspect the URL of 
FRAME? 


> Hence my proposal: instead of a demonstration where you test a possibly 
> incomplete implementation (which, as far as I can see, doesn't have the fine 
> details that make it fool-proof), I propose to deliver a proof-of-concept 
> that would follow these guidelines. Before admitting that "you cannot secure 
> browser-flows only", I'd still want to actually see that you can do this 
> (which isn't the case from the explanation I read this far). This whole story 
> can perfectly be all wrong, but it's worth checking first. Let's be pragmatic, 
> right?

The process of dropping a proposal that you do not use in practice, and then 
demanding that we keep convincing you over and over again that it is a severely 
flawed proposal shows an enormous lack of respect. I have shown you the 
courtesy of trying to convince you, but since it’s clear that you are unwilling 
to even critically inspect your own idea, it’s time to wrap this up. 

I hope that the rest of the community at least values my contributions. Note 
that I am in fact working with Aaron to ensure that the browser-based apps BCP 
accurately reflects the security properties of the various approaches.

Kind regards

Philippe

[1] 
https://github.com/Valuya/servicewauther/blob/e3e4a3db5a77b272380ad7c44547ae842fc719a1/documentation/serviceworker_sequence.png

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com/



> 
> On Mon, 28 Aug 2023 at 14:15, Jim Manico wrote:
>> *applause*
>> 
>> Sucks you need to explain yourself several times but this is very helpful 
>> for the community.
>> 
>>> On Aug 28, 2023, at 7:52 AM, Philippe De Ryck wrote:
>>> 
>>> Responses inline.
>>> 
>>>> Still, there is some initial incorrect point that makes the rest of the 
>>>> discussion complicated, and partly wrong.
>>> 
>>> I believe the key to make the discussion less complicated is to acknowledge 
>>> that there are two separate issues:
>>> 
>>> 1. An attacker can potentially obtain tokens from the legitimate application
>>> 2. An attacker can obtain a set of tokens from the AS directly, completely 
>>> independent of any application behavior
>>> 
>>> Given that the goal is to prevent an attacker from obtaining tokens, 
>>> scenario 1 becomes irrelevant when scenario 2 is a possibility. It would be 
>>> really helpful to analyze the SW approach with this in mind. I’ll add 
>>> comments inline to highlight why this matters.
>>> 
>>>> 
>>>> Specifically, §

Re: [OAUTH-WG] WGLC for Browser-based Apps

2023-08-28 Thread Philippe De Ryck
Responses inline.

> Still, there is some initial incorrect point that makes the rest of the 
> discussion complicated, and partly wrong.

I believe the key to make the discussion less complicated is to acknowledge 
that there are two separate issues:

1. An attacker can potentially obtain tokens from the legitimate application
2. An attacker can obtain a set of tokens from the AS directly, completely 
independent of any application behavior

Given that the goal is to prevent an attacker from obtaining tokens, scenario 1 
becomes irrelevant when scenario 2 is a possibility. It would be really helpful 
to analyze the SW approach with this in mind. I’ll add comments inline to 
highlight why this matters.

> 
> Specifically, §6.4.2.1 says this: The service worker MUST NOT transmit 
> tokens, authorization codes or PKCE code verifier to the frontend application.
> 
> Wording should be refined, but the idea is that the service worker is to 
> actually restrict authorization codes from even reaching the frontend. Of 
> course, easier said than done, but that part happens to be quite easy to 
> implement. 
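The quoted idea can be sketched roughly as follows (my illustration, not code from the draft; the /callback path and the handleCallback helper are hypothetical placeholders, and the decision logic is split into a pure function so it can be inspected on its own):

```javascript
// Pure helper: does this request URL look like the OAuth callback
// carrying an authorization code? (The /callback path is an assumed value.)
function isAuthCodeCallback(urlString) {
  const url = new URL(urlString);
  return url.pathname === '/callback' && url.searchParams.has('code');
}

// Service-worker wiring (illustrative; only runs in a worker context).
if (typeof self !== 'undefined' && typeof window === 'undefined') {
  self.addEventListener('fetch', (event) => {
    if (isAuthCodeCallback(event.request.url)) {
      // handleCallback is a hypothetical helper that exchanges the code
      // at the token endpoint inside the worker, keeps the tokens in
      // worker scope, and responds with a redirect stripped of the code.
      event.respondWith(handleCallback(event.request));
    }
  });
}
```

This is the mechanism being debated: whether such interception can be guaranteed for all flows, which the rest of the thread disputes.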

This is related to both scenarios. If the SW is running, you can indeed hide 
tokens from the main browsing context, which helps to support scenario 1. For 
scenario 2, you need the guarantee that the SW will intercept all new flows, 
otherwise the attacker  can run a silent flow. As long as the SW is running in 
the main context, I would assume that the attacker can indeed not reach the 
authorization endpoint directly. 

The key part above is “as long as the SW is running”. An attacker with the 
ability to run malicious JS can unregister the SW that prevents the attacker 
from reaching the authorization endpoint. 

I have raised this issue before, and the response back then was that the SW is 
only actually removed after the browsing context reloads, which is true. So 
from the main context, the attacker cannot launch the attack. However, when the 
attacker instantiates a new browsing context (i.e., an iframe), the 
unregistered SW is no longer present, and is thereby not able to restrict 
access to the authorization endpoint. 

I address this concern in the talk I have referenced before. This link with the 
time code included (https://youtu.be/OpFN6gmct8c?feature=shared=1973) points 
you to the exact demo scenario, where I illustrate how an unregistered SW 
cannot prevent access to an endpoint in an iframe. Admittedly, I have not 
implemented a full OAuth client as a SW, but the minimal PoC you see here 
suffices to illustrate the ineffectiveness of this approach.

With this information, the attack scenario becomes the following:
1. The attacker unregisters the SW in the main browsing context, preventing it 
from being used in any new browsing context
2. The attacker injects a new iframe and points it to the authorization endpoint
3. The AS responds with a redirect with the authorization code
4. The attacker detects the redirect, copies the authorization code, and aborts 
the page from loading (so that the authorization code is never exchanged or the 
SW is never reloaded)
5. The attacker extracts the authorization code and exchanges it for tokens
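The first step above (unregistering the SW) is trivially small. As a hedged sketch (my illustration; getRegistrations and unregister are real Service Worker API calls, and the container is passed as a parameter only so the logic can be exercised outside a browser):

```javascript
// In a browser, malicious JS would call this as
// unregisterAll(navigator.serviceWorker).
async function unregisterAll(swContainer) {
  const registrations = await swContainer.getRegistrations();
  const results = await Promise.all(registrations.map((r) => r.unregister()));
  // Returns how many registrations were removed; new browsing contexts
  // (such as injected iframes) then load without any SW interception.
  return results.filter(Boolean).length;
}
```
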


TL;DR: a SW is not a security mechanism, and the browser cannot guarantee that 
a SW permanently prevents requests to a certain endpoint.


> This has further impact on much of the other statements:
> > The main problem with a browser-only client is that the attacker with 
> > control over the client has the ability to run a silent Authorization Code 
> > flow, which provides them with an independent set of tokens
> [...]
> > The security differences between a BFF and a browser-only app are not about 
> > token storage, but about the attacker being able to run a new flow to 
> > obtain tokens.
> [...]
> > Again, the security benefits of a BFF are not about token storage. Even if 
> > you find the perfect storage solution for non-extractable tokens in the 
> > browser, an attacker still controls the client application and can simply 
> > request a new set of tokens. 
> 
> Truth is: no, you can't start a new authentication flow and get the 
> authorization code back in the main thread. I'm talking about the redirection 
> scenario, which I'm the most familiar with, but it would probably apply to 
> the "message" one as well (which is new to me and seems to be astonishingly 
> legit due to vague "for example" wording in the OAuth2 spec :-) ).

The attack scenario above does not run the redirect scenario in the main 
browsing context, but in an iframe. Opening an iframe instantiates a new nested 
browsing context, where unregistered SWs are not available. 


> The service worker, according to 
> https://developer.mozilla.org/en-US/docs/Web/API/ServiceWorkerGlobalScope/fetch_event#description
>  , just intercepts the authorization code, gets a token, and never sends it 
> back to the main code.

This point is not relevant, since your SW is no longer active when the 
attacker’s 

Re: [OAUTH-WG] WGLC for Browser-based Apps

2023-08-26 Thread Philippe De Ryck
My responses inline.


> Hi everyone,
> 
> The document is about "OAuth 2.0 for Browser-Based Apps". Its abstract 
> further explains that it "details the security considerations and best 
> practices that must be taken into account when developing browser-based 
> applications that use OAuth 2.0.".
> 
> As such, detailing security considerations is important. I share the point of 
> view that basing web applications on proven concepts is important. The 
> approaches detailed in the document have all their advantages and 
> disadvantages.

We have discussed the topic of browser-based apps in depth at the OAuth 
Security Workshop last week. I am also working with Aaron Parecki on updating 
the specification to more accurately reflect these advantages and 
disadvantages. Updates will go out in the coming days/weeks, so we more than 
welcome concrete feedback on the content there.

> There are 2 main approaches to browser-based applications security. One of 
> them is to store security credentials at the frontend. The other one is to 
> use cookies and a BFF. Though common practice, there is nothing fundamentally 
> more secure about them in a demonstrable way. Different approaches, different 
> characteristics and security assumptions. Nobody can prove that either 
> approach is better, just that there are different concerns.
> 
> Handling security in BFFs relies on cookies that cannot be read by the 
> javascript application. This mechanism provides some reliable protection 
> about the cookie itself that is used as a kind of credential to access 
> confidential web resources. It obviously demands some additional layers in 
> the flow (proxy or light server). You also need a mechanism to share session 
> information, either at the server side, or for example by having the cookie 
> itself hold that information. A bigger concern to me is that you basically 
> give up standard mechanisms for securing the flow between the frontend and 
> the backend: the security between the two is a custom solution (based on 
> cookies, in a specific, custom way, this part being in no way OAuth or 
> standard). This solves the problem by not using OAuth at all in the browser 
> part of the application, basically making the client application purely 
> backend. However, the fact that browser-based applications cannot be secured 
> with OAuth isn't universally true, and strongly depends on one's definition 
> of "secure", and basically comes down to what the security issue is.

The updated specification will clearly outline the security considerations when 
making the browser-based application a public OAuth client. 

The main problem with a browser-only client is that the attacker with control 
over the client has the ability to run a silent Authorization Code flow, which 
provides them with an independent set of tokens. These tokens give the attacker 
long-term and unrestricted access in the name of the user. A BFF-based 
architecture does not suffer from this issue, since the OAuth client is a 
confidential client. Regardless of one’s definition of “secure”, this is a 
clear difference on the achievable level of security. 

Of course, as stated multiple times before, the use of a BFF does not eliminate 
the presence of the malicious JS, nor does it solve all abuse scenarios. 



> Storing tokens at the frontend has advantages: it solves my concern above 
> about a standard based flow between the frontend and the backend.

The use of cookies is a core building block of the web, and is quite standard. 

> It's simpler from an operational point of view. And it's been used in the 
> wild for ages.

Anyone using a browser-only client should be informed about the clear and 
significant dangers of this approach, which the updated specification will do. 


> Both flows have been compromised numerous times. This doesn't mean they are 
> not right by design, but that the specific security concerns have to be 
> addressed.

If you have specific security concerns about a BFF, I’d suggest raising them. 
Until now, I have only seen arguments that highlight the additional effort it 
takes to implement a BFF, but nothing to undermine its security. Plenty of 
highly sensitive applications in the healthcare and financial industry opt for 
a BFF for its improved security properties and consider this trade-off to be 
favorable.


> Now, the concerns we are really discussing is, what happens in case of XSS or 
> any form of malicious javascript.
> 
> In this case, for all known flows, session riding is the first real issue. 
> Whether the injected code calls protected web resources through the BFF or 
> using the stored tokens, is irrelevant: the evil is done. Seeing different 
> threat levels between token abuse and session riding is a logical shortcut: 
> in many cases, the impact will be exactly the same.

Stating that using stolen tokens is the same as sending requests through a 
compromised client in the user’s browser (client hijacking) 

Re: [OAUTH-WG] WGLC for Browser-based Apps

2023-08-14 Thread Philippe De Ryck
I’m going to respond inline and re-organize the previous message a bit. 


> It's worth noting that it didn't get so much traction up to this time, and 
> that I stopped using it in multiple applications myself.

That’s exactly what I meant with my statement of an “unproven approach”. If 
you, the creator of this pattern, are not even using it in a production 
application, I fail to see how this is a recommended best practice for securing 
frontends. After all, the goal of this document is defined as follows:

This specification details the security considerations and best practices that 
must be taken into account when developing browser-based applications that use 
OAuth 2.0


BFFs, on the other hand, are widely used in applications where security is a 
high priority (e.g., financial, healthcare). 


> Not because it's not a worthwhile pattern to solve the mentioned problem, but 
> because token leak is typically not the real issue. XSS is the first one to 
> solve, and I disagree it's unavoidable (and certainly at any level: extra 
> care for application start code can get you very far).

It’s about more than XSS. Malicious JS ends up in applications through a 
variety of ways. In theory, you could argue that we know how to solve XSS or 
the issue of malicious JS. In practice, this is unfortunately absolutely false. 

Even if we just focus on XSS, where developers have full control over the code, 
I can easily argue that you cannot guarantee the absence of a vulnerability. 
For example, a well-established best practice for safely outputting data 
containing benign HTML is running the unsafe data through an HTML sanitiser. A 
few years ago, 
both Google’s internal sanitiser and DOMPurify, a sanitiser built by the 
world’s best XSS experts, contained a bypass vulnerability 
(https://securityaffairs.com/83199/hacking/google-search-xss-flaw.html & 
https://portswigger.net/research/bypassing-dompurify-again-with-mutation-xss). 
In essence, this means that every project relying on these (which is a best 
practice) was potentially vulnerable to XSS.


It remains true that you should aim to avoid having malicious JS in your app 
anyway, and do everything in your power to make this happen. However, the 
assumption in the spec is that malicious JS will happen, since plenty of 
countermeasures aim to reduce the impact of such an attack.


> Regarding the demonstration in the video, I don't think it would compromise 
> my current implementation.
> 
> The current draft says this in §6.4.2.1 :
> * The application MUST register the Service Worker before running any code 
> interacting with the user.
> 
> This adds an important constraint: an attacker would not only need to 
> compromise any part of the application, but should also make sure the very 
> first thing it does (registering the service worker) is also compromised, 
> which is much harder.
> 
> Step by step, the code I saw would:
> - open an iframe
> - redirect to the authorization server
> - get redirected to the registered redirect_uri (which is an additional 
> important constraint here)
> - register the service worker to run immediately
> ... which would stop the attack here, the main application not even seeing 
> the auth code and not being able to call the /token endpoint

No, because running a silent flow in an iframe typically uses a web message 
response. In essence, the callback is not the redirect URI, but a minimal JS 
page that sends the code to the main application context using the web 
messaging mechanism. The message will have the origin of the authorization 
server as a sender. 
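
To make that concrete, here is a rough sketch of the receiving side of such a 
web message response (the message shape and origins are illustrative 
assumptions, not taken from any specification text):

```javascript
// Sketch: the callback page posts the authorization code to the opener/parent
// via postMessage, so the receiver sees the AS origin as the sender.
function extractCodeFromWebMessage(event, authorizationServerOrigin) {
  // The sender is the authorization server, not the app itself, so checking
  // against the app's own origin cannot distinguish a legitimate silent flow
  // from one started by an attacker's iframe.
  if (event.origin !== authorizationServerOrigin) return null;
  return event.data && event.data.code ? event.data.code : null;
}

// In the (legitimate or malicious) application context:
// window.addEventListener('message', (e) => {
//   const code = extractCodeFromWebMessage(e, 'https://as.example.com');
//   if (code) exchangeCodeForTokens(code); // hypothetical helper
// });
```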


> BFFs are not a proven better level of protection. Session riding in case of 
> XSS is still the same.
> 
> One such claim is about BFFs being more secure because they are backed by 
> unstealable session cookies.

This view on threats is not the position that this working group has taken 
before. For example, for DPoP, there’s a detailed threat model that aims to 
counter a variety of attacks, except for session riding, which is explicitly 
considered out of scope 
(https://danielfett.de/2020/05/04/dpop-attacker-model/).

I am not saying (nor have I ever said) that a BFF is the holy grail that solves 
everything. If malicious JS code runs, you/the user are still in major trouble. 
However, with a BFF, that trouble can be reduced from unfettered abuse of 
access/refresh tokens to session riding (the “online XSS” attacker from the 
DPoP threat model). The fact 
that a BFF uses cookies helps to obtain these properties, but it is not the 
foundation of the security benefits. The main benefit is that the OAuth client 
application is a backend web application instead of a frontend web application.


> They also have debatable points: you either need some third party or custom 
> software (with their own threat to security)

Why do you think a BFF is third-party or 

Re: [OAUTH-WG] WGLC for Browser-based Apps

2023-08-13 Thread Philippe De Ryck

> I have a different interpretation of the objective of using a service worker, 
> and it aligns with descriptions in most of those links -- minimize the risk 
> of the access token and refresh token exfiltration from the application by 
> malicious JS code.  Service workers, when implemented properly, isolate the 
> access token, refresh token, PKCE from code in the DOM, similar to how 
> properly created cookies are isolated from the DOM JS.
> 
> This lowers the security risk of using OAuth to protect a server's resources 
> to be similar to the risks of using cookies.  I think this is an improvement 
> in the security of the application, and does not create a false sense of 
> security as you claim.

It is not just the use of a service worker that matters, it is the way the 
service worker is deployed. The examples you gave earlier all use a service 
worker to attach a token to an outgoing request. They still obtain the token 
from the main application (see [1] at the bottom), so tokens are still exposed 
to the main application. So if the goal is to prevent token exfiltration from 
the main app, this setup is not effective. The attacker could just run a new 
flow in an iframe, obtain fresh tokens and exfiltrate them.

The proposal of using service workers in this draft aims to provide this 
security by also shielding the authorization code and token exchange. I have 
not seen a practical implementation of this pattern. Furthermore, even if you 
implement this, an attacker with XSS capabilities can still unregister the 
worker and then obtain tokens to exfiltrate. 


> If an attacker has the ability to run malicious JS code in the application's 
> origin, the attacker can do anything against the underlying web resources 
> regardless of using OAuth or cookies.
> Do you have an alternative approach to isolating the OAuth credentials from 
> the DOM code? 

Yes, that is exactly why it is not possible to prevent token exfiltration when 
malicious JS runs. The BFF pattern as described in this document reduces the 
impact to its minimum level: a session riding attack. In that scenario, the 
attacker will never be able to obtain access/refresh tokens, and can only 
“tunnel” requests through the user’s online browser. That’s also why I always 
recommend focusing on defending against malicious JS instead of just trying to 
hide tokens. However, history shows that even with the best efforts, malicious 
JS will eventually end up in the application.

Unfortunately, the spec in its current form argues that techniques like refresh 
token rotation or the service worker help prevent abuse in case of token 
exfiltration. Since these techniques can (easily) be circumvented, they create 
a false sense of security. I clearly demonstrate this in the video I referenced 
earlier (https://www.youtube.com/watch?v=OpFN6gmct8c), as well as in this older 
post 
(https://mailarchive.ietf.org/arch/msg/oauth/s68mcQCC1NNe_Y2Q9Qu6x0VuLrI/). 
Note that this post is more than 3 years old, so my insights and way of 
explaining have changed since then.


> FWIW: If you want someone to understand previous posts, I'd suggest providing 
> a link to the post, and even better also include a small extract. Also, while 
> it is more effort on your part, I find concise, crisp responses more 
> constructive for email dialog, and similarly, don't expect that the email 
> list has, or will, take the time to watch your video. Have you watched any of 
> mine?

I fully understand that watching a video is not the most straightforward way of 
consuming content, but in this case, the video adds a tremendous amount of 
value. It allows me to use graphics to explain the issues, and it includes 
actual demo attacks to obtain tokens, even with recommended security mechanisms 
in place. If you would be arguing a point and refer to a video that clearly 
explains your thoughts, I would definitely watch it.






[1]

>>> A quick Google on oauth service workers returned a number of articles and 
>>> descriptions of using service workers:
>>> 
>>> https://github.com/ForgeRock/appAuthHelper/blob/master/service_workers.md

No mention of how tokens end up in the worker. A brief look at the code 
mentions frames and seems to refer to the AppAuth library, so this does not 
seem to happen in the worker.

>>> https://gaurav-techgeek.medium.com/re-architecting-authentication-with-service-workers-ff8fbbbfbdeb

The tokens are obtained from the main application. See the section "Now let us 
get the right token."

>>> https://itnext.io/using-service-worker-as-an-auth-relay-5abc402878dd

The tokens are obtained from the main application. I quote: getAuthTokenHeader 
method will communicate with js executed in a page to get current token 

>>> https://about.grabyo.com/service-workers-jwt-tokens/

No mention of how tokens end up in the worker.

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] WGLC for Browser-based Apps

2023-08-10 Thread Philippe De Ryck
Hi Dick,

The solutions you list here focus on using a service worker to intercept an 
outgoing call to a resource server. During interception, the service worker 
attaches the access token. This pattern is mainly used to avoid inserting 
access token logic into the application code. The SW attaches the access token, 
and if it has a refresh token, it can even run the RT flow to get a new access 
token.

Note that the SW is used for convenience here, not for security. The 
Browser-Based Apps draft recently added a SW pattern as a security mechanism 
(section 6.4.2). The idea is that the SW not only augments calls to the RS, but 
also handles the communication with the authorization server. 

Based on my understanding, this pattern was specifically added to address an 
attack scenario I described a while ago on this mailing list (and also 
demonstrated in the video linked in my previous mail). In this scenario, the 
attacker has the ability to run malicious JS code in the application’s origin. 
The attacker uses that ability to run a silent Authorization Code flow in an 
iframe, extracts the code, and exchanges it for a new set of tokens. 

The SW pattern in the spec aims to prevent the application from calling the AS 
directly, since all calls would be intercepted by the SW. This approach is 
ineffective, since an attacker can always unregister an existing service 
worker. The spec states that an unregistered worker remains active until the 
browsing context reloads (after which it would re-register the worker before 
the attacker’s code runs). However, after unregistering a worker, new contexts 
will no longer use this worker. As demonstrated in the video I linked to 
before, an attacker can unregister a worker and then run a flow in a frame 
without involving the worker. 
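
To make the bypass concrete, a hypothetical sketch in browser JS (nothing here 
comes from the draft; the function name is illustrative and the silent-flow URL 
is assumed to be built elsewhere):

```javascript
// Sketch of the attack described above, assuming the attacker already runs
// JS in the application's origin.
async function bypassTokenHandlingWorker(silentFlowUrl) {
  // Unregistering does not stop the worker for the current page, but any
  // *new* browsing context created afterwards is no longer controlled by it.
  const registrations = await navigator.serviceWorker.getRegistrations();
  await Promise.all(registrations.map((reg) => reg.unregister()));

  // A freshly created iframe is such a new context: a silent Authorization
  // Code flow started inside it is not intercepted by the worker.
  const frame = document.createElement('iframe');
  frame.style.display = 'none';
  frame.src = silentFlowUrl; // includes prompt=none etc., built elsewhere
  document.body.appendChild(frame);
}
```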

In essence, it boils down to Brock’s statement of “a false sense of security”. 
While someone may view this as sufficiently secure for their specific use 
cases, I really object to having this as one of the “recommended approaches” in 
an RFC.

Hope this helps

Philippe


> On 11 Aug 2023, at 02:56, Dick Hardt  wrote:
> 
> 
> Philippe: would you expand on your comment:
> 
> On Wed, Aug 9, 2023 at 11:51 PM Philippe De Ryck 
>  <mailto:phili...@pragmaticwebsecurity.com>> wrote:
> - Remove unproven and overly complicated solutions (i.e., the service worker 
> approach)
> 
> A quick Google on oauth service workers returned a number of articles and 
> descriptions of using service workers:
> 
> https://github.com/ForgeRock/appAuthHelper/blob/master/service_workers.md
> 
> https://gaurav-techgeek.medium.com/re-architecting-authentication-with-service-workers-ff8fbbbfbdeb
> 
> https://itnext.io/using-service-worker-as-an-auth-relay-5abc402878dd
> 
> https://about.grabyo.com/service-workers-jwt-tokens/
> 
> 
> 
> On Wed, Aug 9, 2023 at 11:51 PM Philippe De Ryck 
>  <mailto:phili...@pragmaticwebsecurity.com>> wrote:
>> In my opinion, this document is not ready to be published as an RFC. 
>> 
>> In fact, I will be at the OAuth Security Workshop in two weeks to discuss 
>> exactly this (See "The insecurity of OAuth 2.0 in frontends" here: 
>> https://oauth.secworkshop.events/osw2023/agenda-thursday). My hope is that 
>> my presentation can spark the necessary discussion to identify a path 
>> forward to make the RFC useful for practitioners building browser-based apps.
>> 
>> I don't have the resources available to write a lengthy email detailing my 
>> objections. I just want to point out that I've raised these points on the 
>> mailing list in the past, and there have been a couple of threads on this 
>> very list suggesting how to move this document forward (e.g., identify 
>> concrete threat models). I've also given a talk at NDC Security earlier this 
>> year (https://www.youtube.com/watch?v=OpFN6gmct8c) about how the security 
>> mechanisms proposed in this document fall short. This video has been posted 
>> to this list before as well.
>> 
>> Here are a couple of suggestions that I believe would improve this document:
>> 
>> - Clearly identify the danger of malicious JS (exfiltrating existing tokens 
>> is only one threat, and the most trivial one at that)
>> - State the baseline achievable level of security in light of existing XSS 
>> vulnerabilities (i.e., session riding, where the attacker controls the 
>> frontend)
>> - Identify different desired levels of security for a client application 
>> (e.g., a "public recipe app" vs "eHealth"). Existing work can help, such as 
>> the OWASP ASVS levels 
>> (https://github.com/OWASP/ASVS/blob/master/4.0/en/0x03-Using-ASVS.md)
>> - Define which levels of security certain mechanisms can offer (e.g., 

Re: [OAUTH-WG] WGLC for Browser-based Apps

2023-08-10 Thread Philippe De Ryck
In my opinion, this document is not ready to be published as an RFC. 

In fact, I will be at the OAuth Security Workshop in two weeks to discuss 
exactly this (See "The insecurity of OAuth 2.0 in frontends" here: 
https://oauth.secworkshop.events/osw2023/agenda-thursday). My hope is that my 
presentation can spark the necessary discussion to identify a path forward to 
make the RFC useful for practitioners building browser-based apps.

I don't have the resources available to write a lengthy email detailing my 
objections. I just want to point out that I've raised these points on the 
mailing list in the past, and there have been a couple of threads on this very 
list suggesting how to move this document forward (e.g., identify concrete 
threat models). I've also given a talk at NDC Security earlier this year 
(https://www.youtube.com/watch?v=OpFN6gmct8c) about how the security mechanisms 
proposed in this document fall short. This video has been posted to this list 
before as well.

Here are a couple of suggestions that I believe would improve this document:

- Clearly identify the danger of malicious JS (exfiltrating existing tokens is 
only one threat, and the most trivial one at that)
- State the baseline achievable level of security in light of existing XSS 
vulnerabilities (i.e., session riding, where the attacker controls the frontend)
- Identify different desired levels of security for a client application (e.g., 
a "public recipe app" vs "eHealth"). Existing work can help, such as the OWASP 
ASVS levels 
(https://github.com/OWASP/ASVS/blob/master/4.0/en/0x03-Using-ASVS.md)
- Define which levels of security certain mechanisms can offer (e.g., RTR for 
level 1, TMI-BFF for level 2, BFF for level 3)
- Remove unproven and overly complicated solutions (i.e., the service worker 
approach)

As stated before, I'll be at OSW in London in 2 weeks and would be happy to 
discuss this further.

Kind regards

Philippe

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com

> On 30 Jul 2023, at 17:46, Rifaat Shekh-Yusef  wrote:
> 
> All,
> 
> This is a WG Last Call for the Browser-based Apps draft.
> https://www.ietf.org/archive/id/draft-ietf-oauth-browser-based-apps-14.html
> 
> Please, review this document and reply on the mailing list if you have any 
> comments or concerns, by August 11th.
> 
> Regards,
>  Rifaat & Hannes



Re: [OAUTH-WG] Web apps BCP feedback

2021-09-26 Thread Philippe De Ryck
That’s why cookies should be set with the __Host- prefix. 
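
To make that concrete, a small sketch of such a cookie (the cookie name and 
value are made up). The `__Host-` prefix makes browsers reject the cookie 
unless it is Secure, has `Path=/`, and carries no Domain attribute, so a 
compromised subdomain cannot set or overwrite it:

```javascript
// Build a Set-Cookie header value using the __Host- prefix.
function hostPrefixedSessionCookie(value) {
  return [
    `__Host-session=${value}`,
    'Secure',   // required by the __Host- prefix
    'Path=/',   // required by the __Host- prefix; no Domain attribute allowed
    'HttpOnly',
    'SameSite=Lax',
  ].join('; ');
}

// e.g. res.setHeader('Set-Cookie', hostPrefixedSessionCookie('abc123'));
```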

In a carefully-designed API, CORS will function as a CSRF defense, even when 
the attacker is controlling a subdomain or sibling domain. 

Overall, I think the first part of 6.1 makes sense, but I don’t think the 
document should try to draw out such an architecture in 1 or 2 paragraphs at 
the end of that section.

Philippe

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com

> On 26 Sep 2021, at 00:15, Jim Manico  wrote:
> 
> Hi Neil! =)
> 
> I get your point! 
> I would suggest this text be written as something along the lines of:
> 
> "Additionally, the SameSite cookie attribute can be used to prevent CSRF 
> attacks, but the application and API should also be written to use anti-CSRF 
> tokens for stateful session-based applications, or the double-submit cookie 
> pattern for stateless applications."
> 
> PS: If an adversary controls a subdomain, can't they clobber and overwrite 
> root-level cookies anyhow? I do not think CSRF defense will defeat an 
> adversarial subdomains ability to over-write a cookie and circumvent 
> double-cookie-submit. 
> 
> On 9/25/21 8:10 AM, Neil Madden wrote:
>> Technically yes, CSRF refers to cross-site attacks. However, there is a 
>> class of attacks that are cross-*origin* but not cross-site and which are 
>> otherwise identical to CSRF. SameSite doesn’t protect against these attacks 
>> but other traditional CSRF defences *do*. For example, synchronizer tokens 
>> in hidden form fields or even just requiring a custom header on requests 
>> both provide some protection against such attacks, as they both use 
>> mechanisms that are subject to the same origin policy rather than same-site. 
>> 
>> — Neil
>> 
>>> On 25 Sep 2021, at 18:20, Jim Manico  
>>>  wrote:
>>> 
>>>  If someone has taken over a subdomain in the ways described, that is not 
>>> cross site request forgery since the attack is occurring from within your 
>>> site. It’s more likely XSS that allows for cookie clobbering or similar, or 
>>> just malicious code injected by the malicious controller of your subdomain. 
>>> This is not strictly CSRF nor are these problems protected from any other 
>>> standard form of CSRF defense.
>>> 
>>> CSRF is Cross Site attack where the attack is hosted on a different domain. 
>>> 
>>> --
>>> Jim Manico
>>> 
 On Sep 25, 2021, at 1:07 AM, Dominick Baier  
  wrote:
 
 
 In 6.1 it says
 
 "Additionally, the SameSite cookie attribute can be used to prevent CSRF 
 attacks, or alternatively, the application and API could be written to use 
 anti-CSRF tokens.”
 
 “Prevent” is a bit strong.
 
 SameSite only restricts cookies sent across site boundaries. It does not 
 prevent CSRF attacks from within a site boundary. Scenarios could be a 
 compromised sub-domain, like sub-domain takeover or just some vulnerable 
 application co-located on the same site.
 
 thanks
 ———
 Dominick Baier
>> 
> -- 
> Jim Manico
> Manicode Security
> https://www.manicode.com 



Re: [OAUTH-WG] TMI BFF - html meta tags over/alternative to discovery

2021-05-16 Thread Philippe De Ryck
Without having the full context of the interim meeting, this feels really off 
to me. I see the need for making this configurable, but I have doubts about 
using HTML elements for this purpose.

As far as I understand, this mechanism is supposed to be used for modern 
frontends (Angular, React, …). Having to add variables to the index.html in 
these applications is likely to conflict with their development paradigms. It 
would be much easier to add these variables to the environment file, so you can 
differentiate between dev and prod when necessary.
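
To make the alternative concrete, a sketch of the environment-file approach 
(the file layout would follow the framework's conventions, e.g. Angular's 
environment.ts / environment.prod.ts; the endpoint values here are made up):

```javascript
// Per-environment configuration instead of index.html meta tags. A plain
// object keeps the idea framework-neutral; a build tool would normally pick
// the right file at compile time.
const environments = {
  dev:  { bffTokenEndpoint: 'http://localhost:3000/bff-token' },
  prod: { bffTokenEndpoint: '/bff-token' },
};

function bffTokenEndpoint(envName) {
  return environments[envName].bffTokenEndpoint;
}
```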

Additionally, querying the DOM for API endpoints sounds like a lot of trouble. 
I don’t think that injection is that big of a risk, but I might be wrong (I’m 
sure someone said that about the base tag as well). However, using DOM APIs 
like this will cause headaches for server-side rendering, where these DOM APIs 
are typically not available (e.g., 
https://stackoverflow.com/questions/62896223/is-there-a-way-to-access-dom-serverside).

Kind regards

Philippe

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com/


> On 15 May 2021, at 17:35, Filip Skokan  wrote:
> 
> Hello Vittorio, Brian, everyone
> 
> This is a followup to my feedback in the TMI BFF interim meeting on April 
> 26th where I mentioned I'd bring this to the list for discussion.
> 
> I proposed an alternative to using fixed endpoint locations and/or discovery. 
> HTML <meta> tags.
> 
> These would be in the returned page HTML's head tag, e.g.
> 
> 
> <meta name="oauth-bff-token" content="/bff-token">
> 
> The JavaScript SDK handling TMI BFF would know to look for these defined meta 
> tags to source the location of the different endpoints. I think this could be 
> the primary place an SDK would look at as it doesn't require any upfront 
> external requests.
> 
> For the SDK this is as simple as
> 
> var bffTokenPath = 
> document.querySelector('meta[name="oauth-bff-token"]').content;
> 
> If this was the only mechanism defined by the document (to be bashed) I think 
> it can save the group a lot of time defining a client discovery document 
> which would be otherwise needed. If discovery as an alternative solution is 
> indeed inevitable, it can be a second in line mechanism the javascript SDK 
> would know to use.
> 
> As discussed in the interim, a well known set of endpoints (or even a single 
> root client discovery document) might not always be available for control to 
> the webpage depending on where and how it is hosted, on the other hand the 
> HTML it serves always, I hope, is.
> 
> Best,
> Filip Skokan



Re: [OAUTH-WG] Re-creation of Access Token on Single Page Application

2021-03-13 Thread Philippe De Ryck
> On 13 Mar 2021, at 07:52, Tatsuya Karino  wrote:
> 
> By the way, I also wonder what is the better option to use OAuth2.0 on SPA 
> Client (3rd party) with good UIUX.
> In my understanding, there are two options to achieve it.
> 1. Using response_momde=web_message or 2.Using Refresh Token with fixed 
> maximum lifetime.
> 
> But I have a concern on a practical use.
> For 1, Some browser could be restricted to send credential cookie to the 
> authorization server from iframe.
> For 2, The Refresh Token must be saved on the browser, but it could be 
> deleted on 7days in safari.
> 
> Is there any workaround? or Is there any misunderstanding on my concerns?

As you stated, option 1 does not work in cross-site scenarios in Safari & Brave 
at the moment. Other browsers are likely to follow the same pattern in the 
future.

Option 2 only works if there are already tokens available, which is typically 
not the case at first load. Also, keeping long-lived refresh tokens in a 
browser is not always the best idea.

One workaround goes as follows:
1. When the SPA loads the first time, check if it has token material. If yes, 
done, if not, go to step 2.
2. Redirect the browser to the authorization server to run a new authorization 
code flow with PKCE, by setting prompt=none. This will prevent any user 
interaction and immediately returns either an authz code or an error.
3. The SPA loads again with the autz code/error. If it is a code, it exchanges 
it for tokens and all is good. If it is an error, the SPA simply shows the 
unauthenticated state (here, the user can start a new flow with interaction by 
clicking the login button)

Note that step 2 will include cookies, so it can resume an existing session 
between the browser and the authorization server. This cookie is always present 
since a top-level redirect is not a third-party scenario, so third-party cookie 
blocking does not apply.
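
A minimal sketch of the redirect in step 2 (endpoint and parameter values are 
hypothetical; a real client would also generate a state value and the PKCE 
code verifier):

```javascript
// Build the authorization request for a non-interactive silent login.
function buildSilentLoginUrl({ authorizeEndpoint, clientId, redirectUri, codeChallenge }) {
  const url = new URL(authorizeEndpoint);
  url.searchParams.set('response_type', 'code');
  url.searchParams.set('client_id', clientId);
  url.searchParams.set('redirect_uri', redirectUri);
  url.searchParams.set('code_challenge', codeChallenge);
  url.searchParams.set('code_challenge_method', 'S256');
  // prompt=none makes the AS return an error immediately instead of showing
  // any UI, so the SPA learns right away whether a session exists.
  url.searchParams.set('prompt', 'none');
  return url.toString();
}

// In the browser (step 2): window.location.assign(buildSilentLoginUrl({ ... }));
```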

Hope this helps

Philippe




Re: [OAUTH-WG] Token Mediating and session Information Backend For Frontend (TMI BFF)

2021-02-18 Thread Philippe De Ryck

> On 18 Feb 2021, at 13:08, Neil Madden  wrote:
> 
> Thanks for following up, Brian. Responses below.
> 
>> On 17 Feb 2021, at 22:48, Brian Campbell > > wrote:
>> 
>> Always appreciate (and often learn from) your insights, Neil. I'd like to 
>> dig into the CSRF thing a bit more though to understand better and hopefully 
>> do the right thing in the draft. 
>> 
>> It seems to me that a GET at the bff-token endpoint is "safe" in that it's 
>> effectively just a read. 
> 
> Well it’s a read that returns an access token. It’s “safe” in the sense of 
> side-effects, but we absolutely want to preserve the confidentiality of what 
> is returned and only allow it to be accessed by authorized clients (the 
> legitimate frontend). At the moment the only thing keeping that safe is the 
> JSON content type. For example, imagine a world in which the token-bff 
> endpoint instead returned the access token as HTML:
> 
> abcd
> 
> Then as an attacker I can simply embed an iframe on my site that refers to 
> your bff-endpoint and then parse the access token out of the DOM. The browser 
> will happily load that iframe and send along the cookie when it makes the 
> request.

You are overlooking basic browser security measures like the Same-Origin Policy 
here. The browser will only allow access to an iframe if it has the same origin 
as the context accessing the frame. If an attacker embeds this frame in their 
site, it will be a cross-origin frame, and access will be denied.

FYI, simple CORS requests follow the same security pattern (when headers are 
missing, browsers do not expose the response). Preflighted CORS requests cover 
"new features" (i.e., stuff you traditionally could not do with HTML elements) 
and ask permission before sending a request. 

Also, if you're worried about framing, it's much simpler to require the token 
endpoint to send "X-Frame-Options: DENY" and "Content-Security-Policy: 
frame-ancestors 'none'" response headers. This denies framing altogether 
without going into complicated CORS territory.

Philippe

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com/


Re: [OAUTH-WG] Token Mediating and session Information Backend For Frontend (TMI BFF)

2021-02-15 Thread Philippe De Ryck
> 
> On 15 Feb 2021, at 11:50, Neil Madden  wrote:
> 
>> On 15 Feb 2021, at 10:26, Philippe De Ryck 
>>  wrote:
>> 
>>> On 15 Feb 2021, at 11:14, Neil Madden >> <mailto:neil.mad...@forgerock.com>> wrote:
>>> 
>>>> On 15 Feb 2021, at 08:32, Philippe De Ryck 
>>>> >>> <mailto:phili...@pragmaticwebsecurity.com>> wrote:
>>>> 
>>>> [...]
>>>> 
>>>> Compared to using a worker for handling RTs, I believe the TMI-BFF only 
>>>> adds a single security benefit: an attacker is no longer able to run a 
>>>> silent flow to obtain a fresh set of tokens (since the client is now a 
>>>> confidential client). 
>>> 
>>> But they can just call the bff-token endpoint to do the same. If there is a 
>>> security advantage, IMO it is as a defence in depth against open redirects, 
>>> unicode normalisation attacks (ie not validating the redirect_uri correctly 
>>> at the AS), etc. 
>> 
>> A Web Worker and the TMI-BFF both encapsulate the RT and only expose the 
>> (short-lived) AT.
> 
> I don’t think this distinction matters at all from a security point of view. 
> It’s the AT that attackers are after - why bother with a RT if I can just 
> call the bff-token endpoint to get a new AT every time?

Getting an AT from the BFF (or a worker) is an “online” attack, which only 
works as long as the application/malicious code is loaded in the browser of the 
user. 

Stealing a working refresh token (e.g., with a silent flow) is an “offline” 
attack, which gives long-term access (lifetime of the RT), independent of the 
state of the application in the user’s browser.

There is a clear distinction, but whether that matters is a different 
discussion. It depends on how the application used, and how token lifetimes are 
configured. FWIW, the DPoP threat model makes the same distinction ("Stolen 
token (XSS)” vs “XSS (Victim is online)”) here: 
https://danielfett.de/2020/05/04/dpop-attacker-model/ 
<https://danielfett.de/2020/05/04/dpop-attacker-model/>

Philippe
 



Re: [OAUTH-WG] Token Mediating and session Information Backend For Frontend (TMI BFF)

2021-02-15 Thread Philippe De Ryck

> On 15 Feb 2021, at 11:14, Neil Madden  wrote:
> 
>> On 15 Feb 2021, at 08:32, Philippe De Ryck 
>>  wrote:
>> 
>> [...]
>> 
>> Compared to using a worker for handling RTs, I believe the TMI-BFF only adds 
>> a single security benefit: an attacker is no longer able to run a silent 
>> flow to obtain a fresh set of tokens (since the client is now a confidential 
>> client). 
> 
> But they can just call the bff-token endpoint to do the same. If there is a 
> security advantage, IMO it is as a defence in depth against open redirects, 
> unicode normalisation attacks (ie not validating the redirect_uri correctly 
> at the AS), etc. 

A Web Worker and the TMI-BFF both encapsulate the RT and only expose the 
(short-lived) AT.

With the worker-based approach, the client is a public client that completes 
the code exchange without authentication. This allows an attacker to run an 
independent silent flow in an iframe within the legitimate application. This 
flow relies on the existing cookie-based session with the AS to obtain an AT 
and RT, independent of the tokens of the client application. A confidential 
client does not suffer from this problem (a stolen code cannot be exchanged 
without client authN, and when done through the BFF, the RT is not exposed). 
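
A sketch of the difference at the token endpoint (parameter names follow the 
authorization code grant; secret-based client authentication is shown as just 
one option, and the values are made up):

```javascript
// Token request body for the authorization code grant. A public browser
// client can only send the public parameters; a confidential client (the
// BFF) also authenticates, so a stolen code alone is useless to an attacker.
function tokenRequestBody({ code, codeVerifier, clientId, clientSecret }) {
  const params = new URLSearchParams({
    grant_type: 'authorization_code',
    code,
    code_verifier: codeVerifier,
    client_id: clientId,
  });
  if (clientSecret) {
    params.set('client_secret', clientSecret); // absent for a public client
  }
  return params.toString();
}
```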

And as you state, there are other benefits as well.

Philippe

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com/


Re: [OAUTH-WG] Token Mediating and session Information Backend For Frontend (TMI BFF)

2021-02-15 Thread Philippe De Ryck

> On 15 Feb 2021, at 08:50, Vittorio Bertocci  
> wrote:
> 
> Thank you Philippe for your comments! Some considerations:
> It also aims to avoid the need for a reverse proxy-based BFF, but comes up 
> short compared to such a BFF.
> That isn’t a goal. If the developer can use a reverse proxy, they should 
> definitely go for it. Not getting tokens in the user agent at all is the 
> safest option. TMI-BFF is for the cases where it is not a viable option- thru 
> the threads I mentioned some of those cases.

OK, seems that I got thrown off by the introduction. Adding a section to 
describe these architectural patterns and their trade-offs could definitely 
help to scope this spec.


> Section 6.1 states not to use localStorage (I assume because of malicious JS 
> code),…
> We might have been naïve there- a common discussion point even when 
> considering code+PKCE is that local storage is somehow vulnerable to more 
> exploits than memory.
> If there are attacks that only work with storage but not with memory, this 
> would be an incremental improvement. If every attack that can access the 
> storage can also access memory, then I agree it’s a moot point- but then the 
> current public discourse needs a severe reset given that the two options are 
> presented as having very different security properties.

Welcome to my life :)

Let me try to give an analogy to frame this problem a bit. You're worried about 
burglars breaking into your home, so instead of leaving jewellery on the 
kitchen table, you lock it away in a safe. Rookie burglars will break in, and 
not find anything on the kitchen table, and be stumped on how to proceed. 
However, a veteran burglar probably knows where to find your safe. If they 
can't unlock the safe, they can just hide behind the door until you open your 
safe, hit you on the head with a bat, and take your jewellery. In essence, 
removing valuables from the kitchen table only addresses a consequence, but not 
the underlying problem.

The same applies to malicious JS and localStorage. Going after data in 
localStorage is an easy and simple attack. It's what a script kiddie would do 
when they have no idea what they are really doing. However, a targeted attack 
will not be hampered by that. They can apply the following techniques:
1. Override any JS function used by the legitimate application for handling the 
token (e.g., setRequestHeader)
2. Send a request to the backend to obtain an access token (this would be 
indistinguishable from a legitimate request)

Without the use of a TMI-BFF, the attacker would also be able to run a silent 
flow to obtain a new set of tokens from the AS. Moving to a confidential 
client, as the spec proposes, addresses this major vulnerability.

Note that "in memory" here means "in memory of the application's execution 
context". A web worker runs in a separate context and is isolated by design.

> you lose the ability to use sender constrained tokens, which a proxy BFF 
> would be able to do
> Absolutely. If the reverse proxy is an option, the security improves for 
> sure. TMI-BFF is meant to help when it is not an option.

See the note above about a separate section on the trade-offs. 


> Keeping refresh tokens on the backend is good, but not necessarily better 
> than keeping them in a web worker, as Auth0’s JS SDK
> A web worker is definitely good practice, but it’s still on the local 
> machine. One might argue it’s enough security for the scenario, but not the 
> same as not having the tokens available at all in the local box. But as you 
> say, the thrust behind TMI-BFF is more about dev experience- I’d be happy 
> with equivalent security with code+PKCE+RTs

I get the impression that this spec lacks a concrete threat model. A worker is 
an isolated environment in the browser. If the local machine is considered 
untrusted, then what about the user's credentials that are entered on that 
machine, or the U2F/WebAuthN authentication used for MFA? The local machine is 
part of the TCB here, and that includes the browser and its extensions. So I'd 
say that RTs in such an environment are properly secured. Also, since workers 
live in memory only, RTs for a frontend web app should have a lifetime similar 
to a traditional session (e.g., 12 hours).

Compared to using a worker for handling RTs, I believe the TMI-BFF only adds a 
single security benefit: an attacker is no longer able to run a silent flow to 
obtain a fresh set of tokens (since the client is now a confidential client). 

Hope this helps

Philippe


>  
> From: Philippe De Ryck 
> Date: Sunday, February 14, 2021 at 22:45
> To: Vittorio Bertocci 
> Cc: Warren Parad , "oauth@ietf.org" 
> Subject: Re: [OAUTH-WG] Token Mediating and session Information Backend For 
> Frontend (TMI BFF)
>  
> A couple of notes from my end:
>  
> Dev

Re: [OAUTH-WG] Token Mediating and session Information Backend For Frontend (TMI BFF)

2021-02-14 Thread Philippe De Ryck
A couple of notes from my end:

Developers building an application that consists of a frontend, a backend, and 
APIs indeed often struggle with identifying the correct client, especially with 
the combination of OAuth 2.0 and OIDC. Having a standardized way of handling 
such cases is definitely useful. 

That said, the current spec is a bit all over the place. Vittorio stated here 
that the main goal is to make it easier for developers, but the spec makes 
quite a few (vague) security statements. It also aims to avoid the need for 
a reverse proxy-based BFF, but comes up short compared to such a BFF. 

A few detailed security notes:

1) Section 6.1 states not to use localStorage (I assume because of malicious JS 
code), which makes no sense. If malicious code is running in the frontend, it 
can simply call the bff-token endpoint and grab the token from there. Other 
attacks typically also allow stealing tokens from memory (e.g., prototype 
pollution attacks, as I discuss here: 
https://pragmaticwebsecurity.com/articles/oauthoidc/localstorage-xss.html). 

2) The use of confidential clients is a big plus, but since the access token is 
used in the frontend, you lose the ability to use sender constrained tokens, 
which a proxy BFF would be able to do. A proxy-based BFF can also limit the API 
endpoints it exposes to a specific frontend and can apply traffic analysis to 
detect malicious behavior from a compromised frontend.

3) Keeping refresh tokens on the backend is good, but not necessarily better 
than keeping them in a web worker, as Auth0’s JS SDK (and presumably others, 
haven’t checked) does by default. 

Overall, my recommendation is to focus on the specific use case of handling 
tokens in a “frontend with backend” application type, and forget about trying 
to also solve security issues in frontends caused by malicious JS.

Philippe


—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com/


> On 14 Feb 2021, at 22:31, Vittorio Bertocci 
>  wrote:
> 
> Draft [..] essentially suggests that every app needs to run a BFF to do user 
> token generation because an AS is no longer afforded the capability for some 
> reason
> I believe this might be the crux of the problem, my impression is that you 
> are attributing to the draft a way larger scope than it is intended to have. 
> The draft makes no such suggestion. The draft says nothing about AS losing 
> capabilities. The draft is not trying to solve an AS problem. And above all, 
> the draft does not target every app. 
>  
> The draft is an optimization aimed at a very specific, albeit very common, 
> application topology where there is a frontend and a backend, and the 
> developer wants to perform API calls from the frontend directly to the RS. 
> This is only a specific topology, and the proposal is scoped down to that. 
> All other topologies are unaffected. Also, the draft isn’t pushing this 
> topology as the preferred one. It’s best to keep tokens out of the frontend 
> altogether. But if the developer is adamant in performing API calls direct 
> from their JS, and if they already have a backend, and only in that case, the 
> current proposal has less moving parts and less requirements than code+PKCE .
>  
> Code+PKCE already has the expressive power to handle the scenario described 
> here, and is applicable to a wider range of scenarios. The main point the 
> draft brings to the table is the ability for a frontend to delegate to a 
> backend a lot of logic that in the code+PKCE case executes in an environment 
> that is naturally more constrained and prone to attacks than a backend. That 
> doesn’t mean developers whose app has a backend should automatically choose 
> the new model over code+PKCE, more that the model discussed here might be 
> viable and require less capabilities/moving parts to achieve the same 
> expressive power, especially if the SDKs used in the solution have a standard 
> way to handle it. That is far from every app, but it is a situation I 
> encountered in the wild often enough to prompt the discussion with Brian and 
> the draft.
>  
> As a side note, the initial reactions of practitioners have been very 
> positive. I am really hoping the discussion will lead to identifying and 
> weeding out security issues, or land on security flaws in the model so grave 
> that they can be properly used to discourage its use. No matter what way it 
> will go, I am very glad the discussion is taking place.
>  
> From: Warren Parad
> Date: Sunday, February 14, 2021 at 12:59
> To: Vittorio Bertocci
> Cc: Neil Madden, "oauth@ietf.org" <oauth@ietf.org>
> Subject: Re: [OAUTH-WG] Token Mediating and session Information Backend For 
> Frontend (TMI BFF)
>  
> To restate, the TMI-BFF 

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-11 Thread Philippe De Ryck
The scenario you describe here is realistic in browser-based apps with XSS 
vulnerabilities, but it is pretty complex. Since there are worse problems when 
XSS happens, it’s hard to say whether DPoP should mitigate this. 

I’m wondering what other types of clients would benefit from using DPoP for 
access tokens? Mobile apps? Clients using a Client Credentials grant?

How are they impacted by any change made specifically for browser-based 
applications?

Philippe


> On 9 Dec 2020, at 23:57, Brian Campbell  wrote:
> 
> Thanks Philippe, I very much concur with your line of reasoning and the 
> important considerations. The scenario I was thinking of is: browser based 
> client where XSS is used to exfiltrate the refresh token along with 
> pre-computed proofs that would allow for the RT to be exchanged for new 
> access tokens and also pre-computed proofs that would work with those access 
> tokens for resource access. With the pre-computed proofs that would allow 
> prolonged (as long as the RT is valid) access to protected resources even 
> when the victim is offline. Is that a concrete attack scenario? I mean, kind 
> of. It's pretty convoluted/complex. And while an access token hash would 
> reign it in somewhat (ATs obtained from the stolen RT wouldn't be usable) 
> it's hard to say if the cost is worth the benefit.
> 
> 
> 
> On Tue, Dec 8, 2020 at 11:47 PM Philippe De Ryck 
> <phili...@pragmaticwebsecurity.com> wrote:
> Yeah, browser-based apps are pure fun, aren’t they? :)
> 
> The reason I covered a couple of (pessimistic) XSS scenarios is that the 
> discussion started with an assumption that the attacker already successfully 
> exploited an XSS vulnerability. I pointed out how, at that point, finetuning 
> DPoP proof contents will have little to no effect to stop an attack. I 
> believe it is important to make this very clear, to avoid people turning to 
> DPoP as a security mechanism for browser-based applications.
> 
> 
> Specifically to your question on including the hash in the proof, I think 
> these considerations are important:
> 
> 1. Does the inclusion of the AT hash stop a concrete attack scenario?
> 2. Is the “cost” (implementation, getting it right, …) worth the benefits?
> 
> 
> Here’s my view on these considerations (specifically for browser-based apps, 
> not for other types of applications):
> 
> 1. The proof precomputation attack is already quite complex, and short access 
> token lifetimes already reduce the window of attack. If the attacker can 
> steal a future AT, they could also precompute new proofs then. 
> 2. For browser-based apps, it seems that doing this complicates the 
> implementation, without adding much benefit. Of course, libraries could 
> handle this, which significantly reduces the cost. 
> 
> 
> Note that these comments are specifically to complicating the spec and 
> implementation. DPoP’s capabilities of using sender-constrained access tokens 
> are still useful to counter various other scenarios (e.g., middleboxes or 
> APIs abusing access tokens). If other applications would significantly 
> benefit from having the hash in the proof, I’m all for it.
> 
> On a final note, I would be happy to help clear up the details on web-based 
> threats and defenses if necessary.
> 
> —
> Pragmatic Web Security
> Security for developers
> https://pragmaticwebsecurity.com/ <https://pragmaticwebsecurity.com/>
> 
> 
>> On 8 Dec 2020, at 22:47, Brian Campbell <bcampb...@pingidentity.com> wrote:
>> 
>> Danial recently added some text to the working copy of the draft with 
>> https://github.com/danielfett/draft-dpop/commit/f4b42058 
>> <https://github.com/danielfett/draft-dpop/commit/f4b42058> that I think aims 
>> to better convey the "nutshell: XSS = Game over" sentiment and maybe 
>> dissuade folks from looking to DPoP as a cure-all for browser based 
>> applications. Admittedly a lot of the initial impetus behind producing the 
>> draft in the first place was born out of discussions around browser based 
>> apps. But it's neither specific to browser based apps nor a panacea for 
>> them. I hope the language in the document and how it's recently been 
>> presented is reflective of that reality. 
>> 
>> The more specific discussions/recommendations around in-browser apps are 
>> valuable (if somewhat over my head) but might be more appropriate in the 
>> OAuth 2.0 for Browser-Based Apps 
>> <https://datatracker.ietf.org/doc/draft-ietf-oauth-browser-based-apps/> 
>> draft.
>> 
>> With respect to the contents of the DPoP draft, I am still keen to try and 
>> flush out some consensus around the question p

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-08 Thread Philippe De Ryck
Yeah, browser-based apps are pure fun, aren’t they? :)

The reason I covered a couple of (pessimistic) XSS scenarios is that the 
discussion started with an assumption that the attacker already successfully 
exploited an XSS vulnerability. I pointed out how, at that point, finetuning 
DPoP proof contents will have little to no effect to stop an attack. I believe 
it is important to make this very clear, to avoid people turning to DPoP as a 
security mechanism for browser-based applications.


Specifically to your question on including the hash in the proof, I think these 
considerations are important:

1. Does the inclusion of the AT hash stop a concrete attack scenario?
2. Is the “cost” (implementation, getting it right, …) worth the benefits?


Here’s my view on these considerations (specifically for browser-based apps, 
not for other types of applications):

1. The proof precomputation attack is already quite complex, and short access 
token lifetimes already reduce the window of attack. If the attacker can steal 
a future AT, they could also precompute new proofs then. 
2. For browser-based apps, it seems that doing this complicates the 
implementation, without adding much benefit. Of course, libraries could handle 
this, which significantly reduces the cost. 


Note that these comments are specifically to complicating the spec and 
implementation. DPoP’s capabilities of using sender-constrained access tokens 
are still useful to counter various other scenarios (e.g., middleboxes or APIs 
abusing access tokens). If other applications would significantly benefit from 
having the hash in the proof, I’m all for it.

On a final note, I would be happy to help clear up the details on web-based 
threats and defenses if necessary.

—
Pragmatic Web Security
Security for developers
https://pragmaticwebsecurity.com/


> On 8 Dec 2020, at 22:47, Brian Campbell  wrote:
> 
> Danial recently added some text to the working copy of the draft with 
> https://github.com/danielfett/draft-dpop/commit/f4b42058 
> <https://github.com/danielfett/draft-dpop/commit/f4b42058> that I think aims 
> to better convey the "nutshell: XSS = Game over" sentiment and maybe dissuade 
> folks from looking to DPoP as a cure-all for browser based applications. 
> Admittedly a lot of the initial impetus behind producing the draft in the 
> first place was born out of discussions around browser based apps. But it's 
> neither specific to browser based apps nor a panacea for them. I hope the 
> language in the document and how it's recently been presented is reflective 
> of that reality. 
> 
> The more specific discussions/recommendations around in-browser apps are 
> valuable (if somewhat over my head) but might be more appropriate in the 
> OAuth 2.0 for Browser-Based Apps 
> <https://datatracker.ietf.org/doc/draft-ietf-oauth-browser-based-apps/> draft.
> 
> With respect to the contents of the DPoP draft, I am still keen to try and 
> flush out some consensus around the question posed in the start of this 
> thread, which is effectively whether or not to include a hash of the access 
> token in the proof.  Acknowledging that "XSS = Game over" does sort of evoke 
> a tendency to not even bother with such incremental protections (what I've 
> tried to humorously coin as "XSS Nihilism" with no success). And as such, I 
> do think that leaving it how it is (no AT hash in the proof) is not 
> unreasonable. But, as Filip previously articulated, including the AT hash in 
> the proof would prevent potentially prolonged access to protected resources 
> even when the victim is offline. And that seems maybe worthwhile to have in 
> the protocol, given that it's not a huge change to the spec. But it's a 
> trade-off either way and I'm personally on the fence about it.
> 
> Including an RT hash in the proof seems more niche. Best I can tell, it would 
> guard against prolonged offline access to protected resources when access 
> tokens are bearer and the RT was DPoP-bound and also gets rotated. The 
> trade-off there seems less worth it (I think an RT hash would be more awkward 
> in the protocol too). 
> 
> 
> 
> 
> 
> 
> 
> On Fri, Dec 4, 2020 at 5:40 AM Philippe De Ryck 
> <phili...@pragmaticwebsecurity.com> wrote:
> 
>> The suggestion to use a web worker to ensure that proofs cannot be 
>> pre-computed is a good one I think. (You could also use a sandboxed iframe 
>> for a separate sub/sibling-domain - dpop.example.com 
>> <http://dpop.example.com/>).
> 
> An iframe with a different origin would also work (not really sandboxing, as 
> that implies the use of the sandbox attribute to enforce behavioral 
> restrictions). The downside of an iframe is the need to host additional HTML, 
> vs a script file for t

Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-04 Thread Philippe De Ryck

> The suggestion to use a web worker to ensure that proofs cannot be 
> pre-computed is a good one I think. (You could also use a sandboxed iframe 
> for a separate sub/sibling-domain - dpop.example.com 
> ).

An iframe with a different origin would also work (not really sandboxing, as 
that implies the use of the sandbox attribute to enforce behavioral 
restrictions). The downside of an iframe is the need to host additional HTML, 
vs a script file for the worker, but the effect is indeed the same.

> For scenario 4, I think this only works if the attacker can trick/spoof the 
> AS into using their redirect_uri? Otherwise the AC will go to the legitimate 
> app which will reject it due to mismatched state/PKCE. Or are you thinking of 
> XSS on the redirect_uri itself? I think probably a good practice is that the 
> target of a redirect_uri should be a very minimal and locked down page to 
> avoid this kind of possibility. (Again, using a separate sub-domain to handle 
> tokens and DPoP seems like a good idea).

My original thought was to use a silent flow with Web Messaging. The scenario 
would go as follows:

1. Setup a Web Messaging listener to receive the incoming code
2. Create a hidden iframe with the DOM APIs
3. Create an authorization request such as 
“/authorize?response_type=code&client_id=...&redirect_uri=https%3A%2F%2Fexample.com&scope=...&code_challenge=7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk&code_challenge_method=S256&prompt=none&response_mode=web_message”
4. Load this URL in the iframe, and wait for the result
5. Retrieve code in the listener, and use PKCE (+ DPoP if needed) to exchange 
it for tokens

This puts the attacker in full control over every aspect of the flow, so no 
need to manipulate any of the parameters.


After your comment, I also believe an attacker can run the same scenario 
without the “response_mode=web_message”. This would go as follows:

1. Create a hidden iframe with the DOM APIs
2. Setup polling to read the URL (this will be possible for same-origin pages, 
not for cross-origin pages)
3. Create an authorization request such as 
“/authorize?response_type=code&client_id=...&redirect_uri=https%3A%2F%2Fexample.com&scope=...&code_challenge=7-ffnU1EzHtMfxOAdlkp_WixnAM_z9tMh3JxgjazXAk&code_challenge_method=S256”
4. Load this URL in the iframe, and keep polling
5. Detect the redirect back to the application with the code in the URL, 
retrieve code, and use PKCE (+ DPoP if needed) to exchange it for tokens

In step 5, the application is likely to also try to exchange the code. This 
will fail due to a mismatching PKCE verifier. While noisy, I don’t think it 
affects the scenario. 


> IMO, the online attack scenario (i.e., proxying malicious requests through 
> the victim’s browser) is quite appealing to an attacker, despite the apparent 
> inconvenience:
> 
>  - the victim’s browser may be inside a corporate firewall or VPN, allowing 
> the attacker to effectively bypass these restrictions
>  - the attacker’s traffic is mixed in with the user’s own requests, making 
> them harder to distinguish or to block
> 
> Overall, DPoP can only protect against XSS to the same level as HttpOnly 
> cookies. This is not nothing, but it means it only prevents relatively naive 
> attacks. Given the association of public key signatures with strong 
> authentication, people may have overinflated expectations if DPoP is pitched 
> as an XSS defence.

Yes, in the cookie world this is known as “Session Riding”. Having the worker 
for token isolation would make it possible to enforce a coarse-grained policy 
on outgoing requests to prevent total abuse of the AT.

My main concern here is the effort of doing DPoP in a browser versus the 
limited gains. It may also give a false sense of security. 



With all this said, I believe that the AS can lock down its configuration to 
reduce these attack vectors. A few initial ideas:

1. Disable silent flows for SPAs using RT rotation
2. Use the sec-fetch headers to detect and reject iframe-based flows

For example,  an OAuth 2.0 flow in an iframe in Brave/Chrome carries these 
headers:
sec-fetch-dest: iframe
sec-fetch-mode: navigate
sec-fetch-site: cross-site
sec-fetch-user: ?1
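A hypothetical AS-side guard based on these headers might look like the following (the header names are standard Fetch Metadata; the policy itself is an assumption):

```javascript
// Sketch: reject authorization requests that arrive in a frame, based on
// Fetch Metadata. Older browsers omit these headers, so absence is allowed
// here; a stricter policy could reject those requests as well.
function rejectFramedAuthorizationRequest(headers) {
  const dest = (headers['sec-fetch-dest'] || '').toLowerCase();
  return dest === 'iframe' || dest === 'frame' || dest === 'embed';
}

// Headers from the iframe-based flow shown above:
console.log(rejectFramedAuthorizationRequest({
  'sec-fetch-dest': 'iframe',
  'sec-fetch-mode': 'navigate',
  'sec-fetch-site': 'cross-site',
  'sec-fetch-user': '?1',
})); // true: block the flow

// A top-level redirect-based flow would pass:
console.log(rejectFramedAuthorizationRequest({ 'sec-fetch-dest': 'document' })); // false
```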


Philippe

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] DPoP followup I: freshness and coverage of signature

2020-12-04 Thread Philippe De Ryck
Hi all,

This is a very useful discussion, and there are some merits to using DPoP in 
this way. However, the attacker's capabilities are stronger than often assumed, 
so it may not matter in the end. I've been wanting to write this out for a 
while now, so I've added a couple of scenarios below. Note that I just came up 
with the scenario names on the fly, so these may not be the best ones for 
future use ...

(This got a lot longer than I expected, so here's a TOC)
- Attack assumption
- Scenario 1: offline XSS against existing tokens
- Scenario 2: passive online XSS against existing tokens
- Scenario 3: active online XSS against existing tokens
- Scenario 4 (!): obtaining fresh tokens
- Mitigation: DPoP in a Web Worker
- Conclusion (TL;DR)

I hope this all makes sense!

Philippe




Assumption

The attacker has the ability to execute JS code in the application's context 
(e.g., through XSS, a malicious ad, ...). For simplicity, I'll just refer to 
the attack as "XSS".



Scenario 1: offline XSS against existing tokens

In this scenario, the malicious code executes and immediately performs a 
malicious action. The attacker is not necessarily present or actively 
participating in the attack (i.e., abuse of stolen tokens is done at a later 
time). 

A common example would be stealing tokens from localStorage and sending them to 
an attacker-controlled server for later abuse. Existing mitigations include 
short AT lifetimes and RT rotation.

The attacker could determine that DPoP is being used, and also extract 
precomputed proofs for any of these tokens. The use of DPoP makes token abuse a 
bit harder (large window = lots of proofs), but does not really strengthen the 
defense beyond existing mitigations (Short AT lifetimes and RT rotation). 



Scenario 2: passive online XSS against existing tokens

In this scenario, the malicious code executes and sets up a long-term attack. 
The attacker (i.e., a malicious application running on a server) is passive 
until certain criteria are met. 

An attack could be to manipulate the JS execution context, so that the attacker 
can detect new tokens being obtained by the client. (e.g., by overriding a 
listener or changing core function prototypes). Each time new tokens are issued 
(AT + RT), the attacker sends them to the malicious server. The moment the 
attacker detects that the user closes the application, the malicious server 
continues the RT rotation chain. Since the application is no longer active, the 
AS will not detect this. The attacker now has access for as long as the RT 
chain can be kept alive.

When DPoP is used, the attacker will need proofs to present to the AS when 
running a refresh token flow. If the proofs are independent of the RT being 
used, these can be precomputed. When the RT is part of the proof, as per 
Filip's suggestion, the attacker can only run a RT flow once (with the stolen 
RT + proof). This attack scenario is fairly well mitigated when DPoP proofs 
include the RT (hash).



Scenario 3: active online XSS against existing tokens

In this scenario, the malicious code executes and sets up a long-term attack. 
The attacker is actively controlling the behavior of the malicious code. 

The attack vectors are the same as scenario 2. Once in control, the attacker 
can use the same mechanism as the application does to send requests to any 
endpoint. There is no need to obtain an RT (which may not even be possible), 
since the attacker can just abuse the AT directly.

When DPoP is used, little changes here. The attacker can use the application's 
DPoP mechanism to obtain legitimate proofs. DPoP does nothing to mitigate this 
type of attack (as already stated in Daniel's threat model: 
https://danielfett.de/2020/05/04/dpop-attacker-model/).



Scenario 4: obtaining fresh tokens

In this scenario, the malicious code executes and immediately launches the 
attack. In this attack, the malicious code loads a hidden iframe in the 
application's DOM. In that iframe, the attacker starts a silent flow with the AS to 
obtain an authorization code (AC). If the user has an active session, this will 
succeed (existing cookie + all origins match). The attacker extracts this AC 
and exchanges it for tokens with the AS. 

At this point, the attacker has a fresh set of tokens that grant access to 
resources in the name of the user. Short AT lifetimes and RT rotation are 
useless, since the attacker is in full control of the tokens.

Using DPoP in this scenario does not help at all. The attacker can use their 
own private key to generate the necessary DPoP proofs, starting with the code 
exchange.

One solution is to turn off silent flows for SPAs, since they have become quite 
unreliable with third-party cookie blocking restrictions.



Mitigation: DPoP in a Web Worker

Isolating sensitive features from malicious JS is virtually impossible when the 
application's legitimate JS code needs access to them. One solution that can 
work is the use of a Web Worker. 

Re: [OAUTH-WG] OAuth 2.0 for Browser-Based Apps - On the usefulness of refresh token rotation

2020-05-16 Thread Philippe De Ryck
Hi Torsten,

> On 16 May 2020, at 19:50, Torsten Lodderstedt  wrote:
> 
> Hi Philippe, 
> 
>> On 16. May 2020, at 17:08, Philippe De Ryck 
>>  wrote:
>> 
>> Hi all,
>> 
>> I am working on formulating developer guidelines on using Refresh Token 
>> Rotation (RTR), as required by "OAuth 2.0 for Browser-Based Apps". 
>> 
>> The protection offered by RTR kicks in the moment a refresh token is used 
>> twice, so the assumption is that the attacker has the ability to steal 
>> tokens from the client. In general, this means the attacker has malicious 
>> code running in the application (e.g., XSS, remote JS inclusion, ...). 
>> 
>> Within these constraints, I can think of a couple of malicious payloads that 
>> sidestep the protection offered by RTR:
>> 
>> 1. Stealing access tokens in an online XSS attack
>> 2. Stealing refresh tokens, but waiting to use the latest until the original 
>> client is no longer active
>> 3. Running a silent authentication flow in an iframe to obtain a new and 
>> unrelated AT and RT, and use that until it expires
>> 
>> Scenario 1 is straightforward in most applications, but the attack requires 
>> the vulnerable application to remain online. Scenario 2 might be difficult 
>> if the RT is kept out of reach from the main application (e.g. in a worker 
>> thread). Scenario 3 is most dangerous, but also a bit tricky to implement as 
>> the payload needs to make sure the application's code does not interfere 
>> (however, the browser's Same-Origin Policy will not intervene). The 
>> specifics depend on the concrete implementation, but all three attacks are 
>> technically feasible.
>> 
>> With these attacks in mind, it seems that the use of the Authorization Code 
>> flow with RTR does not really add much improvement for security, if other 
>> best practices are followed (e.g., using HTTPS). RTR does a lot for 
>> usability and handling third-party cookie blocking scenarios though.
> 
> I also see this as the main advantage of RTs.
> 
> I think scenario 3 can be made more difficult for the attacker by requiring 
> user interaction. That’s ok since the normal case would be to refresh via RT 
> and not via authorization flow, so the legit app shouldn’t be affected. 

Preventing a silent flow from happening would indeed stop this attack vector, 
but it might create usability problems in single page applications.

A typical scenario is an SPA running a silent authentication flow in an iframe 
when it is first started. This allows the app to bootstrap itself with the 
user’s authentication status if a session already exists. This pattern is 
common when tokens are kept in memory, as a simple page reload causes that 
state to be cleared. Since Safari and Brave already block third-party cookies, 
they cannot run a silent flow in an iframe. A workaround would be to run a 
top-level silent redirect-based flow to check if an authenticated session 
exists or not. The impact on the UX for this initial redirect is limited, since 
it is silent anyway. By turning off a silent flow, both use cases would stop 
working. 

I totally get that this is a quite challenging problem to address. Given your 
suggestion, the authorization server could prevent iframe-based flows when RTs 
are used, but still allow top-level navigation flows for the bootstrap phase. 
Right now, I don’t think we can implement such a detection mechanism in a 
reliable way, but hopefully the upcoming Sec-Fetch-Dest header can help here 
(https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Sec-Fetch-Dest 
<https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Sec-Fetch-Dest>)

> 
>> 
>> In this context, my advice to developers is to avoid handling tokens in the 
>> browser in security-sensitive scenarios. A Backend-for-frontend pattern 
>> gives a server-side component control over tokens, along with the ability to 
>> implement additional security measures.
> 
> I full agree with those advice. Handling security sensitive aspects of the 
> app out of reach of the user (which might be an attacker) is a good idea. On 
> the functional side, this also gives the app access to authentication and 
> sender constrained access tokens via mTLS.

That’s precisely my recommendation as well. 

>> 
>> Additionally, is there any official recommendation to link the validity of a 
>> refresh token to the lifetime of the user's session with the Authorization 
>> Server? Having that property gives RTR similar security properties as the 
>> silent renew scenario. 
> 
> Section 4.12.2. of the Security BCP recommends refresh token revocation in 
> case of logout. 

Right, I should hav

[OAUTH-WG] OAuth 2.0 for Browser-Based Apps - On the usefulness of refresh token rotation

2020-05-16 Thread Philippe De Ryck
Hi all,

I am working on formulating developer guidelines on using Refresh Token 
Rotation (RTR), as required by "OAuth 2.0 for Browser-Based Apps". 

The protection offered by RTR kicks in the moment a refresh token is used 
twice, so the assumption is that the attacker has the ability to steal tokens 
from the client. In general, this means the attacker has malicious code running 
in the application (e.g., XSS, remote JS inclusion, ...). 

Within these constraints, I can think of a couple of malicious payloads that 
sidestep the protection offered by RTR:

1. Stealing access tokens in an online XSS attack
2. Stealing refresh tokens, but waiting to use the latest until the original 
client is no longer active
3. Running a silent authentication flow in an iframe to obtain a new and 
unrelated AT and RT, and use that until it expires

Scenario 1 is straightforward in most applications, but the attack requires the 
vulnerable application to remain online. Scenario 2 might be difficult if the 
RT is kept out of reach from the main application (e.g. in a worker thread). 
Scenario 3 is most dangerous, but also a bit tricky to implement as the payload 
needs to make sure the application's code does not interfere (however, the 
browser's Same-Origin Policy will not intervene). The specifics depend on the 
concrete implementation, but all three attacks are technically feasible.
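
As an illustration of scenario 3, an injected payload can build its own silent Authorization Code request and load it in a hidden iframe. The endpoint, client_id, and redirect_uri below are placeholders; only the prompt=none parameter is the essential ingredient.

```javascript
// Sketch of scenario 3: construct a silent Authorization Code request.
// All endpoint and client values are placeholders for illustration.
function buildSilentAuthUrl(authEndpoint, clientId, redirectUri) {
  const params = new URLSearchParams({
    response_type: 'code',
    client_id: clientId,
    redirect_uri: redirectUri,
    scope: 'openid',
    prompt: 'none', // no user interaction; succeeds if an AS session exists
  });
  return `${authEndpoint}?${params}`;
}
// In a browser, the payload would then do something like:
//   const frame = document.createElement('iframe');
//   frame.style.display = 'none';
//   frame.src = buildSilentAuthUrl(/* attacker-chosen values */);
//   document.body.appendChild(frame);
```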

With these attacks in mind, it seems that the use of the Authorization Code 
flow with RTR does not really add much of a security improvement, if other best 
practices are followed (e.g., using HTTPS). RTR does a lot for usability and 
for handling third-party cookie blocking scenarios, though.

In this context, my advice to developers is to avoid handling tokens in the 
browser in security-sensitive scenarios. A Backend-for-frontend pattern gives a 
server-side component control over tokens, along with the ability to implement 
additional security measures.

Additionally, is there any official recommendation to link the validity of a 
refresh token to the lifetime of the user's session with the Authorization 
Server? Having that property gives RTR similar security properties as the 
silent renew scenario. 

Any feedback on this train of thought is more than welcome.

Philippe


___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


Re: [OAUTH-WG] OAuth 2.1 - require PKCE?

2020-05-07 Thread Philippe De Ryck
From working with a lot of developers on understanding OAuth 2.0 and OIDC, I 
definitely vote for simplicity. Understanding the subtle nuances of when a 
nonce is fine and when PKCE should be used is impossible without in-depth 
knowledge of the flows and their properties. Misunderstandings will cause 
security vulnerabilities, which can easily be avoided.

Since OAuth 2.1 is a separate spec, I don’t really see a problem with existing 
code not being compliant. They support OAuth 2.0, and if they want to be OAuth 
2.1 compliant, they add PKCE. If I’m not mistaken, other requirements of OAuth 
2.1 would also clash with existing deployments (e.g., using non-exact redirect 
URIs).

I believe that optimizing for making OAuth 2.1 easier to understand will yield 
the highest return.

Philippe


> On 8 May 2020, at 03:42, Mike Jones wrote:
> 
> Aaron, I believe you’re trying to optimize the wrong thing.  You’re concerned 
> about “the amount of explanation this will take”.  That’s optimizing for spec 
> simplicity – a goal that I do understand.  However, by writing these few 
> sentences or paragraphs, we’ll make it clear to developers that hundreds or 
> thousands of deployed OpenID Connect RPs won’t have to change their 
> deployments.  That’s optimizing for interoperability and minimizing the 
> burden on developers, which are far more important.
>  
> As Brian Campbell wrote, “They are not equivalent and have very different 
> ramifications on interoperability”.
>  
> Even if you’re optimizing for writing, taking a minimally invasive protocol 
> change approach will optimize that, overall.  If we proceed as you’re 
> suggesting, a huge amount of writing will occur on StackOverflow, Medium, 
> SlashDot, blogs, and other developer forums, where confused developers will 
> ask “Why do I have to change my deployed code?” with the answers being 
> “Despite what the 2.1 spec says, there’s no need to change your deployed 
> code.”
>  
> I’d gladly write a few sentences in our new specs now to prevent ongoing 
> confusion and interop problems that would otherwise result.  Let me know when 
> you’re ready to incorporate them into the spec text.
>  
>-- Mike
>  
> From: Aaron Parecki <aa...@parecki.com> 
> Sent: Thursday, May 7, 2020 4:39 PM
> To: Dick Hardt <dick.ha...@gmail.com>
> Cc: OAuth WG <oauth@ietf.org>; Torsten Lodderstedt 
> <tors...@lodderstedt.net>; Mike Jones <michael.jo...@microsoft.com>
> Subject: Re: OAuth 2.1 - require PKCE?
>  
> Backing up a step or two, there's another point here that I think has been 
> missed in these discussions.
>  
> PKCE solves two problems: stolen authorization codes for public clients, and 
> authorization code injection for all clients. We've only been talking about 
> authorization code injection on the list so far. The quoted section of the 
> security BCP (4.5.3) which says clients can do PKCE or use the nonce, is only 
> talking about preventing authorization code injection.
>  
> The nonce parameter solves authorization code injection if the client 
> requests an ID token. Public clients using the nonce parameter are still 
> susceptible to stolen authorization codes so they still need to do PKCE as 
> well.
>  
> The only case where OpenID Connect clients don't benefit from PKCE is if they 
> are also confidential clients. Public client OIDC clients still need to do 
> PKCE even if they check the nonce.
>  
> OpenID Connect servers working with confidential clients still benefit from 
> PKCE because they can then enforce the authorization code injection 
> protection server-side rather than cross their fingers that clients 
> implemented the nonce check properly.
>  
> I really don't think it's worth the amount of explanation this will take in 
> the future to write an exception into OAuth 2.1 or the Security BCP for only 
> some types of OpenID Connect clients when all clients would benefit from PKCE 
> anyway.
>  
> Aaron
>  
>  
>  
> On Wed, May 6, 2020 at 10:48 AM Dick Hardt wrote:
> Hello!
>  
> We would like to have PKCE be a MUST in OAuth 2.1 code flows. This is best 
> practice for OAuth 2.0. It is not common in OpenID Connect servers as the 
> nonce solves some of the issues that PKCE protects against. We think that 
> most OpenID Connect implementations also support OAuth 2.0, and hence have 
> support for PKCE if following best practices.
>  
> The advantages of requiring PKCE are:
>  
> - a simpler programming model across all OAuth applications and profiles as 
> they all use PKCE
>  
> - reduced attack surface when using S256, as a fingerprint of the verifier is 
> sent through the browser instead of the clear-text value
>  
> - enforcement by AS not client - makes it easier to handle for client 
> developers and AS can ensure the check is conducted
>  
> What are disadvantages besides the potential impact to OpenID Connect 

Re: [OAUTH-WG] DPoP: Threat Model

2020-05-04 Thread Philippe De Ryck
On 4 May 2020, at 21:44, Daniel Fett  wrote:
> 
> Am 04.05.20 um 21:27 schrieb Philippe De Ryck:
>> 
>>>> (https://beefproject.com) rather than exfiltrating tokens/proofs.
>>> As a sidenote: BeEF is not really XSS but requires a full browser 
>>> compromise.
>>> 
>> 
>> No, it’s not. The hook for BeEF is a single JS file, containing a wide 
>> variety of attack payloads that can be launched from the command and control 
>> center. You can combine BeEF with Metasploit to leverage an XSS to exploit 
>> browser vulnerabilities and break out.
> I shall stand corrected!
>> 
>> Just keep in mind that once an attacker has an XSS foothold, it is extremely 
>> hard to prevent abuse. The only barrier that cannot be broken (in a secure 
>> browser) is the Same Origin Policy. Keeping tokens and metadata in a 
>> separate environment (e.g., iframe, worker, …) is effective to keep them out 
>> of reach. However, once the app “extracts” data from such a context, the 
>> same problem arises. 
> Compartmentalization within an origin is as old a problem as it is mostly 
> unsolved, indeed. That is why I would not further differentiate in case the 
> browser is online and the client's script is compromised, but instead assume 
> that the attacker can then forge arbitrary requests using the token.
> 
I agree on that assumption. The moment malicious script executes, it’s game 
over, regardless of the specifics on whether a token can be extracted or not. 
Even with isolation, an attacker would be able to trick the isolated context 
into making requests as a confused deputy.

Philippe


Re: [OAUTH-WG] DPoP: Threat Model

2020-05-04 Thread Philippe De Ryck

>> (https://beefproject.com) rather than 
>> exfiltrating tokens/proofs.
> As a sidenote: BeEF is not really XSS but requires a full browser compromise.
> 

No, it’s not. The hook for BeEF is a single JS file, containing a wide variety 
of attack payloads that can be launched from the command and control center. 
You can combine BeEF with Metasploit to leverage an XSS to exploit browser 
vulnerabilities and break out.

FYI, the attack where the attacker proxies calls through the user’s browser is 
known as Session Riding. 

Just keep in mind that once an attacker has an XSS foothold, it is extremely 
hard to prevent abuse. The only barrier that cannot be broken (in a secure 
browser) is the Same Origin Policy. Keeping tokens and metadata in a separate 
environment (e.g., iframe, worker, …) is effective to keep them out of reach. 
However, once the app “extracts” data from such a context, the same problem 
arises. By rewriting JS functions, the attacker can extract tokens from deep 
within an SDK, as I discuss here: 
https://pragmaticwebsecurity.com/articles/oauthoidc/localstorage-xss.html 
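
As a minimal illustration of that rewriting technique (the names below are mine, and a real payload would of course ship the captured values to an attacker-controlled server rather than an in-page array):

```javascript
// An XSS payload can wrap fetch so that every Authorization header passing
// through the page is observed, without ever touching the storage or the
// isolated context the SDK uses internally.
function wrapWithInterceptor(originalFetch, sink) {
  return function (url, options = {}) {
    const auth = options.headers && options.headers['Authorization'];
    if (auth) sink.push(auth); // attacker exfiltrates this
    return originalFetch(url, options);
  };
}
// The payload would install it with:
//   globalThis.fetch = wrapWithInterceptor(globalThis.fetch, exfilQueue);
```

Since the wrapper is installed on the page's own global, the SDK keeps working normally and the user sees nothing.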


Kind regards

Philippe
> Thanks for the feedback!
> 
> -Daniel
> 
> 
> 
>> You can protect against exfiltration attacks by e.g. token binding the DPoP 
>> proofs and/or access token, or storing the access token in a HttpOnly cookie 
>> (gasp!). You can protect against exfiltrating post-dated DPoP proofs by 
>> storing the private key in a separate origin loaded in an iframe that you 
>> use postMessage to ask for proof tokens so the attacker is not in control of 
>> those claims. Nothing really protects against an attacker proxying requests 
>> through your browser, so this is purely post-compromise recovery rather than 
>> an actual defence against XSS.
>> 
>> — Neil
>> 
>>> On 4 May 2020, at 18:24, Daniel Fett wrote:
>>> 
>>> Hi all,
>>> 
>>> as mentioned in the WG interim meeting, there are several ideas floating 
>>> around of what DPoP actually does.
>>> 
>>> In an attempt to clarify this, I have unfolded the use cases that I see 
>>> and written them down in the form of attacks that DPoP defends against: 
>>> https://danielfett.github.io/notes/oauth/DPoP%20Attacker%20Model.html 
>>> 
>>> Can you come up with other attacks? Are the attacks shown relevant?
>>> 
>>> Cheers,
>>> Daniel
>>> 



Re: [OAUTH-WG] Second WGLC on "JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens"

2020-04-20 Thread Philippe De Ryck
In theory, you can issue a token that only becomes valid in the future. That 
would have a different iat and nbf timestamp. I have not seen this in practice 
though. 

Given that RFC 7519 lists “iat” as informative, I would not change that 
behavior in a specific use case if there is no significant need to do so.
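
For clarity, the validity-window check that RFC 7519 implies can be sketched as follows. The leeway value is an arbitrary example of the clock-skew allowance the spec permits; the function name is my own.

```javascript
// JWT time-claim validation per RFC 7519: "exp" and "nbf" bound the validity
// window; "iat" is informational and deliberately not enforced here.
function isWithinValidityWindow(claims, nowSeconds, leeway = 60) {
  if (claims.exp !== undefined && nowSeconds >= claims.exp + leeway) {
    return false; // expired
  }
  if (claims.nbf !== undefined && nowSeconds < claims.nbf - leeway) {
    return false; // not yet valid
  }
  return true;
}
```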

Philippe

> On 20 Apr 2020, at 08:50, Dominick Baier  wrote:
> 
> Just a quick data point - 
> 
> The Microsoft .NET JWT implementation checks for exp and nbf. Not iat.
> 
> I guess my real question is - what’s the difference between the two, 
> practically speaking - and shouldn’t the more common one (aka supported by 
> most libraries) be used?
> 
> ———
> Dominick Baier
> 
> On 20. April 2020 at 06:59:47, David Waite 
> (david=40alkaline-solutions@dmarc.ietf.org) wrote:
> 
>> There are a number of ambiguities and statements around using JWTs in 
>> various contexts:
>> 
>> 1. Some implementations interpret “iat" to also have the meaning of “nbf” in 
>> the absence of “nbf”, although this is AFAIK not prescribed by any spec
>> 2. The DPoP draft’s client-generated tokens have the resource servers use 
>> their own nbf/exp heuristics around “iat”, since the tokens are meant for 
>> immediate one time use by a party that may not have clock synchronization.
>> 3. There are recommendations in the JWT profile for OAuth that the AS may 
>> reject tokens based on an “iat” too far in the past or “exp” too far in the 
>> future, but not that “nbf” was too far in the past or that the interval 
>> between nbf and exp was too large.
>> 
>> The JWT spec also allows implementers to provide some leeway for clock skew. 
>> Presumably this meant validators and not JWT creators, although there is 
>> history of messages setting similar values to account for clock skew (e.g. 
>> SAML IDPs setting notBefore to one minute before issuance and notOnOrAfter 5 
>> minutes after issuance). 
>> 
>> -DW
>> 
>>> On Apr 19, 2020, at 2:50 AM, Vladimir Dzhuvinov wrote:
>>> 
>>> On 16/04/2020 10:10, Dominick Baier wrote:
 iat vs nbf
 What’s the rationale for using iat instead of nbf? Aren’t most JWT 
 libraries (including e.g. the .NET one) looking for nbf by default?
>>> Developers often tend to intuitively pick up "iat" over "nbf" because it 
>>> sounds more meaningful (my private observation). So given the empirical 
>>> approach of Vittorio to the spec, I suspect that's how "iat" got here.
>>> 
>>> If we bother to carefully look at the JWT spec we'll see that "iat" is 
>>> meant to be "informational" whereas it's "nbf" that is intended to serve 
>>> (together with "exp") in determining the actual validity window of the JWT.
>>> 
>>> https://tools.ietf.org/html/rfc7519#section-4.1.5 
>>> 
>>> My suggestion is to require either "iat" or "nbf". That shouldn't break 
>>> anything, and deployments that rely on one or the other to determine the 
>>> validity window of the access token can continue using their preferred 
>>> claim for that.
>>> 
>>> Vladimir
>>> 