Hi David,

Thanks for the detailed review and constructive feedback. I opened issues [1], [2] and [3] as a result. Some comments below:
On Sat, Dec 6, 2025 at 11:32 PM David Mandelberg via Datatracker <[email protected]> wrote:

> Document: draft-ietf-oauth-cross-device-security
> Title: Cross-Device Flows: Security Best Current Practice
> Reviewer: David Mandelberg
> Review result: Has Issues
>
> Mostly looks good, and I learned some things about flows that I use personally, so thanks! I only have one substantive comment, see the last paragraph.
>
> (nit) In the last paragraph of section 4.1.3, shouldn't only the consumption device that initiated the flow be able to use the code? I.e., isn't there usually a session in the channel used for steps C and G such that the attack fails if those two steps occur on different consumption devices? This seems related to section 6.1.12, but that's about constraining the result of the attack rather than the data in the attack itself, right?

Good observation. In practice the session binding you suggest between steps C and G is not a given, so we defaulted to the worst case. I will add some text to clarify that the lack of a session being maintained between steps C and G could lead to this kind of exploit. Tracked in [1].

> (nit, about usability rather than security) Section 6.1.1 mentions the possibility of an attacker using a VPN, but not the possibility of a real user using one. E.g., my phone almost always uses a VPN to my home network to limit what the cell or wifi network can see of my traffic. Using IP address geolocation would be mildly annoying when I'm not near my house, since I'd have to disable the VPN. Not sure if this is common enough to include in the document though. Do many companies have employees use VPNs to route all traffic through the corporate network? Or maybe embassies/consulates to route traffic through the home country? I'd also be curious about how IP geolocation works in remote multi-national places like Svalbard and Antarctica, e.g., if two devices meters apart could appear to be 1000s of km apart if they get their internet from two different countries. I really don't know much about that though, just guessing.

Worth noting. I will add text to indicate that using IP addresses for location information may be further complicated or skewed by VPN use. Tracked in [2]. The text already addresses the "neighbouring country" problem with IP addresses, so I'll leave that as is.

> (nit) Section 6.1.2 mentions the limitation of the time to enter a code, but not the time to authenticate. I've had cross-device flows timeout on me before while I was typing a long password on a phone keyboard. I'm not sure if this is common enough to be worth mentioning, but encouraging short passwords seems counterproductive. Also, the timeout could encourage users to be less careful about phishing (conventional password phishing or the cross-device types) if they know they have less time.

Good point. It is reasonable to remind system designers to allow enough time for authentication in addition to the time to enter the code. I will add some text. Tracked in [3].
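Something along these lines is what I have in mind, purely as a rough sketch (TypeScript, with made-up names and values, not text proposed for the draft): keep the code-entry deadline separate from the authentication deadline, so a user who enters the code in time is not timed out while typing a long password.

    // Rough sketch, illustrative only: the code-entry window and the
    // authentication/consent window are tracked separately.
    interface PendingDeviceFlow {
      userCode: string;
      codeExpiresAt: number;   // deadline for entering the user code (epoch ms)
      authExpiresAt?: number;  // set once the code has been entered
    }

    const CODE_ENTRY_WINDOW_MS = 3 * 60 * 1000;  // example values only
    const AUTHN_WINDOW_MS = 5 * 60 * 1000;

    function onUserCodeEntered(flow: PendingDeviceFlow, now = Date.now()): void {
      if (now > flow.codeExpiresAt) {
        throw new Error("user code expired");
      }
      // Start a fresh window for authentication and consent instead of
      // charging that time against the code-entry window.
      flow.authExpiresAt = now + AUTHN_WINDOW_MS;
    }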
> (optional) In section 6.1.5, would it make sense to also mention the possibility of standardizing something like https://en.wikipedia.org/wiki/EICAR_test_file for these QR codes? That way in the flows that don't involve email/SMS/etc. like a TV showing a QR code and a phone scanning it, the QR code could include a standardized string that spam filters could look for. I have no idea how well this would work in practice, and it might be out of scope for this document. Feel free to ignore this idea if it's not helpful.

I think suggesting future standardization efforts is out of scope. If anyone reading the document feels like this is a reasonable thing to do, they could suggest such a standard.

> In section 6.2.2, isn't the protocol also vulnerable to much more targeted attacks, where the attacker can predict exactly when a specific user is going to use the protocol? E.g., the whiteboard example from section 3.3.2 could be done in a publicly streamed presentation, and the attacker could see the user about to initiate the authorization flow. If the typical latency between a user initiating the flow and receiving the notification on the second device is 5 seconds, then the attacker could initiate the attack 4 seconds before the victim, and the victim might then approve the attacker's session since that notification arrives first. I assume that's not important for something low-value like video streaming on a TV, but might be worth it to get access to corporate/government files. The device authorization grant protocol seems more secure against that type of attacker, *if* the user is careful enough. (In the public stream of a whiteboard example, an attacker could use the device authorization grant to present unintended files, but they couldn't use it to steal files, right?)

Regarding the whiteboard example, in theory access could be limited to displaying files, but displaying files also requires access to them and assumes a level of fine-grained access control that is not typically implemented. Access is often granted at the directory or repository level, not on a per-file basis. The degree of access is dictated by the Access Token, and it is not unheard of for the token obtained from such a flow to grant much broader access than just allowing files to be displayed (overprivileged tokens are often used for lateral movement, for example). Consequently, relying on the scope of access as a mitigation to allow the use of Device Authorization Grant is problematic.

In terms of protocols, both CIBA and Device Authorization Grant are exploitable in the context of cross-device flows. Of the two, Device Authorization Grant has proven to be more easily and commonly exploited. The attack you describe in the context of CIBA is possible, but it is much more specific and narrowly scoped compared to all the attacks that become possible if a system uses Device Authorization Grant instead, which opens up different and more commonly exploited attacks. In general, CIBA is harder to exploit than Device Authorization Grant: in the context of the attack you describe, it raises the bar for the attacker regarding timing and the scale of execution (targeting one user vs. targeting millions simultaneously) compared to the attacks against Device Authorization Grant. Sometimes the best we can do is raise the bar and choose the options that are harder to exploit. I have come to think of these trade-offs as "lesser of evils" exercises... FIDO with WebAuthn is your safest bet if you need cross-device authentication and authorization.
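For completeness, on the FIDO/WebAuthn side the relying-party code ends up being an ordinary assertion request on the device the user is actually signing in on, roughly like the sketch below (TypeScript, illustrative names only, not from the draft). With a cross-device (hybrid) authenticator the browser shows the QR code, the phone scans it, and the proximity check is handled by the platform rather than by anything the relying party writes.

    // Minimal sketch, illustrative only. The challenge comes from the server
    // and the rpId is a placeholder.
    async function crossDeviceSignIn(challenge: BufferSource): Promise<Credential | null> {
      const assertion = await navigator.credentials.get({
        publicKey: {
          challenge,                    // server-issued, single-use
          rpId: "example.com",          // illustrative relying party identifier
          userVerification: "required", // require PIN/biometric on the authenticator
        },
      });
      // The assertion still has to be verified server-side against the stored
      // public key and the challenge that was issued.
      return assertion;
    }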
Thanks for the thoughtful and constructive comments. Issues have been opened to track the above.

Cheers

Pieter

Tracking issues:

[1] https://github.com/oauth-wg/oauth-cross-device-security/issues/201
[2] https://github.com/oauth-wg/oauth-cross-device-security/issues/202
[3] https://github.com/oauth-wg/oauth-cross-device-security/issues/203
