On Wed, Apr 22, 2009 at 10:48 PM, Luca Mearelli luca.meare...@gmail.com wrote:
On Thu, Apr 23, 2009 at 7:37 AM, Chris Messina chris.mess...@gmail.com
wrote:
To add to this perspective, OpenID is an assertion or identity protocol,
whereas OAuth is designed as an access or authorization protocol.
Sign-in with Twitter would become even more interesting if Twitter supported
OpenID. Then any site using Sign in with Twitter would support OpenID through
Twitter. A branded, federated OpenID solution...
EHL
From: oauth@googlegroups.com [mailto:oa...@googlegroups.com] On Behalf Of Chris
Messina
Sent:
The OAuth Security Advisory 2009.1 was posted on the OAuth site:
http://oauth.net/advisories/2009-1
For more information on the attack:
http://www.hueniverse.com/hueniverse/2009/04/explaining-the-oauth-session-fixation-attack.html
No information has been withheld. The issue is now fully disclosed.
First of all, my sincere thanks to all those involved in this for how
it has been managed!
Since reading the advisory and the post on Eran's blog, I kept
ruminating on the issue; here are a few thoughts:
To solve the problem we'd need a way to make it impossible (or at
least very hard) for
Hi,
We (tarpipe) have been thinking about the problem for a while and we
think we have a solution.
So, here's our proposed solution (against OAuth Core 1.0 —
http://oauth.net/core/1.0):
1- In §6 (OAuth Authentication Flow diagram), add two optional
parameters to step A (Consumer Request
I have a simple idea to propose not as a solution, but hopefully to
give someone an idea toward a true solution:
What if the callback URL is signed on the provider's end using the
consumer's secret key? The drawback is it puts the burden on the
consumer to close the security hole by checking the
Not necessarily. The provider builds the request_token, so it could
simply include the callback_url in the request_token. If it does so,
it must authenticate it (e.g., HMAC with a key known _only_ to the
provider) so that an attacker cannot tamper with it.
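A minimal sketch of that idea, with invented names (`PROVIDER_ONLY_KEY`, `issue_request_token`, `verify_callback`); a real provider would also persist state and expire tokens, which is omitted here:

```python
import hashlib
import hmac
import os

# Held by the provider only; the whole point is that consumers and
# attackers never see this key.
PROVIDER_ONLY_KEY = os.urandom(32)

def issue_request_token(callback_url):
    """Issue a request token with the callback bound to it by an HMAC."""
    token = os.urandom(16).hex()
    mac = hmac.new(PROVIDER_ONLY_KEY,
                   ("%s|%s" % (token, callback_url)).encode(),
                   hashlib.sha256).hexdigest()
    return {"oauth_token": token,
            "callback": callback_url,
            "callback_mac": mac}

def verify_callback(tok):
    """Reject the token if the callback was modified after issuance."""
    expected = hmac.new(PROVIDER_ONLY_KEY,
                        ("%s|%s" % (tok["oauth_token"], tok["callback"])).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tok["callback_mac"])
```

Because only the provider can recompute the MAC, swapping in an attacker's callback invalidates the token without the provider having to store the callback server-side.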
On Thu, Apr 23, 2009 at 9:23 AM,
We have thought about it and decided it was a bad idea, because:
1. If the consumer uses RSA keys, then the provider cannot sign using them.
2. If the consumer uses HMAC keys, then the provider would be signing
using the same key. It is generally considered a bad idea in
cryptographic protocols to use the same key for more than one purpose.
On Apr 23, 10:50 am, Paul Lindner lind...@inuus.com wrote:
Hi Luca,
In the past few days we have discussed some of the options you mention
below. I even created some prototypes for the 3-legged OAuth
implementation in shindig to test them out. Note that these
proposals only
On Apr 23, 5:23 pm, pkeane pjke...@gmail.com wrote:
Does this add the extra burden on the Provider of maintaining state
between A and C (i.e., being able to remember the callback from A)?
Currently, it is the Consumer secret that ties these interactions
together. Again it is addressing the need to
How bout this...
Instead of the provider signing the callback URL, the consumer signs the
redirect to the SP using its consumer and request tokens. This will prevent
an attacker from changing the callback URL, but still allow the consumer to
specify any redirect URL it wants (provided the SP
Hi Mike,
I have a proof of concept I think might be similar to this. It
works like so:
1. When the consumer gets the request_token, it provides the callback
URL.
» Note this is server-to-server communication, unavailable to the
user.
2. The user is redirected to the service
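A toy sketch of that proof-of-concept flow (names invented): the callback is supplied in the server-to-server request-token call and stored by the provider, so nothing arriving via the browser can substitute a different one:

```python
import os

# Provider-side storage: request token -> callback URL bound at issuance.
_callbacks = {}

def grant_request_token(callback_url):
    """Server-to-server call: the callback is bound before any browser
    is involved, so the user (or an attacker) never gets to choose it."""
    token = os.urandom(16).hex()
    _callbacks[token] = callback_url
    return token

def redirect_after_approval(token):
    """Any callback parameter arriving via the browser is ignored; only
    the value stored at request-token time is used."""
    return _callbacks[token] + "?oauth_token=" + token
```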
A lot of these solutions require something to be passed back by the provider in
the callback; however, callbacks aren't required in OAuth because they don't
work for non-browser devices. How are you going to pass something back to the
application when there's no mechanism to do so?
Ryan Kennedy
Tell the user to type a PIN or something like that.
EHL
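A minimal sketch of the "type a PIN" fallback (names invented; this is the shape of what later became the oauth_verifier, not a spec-exact implementation): the provider shows the approving user a short code, and the exchange fails without it, so an attacker holding only the request token learns nothing:

```python
import secrets

# Provider-side storage: request token -> PIN shown to the approving user.
_verifiers = {}

def approve(request_token):
    """Called when the user approves at the provider; the returned PIN is
    displayed on screen for the user to carry back to the application."""
    pin = f"{secrets.randbelow(10**6):06d}"
    _verifiers[request_token] = pin
    return pin

def exchange(request_token, pin):
    """Token exchange succeeds only with the PIN the user typed in."""
    return secrets.compare_digest(_verifiers.get(request_token, ""), pin)
```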
On 4/23/09 11:15 AM, Ryan Kennedy rcken...@yahoo.com wrote:
A lot of these solutions require something to be passed back by the provider in
the callback, however callbacks aren't required in OAuth because they don't
work for non-browser
On Thu, Apr 23, 2009 at 11:46 AM, Mike Malone mjmal...@gmail.com wrote:
The other difference is that it seems you're not issuing a callback token
for the manual case, where there's no callback URL. I think you need a
callback token either way. There's still a timing attack for the manual case
On Thu, Apr 23, 2009 at 11:52 AM, Brian Eaton bea...@google.com wrote:
On Thu, Apr 23, 2009 at 11:46 AM, Mike Malone mjmal...@gmail.com wrote:
The other difference is that it seems you're not issuing a callback token
for the manual case, where there's no callback URL. I think you need a
On Thu, Apr 23, 2009 at 12:03 PM, Brian Eaton bea...@google.com wrote:
On Thu, Apr 23, 2009 at 11:54 AM, Mike Malone mjmal...@gmail.com wrote:
In the manual case the user is already typing the request token key
manually.
The manual case is not a good user experience, and it isn't
On Thu, Apr 23, 2009 at 12:10 PM, Mike Malone mjmal...@gmail.com wrote:
Er, right. Sorry. I was thinking of the Netflix-style case. You're right,
for many desktop apps manual entry of the request token key is not required.
I wrote the Pownce iPhone app. It used the web application token
On 4/23/09 6:00 PM, Zachary Voase wrote:
* If the consumer is a desktop app, then a few things might
happen. MU could start brute forcing the access token, which would
lead to one of a couple things:
If the consumer is a desktop app., then the attacker has access to the
token secret
On Thu, Apr 23, 2009 at 3:54 PM, Dossy Shiobara do...@panoptic.com wrote:
If the consumer is a desktop app., then the attacker has access to the
token secret through application memory inspection.
Malicious software on the user's computer does not need to steal
access tokens. It steals
On Thu, Apr 23, 2009 at 8:30 PM, Brian Eaton bea...@google.com wrote:
On Thu, Apr 23, 2009 at 3:54 PM, Dossy Shiobara do...@panoptic.com
wrote:
If the consumer is a desktop app., then the attacker has access to the
token secret through application memory inspection.
Malicious software on
On Thu, Apr 23, 2009 at 5:35 PM, Dossy Shiobara do...@panoptic.com wrote:
On 4/23/09 8:30 PM, Brian Eaton wrote:
Malicious software on the user's computer does not need to steal
access tokens. It steals passwords, bank account numbers, and
confidential documents.
Sure. But, this attack
It's not that the malicious software is scanning for access tokens,
but that the attacker gets the consumer secret for the desktop
application; this would allow the attacker to exchange request tokens
for access tokens, etc. (as the attacker has essentially compromised
the consumer, not the
On Thu, Apr 23, 2009 at 5:57 PM, Dossy Shiobara do...@panoptic.com wrote:
Alice (attacker) and Bob (victim).
[snip concise explanation of attack]
The current version of the protocol is susceptible to a very similar
attack for web applications, which is why people are so upset and
working on a
On 4/23/09 9:06 PM, Brian Eaton wrote:
The current version of the protocol is susceptible to a very similar
attack for web applications, which is why people are so upset and
working on a fix.
I won't go into those details until a reasonable fix is available. :-)
For desktop apps, it's hard
Brian Eaton wrote:
There are a few options.
1) Keep using OAuth 1.0.
SPs can tell users that they are authorizing an application on
their desktop. There is some risk of social engineering as you
describe, but hopefully the language on service provider pages
mentioning desktop
On 4/23/09 9:26 PM, Brian Eaton wrote:
A flow like this?
1) User visits SP, gets identity token
2) User enters identity token into desktop app.
3) Desktop app sends user back to SP again.
4) User approves access at SP.
5) User goes back to desktop to approve access.
Something like this,
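The five-step flow above, sketched as a toy simulation (class and method names invented) just to make the token hand-offs concrete:

```python
import os

class ServiceProvider:
    def __init__(self):
        self.identity_tokens = set()
        self.approved = set()

    def issue_identity_token(self):
        """Step 1: user visits the SP and receives an identity token."""
        tok = os.urandom(8).hex()
        self.identity_tokens.add(tok)
        return tok

    def approve_access(self, identity_token):
        """Step 4: user approves access at the SP."""
        if identity_token in self.identity_tokens:
            self.approved.add(identity_token)

class DesktopApp:
    def __init__(self, sp):
        self.sp = sp
        self.identity_token = None

    def enter_identity_token(self, tok):
        """Step 2: user types the identity token into the desktop app."""
        self.identity_token = tok

    def confirm(self):
        """Step 5: user approves in the app; access works only if the SP
        already recorded approval for this exact token."""
        return self.identity_token in self.sp.approved
```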
On Thu, Apr 23, 2009 at 6:43 PM, Dossy Shiobara do...@panoptic.com wrote:
On 4/23/09 9:26 PM, Brian Eaton wrote:
That's not a good user experience, nor is it necessary to fix the
security problems in the protocol.
Let me say it another way: yanking support for OAuth in response to
security
Most discussions in the other thread are about protecting callbacks.
How about if we look at this issue from a different angle? Instead of
trying to stop session fixation, we find ways to detect it. How about
if we drop a cookie?
There are many ways to add a cookie. Here is my proposal,
1. On
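The numbered proposal is cut off above, so the following is only one guess at the shape of a cookie-based detection (consumer-side, all names invented): the consumer drops a random cookie in the initiating browser, and on the callback checks that the returning browser carries the matching value; a victim handed an attacker-initiated link would not have it:

```python
import os

# Consumer-side storage: request token -> nonce set in the initiating browser.
_sessions = {}

def start_authorization(request_token, browser_cookies):
    """Before redirecting the user to the SP, bind a fresh nonce to the
    browser that started this flow (browser_cookies models Set-Cookie)."""
    nonce = os.urandom(16).hex()
    _sessions[request_token] = nonce
    browser_cookies["oauth_initiator"] = nonce

def handle_callback(request_token, browser_cookies):
    """On the callback, a missing or mismatched cookie means the browser
    completing authorization is not the one that initiated it, i.e. a
    likely session fixation."""
    return browser_cookies.get("oauth_initiator") == _sessions.get(request_token)
```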
On Thu, Apr 23, 2009 at 3:00 PM, Zachary Voase disturb...@googlemail.com wrote:
2) The following sentence is a monster: we need to ensure that the
user who initiated the consumer's request for the request token is the
same as the one who's authorizing it on the provider. This is a much
harder
On 4/23/09 8:47 PM, Zhihong wrote:
Most discussions in the other thread are about protecting callbacks.
How about if we look at this issue from a different angle? Instead of
trying to stop session fixation, we find ways to detect it. How about
if
Hi all,
I'd like to keep things as simple as possible for the end user as well as
the consumer, since consumer applications tend to be less concerned with
security than service provider applications (and less likely to update their
apps).
*1. One-time-only token exchange*
I actually agree with
I've been thinking about this for the last couple hours and agree with
Leah and Zach. The best solution seems to be:
1) Single use tokens that are invalidated if you try to exchange them
a second too early.
2) Either sign the callback parameter or eliminate it altogether.
I'm using OAuth from
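A toy sketch of point (1) above (names invented; interpreting "too early" as "before the user has approved"): a request token can be exchanged at most once, and an exchange attempted before approval burns it for good, so an attacker racing the legitimate consumer gets nothing reusable:

```python
class TokenStore:
    """Provider-side request-token store enforcing single-use exchange."""

    def __init__(self):
        self._tokens = {}  # token -> {"approved": bool, "spent": bool}

    def issue(self, token):
        self._tokens[token] = {"approved": False, "spent": False}

    def approve(self, token):
        """Record that the user authorized this request token."""
        self._tokens[token]["approved"] = True

    def exchange(self, token):
        """Any exchange attempt consumes the token, whether or not it
        succeeds; only an approved, unspent token yields access."""
        rec = self._tokens.get(token)
        if rec is None or rec["spent"]:
            return False
        rec["spent"] = True
        return rec["approved"]
```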
On Apr 23, 11:27 pm, Leah Culver leah.cul...@gmail.com wrote:
Hi all,
I'd like to keep things as simple as possible for the end user as well as
the consumer, since consumer applications tend to be less concerned with
security than service provider applications (and less likely to update
On Fri, Apr 24, 2009 at 7:15 AM, pkeane pjke...@gmail.com wrote:
The weakness is in the A-B connection.
...
Whatever happens, I think the consumer is
going to need to signal to the user that it is about to make contact
with the SP, and either ask for or present a PIN, or a pattern or
picture