Zack, very good points. We have probably been overthinking this a bit and
have gotten off topic.

Our focus should be:

+ Secure the callback in the authorization URL from tampering
+ Make sure the user who authorized the request token is the same user who
requested it

The first issue can be solved by signing the callback URL whenever one is
included in the authorization URL. Since devices and installed apps won't
use a callback, the consumer should set a flag during provider registration
saying it will not use callbacks. This prevents attackers from injecting a
callback, and it still allows for clean authorization URLs that are easy to
type manually.
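To make the first point concrete, here is a minimal sketch of what signing
the callback could look like. All names here (`oauth_callback_sig`, the
helper functions) are made up for illustration; the thread hasn't settled
on a concrete parameter name, and a real scheme would need to cover
canonicalization and key handling:

```python
import hashlib
import hmac

def sign_callback(callback_url: str, consumer_secret: str) -> str:
    """Consumer side: append an HMAC-SHA1 signature over the callback URL
    so the provider can detect tampering."""
    sig = hmac.new(consumer_secret.encode(), callback_url.encode(),
                   hashlib.sha1).hexdigest()
    sep = "&" if "?" in callback_url else "?"
    return callback_url + sep + "oauth_callback_sig=" + sig

def verify_callback(signed_url: str, consumer_secret: str) -> bool:
    """Provider side: strip the signature parameter, recompute the HMAC
    over the remaining URL, and compare in constant time."""
    url, _, sig = signed_url.rpartition("oauth_callback_sig=")
    recomputed = hmac.new(consumer_secret.encode(),
                          url.rstrip("?&").encode(),
                          hashlib.sha1).hexdigest()
    return hmac.compare_digest(recomputed, sig)
```

Since the signature is keyed with the consumer secret, an attacker who
swaps in his own callback can't produce a valid signature for it, and the
provider simply rejects the authorization request.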

We can solve the second issue by requiring a "confirmation token" to be
included with the callback.
If there is no callback, the user must manually enter this confirmation
token back into the consumer.
The token should be typeable, but long enough to make brute-force attacks
unlikely.
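As a rough sketch of what "typeable but long enough" could mean (the
alphabet and length here are my own assumptions, not anything agreed in
this thread):

```python
import secrets

# Alphabet drops ambiguous characters (0/O, 1/l/I) so the token is easy
# to read off a screen and type into the consumer by hand.
ALPHABET = "23456789abcdefghjkmnpqrstuvwxyz"

def confirmation_token(length: int = 8) -> str:
    """Generate a short, typeable confirmation token.

    Eight characters over a 31-symbol alphabet give roughly 39 bits of
    entropy, which makes online guessing impractical provided the provider
    also rate-limits attempts and expires unused tokens.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

The provider would bind this token to the request token when the user
authorizes it, and refuse the access-token exchange unless the consumer
presents the matching value.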

These changes are easy to implement and don't really affect the current
OAuth flow.

On Sat, Apr 25, 2009 at 4:31 PM, Zachary Voase <disturb...@googlemail.com> wrote:

>
> I completely agree. The whole point of this thread (I thought) was to
> develop a solution to a very specific security hole; this has already
> been done with three things: once-only exchanging, signed/pre-
> specified callbacks, and the concept of a callback nonce (a.k.a.
> authorization token, and a host of other names) (have a look at a
> previous post by Mike Malone for the details). These things require
> absolutely *no* change in user experience, they keep all of the burden
> of verification/authentication on the service provider (where it
> should be), and they need only minimal changes to the specification.
> We can't trust consumers to verify things, because that means the
> service provider is trusting a third-party with the security of its
> users' data.
>
> I suppose what I'm saying is that if you think you've got a totally
> better authorization protocol/strategy worked out, great, but let's
> try and keep the focus on patching this security hole rather than
> completely rewriting the spec.
>
> You might also be interested in reading the OAuth design goals for an
> explanation of why things are the way they are:
> http://oauth.net/about/design-goals
>
> Regards,
> Zack
>
> On Apr 25, 10:56 pm, "J. Adam Moore" <jadammo...@gmail.com> wrote:
> > Yeah, I have that at my bank and it sucks all kinds of hell. Thank god
> > I can just Google my mother's maiden name to reset my password when
> > that fails. If a system is designed to work only by relying upon
> > people to not be stupid it will fail. You can't outwit a fool; only
> > fools try. I really need to finish my post on this. It has pictures
> > and everything. Should clear up some confusion people might have.
> >
> > I am not saying that your method is forever flawed, but why change
> > OAuth when it works just fine? Remember, the problem we are facing is
> > still theoretical and the solution I proposed doesn't break anyone's
> > current or past work or understanding.
> >
> > On Apr 25, 1:33 pm, Josh Roesslein <jroessl...@gmail.com> wrote:
> >
> >
> >
> > > The only place that a phishing attack would occur in the signed
> > > authorization proposal is the authorization URL.
> > > An attacker could lure a user to click on a link that directs the user
> > > to a clone of the provider and steal the user's credentials
> > > when logging in. The best way to prevent this is users being careful to
> > > check the address bar and making sure the site they are at
> > > is indeed the provider's site. Another layer that can help prevent this
> > > is by using images that are displayed on the provider's site during
> > > login. Some banks use this during login. You first give your username
> > > and hit enter. Next the bank shows an image you set when you signed up.
> > > You verify this is the right image and provide your password.
> > > This isn't really something OAuth should mandate. It is up to the
> > > provider to add this layer of security on their own.
> >
> > > > On Sat, Apr 25, 2009 at 3:24 PM, Brian Eaton <bea...@google.com> wrote:
> >
> > > > > On Sat, Apr 25, 2009 at 1:11 PM, J. Adam Moore <jadammo...@gmail.com> wrote:
> > > > > The problem itself is REALLY
> > > > > specific: Phishing. Like fish in a barrel phishing. The solution is
> > > > > to take away their bullets, and is not to try and harden the barrels
> > > > > or educate the fish to dodge bullets.
> >
> > > > The problem is very similar to phishing, in that it requires some
> > > > element of social engineering to exploit.  However, the current
> > > > protocol allows a phishing attack where everything the user sees is
> > > > completely in context and true.  The session fixation vulnerability
> > > > allows perfect phishing.
> >
> > > > I just reread the protocol you proposed above, and I'm pretty sure it
> > > > doesn't actually fix the session fixation attack.  You need some kind
> > > > of a callback token passed through the user's browser back to the
> > > > consumer.  (If you were including that, sorry, I missed it.)
> >
>

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"OAuth" group.
To post to this group, send email to oauth@googlegroups.com
To unsubscribe from this group, send email to oauth+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/oauth?hl=en
-~----------~----~----~----~------~----~------~--~---