John Kemp wrote:
Regardless of whether I'm misunderstanding, it would sure be nice to have both the problem and your assumptions laid out, hopefully with some prominence, so you don't get this sort of dumb question.

One point I would mention first is that your question isn't dumb ;)
But, as I noted, OAuth seeks to avoid the requirement for a user to share her username/password at one web application with another. That said, there are plenty of ways to get that wrong, and the way to avoid them is to build OAuth-based applications using the security features available in their specific environments, which vary quite a lot. OAuth provides a number of different protocol flows to help with that, along with "security considerations" that describe known threats in various environments. By reading those carefully, you can determine which flow is appropriate for your application and which security features you should use to counter the threats it faces.
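
To make "flow" a bit more concrete, here is a rough sketch of the redirect-based pattern, in Python. The endpoint URLs, client id, and parameter names below are made up for illustration (they follow the authorization-code style; the exact parameters depend on the OAuth version and the provider), but the point is the same: the user authorizes at the provider's site, and the application only ever receives a token, never the password.

    # Sketch of a redirect-based OAuth flow (authorization-code style).
    # The provider/client URLs and client id below are made up for
    # illustration; real parameter names vary by OAuth version and provider.

    import json
    import urllib.parse
    import urllib.request

    AUTHORIZE_URL = "https://provider.example/oauth/authorize"  # assumed endpoint
    TOKEN_URL = "https://provider.example/oauth/token"          # assumed endpoint
    CLIENT_ID = "my-client-id"                                  # issued by the provider
    REDIRECT_URI = "https://client.example/callback"            # registered with the provider

    def authorization_url(state):
        # The user's browser is sent here; the password is typed at the
        # provider's site, never into the client application.
        params = urllib.parse.urlencode({
            "response_type": "code",
            "client_id": CLIENT_ID,
            "redirect_uri": REDIRECT_URI,
            "state": state,  # anti-CSRF value, one of those "security considerations"
        })
        return AUTHORIZE_URL + "?" + params

    def exchange_code(code):
        # After the user approves, the provider redirects back with a
        # one-time code, which the client trades for an access token.
        data = urllib.parse.urlencode({
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
        }).encode("ascii")
        with urllib.request.urlopen(TOKEN_URL, data=data) as resp:
            return json.loads(resp.read().decode("utf-8"))["access_token"]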

So to take this back to the concrete (I'm new here, so abstractions are hard): are you saying that Twitter got it wrong? My app can't be the one that's wrong, because my app is the potential attacker. If it was Twitter, what did they do wrong? If not, who got what wrong that allows this situation to occur?

Mike
