Before diving back in, I want to make one point. One reason I've been arguing
so strongly for not starting with a centrally hosted JS file was expressed
pretty well by Phillip Hallam-Baker earlier in this thread:

"my experience of HTTP is that it is almost impossible to change a scheme 
once deployed."

John, you've written that "It is trivial to replace the XAuth JS core with
calls to a browser solution," but I don't see that. We're going to have
lots of RPs and IdPs with Web apps coded to reference the xauth.org
site. It will become entrenched in those websites, just as the old urchin.js
code became entrenched for Google Analytics users (2.5 years on, folks still
use urchin.js even though ga.js is better and switching has no visible UX
impact). How do you replace the xauth.org JS in the browser? Are you
seriously suggesting that browsers learn to recognize calls to xauth.org and
treat them differently from other iframes? That every browser vendor add
if/else code to its core rendering engine to make the switch to in-browser
XAuth? I don't see that happening, and I say that having watched far more
straightforward change requests languish.
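
For concreteness, here's roughly what gets baked into an RP's pages today.
This is only a sketch from my reading of the draft -- the exact function and
parameter names may differ -- but the hard-coded xauth.org reference is the
point:

  <script type="text/javascript" src="http://xauth.org/xauth.js"></script>
  <script type="text/javascript">
    // Ask the hosted XAuth JS which IdPs have left tokens for this user.
    XAuth.retrieve({
      retrieve: ['example-idp.com'],   // IdP domains this RP cares about
      callback: function (response) {
        // Whatever tokens the hosted JS chooses to reveal come back here;
        // the RP uses them to trim its login "NASCAR" to relevant IdPs.
      }
    });
  </script>

Every page like that has xauth.org hard-coded into it.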

On Tue, Jun 08, 2010 at 10:38:49AM -0700, John Panzer wrote:
> On Tue, Jun 8, 2010 at 7:07 AM, Peter Watkins <[email protected]> wrote:
> 
> > On Mon, Jun 07, 2010 at 09:46:35PM -0700, John Panzer wrote:

> > > What makes you think their IdP wouldn't be doing this based on the user's
> > > preferences?
> >
> > Because that would just move the NASCAR problem from the RP site to the IdP
> > site. Your current draft says the IdP can specify a list of RP domains when
> > it deposits a token. In order to give the end user control over what sites
> > this should be used at, the IdP would need a UI for determining what
> > should go in this "extend" list. And it would have to do so when adding
> > each token!

> This would certainly be a poor UI.  I can imagine better ones, but more to
> the point, the marketplace can decide what the best UI is in this case.

Better ones like what?  I'm serious. The current XAuth spec has an IdP/Extender
deciding up front which RPs should be allowed to see that the end user has a
relationship with the IdP. I cannot imagine how you'd build a UI that fixes
that problem. If you can, please describe it!
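
To spell out where that decision point sits, this is roughly what the
IdP/Extender-side call looks like under the current draft. Again, a sketch;
exact names may differ:

  // The list of RP domains allowed to learn about this token has to be
  // supplied right here, at the moment the token is deposited.
  XAuth.extend({
    token:  'opaque-session-token',
    expire: new Date().getTime() + 86400000,      // e.g. good for 24 hours
    extend: ['rp-one.example', 'rp-two.example'], // decided up front by the IdP
    callback: function (response) {
      // confirmation that the hosted JS stored the token
    }
  });

Any UI the IdP offers has to produce that "extend" list before each token is
deposited -- which is exactly the per-token NASCAR problem I described above.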

> > This is a great example of why this should be in-browser. With an
> > in-browser
> > solution, a user could be prompted each time an RP asks for XAuth tokens,
> > and could decide at that time which IdP tokens to reveal, and whether to
> > always reveal the same set to that RP, etc.

> I think this would be a poor UI too -- it's well known that most users will
> simply end up clicking "OK" in this situation, and the experience is worse.
>  But without getting into that argument:  You could implement essentially
> the same UX using JS -- the RP doesn't get the data sent back via
> postMessage() unless the xauth.org JS says it can.  You could probably have
> a better UX with an in-browser solution, but not a qualitatively different
> one.  In other words, this is not a strong differentiator for in-browser vs.
> JS solutions.

Sure it is. In-browser, I get to decide which XAuth client-side handler to use;
I can decide to use one that prompts me fairly often, or I could opt for a
"less private" handler that stays out of my way. With the current hosted 
solution, how would I choose to use a different UX? Use a GreaseMonkey script 
to completely replace the JS you host at xauth.org with something entirely 
different? While technically feasible, that sounds like a terrible proposition,
suggesting to users that they install a hack to replace the core piece of
a central identity system. With a centrally hosted solution, users are pretty
well stuck with whatever UI(s) the central provider wants to offer.

> I agree that an in-browser solution could provide a better UX; in fact
> that's my argument (make it work, then make it better) ;).

And, as a few of us have reminded you, your company ships the 4th most popular
desktop browser on the market. If you really want this to be in-browser, you
have an excellent place to start. As Eran said, you're in a much better
position now than Dick, David, et al. were when starting OpenID.

> > > > > I think that browser support would make some
> > > > > things easier -- perhaps defending against "pretend" IdPs that use
> > social
> > > > > engineering to get themselves on your IdP list -- but (a) those

> > Many phishers don't really care about having legitimate-looking URLs.
> > Some would try using this to phish someone's Facebook credentials,

> I do agree that we need more comprehensive defenses against "bad actors" on
> the Internet.  This is a separate discussion, but the ability to have

Why? I'm responding to your threat example: you say you want XAuth
in-browser, and you agree that in-browser would improve not only the UX but
also the defense against the very attack you describe. So why does this
become a separate discussion?

I don't know how to reconcile your saying you want this in-browser with your
simultaneously trying to avoid any discussion of the threat vectors that
support moving the code to the client.

> If for example xauth.org had a good way to ask for the Internet's opinion of
> a site, it could ask the user to confirm for the small subset of sites that
> it thinks are hinky (or which it has no information about), with appropriate
> warnings.  An immune system for the Internet, if you will.

Sounds like a traditional browser phishing filter. Make XAuth a spec rather
than a centrally-hosted implementation and the market will provide those kinds
of solutions.

> On the other hand, this is also something that is not a differentiator
> between in-browser and JS-based implementations; either one could consult
> such a service and pop up (infrequent, scary) warnings.

It is a differentiator for the potential host of xauth.org. Publishing
realtime blacklists is a legal minefield -- look at the history of RBLs
for email, and at how those good guys have been sued left and right by the
spammers. Heck, OIDF is worried about paying for bandwidth for xauth.org;
what makes you think they'd want to take on the legal risk of trying to
warn users away from shady websites?

> > > (Note that exactly the same issues arise when downloading extensions.  JS
> > is
> > > just a way of delivering always-latest-version extensions to your
> > browser.)
> >
> > And the solutions are similar -- code-signing and publishing extension info
> > on https pages, as Firefox does.

> How does this avoid having to trust a central site (the extension site and
> the owner of the signing key)?  Or do you see the case of retrieving JS via

If this is code run in-browser implementing an agreed-upon API, then users
can choose freely between different implementations. Maybe I'll be able to
choose between your extension, Microsoft's, and one written by Dan Bernstein.
As long as the system is a centralized JS file, users' only hope of
customization is GreaseMonkey-style hacks.
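
As a strawman of what "an agreed-upon API" could mean in practice -- purely
illustrative, none of this exists today -- an RP would call something the
browser itself exposes instead of loading script from xauth.org:

  // Hypothetical in-browser API: the RP asks the browser (running whichever
  // XAuth handler the user chose and configured) rather than a hosted JS file.
  if (window.xauth) {
    window.xauth.retrieve(['example-idp.com'], function (tokens) {
      // The user's chosen handler decides whether to prompt, what to
      // reveal, and which IdPs this RP may learn about.
    });
  } else {
    // Fall back to today's NASCAR login page.
  }

The RP codes against the API, and I pick the implementation --
privacy-paranoid, hands-off, whatever -- without anyone having to swap out
script served from a single central domain.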

> Thought experiment:  Would you be satisfied if xauth were baked into
> Chromium (hosted at www.chromium.org)?  

I'm not intimately familiar with the Chromium licensing and IP policies,
but if that meant anyone could at least re-implement the spec for other
browsers (or for Chromium and Chrome themselves), then yes.

> If so, would it be sufficient to
> CNAME xauth.org to www.chromium.org and serve up JS from there, signed with
> the Chromium.org private key?

No.

Thanks,

Peter

_______________________________________________
specs mailing list
[email protected]
http://lists.openid.net/mailman/listinfo/openid-specs
