Brendan Eich wrote:

> Trevor Jim wrote:
>> I don't think I can anticipate just what policies web app developers
>> will want.
>
> Our experience is that they want everything, and only know to
> blacklist the obvious threats, which is of course insufficient for
> security, and also (they say) bad for users who should be trusted to
> script (within a jail-type sandbox).
>
> They want a mashup without pain. They want the user-generated content
> to be web content, ideally.

Interesting to hear this.  What we proposed does not help mashups much,
if at all.

However, I think there are many more web apps that are not mashups at
all, and that would be satisfied with simply preventing all scripts in
user-generated content.  And I think the current situation with script
injection is very dangerous, and our scheme would be a big win for
those apps.

>> Therefore, we went with a scheme that lets web app developers decide
>> this themselves.
>
> This is like giving whiskey and car keys to teenagers.
>
> I'm not kidding, and I'm not saying some web developers should not
> have the ability to script filtering of user-generated content. The
> expertise to do this well, and to track evolving browser features, is
> rare.

I have to disagree on this point.  Consider the current situation: web
app developers **must** write code to filter out scripts in
user-provided content, period.  There simply is no choice; script
injection is too dangerous.  Today that filtering occurs on the server.
Our scheme would add the ability to do this detection and filtering of
scripts in the browser, where detection can be 100% accurate.  Yes, the
developer could make a mistake in the policy script that will run in
the browser; but they can also make a mistake in the filtering code
that runs on the server.  Furthermore, our scheme does not mean they
can stop filtering on the server, because there are browsers that do
not support our scheme.
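
To make that concrete, here is a minimal sketch of what a
whitelist-style policy script might look like.  Suppose, purely for
illustration, that the browser calls a page-supplied function, say
window.policy(code, element), before executing each script, and runs
the script only if it returns true.  The hook name, its signature, and
the whitelisted script texts below are all hypothetical; this is not an
existing browser API and not necessarily the exact interface we
proposed.

    // Hypothetical hook: assume the browser calls
    // window.policy(code, element) before executing any script, and
    // runs it only if this function returns true.

    // The exact source text of the scripts the application itself
    // serves.  (A real policy would more likely compare cryptographic
    // hashes of the script text instead of the text itself.)
    var approvedScripts = [
      "initMenus();",
      "document.title = 'My Weblog';"
    ];

    window.policy = function (code, element) {
      // Run the script only if its source text is on the whitelist.
      for (var i = 0; i < approvedScripts.length; i++) {
        if (approvedScripts[i] === code) {
          return true;
        }
      }
      return false;
    };

The point is just that the check happens in the browser, on the exact
script the browser is about to execute, instead of in a server-side
filter that has to guess how the browser will parse the page.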

It seems to me that our scheme, even with the chance of an error in a
policy script, has a vastly greater upside than downside.  The only
possible consequence of such an error is that fewer scripts will run.
If the scripts that are prevented are malicious, that's good.  If the
scripts that are prevented interfere with the web app, well, that's not
much different from today.  There are lots of ways to screw up a web
app; they all require a lot of testing before deployment, and the app's
own scripts have to be right in any case.  Note that using our scheme
to implement a whitelist or sandbox is no different in this respect
from a less programmable solution, like the ones Gerv and others have
suggested.  You still have to rely on the web app to provide the
policy, and there is still the chance that the web app can screw it up.

Furthermore, there is a tremendous amount of innovation going on in web
apps today.  Their developers are smart people, and it's likely that
someone out there will come up with an interesting use for programmable
policies that we haven't anticipated.

>> At the same time we want the burden on browser developers to be low,
>> with only very simple changes to current browsers.  We hope this will
>> make it easier to convince them to adopt our scheme.
>
> Certainly, doing anything will entail risk, and your scheme would be
> better than the status quo. But if browser vendors are going to agree
> on something, it need not be very simple, especially if that makes
> hazards for web developers that will only lead to another cycle of R&D
> followed by browser changes.

Programmability reduces the chances that further changes will be
needed.  OTOH, I don't pretend that other schemes are uninteresting, or
that extensive thought and discussion shouldn't happen before making
any changes.  I'm interested in any improvement.

> Whitelist beats blacklist, and browser vendors know better what to
> whitelist.

Here's another place I disagree.  First, I think of a sandbox as a
blacklist, and a sandbox is clearly useful and can be easier to use
(from a web app programming perspective) than a whitelist.  Yes,
"default reject" is the conservative approach, but that doesn't mean
blacklists are not useful.  You say above that they are "insufficient
for security," but really, no single feature that we have talked about
is sufficient for security.
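
For example, here is a rough sketch of a sandbox-style policy, using
the same hypothetical window.policy(code, element) hook as the
whitelist sketch above.  The "usercontent" class name is just an
illustration of how a page might mark the regions that hold
user-generated content.

    // Hypothetical sandbox policy: scripts run by default, and only
    // scripts that appear inside an element marked
    // class="usercontent" are refused.
    window.policy = function (code, element) {
      for (var node = element; node != null; node = node.parentNode) {
        if (node.nodeType === 1 &&           // element nodes only
            node.className === "usercontent") {
          return false;   // inside the sandboxed region: do not run
        }
      }
      return true;        // everything else runs as usual
    };

In that sense it acts as a blacklist: everything runs by default except
scripts in the marked region.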

Second, browser vendors do not know at all what to whitelist; only the
web app developer knows which scripts need to run in their app.

> But as I wrote above, mashup hosting services really want it all to
> "just work" yet be secure. This means a stronger JS security model
> (two other use-cases -- there are more -- were listed in my slides).

I agree that mashups require more, and I'd be happy to hear more details
of your proposals.

-Trevor
