Nigel McFarlane wrote:
Firefox's "smooth user experience" makes Fx a popular
product for end users. A similarly smooth experience will
help make moz/xulrunner/Fx a popular product for app
developers. Developers, however, use local disk a lot
and that puts them at odds with some security goals. In
particular, they want to "do whatever they want" without
annoying warnings or restrictions,

Can you remind me of the use case here? Who wants to load HTML pages from local disk and have JavaScript in that HTML have local disk access?


2) maintain a whitelist

In this solution, which is analogous to nightclub membership,
only those files known to Mozilla are admitted from disk
to mozilla's secure environment. If users save files to disk
then those files can be reloaded anytime, based on a URL
check against some small database maintained by Mozilla.

I don't understand this. Surely files Mozilla has saved to disk are the ones _not_ to be trusted, because they came from the web rather than being locally authored?
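For concreteness, the lookup Nigel describes amounts to something like the following sketch. The storage (a dict standing in for "some small database maintained by Mozilla") and the hash-based integrity check are my assumptions, not anything specified in the proposal:

```python
# Sketch of the "whitelist" check: the browser records every http: URL
# it saves to disk, and a later file: load is only matched if the saved
# copy is still byte-identical to what was written.
import hashlib

saved_files = {}  # local path -> (source http: URL, sha256 of saved bytes)

def record_save(path, source_url, content):
    """Called when the browser saves a page to disk."""
    saved_files[path] = (source_url, hashlib.sha256(content).hexdigest())

def is_known_saved_page(path, content):
    """True only if this file: load matches a page Mozilla itself saved."""
    entry = saved_files.get(path)
    if entry is None:
        return False  # developer-authored or hand-copied file: not in the DB
    _url, digest = entry
    return hashlib.sha256(content).hexdigest() == digest

# A page saved from the web is recognised...
record_save("/tmp/page.html", "http://example.com/", b"<html>hi</html>")
print(is_known_saved_page("/tmp/page.html", b"<html>hi</html>"))       # True
# ...but a locally authored file never appears in the database.
print(is_known_saved_page("/home/dev/app.html", b"<html>app</html>"))  # False
```

Note that this mechanism only identifies which files came from the web; it says nothing about whether membership in the database should confer more trust or less.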


3) manufacture trusted credentials.

One of the proposed solutions for this problem, that of
marking the web page with meta-data, is a weak form of this
stronger solution.

Why is it weaker?

The problem with this approach is that it doesn't help
developers at all; and also every Mozilla install becomes
a mint.

Only if you use an encrypted string.

Also, I've said elsewhere that this kind of thing is an attack
on the web itself. Files saved in Word or vi or PDF don't require
extra security in order to be useable - why should web pages?

Files saved in Word do require extra security - Word's macro virus protection. Files saved as PDF might, actually, as I think you can embed script in them; I'm not sure how Acrobat Reader handles this. Files viewed in vi don't, because they don't contain executable code.


Executable code downloaded from the web needs extra security. This isn't discriminatory, it's sensible.

Microsoft's "Mark of the web" solution is very distasteful
in this respect. It's anti-Web. If the web is a shared
commons, then files on the web shouldn't have to wear
armbands.

I don't understand this either. What's wrong with marking files downloaded from the web as "untrusted" in this way? They are, after all.


What's the difference between "files shouldn't have to wear armbands" and "files shouldn't be subject to security restrictions"?

4) proposed solution - best effort with fallback.

As roc points out, all options except robust versions of 3. can
be undone by local disk manipulation via the desktop.

By whom?

In such an environment everything is up for grabs, but
I suggest that we can perform quite well just by applying
"best effort" checks. In a banking environment (for example)
best effort identity checks are used over the counter
and over the phone all the time. Using "best effort" we
can reduce the impact of implementing robust security by
finding many cases where the page can "pass though" and be
displayed without annoying the user or developer.

"Best effort" is probably not the best phrase to use here. It implies "we'll try and protect you, but sometimes we won't manage it", rather than "We'll try and load the content silently, but sometimes we may have to ask you".


iv) content. If the page contains no JS, PASS.

That's extremely hard to assess. You can't use grep. What happens if the JS is in an event handler, for example? Or a JS URL? You'd need to load the page and then look through it for JS, which requires the parser (or something) to know a lot about all the different places JS can get put in an HTML file.


We could try and stop it at execution time - this would tie in with Content Restrictions a bit.
http://www.gerv.net/security/content-restrictions/
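To make the objection concrete, here are a few of the places JavaScript can hide in a page, and how a naive "look for a script tag" check fares against them. The snippets and the regex are illustrative sketches only; a real check would need the parser's knowledge of every script-bearing context:

```python
# A naive "grep for <script>" check misses most of the places
# JavaScript can live in an HTML page.
import re

samples = [
    '<script>doEvil()</script>',                # the obvious case
    '<img src="x" onerror="doEvil()">',         # event handler attribute
    '<a href="javascript:doEvil()">click</a>',  # javascript: URL
    '<body onload="doEvil()">',                 # another event handler
]

naive = re.compile(r'<script', re.IGNORECASE)
flagged = [s for s in samples if naive.search(s)]
print(len(flagged), "of", len(samples), "caught")  # only 1 of 4
```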


i) URL. If, according to a small database maintained by
   Mozilla, this file: URL matches no http: URL ever
   saved to disk, then PASS. (file created by a developer
   or deliberately copied to the filesystem by the user)

How does this avoid the problem outlined by Jesse in comment 16 of bug 273419:


"As a result, roc's solution would make our message to users *more* complicated than it is now. The current message is 'Opening local HTML files is not safe'. With roc's change, the message will be 'Opening local HTML files is not safe, unless you saved them with Firefox'."

iv) "Mark of the web". If the content contains this MS
    feature, then FAIL. It could have been auto-generated
    by an IDE, and will probably be discredited as a
    security measure any day.

Again, I'm confused here. The MOTW is, as I understand it, used to apply security restrictions to the page (e.g. so it can't read from local disk). Why would an IDE ever want to put one in? And, if it did, what's the security problem with the page having fewer abilities than it otherwise would?


You said in the bug:
"To break it, just catch a virus that scans local .htm files and removes this metadata."


roc sensibly replied:

"Once an attacker is running arbitrary code on your system, all dominos have fallen. They could, for example, reconfigure Mozilla (or IE) so that all sites are fully trusted."
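For concreteness, the metadata-stripping attack quoted above really is trivial to write - though, as roc says, it presupposes the attacker is already running code locally, at which point all bets are off. The comment format shown is IE's real "saved from url" marker; the regex itself is my sketch:

```python
# Once arbitrary code runs locally, stripping the "Mark of the Web"
# from saved .htm files is a one-line regex substitution.
import re

MOTW = re.compile(r'<!--\s*saved from url=\(\d{4}\)\S+\s*-->\s*',
                  re.IGNORECASE)

page = ('<!-- saved from url=(0023)http://www.example.com/ -->\n'
        '<html><body>hello</body></html>')

stripped = MOTW.sub('', page)
print(stripped)  # the mark is gone; the page now looks locally authored
```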

Some of these could be used in combination; for example,
the 2nd and 4th option can be used to exclude all possible
phishing attacks along the lines of bug 273419.

I don't understand how 2 and 4 exclude phishing attacks, I'm afraid.

Gerv
_______________________________________________
Mozilla-security mailing list
[email protected]
http://mail.mozilla.org/listinfo/mozilla-security