[EMAIL PROTECTED] writes:

> On Tue, 05 Nov 2002 22:38:32 +0100, Florian Weimer
> <[EMAIL PROTECTED]> said:
>
>> What about HTTP headers which advise user agents to disable some
>> features, e.g. read/write access to the document or parts of it via
>> scripting or other Internet Explorer interfaces?
>>
>> Is anybody interested in writing an Informational RFC on this topic?
>
> Pointless.

No, it isn't.

We can say for sure that the most elaborate security model which has
ever been implemented in a web browser has failed.  Experience shows
that it is impossible to implement it correctly, and even somewhat
simpler models are still too hard to implement.  How can we deal with
this situation?

  (A) Get rid of client-side scripting.

  (B) Get rid of unsigned client-side scripting.

  (C) Redesign the scripting language APIs so that only obviously
      unproblematic interfaces are available.

  (D) Pretend that there isn't a fundamental problem and do nothing
      but issue patch after patch.

  (E) Add an additional, much simpler security model as an option.

(In (B) to (E), the current security restrictions would still be
enforced, and bugs in them could still be critical.)

The market won't accept (A), and probably not (B) either.  (For (B),
the current world-wide code signing PKI is sufficient, I guess,
because there are security checks even after successful
authentication.)  (C) would result in a considerable loss of
functionality, and is rather error-prone (what is "obviously
unproblematic"?).

(E) is my suggestion.  I envision that some servers can request
additional protection for their pages, as a further layer of
protection besides what is enforced by the Same Origin Policy etc.

> It's one thing for a web browser to refuse to do something because
> it suspects that it has been asked something underhanded (for
> instance, to not give a cookie value to a script if it were tagged
> 'httponly').

I'd like to give servers the opportunity to mark pages as "user
only", without scripting access to them.  I don't see how this
differs fundamentally from the cookie tagging approach.  Okay, I
admit, there's a leap of thought here: the original approach adds
some protection for web servers against their own faults, while the
"disable scripting access" proposal adds protection for user agents.
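Purely to make this concrete (the header name and value below are
made up for this sketch; nothing of the sort has been specified
anywhere): the cookie tagging mentioned above looks roughly like

    Set-Cookie: SID=31d4d96e407aad42; HttpOnly

and tells the user agent "don't hand this cookie to scripts".  The
page-level hint I have in mind could take the same shape, e.g. an
extra response header along the lines of

    HTTP/1.1 200 OK
    Content-Type: text/html
    Script-Access: none

meaning "no script legitimately needs to read or modify this
document, so you may block all scripting access to it".  A user agent
which understands the hint can enforce it on top of the usual checks;
one which doesn't simply ignores the unknown header, so nothing
breaks.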
> A well-behaved user agent won't need the hints, and a malicious one
> won't listen to them....

Yes, but the most common case is a user agent with implementation
errors.

> (Note - I'm talking here about a server trying to say "Thou Shalt
> Not Do XYZ" and expecting to be listened to - if anything, this is a
> big clue to the attacker that they should look for a way to try to
> do XYZ anyhow.  That never works.  On the other hand, there are
> *lots* of areas where *HINTS* (like the HTTP 'Expires' header) are
> quite valuable...

The server would say something like "I won't use DOM/DHTML/whatever
on this page, so you can prevent script access to it without breaking
anything".  Currently, this meta-data is not available anywhere, and
the best a user agent can do is to rely on the improperly implemented
traditional security model.

-- 
Florian Weimer                  [EMAIL PROTECTED]
University of Stuttgart         http://CERT.Uni-Stuttgart.DE/people/fw/
RUS-CERT                        fax +49-711-685-5898