Scripting in a DOMParser-generated data document: Should I sandbox?
Ladies and gentlemen of the jury,

In developing my markup templating system for Verbosio, it's becoming clear that I need to add JavaScript support for documents parsed through a DOMParser (data documents). At first, I need it for testing: data documents define the parameters of the test, and I need to dynamically execute certain commands based on the documents I'm working with. Soon, I'll need the same scripting support for live use, in order to support event handlers for specific events that happen in my application. In either case, some data documents themselves (with XTF thrown in for good measure, until XBL 2 comes along) will provide scripts inline which I must evaluate.

What I'm wrestling with is whether or not to sandbox these scripts. On the one hand, sandboxing is usually a good idea. On the other hand, the scripts would be working with XUL and XTF elements, which the application has graciously generated in a chrome, privileged environment. By handicapping them within a sandbox, I may be restricting what the template author can do with the (not data, but chrome) user interface they're supposed to manipulate.

Consider a XUL tree, for example. With chrome privileges, the script could provide an nsITreeView object for the tree directly. Without chrome privileges, I'm not sure it can even generate and maintain a whole bunch of <xul:treeitem/> and <xul:treechildren/> elements for the tree. It would need access to the XUL tree's DOM, and full permission to add new elements to the tree.

Here's why I'm debating it, instead of just saying "sandbox it": these data documents, in my current model, would come as part of a XULRunner application's extensions. A chrome XUL overlay from the extension would have the same power as an unsandboxed script: the user would be no more vulnerable than they are from installing an extension.
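For concreteness, here is a minimal sketch of the kind of nsITreeView object a chrome-privileged inline script could hand to a xul:tree. The data, the tree id, and the column id are hypothetical; only the handful of members a simple text-only tree exercises are filled in, and a full implementation would cover more of the nsITreeView interface.

```javascript
// Sketch only: a minimal nsITreeView-style object built from hypothetical
// row data. Members not needed by a plain text tree are left as stubs.
function makeTreeView(rows) {
  return {
    rowCount: rows.length,
    getCellText(row, column) {
      // column.id would match the id of the corresponding <xul:treecol>
      return rows[row][column.id];
    },
    setTree(treeBox) { this.treeBox = treeBox; },
    isContainer(row) { return false; },
    isSeparator(row) { return false; },
    isSorted() { return false; },
    getLevel(row) { return 0; },
    getImageSrc(row, column) { return null; },
    getRowProperties(row, props) {},
    getCellProperties(row, column, props) {},
    getColumnProperties(column, element, props) {},
    cycleHeader(column) {}
  };
}

// In chrome, an inline script would then do something like:
//   document.getElementById("myTree").view = makeTreeView(data);
// ("myTree" is an assumed element id, not from the original post.)
var view = makeTreeView([{ name: "first" }, { name: "second" }]);
console.log(view.rowCount);                       // 2
console.log(view.getCellText(1, { id: "name" })); // "second"
```

Without chrome privileges, handing an object like this to the tree's view property is exactly the kind of operation that may be blocked, which is the crux of the sandboxing question.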
On the other hand, if I switch to a model where these template data documents are generally retrieved from a website, then of course I'd want to sandbox them. Then, though, I'd have to use some sandboxed extensions model, which I'm not sure exists for general XULRunner applications.

I'm looking for feedback on what the best route is: just how much trust should I give scripts in these data documents, and should I start thinking about ways to keep the whole kit and caboodle in a sandbox?

(Of course, there's the scary possibility that, given a chrome element, a template's XUL UI could insert a <xul:script/> tag into the chrome DOM. I'm already thinking of stripping <html:script/>, <svg:script/> and <xul:script/> elements from the template, but how else from the DOM can I detect that a node could load a script? If I load the template's XUL user interface into a content iframe with a data: URL, will it still work, per the "disable remote XUL" bug?)

---

Footnote: Here's what I'm thinking the scripting element would look like:

  <markup:script><![CDATA[
  {
    // properties, methods, etc.
  }
  ]]></markup:script>

  [scriptable, uuid(...)]
  interface xeIMarkupScriptElement : nsISupports
  {
    /**
     * The JSObject generated from the script element's contents,
     * or null if a parsing error occurred.
     */
    readonly attribute nsIVariant scriptObject;
  };

  <xul:script>
  // When I have a particular method to call
  function callMethod(scriptElement, method, args) {
    // argument checking first, of course, then:
    return scriptElement.scriptObject[method].apply(scriptElement, args);
  }
  </xul:script>

_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security
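The stripping pass the post describes could be sketched like this. The namespace URIs are the standard HTML/SVG/XUL ones; the stand-in node type and the walk itself are illustrative, since the real code would run against the parsed template DOM (and would also need to clear on* event-handler attributes, which can run script without a script element).

```javascript
// Sketch: remove script elements in the HTML, SVG, and XUL namespaces
// from a parsed template tree. Assumptions: DOM-like nodes exposing
// localName, namespaceURI, childNodes, and removeChild.
const SCRIPT_NAMESPACES = new Set([
  "http://www.w3.org/1999/xhtml",
  "http://www.w3.org/2000/svg",
  "http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"
]);

function stripScripts(node) {
  // Walk children in reverse so removals don't disturb the iteration.
  const kids = node.childNodes || [];
  for (let i = kids.length - 1; i >= 0; i--) {
    const child = kids[i];
    if (child.localName === "script" &&
        SCRIPT_NAMESPACES.has(child.namespaceURI)) {
      node.removeChild(child);
    } else {
      stripScripts(child);
    }
  }
}

// Tiny stand-in for DOM nodes so the sketch runs outside a browser.
function el(ns, name, ...children) {
  const node = { namespaceURI: ns, localName: name, childNodes: children };
  node.removeChild = c =>
    node.childNodes.splice(node.childNodes.indexOf(c), 1);
  return node;
}

const XUL = "http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul";
const tree = el(XUL, "vbox", el(XUL, "script"), el(XUL, "label"));
stripScripts(tree);
console.log(tree.childNodes.map(c => c.localName)); // [ 'label' ]
```

This only answers the element half of the question; attribute-borne script (onclick, onload, ...) and script-loading attributes would need a separate pass.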
Re: Content Security Policy feedback
You guys should add Arun + Jonas to this conversation if you can.

--Chris
Re: Browser security in the XO aka OLPC aka $100 laptop
Check with Marco Gritti m...@redhat.com. He's the guy doing all the browser work.

--Chris

On Apr 3, 2008, at 2:06 AM, Xavier Vergés wrote:
> Thanks, Boris
>
> I haven't asked them (still don't know where they hang out), but I'm under the impression that they are running plain XULRunner, using Hulahop, a Gecko embedding widget based on pyxpcom: http://dev.laptop.org/git?p=projects/hulahop;a=tree
>
> Still lots of things to learn...
> -Xv
>
> On Apr 3, 12:07 am, Boris Zbarsky bzbar...@mit.edu wrote:
>> Xavier Vergés wrote:
>>> In the XO Browser, it failed silently doing it from a file: url
>>
>> The file:// special-casing is hardcoded in nsPrincipal.cpp. So it sounds like the XO browser has some sort of code changes to make this not work. The obvious way to see whether there is some way to enable the prompt is to look at their code. Sadly, it looks like the link from their wiki is broken. You might want to ask the XO folks where you can get the source to their browser.
>>
>> -Boris
Problem with Firefox and manual NTLM authentication
[Sorry if this gets re-posted. I tried sending to this NG using Thunderbird, but it didn't appear.]

Hi,

I am having a problem with Firefox failing to do manual NTLM authentication. By manual, I mean where, when you access the website, you get a popup login window and enter domain\username and password. When I do this, instead of being able to access the website, the popup login window just re-appears.

Some background: the webserver is an IIS6 webserver on Windows 2003 Server. Hostname is idmiwa.whatever.com. This (both manual and automatic NTLM login with Firefox with IIS6) works fine in a different, parallel environment.

When I configure the ntlm.trusted-uris in Firefox about:config with .whatever.com, AUTOMATIC NTLM authentication works, and in the IIS Event Viewer, I can see the logon event and it says NTLM. However, during testing, with the trusted-uris empty/default, I cannot log in manually.

So, it appears that, for some reason, when Firefox does the automatic NTLM login (ntlm.trusted-uris set), it works, but when Firefox is not configured for automatic NTLM login (ntlm.trusted-uris default/not set), it doesn't work. It's puzzling to me why the manual NTLM authentication would not work, since in both cases (automatic and manual), NTLM is being used. Can anyone suggest why that would be the case?

Thanks,
Jim

P.S. I am aware that Firefox can do automatic login using Kerberos, using the negotiate.trusted-uris setting, but in our case, we need to do NTLM rather than Kerberos.
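Assuming the "ntlm.trusted-uris" and "negotiate.trusted-uris" shorthand above refers to the full Firefox preference names, the settings in question would look like this in a user.js file (".whatever.com" is the placeholder host from the post):

```
// Hosts Firefox may send NTLM credentials to automatically
user_pref("network.automatic-ntlm-auth.trusted-uris", ".whatever.com");
// Kerberos/Negotiate equivalent mentioned in the P.S.
user_pref("network.negotiate-auth.trusted-uris", ".whatever.com");
```

Note these prefs only govern the automatic (silent) handshake; the manual popup-login path described in the post is handled separately, which is why setting them is not expected to be the fix for the manual case.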
NSS dislikes my server
Dear list,

(I am not sure this is the correct newsgroup. If it isn't, please point me in the correct direction.)

I am having trouble with my TLS-enabled lighttpd and any browser that uses NSS (Firefox, SeaMonkey, Chromium). For example, Firefox bails out with sec_error_bad_signature on connecting. When I ran Chromium through a debugger, it appeared that pkix_BuildForwardDepthFirstSearch is the function which fails, specifically the test (state->buildConstants.numHintCerts > 0).

The chain provided by the server has two items: the server certificate, and a custom CA (i.e. self-signed) certificate. As a counterexample, Opera does not fail (when the CA certificate is added to its trust list), and neither does OpenSSL's s_client -verify 100.

If anybody wants to help me diagnose this problem, the server is running at https://ondrahosek.dyndns.org/.

Thanks a lot in advance,
~~ Ondra Hošek
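Since NSS is stricter about certificate signatures than OpenSSL's default verification, one possible first check (not a confirmed diagnosis) is the signature algorithm on each certificate the server sends, e.g. whether the custom CA is MD5-signed. A sketch of that check, run here against a throwaway self-signed certificate so the commands work anywhere:

```shell
# Against the live server, the chain could be captured with:
#   openssl s_client -connect ondrahosek.dyndns.org:443 -showcerts </dev/null
# Here we generate a throwaway self-signed CA instead, so the
# commands are self-contained, and inspect its signature algorithm.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.pem -days 1 -subj "/CN=Example CA" 2>/dev/null

# Print the signature algorithm(s) recorded in the certificate.
openssl x509 -in /tmp/ca.pem -noout -text | grep 'Signature Algorithm' | sort -u
```

Running the same x509 inspection over each PEM block the real server returns would show whether anything in the chain uses an algorithm NSS rejects.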