In case this was too verbose, I'll summarize: I'm suggesting that many
applications that require cross-context communication might be served
by the combination of:

(1) easy object (de)serialization to/from strings and
(2) a proxy object that safely passes strings across contexts (via a
proxy model, producer/consumer, or whatever)

Perhaps I've reduced the problem to itself (how to protect the
proxy?), but I don't think so: you can basically do this with the
sandbox today, though it is a little ugly.
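
Roughly, the ugly version looks like this today (a sketch only:
untrustedSource, doc, and handle are placeholder names, and uneval()
stands in for the real serialization of part (1)):

var sandbox = new Components.utils.Sandbox("http://example.com/");
Components.utils.evalInSandbox(untrustedSource, sandbox);
// only strings cross the boundary, in either direction
var payload = uneval({title: doc.title, url: doc.location.href});
var reply = Components.utils.evalInSandbox(
    "uneval(handle(" + payload + "))", sandbox);
// reply is just a string; deserializing it safely (not with a bare
// eval in the trusted context!) is exactly part (1) above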

On 3/9/06, Fritz Schneider <[EMAIL PROTECTED]> wrote:
> Excellent, thanks for the response -- very helpful.
>
> In this particular instance we want to pass content from the page into
> untrusted code, and then be able to have that code tell us a result
> (also a primitive type, e.g., a string serialization of a map or
> similar).
>
> Monica Chew (cc'd above) has implemented this safely using
> evalInSandbox. Basically, she calls into a fixed interface in the
> sandbox that returns a string containing a representation of the
> content the untrusted code is interested in as well as the name of the
> function(s) to invoke with that data. For example, "title,url,a:hrefs
> || func1,func2,func3". The chrome driver then extracts these data from
> the page, stringifies them, and then pieces them together into a
> string appropriate to eval in the sandbox. For example, "var data =
> {title: <value>, url:
> <value>,...};func1(data);func2(data);func3(data)".
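>
> Roughly, that chrome-side assembly might look like this (getInterface
> and extractFromPage are hypothetical names, not Monica's actual code;
> uneval() does the stringifying):
>
> var iface = Components.utils.evalInSandbox("getInterface()", sandbox);
> var parts = iface.split("||");   // "title,url,a:hrefs || func1,func2,func3"
> var fields = parts[0].split(",");
> var funcs = parts[1].split(",");
> var pairs = [];
> for (var i = 0; i < fields.length; i++) {
>   var name = fields[i].replace(/^\s+|\s+$/g, "");
>   // uneval() quotes both key and value, so a key like "a:hrefs" stays legal
>   pairs.push(uneval(name) + ": " + uneval(extractFromPage(name)));
> }
> var code = "var data = {" + pairs.join(", ") + "};";
> for (var j = 0; j < funcs.length; j++)
>   code += funcs[j].replace(/\s+/g, "") + "(data);";
> // the function names came from the sandbox and run in the sandbox,
> // so evaling them there grants nothing new
> Components.utils.evalInSandbox(code, sandbox);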
>
> In the process of working this out, I was thinking two things.
>
> First, that the trouble arises from giving references to _objects_ in
> other contexts, but that often the reference to an object is a means
> rather than an end. For example, we wanted to do sandbox.foo =
> functionInTrustedContext to enable passing of data out of the context.
> I suspect that for many applications, passing reference types isn't
> strictly necessary; we can get by with passing data so long as
> there is a way to pass it.
>
> So perhaps it might make sense to provide a safe, easy way to pass
> _primitive data_ between contexts with different levels of trust. The
> browser could provide a proxy service to do this. One context could
> call:
>
> var targetContext = someContext; // a window, a sandbox, an XPCOM context
> function consumer(data) {...}
> proxy.registerInterface(targetContext, consumer, "sendData");
>
> then, in the target context, code could call:
>
> // causes consumer(someValue) to be called in first context
> proxy.sendData(someValue);
> // or maybe just sendData(someValue)
>
> The proxy could enforce that someValue is a string, or could perhaps
> do the serialization itself. I haven't thought this out much, but
> perhaps there's something there.
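>
> A minimal sketch of what such a proxy might look like (all names here
> are the hypothetical ones above, nothing the platform provides today;
> this is the "or maybe just sendData(someValue)" variant, installing
> the stub directly on the target context):
>
> var proxy = {
>   _consumers: {},
>   registerInterface: function (ctx, consumer, name) {
>     var consumers = this._consumers;
>     consumers[name] = consumer;
>     // hand the target context only a stub that forwards strings
>     ctx[name] = function (value) {
>       if (typeof value != "string")
>         throw new Error(name + ": only strings may cross contexts");
>       consumers[name](value);
>     };
>   }
> };
>
> Of course, this naive version just hands the target context a trusted
> function object -- the very sandbox.foo hazard under discussion --
> which is why the real thing would have to live in the browser, not in
> script.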
>
> The second thought that I had was that you could actually safely
> execute code in one context on behalf of another using serialization,
> if there were an easy way to stringify code (and there is!). Suppose you
> have some code that wants to operate on content but that you don't
> want to have access to chrome. The chrome piece could get the string
> representation of the code (e.g., the serialization of an object
> encapsulating state as well as functions) from the untrusted context
> via the proxy-like thing. Then it could eval it in the content context
> and receive the results through a proxy call from within the content
> (you can prevent the content from making spurious calls by only
> enabling them when you know you're evaling). Or maybe via a return
> value.
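>
> In sketch form (everything here is invented for illustration; the
> serialized object is assumed to carry a run() entry point, and
> proxyCallsEnabled stands for the "only when evaling" gate):
>
> var src = receiveViaProxy();   // untrusted code, as a string
> var contentSandbox = new Components.utils.Sandbox(contentURI);
> contentSandbox.document = contentDocument; // the content to operate on
> proxyCallsEnabled = true;      // only honor proxy calls while evaling
> var result = Components.utils.evalInSandbox(
>     "uneval((" + src + ").run(document))", contentSandbox);
> proxyCallsEnabled = false;     // result comes back as a plain string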
>
> It's late and maybe I'm rambling, but I thought I'd throw it out
> there. Or maybe people already do this kind of thing -- I haven't
> looked at the state of the GM art in a while.
>
> On 3/9/06, Brendan Eich <[EMAIL PROTECTED]> wrote:
> >  Fritz Schneider wrote:
> >
> >  This direction of access (untrusted is handed a "trusted" object by
> > trusted code) is not safe.
> >
> >  Then it sounds like it is the case that there is no possible way to
> > safely expose an interface to code in a sandbox?
> >
> >  We *think* we've secured the paths in the object graph that don't go
> > through generic access checking code in window object get- and set-property
> > hooks.  We may have a few obscure paths to check, which are exposed only on
> > certain kinds of objects, which may not be exposed via Firefox chrome to
> > content (extensions are another story...).
> >
> >  But, as noted earlier today in my previous reply, it seems like a bad idea
> > to rely on each trusted object to be fully trustworthy, since the trust
> > comes from inheritance, specifically from URI origin being chrome -- a
> > blanket judgment that doesn't address individual weak links.
> >
> >  So computing the greatest lower bound of trust labels for code active on
> > the JS stack
> > gives us a small, central, easily audited piece of code to trust.  This
> > "meet-computing" code automatically lowers privileges for all "trusted
> > objects" whose methods might be called from untrusted code, even in very
> > indirect ways.  There may still be covert channels, but we need to address
> > this very overt one first.
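> >
> >  In sketch form, the idea (not the actual CAPS code; principals are
> > modeled here as totally ordered levels for simplicity, though real
> > trust labels form a partial order):
> >
> > function effectivePrincipal(framePrincipals) {
> >   // the meet: any untrusted frame on the stack lowers privileges
> >   var meet = framePrincipals[0];
> >   for (var i = 1; i < framePrincipals.length; i++)
> >     if (framePrincipals[i].level < meet.level)
> >       meet = framePrincipals[i];
> >   return meet;
> > }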
> >
> >  Doing so sounds easier than it will prove to be, I bet.  I believe (bz may
> > remember off the top of his head) that we have code where content calls
> > chrome (obvious example would be a dialog, say for a file upload widget or
> > some such thing) and needs the callee to have greater privileges than the
> > caller does.
> >
> >
> >  I'm playing with some
> > maybe-untrusted code in a sandbox, and was hoping to give it a way to
> > pass information out to my trusted code (e.g., by attaching
> > sandbox.foo = function() {} and letting the sandboxed code call foo),
> > aside from the result of evalInSandbox.
> >
> >
> >  We should talk more about the particulars.  Better yet, we should implement
> > the "meet" idea and diagnose the hard cases that it breaks, giving them
> > better means to their hard-case ends.  That way everyone's safer by default.
> >
> >  Doesn't seem to be able to (I get a security exception accessing
> > .__proto__ on the privileged object).
> >
> >  That's because of one of those JS-level checks (JS calls the hook, the
> > CAPS code implements it).
> >
> > We check __proto__, __parent__, <class-prototype>.constructor, and
> > scripted getters and setters.
> >
> >  Why, though?
> >
> >
> >  Because of the window-level access checks not sufficing in our extended
> > world.  In the DOM same-origin model, windows are articulation points in the
> > graph of all objects, and also containers of objects that all have the same
> > trust label within a given window.  You can open another window and load a
> > doc from a different origin in it.  You shouldn't be able to read data from
> > its DOM, though (you can write only to location, to navigate the window away
> > to another document).
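> >
> >  Concretely (origins invented for the example):
> >
> > var w = window.open("http://other.example/");
> > w.location = "http://other.example/next"; // write allowed: navigation
> > var t = w.document.title;                 // read denied: cross-origin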
> >
> >  So it is necessary to check access at window boundaries.  It's also
> > sufficient to identify objects in each window by their window's trust label,
> > i.e. to link window and label, but not link every object in the window to
> > the trust label (aka principal). Instead, we require security code that
> > wishes to find the trust label for a contained object to follow the object's
> > static scope (or parent, in SpiderMonkey jargon) chain up to the window.
> >
> >  This avoids the cost of a trust-label slot per object, at the cost of short
> > loops up the scope chain.  (The scope chain link slot is unavoidable in
> > SpiderMonkey and used by function objects and all DOM objects -- in ECMA-262
> > Edition 3 it is specified for function objects as the [[Scope]] internal
> > property).  A user-constructed object obj whose constructor function is Ctor
> > has obj.__parent__ === Ctor.__parent__.  And top-level function objects are
> > parented by the global object, which is the window in the browser embedding
> > of JS.  So, most objects are scoped directly by their window, with the
> > obvious exception of the DOM level 0, wherein objects nest as their tags do
> > (window contains document contains form contains form element contains event
> > handler).  Still, searching rather than linking every object to its
> > principal is a good trade-off.
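> >
> >  The search itself is trivial (a conceptual sketch only -- the real
> > walk happens in C inside the security code, and .principal is
> > invented here to stand for the label lookup):
> >
> > function findPrincipal(obj) {
> >   while (obj.__parent__ != null)
> >     obj = obj.__parent__;   // short loop up the scope chain
> >   return obj.principal;     // the window carries the trust label
> > }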
> >
> >  Window-boundary access checking may appear to be secure, since each window
> > gets its own copy of the JS standard objects, so all objects in a window,
> > including the window itself, have Object.prototype as their ur-prototype
> > object, and all objects are scoped by the window, which has a null scope
> > chain link (__parent__ in SpiderMonkey).
> >
> >  But it's not secure to access-check only at the window boundaries in
> > Mozilla, because of our JS extensions, some of which go back eight or more
> > years:
> >
> >
> > 1. SpiderMonkey exposes not only Object.prototype.__defineGetter__ and
> > Object.prototype.__defineSetter__, but also Object.prototype.watch.
> > These are implemented internally using per-property getters and
> > setters, hooks that are peculiar to the property named by the method.
> > Therefore gets and sets on such properties, even if they are defined
> > on a window object, bypass the class-generic window get- and
> > set-property hooks that do the common access checking required for
> > same-origin security (see the sketch after this list).
> >
> > 2. SpiderMonkey reflects __proto__ and __parent__ for all objects,
> > again using per-property getter and (for __parent__) setter hooks.
> > The ability to bind chrome XBL makes these hazardous without specific
> > access checking.  See
> > https://bugzilla.mozilla.org/show_bug.cgi?id=296397.
> >
> > 3. In the JS object model, a constructor function object C has a
> > prototype, and C.prototype.constructor === C.  Again XBL introduced
> > hazards not found in the conventional DOM level 0 same-origin model.
> > This was originally reported in
> > https://bugzilla.mozilla.org/show_bug.cgi?id=296489, which was dup'ed
> > against 296397.
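> >
> >  The first hazard in miniature (mechanism only, not an exploit
> > recipe): a scripted getter is a per-property hook, so the get below
> > runs the getter directly and never reaches the class-generic
> > get-property hook where the same-origin check lives:
> >
> > win.__defineGetter__("secret", function () { return "whatever"; });
> > var v = win.secret;   // per-property hook wins; no generic check
> >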
> >  I will take the fifth on other hard cases, because I'm not sure the bugs
> > have been patched in all older releases.
> >
> >  /be
> >
>
_______________________________________________
dev-security mailing list
dev-security@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security
