I started to respond to Allen's message, but I'll combine my responses here. Note the additional proposal in the middle of this message.
On Mon, Nov 26, 2012 at 11:33 AM, Tom Van Cutsem <tomvc...@gmail.com> wrote:
> 2012/11/25 Allen Wirfs-Brock <al...@wirfs-brock.com>
>
>> I have a couple "virtual object" use cases in mind where I don't think I
>> would want to make all properties concrete on the target.
>
> Thanks for spelling out these examples. While they still don't feel like
> actual important use cases to support, they give a good flavor of the kinds
> of compromises we'd need to make when turning to notification-only proxies.

I agree. My usual expectation for proxies is to support remote and
persistent objects. While supporting other scenarios is great, that's
usually incidental. Is there a broader list of aspirations for proxies, or
is this just "all else being equal, it would be good if we can do this"?

>> 1) A bit vector abstraction where individual bits are accessible as
>> numerically indexed properties.
>>
>> Assume I have a bit string of fairly large size (as little as 128 bits)
>> and I would like to abstract it as an array of single-bit numbers where
>> the indexes correspond to bit positions in the bit string. Using Proxies,
>> I want to be able to use Get and Put traps to direct such indexed access
>> to a binary data backing store I maintain. I believe that having to reify
>> on the target each bit that is actually accessed would be too expensive
>> in both time and space to justify using this approach.
>
> Yes. As another example, consider a self-hosted sparse Array
> implementation.
>
> The paradox here is that it's precisely those abstractions that seek to
> store/retrieve properties in a more compact/efficient way than allowed by
> the standard JS object model that would turn to proxies, yet having to
> reify each accessed property precisely voids the more compact/efficient
> storage of properties.

I don't have a good sense of how often, and for what purposes, clients
call getOwnPropertyNames and the like.
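For concreteness, here is a rough, untested sketch of the bit-vector idea using get/set traps (the `makeBitVector` helper and its details are invented for illustration; invariant subtleties are elided):

```javascript
// Sketch: a bit string backed by a compact Uint8Array, with individual
// bits exposed as numerically indexed properties via get/set traps.
// No bit is ever reified as a concrete property on the target.
function makeBitVector(numBits) {
  const store = new Uint8Array(Math.ceil(numBits / 8)); // compact backing store
  const asIndex = (prop) => {
    if (typeof prop !== "string") return -1;
    const i = Number(prop);
    return Number.isInteger(i) && i >= 0 && i < numBits ? i : -1;
  };
  return new Proxy({}, {
    get(target, prop, receiver) {
      const i = asIndex(prop);
      if (i >= 0) return (store[i >> 3] >> (i & 7)) & 1; // read one bit
      if (prop === "length") return numBits;
      return Reflect.get(target, prop, receiver);
    },
    set(target, prop, value, receiver) {
      const i = asIndex(prop);
      if (i >= 0) {
        if (value) store[i >> 3] |= 1 << (i & 7); // set the bit
        else store[i >> 3] &= ~(1 << (i & 7));    // clear the bit
        return true;
      }
      return Reflect.set(target, prop, value, receiver);
    }
  });
}

const bits = makeBitVector(128); // 128 indexed "properties", 16 bytes of storage
bits[5] = 1;
```

Note that nothing in this sketch helps getOwnPropertyNames: enumerating such an object would still have to produce all 128 index strings.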
That frankly seems like a terrible operation for any client to be calling;
it is necessarily inefficient architecturally, especially since it
currently demands a fresh array. Worst case, I'd like to see it have a
frozen result, or be deprecated in favor of an operation that is more
architecturally efficient (e.g., returning an iterator of names, so they
need never all be reified). If the operation is typically called only for
debugging and inspection, or once per type or some such, then the
performance questions are less important. If libraries constantly call it
for web services, then an improved API might be a big win.

>> BTW, this is a scenario where I might not even bother trying to make
>> sure that Object.getOwnPropertyNames listed all of the bit indexes. I
>> could include them in an array of own property names, but would anybody
>> really care if I didn't?

So for this example, you might want to suppress the integer properties
from getOwnPropertyNames *regardless* of the proxy approach. Otherwise you
are indeed doing O(N) work for all your otherwise efficiently implemented
bit fields. Such a hack would work poorly with meta-driven tools (e.g.,
something that maps fields to a display table for object inspection), but
that's not because of the proxy support. (It is conceivable to me that
integer-indexed fields deserve explicit support in a meta-protocol anyway,
since their usage patterns are typically so different from those of named
fields.)

> Well, yes and no.
>
> Yes, in the sense that your object abstraction will break when used with
> some tools and libraries. For instance, consider a debugger that uses
> [[GetOwnPropertyNames]] to populate its inspector view, or a library that
> contains generic algorithms that operate on arbitrary objects (say
> copying an object, or serializing it, by using Object.getOwnPropertyNames).
> No, in the sense that even if you did implement getOwnPropertyNames
> consistently, copying or serializing your bit vector abstraction would
> not lead to the desired result anyway (the copy or deserialized version
> would be a normal object without the optimized bit representation),
> although the result might still be usable!

>> More generally, notification proxies are indeed
>> "even-more-direct-proxies". They make the "wrapping" use case (logging,
>> profiling, contract checking, etc.) simpler, at the expense of "virtual
>> objects" (remote objects, test mock-ups), which are forced to always
>> "concretize" the virtual object's properties on a real JavaScript object.
>>
>> Yes, I also like the simplicity of notification proxies but don't want
>> to give up the power of virtual objects. Maybe having both would be a
>> reasonable alternative.

> Brandon beat me to it, but indeed, having two kinds of proxies for the
> two different use cases makes sense. Except that there's a complexity
> budget we need to take into account. If we can avoid the cost of two
> APIs, we should.

I too would like to avoid two kinds of proxies. And if there are two,
building the simpler one out of the more expressive one must be done
carefully, to avoid giving away the performance and other benefits of the
simpler approach.

> Brandon's proposal tries to reduce the API bloat by keeping the exact
> same API for both direct proxies and notification proxies, and changing
> the rules dynamically based on the presence/absence of invariants. One
> issue I have with that is that it will make it very hard for people
> writing proxies to understand when the trap return value is ignored, and
> when it is not.

I would rather avoid that. You still have the same more-complicated
semantics, but with new, hidden dangers :).

I had a variant, however, that might address both concerns: the
getOwnPropertyNames trap can return a list of *additional* properties to
add to the ones on the target.
These are then validated and added to the collection returned by the
primitive operation on the target. In the "notify" case, the collection is
empty, and that should be typical. In the more complicated case, the
collection is iterated (and so could be any kind of collection) and the
elements are added (with dup detection) by the proxy to the set of
properties it got from the underlying target. Since there is dup
detection, you can't change a property that is already on the object, but
you can add "virtual" properties with appropriate restrictions. Thus, the
trap is a "delta" operation.

> 2) Multiple Inheritance
>
> I'm playing with what it takes to support Self-like multiple inheritance
> using proxies. One approach that looks promising is to use a Proxy-based
> object as the immediate [[Prototype]] of leaf objects that have multiple
> logical inheritance parents. That lets puts/gets of own properties
> operate at native speed, and the Put/Get handlers only get invoked for
> inherited properties. The MI parent proxy keeps track (using its own
> private state) of the multiple parents and doesn't really use its own
> [[Prototype]] (actually its target object's [[Prototype]]) as a lookup
> path for proto climbing. It encapsulates this entire mechanism such that,
> from the perspective of the leaf object, all of its inherited properties
> look like own properties of the MI parent proxy. I would hate to have to
> "copy down" every accessed inherited property. There are situations where
> I might want to copy down some of them, but probably not all.

In any case, even in the simplest approach, the target object doesn't need
to actually duplicate the contents of the fields, merely their definitions
(i.e., they can point to null). It would have to iterate the parents to
find out their property names anyway to make a combined list, so again the
work seems the same order of magnitude.
Staying within the same order of magnitude is usually a useful goal, since it means that optimization makes things better but is not essential for making them usable.
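A minimal, untested sketch of the MI-parent-proxy shape described above (handler details and the `makeMIParent` name are invented; invariant checks and conflict-resolution policy are elided):

```javascript
// Sketch: a proxy intended to sit as the immediate [[Prototype]] of a
// leaf object, making properties inherited from several parents look
// like its own properties.
function makeMIParent(...parents) {
  // The target stays empty, so the traps face no invariant restrictions;
  // the parent list lives in the closure as the proxy's private state.
  return new Proxy(Object.create(null), {
    get(target, prop, receiver) {
      for (const parent of parents) {
        if (prop in parent) return Reflect.get(parent, prop, receiver);
      }
      return undefined;
    },
    has(target, prop) {
      return parents.some((parent) => prop in parent);
    }
  });
}

const a = { foo: 1 };
const b = { bar: 2 };
const leaf = Object.create(makeMIParent(a, b));
leaf.own = 3; // own-property access never touches the traps
```

Only lookups that miss on the leaf fall through to the MI parent, which consults the parents in order (here, left to right); nothing inherited is ever copied down onto the leaf or the target.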
_______________________________________________
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss