On Dec 16, 2010, at 9:11 PM, David-Sarah Hopwood wrote:

> I don't like the private names syntax. I think it obscures more than it
> helps usability, and losing the x["id"] === x.id equivalence is a significant
> loss.

Again, this equivalence has never held in JS for all possible characters in a 
string. But let's agree that it must hold where "id" happens to contain a 
lexically valid identifier in ES1-5.

It's still not holy writ, unchangeable. The private names proposal 
intentionally changes the equivalence:

x[#.id] === x.id

given either const id="id" or private id.
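A rough sketch of the two cases (the second half is strawman syntax, shown only in comments since it is not runnable today):

    // Today, and with const id = "id", the equivalence is by string:
    const x = { id: 1 };
    const id = "id";
    console.log(x[id] === x.id);   // true

    // Under the strawman, 'private id;' would instead bind id to a new
    // private name, so x[#.id] === x.id would hold via that private name,
    // while x["id"] would refer to a separate, ordinary string-named property.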

Is there a significant loss? For some people, there could be. For others learning 
Harmony fresh (assuming private names make it in), the cognitive load may be 
higher, but it's not a beginner topic. ES5, never mind Harmony, has advanced 
features that should not be introduced until students are prepared.

For users who can make good use of private names but find x[name] cumbersome 
and error-prone compared to x.name, the 'private name;' declaration may be 
useful. It may be that such users aren't numerous enough for this syntax to be 
worth adding. We'll have a hard time proving this one way or another. More 
below.


> As Mark points out, though, that syntax can be supported with either
> proposal.

No one ever disagreed on this point.


>> In fairness, I think the apples-to-apples comparison you can make between
>> the two proposals is the object model. On that score, I think the private
>> names approach is simpler: it just starts where it wants to end up (private
>> names are in the object, with an encapsulated key), whereas the soft fields
>> approach takes a circuitous route to get there (soft fields are
>> semantically a side table, specified via reference implementation, but
>> optimizable by storing in the object).
> 
> The private names approach is not simpler. It's strictly more complicated for
> the same functionality.

Dave clearly meant "simpler" by analogy to property names today. The 
"circuitous route" via transposition in inherited soft fields is not simple and 
it does not correspond to property lookup today.

If you could forget all you know about soft fields and weak maps for a minute, 
and try to imagine what a JS programmer who knows about today's objects and 
properties has to digest to understand what is going on, you might see what I 
mean.

Without "private x" syntax, the JS programmer has to grok something new in any 
event, as we all agree: x[name] where name is not converted to a string is a 
new thing under the sun, whether name is a private name, or the expression 
transposes as name.get(x) and name is a soft field.
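To make that concrete, here is a rough sketch of the soft-field reading of x[name], 
using a WeakMap-backed SoftField (my paraphrase of the transposition, not either 
strawman's spec text):

    // A SoftField is semantically a side table keyed by the object.
    function SoftField() {
      const table = new WeakMap();
      return {
        get(obj) { return table.get(obj); },
        set(obj, value) { table.set(obj, value); }
      };
    }

    const name = SoftField();
    const x = {};
    name.set(x, 1);              // what 'x[name] = 1' would transpose to
    console.log(name.get(x));    // 1 -- what reading 'x[name]' would transpose to

    // Under private names, by contrast, 'x[name]' would store and look up a
    // property in x itself, keyed by the unforgeable private name value.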


> You can see that just by comparing the two proposals:
> in
> http://wiki.ecmascript.org/doku.php?id=strawman:inherited_explicit_soft_fields
> the specification consists entirely of the given code for the SoftField
> abstraction. In practice you'd also add a bit of non-normative rationale
> concerning how soft fields can be efficiently implemented, but that's it.

There's a higher point of order here: users need conceptually simple and usable 
features, and the spec serves users. Yes, the spec should not be 
overcomplicated, all else equal. Desugaring (not arbitrary compilation) to 
kernel semantics is preferable, if it suffices for usability.

But (to channel TRON) we fight for the users: they need to be considered first, 
and throughout. Your entire exposition here about "simpler" vs. "more 
complicated" is about the spec, not the users.


> In http://wiki.ecmascript.org/doku.php?id=strawman:private_names (even
> excluding the syntactic changes, to give a fairer comparison), we can see
> a very significant amount of additional language mechanism, including:
> 
> - a new primitive type, with behaviour distinct from any other type.
>   This requires changes, not just to 'typeof' as the strawman page
>   acknowledges, but to every other abstract operation in the spec that
>   can take an arbitrary value. (Defining these values to be objects
>   would simplify this to some extent, but if you look at how much
>   verbiage each [Class] of objects takes to specify in ES5, possibly
>   not by much.)

I've advocated a new object subtype and Allen's writeup mentions the idea. It's 
strictly easier to spec (by a lot). [[Class]] is not enumerated much, only 
checked against one or another string value. But this is a minor point.


> - quite extensive changes to the behaviour of property lookup and
>   EnvironmentRecords. (The strawman is quite naive in suggesting that
>   only 11.2.1 step 6 needs to be changed here.)

Did you read the whole proposal? It includes much more than 11.2.1 in proposing 
that ToPropertyName replace ToString:
Object.prototype.hasOwnProperty (ES5 15.2.4.5), 
Object.prototype.propertyIsEnumerable (ES5 15.2.4.7), and the in operator (ES5 
11.8.7) are all extended to accept private name values in addition to string 
values as property names. Where they currently call ToString on property names 
they will instead call ToPropertyName. The JSON.stringify algorithm (ES5 
15.12.3) will be modified such that it does not process enumerable properties 
that have private name values as their property names.
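For concreteness, the shared change amounts to something like the following, written 
as JavaScript-flavored pseudocode over the spec's abstract operations (my paraphrase 
of ToPropertyName, not the strawman's text):

    // Where ES5 calls ToString(P) on a property name, the strawman calls
    // ToPropertyName(P) instead:
    function ToPropertyName(v) {
      if (IsPrivateName(v)) {   // abstract test for a private name value
        return v;               // private names pass through uncoerced
      }
      return ToString(v);       // everything else coerces exactly as today (ES5 9.8)
    }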

All the Object reflection functions defined in ES5 section 15.2.3 that accept 
property names as arguments or return property names are extended to accept or 
produce private name values in addition to string values as property names. A 
private name value may appear as a property name in the collection of property 
descriptors passed to Object.create and Object.defineProperties. If an object 
has private named properties then their private name values will appear in the 
arrays returned by Object.getOwnPropertyNames and Object.keys (if the 
corresponding properties are enumerable).


> - changes to [[Put]] (for arrays and other objects) and to object literal
>   initialization; also checking of all uses of [[DefineOwnProperty]] that
>   can bypass [[Put]].

The proposal discusses [[Put]] and [[DefineOwnProperty]]. It's certainly not 
complete, but you write here as if it failed to mention these internal methods. 
No fair.


> - changes to a large number of APIs on Object.prototype and Object,
>   the 'in' operator, JSON.stringify, and probably others.

See above.


> None of these additional mechanisms and spec changes are needed in the
> soft field approach.

And that gets back to the higher point of order. Fight for the users, not for 
the programs.


> In addition, the proposal acknowledges that it only provides weak
> encapsulation, because of reflective operations accessing private
> properties. It justifies this in terms of the utility of "monkey patching",
> but this seems like a weak argument; it is not at all clear that monkey
> patching of private properties is needed. Scripts that did that would
> necessarily be violating abstraction boundaries and depending on
> implementation details of the code they are patching, which tends to
> create forward-compatibility problems. (This is sometimes true of scripts
> that monkey-patch public properties. I'm not a fan of monkey patching in
> general, but I think it is particularly problematic for private properties.)

Users cannot monkey-patch Object.prototype now without fear of collision 
(although Prototype.js still does it), and monkey-patching other prototypes 
requires too much care as well. With private names, especially with the lexically 
bound x declared by 'private x', users can monkey-patch without fear of collision.
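For example, here is a sketch of collision-free monkey-patching, using Symbol 
(ES2015's closest analogue) as a runnable stand-in for a lexically bound private 
name; the details are mine, not the strawman's:

    // 'each' is an unforgeable key: only code that can see this binding
    // can name the property it keys.
    const each = Symbol("each");          // strawman spelling: private each;
    Object.prototype[each] = function (fn) {
      Object.keys(this).forEach(k => fn(k, this[k]));
    };

    // No other script's "each" -- string-named or otherwise -- can collide,
    // and the property never shows up in for-in or Object.keys.
    ({ a: 1, b: 2 })[each]((k, v) => console.log(k, v));   // a 1, b 2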

Never mind what you're a fan of. Could you say what is particularly 
problematic about private names, given that they resolve the collision problem?


> There is some handwaving about the possibility of sandboxing environments
> being able to work around this deficiency, but the details have not been
> thought through; in practice I suspect this would be difficult and error-
> prone.

Nice suspicion without effort!

The sandboxing idea is simple to explain: if your code has no access to a 
private name N, then it can't reflect on it or gain unwanted access to it via a 
proxy. The idea is to wrap any such N returned via reflection APIs and passed 
through proxy traps with a wrapper capable only of being asked whether it is 
the same name as N. The reflecting or trapping code would have to possess N to 
be able to use the reflected or passed-in name.
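A minimal sketch of that wrapper (my illustration of the shape, not a worked-out 
design):

    // Reflection results and proxy traps would hand out an opaque token in
    // place of the private name N itself. The token answers one question only.
    function wrapName(name) {
      return Object.freeze({
        isSameNameAs(candidate) { return candidate === name; }
      });
    }

    // Code that already possesses N can recognize the token's referent;
    // code that does not hold N learns nothing usable as a property key.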

I believe this restores equivalence between the two semantic models. If we made 
private names not leak, and separated the 'private x' syntax, then we would 
have a more apples-to-apples setting for evaluating the semantic models. I 
think we should do that, in order to make progress and avoid rehashing fights 
about name leaks and new syntax.


> In general, I disagree with the premise that the best way to *specify* a
> language feature is to "start where it wants to end up", i.e. to directly
> specify the programmer's view of it. Of course the programmer's view needs
> to be considered in the design, but as far as specification is concerned,
> if a high-level feature cannot be specified by a fairly simple desugaring
> to lower-level features, then it's probably not a good feature.

This is too extreme: taken as you word it, it would have banned function 
expressions from being added to ES3. Mapping function expressions to function 
declarations does not entail a "fairly simple desugaring".
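One concrete illustration (my example, not from the thread): a named function 
expression binds its own name only inside itself, which no simple hoisted 
declaration reproduces.

    const f = function fact(n) {     // 'fact' is in scope only within the expression
      return n <= 1 ? 1 : n * fact(n - 1);
    };
    console.log(f(5));               // 120
    console.log(typeof fact);        // "undefined" -- the inner name did not leak out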

/be
