On 2010-12-23 00:40, Brendan Eich wrote:
> On Dec 22, 2010, at 2:56 PM, David-Sarah Hopwood wrote:
> 
>> What I said, paraphrasing, is that weak encapsulation favours code that 
>> doesn't work reliably in cases where the encapsulation is bypassed.
>> Also, that if the encapsulation is never bypassed then it didn't need to
>> be weak. What's wrong with this argument?
> 
> The reliability point is fine but not absolute. Sometimes reliability is
> not primary.
> 
> You may disagree, but every developer knows this who has had to meet a
> deadline, where missing the deadline meant nothing further would be
> developed at all, while hitting the deadline in a hurry, say without strong
> encapsulation, meant there was time to work on stronger encapsulation and
> other means to achieve the end of greater reliability -- but *later*, after
> the deadline. At the deadline, the demo went off even though reliability
> bugs were lurking. They did not bite.

How precisely would weak encapsulation (specifically, a mechanism that is
weak because of the reflection and proxy trapping loopholes) help them to
meet their deadline?

(I don't find your inspector example compelling, for reasons given below.)

> The second part, asserting that if the encapsulation was never bypassed
> then it didn't need to be weak, as if that implies it might as well have
> been strong, assumes that strong is worth its costs vs. the (not needed, by
> the hypothesis) benefits.
> 
> But that's not obviously true, because strong encapsulation does have costs
> as well as benefits.

What costs are you talking about?

 - Not specification complexity, because the proposal with the simplest
   spec so far (soft fields, either with or without syntax changes) provides
   strong encapsulation.

 - Not runtime performance, because the strength of encapsulation makes no
   difference to that.

 - Not syntactic convenience, because there exist both strong-encapsulation
   and weak-encapsulation proposals with the same surface syntax (see the
   sketch after this list).

 - Not implementation complexity, because that's roughly similar.
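
To make the syntactic-convenience point concrete, here is a minimal sketch
(not taken from either proposal's spec) of how one surface syntax, say
"private key; ... obj[key] ...", could be desugared to either mechanism.
makePrivateName is a hypothetical stand-in for whatever constructor the
private names proposal would provide; the soft-field half uses a weak map
as the side table.

  const obj = {};

  // Weak encapsulation (private names): the value becomes a property of obj
  // under a unique name object, so it remains reachable at the meta level
  // (e.g. by proxy traps).
  //   const key = makePrivateName("key");   // hypothetical constructor
  //   obj[key] = 42;

  // Strong encapsulation (soft fields): the value lives in a side table
  // keyed by object identity; only code holding the table can use it.
  const key = new WeakMap();
  key.set(obj, 42);
  key.get(obj);   // 42, but not a property of obj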

So, what costs? It is not an axiom that proposals with any given desirable
property have greater cost (in any dimension) than proposals without that
property.

> Yet your argument tries to say strong encapsulation is absolutely always
> worth it, since either it was needed for reliability, or else it wouldn't
> have hurt. This completely avoids the economic trade-offs -- the costs over
> time. Strong can hurt if it is unnecessary.

How precisely can it hurt, relative to using the same mechanism with
loopholes?

> To be utterly concrete in the current debate: I'm prototyping something in
> a browser-based same-origin system that already uses plain old JS objects
> with properties. The system also has an inspector written in JS.

[snip example in which the only problem is that the inspector doesn't show
private fields because it is using getOwnPropertyNames]

Inspectors can bypass encapsulation regardless of the language spec.
Specifically, an inspector that supports Harmony can see that there is a
declaration of a private variable x, and show that field on any objects
that are being inspected. It can also display the side table showing the
value of x for all objects that have that field.

Disadvantages: slightly greater implementation complexity in the inspector,
and lack of compatibility with existing inspectors that don't explicitly
support Harmony.

Note that inspectors for JS existed prior to the addition of
getOwnPropertyNames, so getOwnPropertyNames is merely a convenience and a
way to avoid implementation dependencies in the inspector.
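
For concreteness, here is a minimal sketch of the kind of display logic such
an inspector could use. The registry and registerSoftField function are
inventions for this sketch; a real inspector would obtain the set of declared
soft fields from the engine or debugger interface rather than from a userland
registry, and each table is assumed to behave like a weak map.

  // Registry of soft-field tables the inspector knows about, keyed by the
  // declared name (e.g. "x" for a `private x` declaration).
  const knownSoftFields = new Map();

  function registerSoftField(name, table) {
    knownSoftFields.set(name, table);
  }

  function inspect(obj) {
    const view = {};
    // Ordinary properties, exactly as a pre-Harmony inspector shows them.
    for (const name of Object.getOwnPropertyNames(obj)) {
      view[name] = obj[name];
    }
    // Soft fields: read the side tables rather than the object itself.
    for (const [name, table] of knownSoftFields) {
      if (table.has(obj)) {
        view["(soft) " + name] = table.get(obj);
      }
    }
    return view;
  }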

> With soft fields, one has to write strictly more code:

Nope, see above.

>> I've also stated clearly *why* I want strong encapsulation, for both 
>> security and software engineering reasons. To be honest, I do not know 
>> why people want weak encapsulation. They have not told us.
> 
> Yes, they have. In the context of this thread, Allen took the trouble to
> write this section:
> 
> http://wiki.ecmascript.org/doku.php?id=strawman:private_names#private_name_properties_support_only_weak_encapsulation
>
>  Quoting: "Private names are instead intended as a simple extensions of the
> classic JavaScript object model that enables straight-forward encapsulation
> in non-hostile environments. The design preserves the ability to manipulate
> all properties of an objects at a meta level using reflection and the
> ability to perform “monkey patching” when it is necessary."

Strong encapsulation does not interfere with the ability to add new
monkey-patched properties (actually fields). What it does prevent, by
definition, is reading or modifying existing private fields for which the
accessor does not hold the corresponding field object. What I was looking
for was not a mere assertion that being able to do that is sometimes
necessary, but an explanation of why it is.
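
To illustrate the distinction with a small sketch (using a weak map as the
side table): anyone can associate a new field of their own with an existing
object, including objects they did not create; what they cannot do is touch
someone else's field without holding its key.

  // "Monkey patching" under strong encapsulation: attach a new field to
  // objects created elsewhere, via a side table owned by this code.
  const timestamps = new WeakMap();    // a new field, owned by this code

  function tag(obj) {
    timestamps.set(obj, Date.now());   // works on objects created elsewhere
  }

  function taggedAt(obj) {
    return timestamps.get(obj);        // readable only by holders of `timestamps`
  }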

As for "the ability to manipulate all properties of objects at a meta
level using reflection", strictly speaking that is still possible in the
soft fields proposal because soft fields are not properties. This is not
mere semantics; these fields are associated with the object, but it is
quite intentional that the object model views them as being stored on a
side table. Note that other methods of associating private state with an
object, such as closing over variables, do not allow that state to be
accessed by reflection on the object either.
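
A minimal sketch of that last point, for comparison:

  function makeCounter() {
    let count = 0;                     // private state, but not a property
    return {
      increment() { return ++count; }
    };
  }

  const c = makeCounter();
  c.increment();
  Object.getOwnPropertyNames(c);       // ["increment"]; count is not listed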

-- 
David-Sarah Hopwood  ⚥  http://davidsarah.livejournal.com
