On 2010-12-22 07:57, Brendan Eich wrote:
> On Dec 21, 2010, at 10:22 PM, David-Sarah Hopwood wrote:
>> On 2010-12-21 22:12, Brendan Eich wrote:
>>
>>> It's tiresome to argue by special pleading that one extension or
>>> transformation (including generated symbols) is "more complex, and
>>> less explanatory", while another is less so, when the judgment is
>>> completely subjective. And the absolutism about how it's *always*
>>> better in every instance to use strong encapsulation is, well,
>>> absolutist (i.e., wrong).
>>
>> I gave clear technical arguments in that post. If you want to disagree
>> with them, disagree with specific arguments, rather than painting me as
>> an absolutist. (I'm not.)
>
> Here's a quote: "As you can probably tell, I'm not much impressed by this
> counterargument. It's a viewpoint that favours short-termism and code that
> works by accident, rather than code that reliably works by design."
>
> How do you expect anyone to respond? By endorsing bugs or programming based
> on partial knowledge and incomplete understanding? Yet the real world
> doesn't leave us the option to be perfect very often. This is what I mean
> by absolutism.
That isn't what absolutism generally means, so you could have been clearer.

What I said, paraphrasing, is that weak encapsulation favours code that
doesn't work reliably in cases where the encapsulation is bypassed. Also,
that if the encapsulation is never bypassed then it didn't need to be weak.
What's wrong with this argument? Calling it "absolutist" is just throwing
around insults, as far as I'm concerned.

> When prototyping, weak or even no encapsulation is often the
> right thing, but you have to be careful with prototypes that get pressed
> into products too quickly (I should know). JS is used to prototype all the
> time.

OK, let's consider prototyping. In the soft fields proposal, a programmer
could temporarily set a variable that would otherwise have held a soft
field to a string. All accesses via that variable will work, but so will
encapsulation-breaking accesses via the string name. Then before we release
the code, we can put back the soft field (requiring only minimal code
changes) and remove any remaining encapsulation-breaking accesses. Does
this address the issue?

> So rather than argue for strong encapsulation by setting up a straw man
> counterargument you then are not much impressed by, decrying short-termism,
> etc., I think it would be much more productive to try on others' hats and
> model their concerns, including for usable syntax.

Weak vs strong encapsulation is mostly independent of syntax. At least,
all of the syntaxes that have been proposed so far can provide either
strong or weak encapsulation, depending on the semantics.

>> There is a separate discussion to be had about whether the form of
>> executable specification MarkM has used (not to be confused with the
>> semantics) is the best form to use for any final spec. Personally, I like
>> this form of specification: I think it is clear, concise (which aids
>> holding the full specification of a feature in short-term memory), easy
>> to reason about relative to other approaches, useful for prototyping, and
>> useful for testing.
>>
>> I don't mind at all that the correspondence with the implementation is
>> less direct than it would be in a more operational style; implementors
>> often need to handle less direct mappings than this, and I don't expect a
>> language specification to be a literal description of how a language is
>> implemented in general (excluding naive reference implementations).
>
> Once again, you've argued about what you like, with subjective statements
> such as "I don't mind".

Yes, I try very hard not to misrepresent opinions as facts.

>>> With inherited soft fields, the ability to "extend" frozen objects
>>> with private fields is an abstraction leak (and a feature, I agree).
>>
>> How is it an abstraction leak? The abstraction is designed to allow
>> this; it's not an accident (I'm fairly sure, without mind-reading
>> MarkM).
>
> If I give you an object but I don't want you "adding" fields to it, what do
> I do? Freezing works with private names, but it does not with soft fields.

What's your intended goal in preventing "adding" fields to the object?

If the goal is security or encapsulation, then freezing the object is
sufficient. If I add the field in a side table, that does not affect your
use of the object. I could do the same thing with aWeakMap.set(obj, value).

If the goal is concurrency-safety, then we probably need to have a
concurrency model in mind before discussing this in detail. However,
adding fields in a side table does not affect the concurrency-safety of
your code that does not have access to the table or those fields.
It might affect the concurrency-safety of my code that does have that
access; so I shouldn't add new fields and rely on my view of the object
being concurrency-safe just because the object is frozen. This doesn't
seem like an onerous or impractical restriction.

>> With private names, the inability to "extend" frozen objects with
>> private fields is a significant limitation.
>
> Can you try taking a different point of view and exploring it, for a
> change? :-/

That statement you quoted is a technical argument. If you disagree, please
say *why* you think that the inability to extend frozen objects is not a
significant limitation. (It's an inability to do something, so it's a
limitation, and it has plausible use cases that one might expect to be
supported by this feature, so it's significant.) It's not up to me to
enumerate all of the possible ways in which people might disagree with me.

>>> If you don't like x[#.id] / x.id supplanting x["id"] / x.id, it seems
>>> to me you have to count some similar demerits against this change.
>>
>> If we compare both proposals with the additional syntax, they are
>> equally "magical"; the only difference is whether the magic is built-in
>> or the result of a desugaring.
>
> Agreed. Disagree on desugaring to weak maps, not counting weak map
> complexity, or costs in executable spec approach, globally winning.

Weak map complexity is a sunk cost when considering the additional global
cost of each of these proposals, so it *should* be discounted. The cost of
the executable spec approach indeed should not be discounted.

>>> The weak encapsulation design points are likewise "leaky" for private
>>> names, where no such leaks arise with soft fields: reflection and
>>> proxies can learn private names, they "leak" in the real ocap sense
>>> that secure subsets will have to plug.
>>
>> As I said earlier, designers of secure subsets would prefer that this
>> leak not exist in the first place, rather than having to plug it.
>> Regardless of your statements above, this is not an absolutist position;
>> the onus is on proponents of weak encapsulation to say why it is useful
>> to have the leak (by technical argument, not just some vague
>> philosophical position against strong encapsulation).
>
> We're arguing about the default in a mostly-compatible next edition of
> JavaScript. JS has only closures for strong encapsulation. You can't make a
> "technical argument" or proof

I never claimed to make any proof. Statements about the desirability of
language properties are not amenable to proof.

> that strong encapsulation must be presumed the default,

I never claimed that it must be presumed the default. I presented
arguments in favour of it.

> so how can you put the onus on proponents of weak encapsulation to make
> any such bogus argument?

Because very few technical arguments have so far been made in favour of
weak encapsulation. I made one that you dismissed as a strawman (it
wasn't), and you made one about prototyping above.

> Programmers use no-encapsulation and weak encapsulation in JS every day.

They would continue to be able to do that. No-one is suggesting that the
ability to create unencapsulated objects with only public properties
should be removed. No-one is suggesting that programmers be forbidden from
putting _ in their public property names as a convention to mark weak
encapsulation.

The question is, given that we are proposing to add a new encapsulation
mechanism (at least, a feature with keywords and semantics that strongly
suggest it is intended to be usable for encapsulation, even if it also has
other uses), whether the encapsulation provided by that mechanism should
be strong or weak.

> To be fair, I think that trends and tendencies do matter. The language is
> being used at larger scale, where integrity and other properties need
> automated enforcement (at the programmer's discretion).
It should be clear that the programmer of an encapsulated abstraction
always has discretion over the visibility of its state. For a strongly
encapsulated abstraction, programmers of code outside the scope of the
abstraction cannot have any discretion over that visibility (given a
correct language implementation and excluding debugging, etc.), by
definition.

> But there's no onus reversal or technical standard of proof that will help
> us make design decisions. I know you want strong encapsulation. Others want
> weak. Now what?

I've also stated clearly *why* I want strong encapsulation, for both
security and software engineering reasons. To be honest, I do not know why
people want weak encapsulation. They have not told us. Perhaps their
actual concerns can be addressed by a mechanism that provides strong
encapsulation according to the definition I gave.

-- 
David-Sarah Hopwood ⚥ http://davidsarah.livejournal.com
_______________________________________________
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss