Re: Thoughts on WeakMaps

2011-06-07 Thread David Herman
Hi David,

[A propos of nothing, can I ask that you either change your font or use 
plain-text email? Your font shows up almost unreadably small in my mail client.]

 I'm currently working on the WeakMap documentation [1] and I have thought of 
 two points:
 1) myWeakMap.set(key, value) doesn't return anything. It could return the 
 previous value for the key (if such a thing exists). Is it intentional that 
 the set function doesn't return anything?

I don't have strong feelings about this, but I guess I have a mild preference 
for the way it's currently specified. I think it's useful as a rule of thumb to 
separate imperative actions from operations that are performed to compute a 
result. JS already violates this in a bunch of places, but I don't think 
consistency is sacrosanct here. OTOH, I don't think this is all that big of a 
deal.

 2) The notion of weak reference as used in current WeakMap seems to be 
 assuming that the garbage collector will work on whether objects are 
 reachable or not. I have read (I thought it was the wikipedia page, but it 
 apparently wasn't) that there is another notion for garbage collection which 
 is whether an object will be used or not in the future. Of course, this 
 notion is far more difficult to determine than reachability, but this is not 
 my point.
 Let's imagine for a minute that a lot of improvement is made in the field of 
 object non-future-use. Will WeakMap be any different from a regular Map?
 If an engine is able to tell that an object will not be reachable, does it 
 matter if there are remaining (soft or strong) references?

Correct me if I'm wrong, but I don't see how this would be observable. If your 
miracle-GC can predict that e.g. the key will never be used again, even though 
it's actually reachable, then it's able to predict that you're never going to 
look it up in the table. So even though the spec describes it in terms of 
reachability, your miracle-GC is not observably violating the behavior of the 
spec.

 The consequence of this second point is wondering whether it's a good idea to 
 standardize WeakMap (instead of Map) at all.

I think this was already said in this thread, but just to be clear: WeakMap 
comes with different space, performance, and membership behavior than Map, and 
Map also exposes more operations (namely, enumeration) than WeakMap -- by 
design. WeakMap allows non-deterministic deletion of elements, so its 
operations are restricted to avoid leaking this non-determinism to programs. 
This is important both for portability and security.
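
For concreteness, a minimal sketch of the restricted WeakMap surface (get/set/has/delete per the harmony weak maps proposal; no enumeration):

var wm = new WeakMap();
var key = {};
wm.set(key, "payload");
wm.get(key);   // "payload"
wm.has(key);   // true
// There is deliberately no way to enumerate wm's entries, so if `key`
// becomes unreachable the entry can be collected with no observable effect.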

IOW, WeakMap and Map are both there to serve different purposes and they carry 
different guarantees. We've discussed this in committee meetings before, but I 
want to make sure this is captured in public. We should also add verbiage to 
the proposals to make this clear.

Dave



minimal classes

2011-06-27 Thread David Herman
I've been concerned about the schedule risk of classes for ES.next; following 
are some thoughts about a minimal class feature that I believe satisfies the 
most important needs while leaving room for future growth.

I think the bare-minimum requirements of classes would be:

- declarative class expressions
- declarative constructor/prototype inheritance (i.e., the extends clause)
- super-calls
- declarative methods

Examples:

class C {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }
    foo() {
        return this.x + this.y;
    }
}

class D extends C {
    constructor() {
        super(0, 0);
    }
    bar() {
        print("hi");
    }
}

I suggest that for ES.next, we completely leave out declarative properties, 
class properties, or even private data. All of this can be expressed already 
with imperative features. (Private data can be implemented with private names, 
and I've come to believe that Object.freeze just simply shouldn't freeze 
private properties.) That's not to say that these features would never be 
desirable, but we've so far struggled to come up with something that hangs 
together. And the features I describe above don't close off these 
possibilities. I want classes to succeed for ES.next, and even more importantly 
I want ES.next to succeed and succeed *on time*.
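
A minimal sketch of the private-data point, assuming the Name.create API from the private name objects strawman combined with the class form above:

const counter = Name.create("counter");

class Counter {
    constructor() {
        this[counter] = 0;      // instance-private: unreachable without the name object
    }
    increment() {
        return ++this[counter];
    }
}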

I'd argue that the minimal feature set described above covers the most 
important needs:

- standardizing prototype hierarchies (I'm of the opinion that superclass 
constructors ought to be the prototype of subclass constructors in order to 
inherit class properties)
- providing declarative syntax for initializing the C.prototype object
- providing idiomatic syntax for calling the superclass constructor
- enabling instance-private and/or class-private data (via private names)

Dave



Re: minimal classes

2011-06-27 Thread David Herman
 - providing idiomatic syntax for calling the superclass constructor
 
 But what about subclass method calling superclass method(s)?

In terms of priorities, I think super-constructors are the single most 
important use case for super. But I think super-methods fall out naturally from 
the semantics of super, so they could easily be supported.
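
A sketch of what that would look like, reusing the C/D classes from my earlier message (illustrative only):

class D extends C {
    constructor() {
        super(0, 0);
    }
    foo() {
        return super.foo() * 2;   // super-method call, same mechanism as the super constructor call
    }
}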

Dave



Re: JavaScript parser API

2011-06-28 Thread David Herman
Yeah, tough questions. I don't know. I tried to make the API flexible by 
allowing custom builders, and in fact if you look at the test suite you'll see 
I did a proof-of-concept showing how you could generate the format that Mark 
mentioned:


http://hg.mozilla.org/tracemonkey/file/2ce7546583ff/js/src/tests/js1_8_5/extensions/reflect-parse.js#l1051

But there are also tough questions about what the parser should do with 
engine-specific language extensions.

I agree about the issue of multiple parsers. The reason I was able to do the 
SpiderMonkey library fairly easily was that I simply reflect exactly the parser 
that exists. But to have a standards-compliant parser, we'd probably have to 
write a separate parser. That's definitely a tall order.

Dave

On Jun 28, 2011, at 4:02 PM, Mike Shaver wrote:

 On Tue, Jun 28, 2011 at 6:34 PM, Axel Rauschmayer a...@rauschma.de wrote:
 http://blog.mozilla.com/dherman/2011/06/28/the-js-parser-api-has-landed/
 
 I’ve just read D. Herman’s post on Firefox’s parser API. Is there any chance 
 that this kind of API will make it into Harmony? It would be really useful 
 for a variety of generative/meta-programming tasks.
 
 I'm interested in this too, for a number of applications, but I keep
 getting stuck on one thing.
 
 Would you standardize the resulting parse tree, too?  That would
 probably mean that every engine would have two parsers, since I'm sure
 we produce different parse trees right now, and wouldn't want to lock
 down our parser mechanics for all time.
 
 If you don't standardize the parse tree, is it still useful?  More
 useful than just using narcissus or whatever other parsing libraries
 exist?
 
 Mike


Re: Module grammar

2011-07-01 Thread David Herman
Thanks-- missed one when manually doing s/ImportPath/ImportBinding/g. Fixed.

Thanks,
Dave

On Jul 1, 2011, at 9:55 AM, Kam Kasravi wrote:

 Should this 
 
 ImportDeclaration(load) ::= import ImportBinding(load) (, 
 ImportBinding(load))* ;
 ImportPath(load) ::= ImportSpecifierSet from ModuleExpression(load)
 ImportSpecifierSet ::= *
 | IdentifierName
 | { (ImportSpecifier (, ImportSpecifier)*)? ,? }
 ImportSpecifier ::= IdentifierName (: Identifier)?
 Be this?
 
 ImportDeclaration(load) ::= import ImportBinding(load) (, 
 ImportBinding(load))* ;
 ImportBinding(load) ::= ImportSpecifierSet from ModuleExpression(load)
 ImportSpecifierSet ::= *
 | IdentifierName
 | { (ImportSpecifier (, ImportSpecifier)*)? ,? }
 ImportSpecifier ::= IdentifierName (: Identifier)?
 


Re: JavaScript parser API

2011-07-05 Thread David Herman
 the AST API strawman - given the positive discussions on this list, I
 thought the idea was implicitly accepted last year, modulo details,
 so I was surprised not to see a refined strawman promoted.

It hasn't really been championed so far. I was concentrating on other proposals 
for ES.next.

   - it does not support generic traversals, so it definitely needs a
   pre-implemented traversal, sorting out each type of Node
   (Array-based ASTs, like the es-lab version, make this slightly
    easier - Array elements are ordered, unlike Object properties);

I designed it to be easily JSON-{de}serializable, so no special prototype. 
However, you can use the builder API to construct your own format:

https://developer.mozilla.org/en/SpiderMonkey/Parser_API#Builder_objects

With a custom builder you can create objects with whatever methods you want, 
and builders for various formats can be shared in libraries.
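
For example, here is a small sketch against the builder API documented at that link; each callback just returns whatever representation you want (plain arrays here), and the exact output shape is illustrative:

var arrayBuilder = {
    program: function (body) { return ["program", body]; },
    expressionStatement: function (expr) { return ["exprStmt", expr]; },
    binaryExpression: function (op, left, right) { return [op, left, right]; },
    identifier: function (name) { return ["id", name]; },
    literal: function (val) { return ["lit", val]; }
};

Reflect.parse("x + 1", { builder: arrayBuilder });
// roughly: ["program", [["exprStmt", ["+", ["id", "x"], ["lit", 1]]]]]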

   at that stage, simple applications (such as tag generation)
    may be better off working with hooks into the parser, rather
   than hooks into an AST traversal? also, there is the risk that
   one pre-implemented traversal might not cover all use cases,
   in which case the boilerplate tax would have to be paid again;

I don't understand any of this.

   - it is slightly easier to manipulate than an Array-based AST, but

More than slightly, IMO.

   lack of pattern matching fall-through (alternative patterns for
   destructuring) still hurts, and the selectors are lengthy, which
   hampers visualization and construction; (this assumes that
   fp-style AST processing is preferred over oo-style processing)

If I'd defined a new object type with its own prototype, it still wouldn't 
define all operations anyone would ever want. So they'd either have to 
monkey-patch it or it would need a visitor. Which you could write anyway. So I 
don't see much benefit to pre-defining a node prototype.

But again, see the builder API, where you can create your own custom node type.

   - it is biased towards evaluation, which is a hindrance for other
   uses (such as faithful unparsing, for program transformations);

It's just a reflection of the built-in SpiderMonkey parser, which was designed 
for the sole purpose of evaluation. I didn't reimplement a new parser.

   this can be seen clearly in Literals, which are evaluated (why
   not evaluate Object, Array, Function Literals as well? eval should
   be part of AST processing, not of AST construction), but it also
   shows in other constructs (comments are not stored at all, and
   if commas/semicolons are not stored, how does one know
   where they were located - programmers tend to be picky
   about their personal or project-wide style guides?);

None of this data is available in a SpiderMonkey parse node.

   - there are some minor oddities, from spelling differences to
   the spec (Label(l)ed),

Heh, I shouldn't've capitulated to my (excellent and meticulous!) reviewer, who 
was unfamiliar with the spec:

https://bugzilla.mozilla.org/show_bug.cgi?id=533874#c28

I can probably change that.

 to structuring decisions (why separate
   UpdateExpression and LogicalExpression, when everything
   else is in UnaryExpression and BinaryExpression?);

I separated update expressions and logical expressions because they have 
different control structure from the other unary and binary operators.

   btw, why alternate/consequent instead of then/else, and

I was avoiding using keywords as property names, and consequent/alternate are 
standard terminology. I suppose .then/.else would be more convenient.

   shouldn't that really be consequent-then and alternate-else
   instead of the other way round (as the optional null for
   consequent suggests)?

Doc bug, thanks. Fixed.

 My main issue is unparsing support for program transformations

https://bugzilla.mozilla.org/show_bug.cgi?id=590755

 (though IDEs will similarly need more info, for comment extraction,
 syntax highlighting, and syntax-based operations).

This is all the stuff that will almost certainly require separate 
implementations from the engine's core parser. And maybe that's fine. In my 
case, I wanted to implement a reflection of our existing parser, because it's 
guaranteed to track the behavior of SpiderMonkey's parser.

 What I did for now was to add a field to each Node, in which I
 store an unprocessed Array of the sub-ASTs, including tokens.
 Essentially, the extended AST Nodes provide both abstract info
 for analysis and evaluation and a structured view of the token
 stream belonging to each Node, for lower-level needs.
 
 Whitespace/comments are stored separately, indexed by the
 start position of the following token (this is going to work better
 for comment-before-token that for comment-after-token, but it
 is a start, for unparsing or comment-extraction tools).

You've lost me again. 

Re: Type of property names, as seen by proxy traps

2011-07-07 Thread David Herman
 2011/7/6 Andreas Rossberg rossb...@google.com
 While putting together some test cases for Object.keys, I wondered: is
 it intended that property names are always passed to traps as strings?
 
 That is indeed the intent.

Unless they are private name objects, right?

Dave



Re: Type of property names, as seen by proxy traps

2011-07-08 Thread David Herman
 I'm not sure. I briefly checked the private names proposal 
 http://wiki.ecmascript.org/doku.php?id=harmony:private_name_objects and I 
 think the detailed interaction with proxies still has to be fleshed out.

Sure, I'll be happy to work with you on this.

 The proposal does mention: All reflective operations that produce a property 
 name, when reflecting on a private name, produce the name’s .public property 
 instead of the name itself.
 
 Would the same hold for reflective operations that consume property names, 
 such as handler traps?

No, they would require the private name object. The idea here is that you need 
a reference to the private name to get access to its property. So you can't do 
any proxy operations on a private property if you don't have the private name 
object. But the proxy traps do not automatically hand out that reference to a 
handler trap, in case the trap didn't already have a reference to it (which 
would constitute a leak). Instead, it hands them the corresponding public key. 
This way, *if* the trap has a reference to the private key, it can identify 
which private name is being accessed. Otherwise, the trap can't conclude 
anything more than "operation X was requested on *some* private name".
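
A rough sketch of that protocol, using the original Proxy.create API and the Name.create strawman (details illustrative, not normative):

var secret = Name.create("secret");
var target = { ordinary: 42 };
target[secret] = "hidden";

var handler = {
    get: function (receiver, name) {
        if (name === secret.public) {
            // We already hold `secret`, so we can recognize and forward the access.
            return target[secret];
        }
        // Otherwise all we see is some public counterpart we don't recognize.
        return target[name];
    }
    // other traps elided
};

var p = Proxy.create(handler, Object.getPrototypeOf(target));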

Dave



Re: Type of property names, as seen by proxy traps

2011-07-08 Thread David Herman
Sorry, yes. Too early in the morning for me. :)

Indeed, handler traps are exactly the place where the system *produces* names 
and hands them to handler traps which consume them, and that's where it must 
produce a public key rather than a private name object.

Dave

On Jul 8, 2011, at 8:20 AM, Brendan Eich wrote:

 On Jul 8, 2011, at 7:17 AM, David Herman wrote:
 
 The proposal does mention: All reflective operations that produce a 
 property name, when reflecting on a private name, produce the name’s 
 .public property instead of the name itself.
 
 Would the same hold for reflective operations that consume property names, 
 such as handler traps?
 
 No, they would require the private name object.
 
 I don't think that's what Tom was asking about, though. The proposal may 
 simply be unclear in using produce instead of consume since the proxy 
 mechanism does not produce private names in any generative sense when one 
 writes p[q] for proxy p and private name q.
 
 Rather, the VM substitutes q.public for q when calling p's handler's relevant 
 trap (getOwnPropertyDescriptor, get, ...). So there's no leak, as you note, 
 and the owner of q is free to share it with trap implementations that should 
 have access to it, so they can compare name == q.public, memoize in q.public 
 in a weak map, etc.
 
 /be



Re: Type of property names, as seen by proxy traps

2011-07-08 Thread David Herman
And just to be clear, I meant produce in the sense of producer/consumer 
relationship on the trap functions, not in the generative sense.

Dave

On Jul 8, 2011, at 8:40 AM, David Herman wrote:

 Sorry, yes. Too early in the morning for me. :)
 
 Indeed, handler traps are exactly the place where the system *produces* names 
 and hands them to handler traps which consume them, and that's where it must 
 produce a public key rather than a private name object.
 
 Dave
 
 On Jul 8, 2011, at 8:20 AM, Brendan Eich wrote:
 
 On Jul 8, 2011, at 7:17 AM, David Herman wrote:
 
 The proposal does mention: All reflective operations that produce a 
 property name, when reflecting on a private name, produce the name’s 
 .public property instead of the name itself.
 
 Would the same hold for reflective operations that consume property names, 
 such as handler traps?
 
 No, they would require the private name object.
 
 I don't think that's what Tom was asking about, though. The proposal may 
 simply be unclear in using produce instead of consume since the proxy 
 mechanism does not produce private names in any generative sense when one 
 writes p[q] for proxy p and private name q.
 
 Rather, the VM substitutes q.public for q when calling p's handler's 
 relevant trap (getOwnPropertyDescriptor, get, ...). So there's no leak, as 
 you note, and the owner of q is free to share it with trap implementations 
 that should have access to it, so they can compare name == q.public, memoize 
 in q.public in a weak map, etc.
 
 /be
 


Re: Design principles for extending ES object abstractions

2011-07-08 Thread David Herman
I think I still haven't fully grokked what <| means on array literals, but 
could it also be used to subclass Array? For example:

function SubArray() {
    return SubArray.prototype <| [];
}

SubArray.prototype = new Array;

I'm not sure what Array.prototype methods would or wouldn't work on instances 
of SubArray.

Dave

On Jul 8, 2011, at 5:48 PM, Allen Wirfs-Brock wrote:

 
 On Jul 8, 2011, at 5:16 PM, Brendan Eich wrote:
 
 On Jul 8, 2011, at 3:49 PM, Allen Wirfs-Brock wrote:
 
 2) Anything that can be done declaratively can also be done imperatively.
 
 What's the imperative API for <| (which has the syntactic property that it 
 operates on newborns on the right, and cannot mutate the [[Prototype]] of 
 an object that was already created and perhaps used with its original 
 [[Prototype]] chain)?
 
 Fair point and one I was already thinking about :-)
 
 For regular objects, it is Object.create.
 
 For special built-in objects with literal forms, I've previously argued that 
 <| can be used to implement an imperative API:
 
 Array.create = function (proto,members) {
 let obj = proto <| {};
 Object.defineProperties(obj,members);
  return obj;
 }
 
 Basically, <| is sorta half imperative operator, half declaration component.
 
 This may be good enough.  It would be nice if it was and we didn't have to 
 have additional procedural APIs for constructing instances of the built-ins.  
 Somebody has already pointed out <| won't work for built-in Date objects 
 because they lack a literal form. I think the best solution for that is to 
 actually reify access to a Date object's timevalue by making it a private 
 named property.  BTW, why did ES1(?) worry about allowing for alternative 
 internal timevalue representations?  Are there really any perf 
 issues that involve whether or not the timevalue is represented as a double 
 or something else?
 
 Allen
 


Re: Array generation

2011-07-10 Thread David Herman
 So from this viewpoint (and regarding that example with squares), it's good 
 also to have an `Array.seq(from, to)` method (the name is taken from Erlang; I 
 just frequently use lists:seq(from, to) there):

<bikeshed>Array.range seems like an intuitive name as well.</bikeshed>

 Array.seq(1, 5).map((x) -> x * x); // [1, 4, 9, 16, 25]

This pattern (integer range immediately followed by map) is so common that many 
Schemes have a more general function that fuses the two traversals, sometimes 
called build-list or list-tabulate:

Array.build(n, f) ~~~ [f(0), ..., f(n-1)]

Another common and useful fusion of two traversals that's in many Schemes is 
map-filter or filter-map:

a.filterMap(f) ~~~ [res for [i,x] of items(a) let (res = f(x, i)) if (res !== null && res !== void 0)]

I rather arbitrarily chose to accept both null and undefined here as a way to say 
"no element" -- a reasonable alternative would be to accept *only* undefined as 
"no element".
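
A plain ES5-style sketch of both helpers, with the names and the null/undefined convention taken from this message (nothing standardized):

Array.build = function (n, f) {
    var result = [];
    for (var i = 0; i < n; i++) result.push(f(i));
    return result;
};

Array.prototype.filterMap = function (f) {
    var result = [];
    for (var i = 0; i < this.length; i++) {
        var res = f(this[i], i);
        if (res !== null && res !== void 0) result.push(res);  // null/undefined mean "no element"
    }
    return result;
};

Array.build(5, function (i) { return i * i; });                            // [0, 1, 4, 9, 16]
[1, 2, 3, 4].filterMap(function (x) { return x % 2 ? x * 10 : void 0; });  // [10, 30]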

Dave



Re: Pure win: Array.from and Array.of

2011-07-10 Thread David Herman
I mentioned two benefits I can see to Array.of over []-literals here:

https://twitter.com/#!/littlecalculist/status/89854372405723136

1) With Array.of you know you aren't going to accidentally create holes, and

2) if you're passing it to a higher-order function you know you aren't going to 
trip over the single-uint32-arg special case.

That said, the readability story you and I tweeted about is not so compelling 
given that, in the first-order usage pattern, an array literal is strictly more 
readable. So a longer name like Array.fromElements or something might be okay.
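
To illustrate the two points (Array.of as proposed; buildWith is just a hypothetical higher-order helper):

function buildWith(make, x) { return make(x); }

buildWith(Array, 3);      // [ , , ]  -- a single uint32 argument is a length: three holes
buildWith(Array.of, 3);   // [3]      -- always the array of its arguments, no special case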

Dave

On Jul 10, 2011, at 10:33 AM, Rick Waldron wrote:

 _that_ is the compelling use-case I was looking for.
 
 Rick
 
 
 
 -- Sent from my Palm Pre
 
 On Jul 10, 2011 1:23 PM, Brendan Eich bren...@mozilla.com wrote: 
 
 On Jul 10, 2011, at 10:18 AM, Rick Waldron wrote:
 
 The more I think about it, I still can't come up with any really exciting 
 use cases where Array.of would outshine anything that already exists. I say 
 strike it from the wishlist.
 
 Higher-order programming with Array as constructing-function bites back for 
 the single-number-argument case. That's where Array.of helps.
 
 /be
 


Re: Design principles for extending ES object abstractions

2011-07-10 Thread David Herman
 I'm not sure what Array.prototype methods would or wouldn't work on 
 instances of SubArray.
 
 All of them.  They are all generic.

We're speaking too broadly here. It depends on what we want to work how. For 
example, .map can't magically know how to produce a SubArray as its result if 
that's how SubArray wants it to work. But what I'm actually more concerned 
about is the behavior of .length. Does the <| semantics make .length work 
automagically the way it does for ordinary Array instances?

 However, subarray instances have all the internal state and methods that make 
 them true arrays so even if some of the inherited Array methods weren't 
 generic they would still work.

Including .length (which isn't a method, but YKWIM)?

Dave



Re: Array generation

2011-07-10 Thread David Herman
Agreed. I think that's a pretty common way people think about null vs 
undefined, and it's consistent with the language's behavior.

Dave

On Jul 10, 2011, at 3:09 PM, liorean wrote:

 On 10 July 2011 22:23, David Herman dher...@mozilla.com wrote:
 Another common and useful fusion of two traversals that's in many Schemes is 
 map-filter or filter-map:
 
 a.filterMap(f) ~~~ [res for [i,x] of items(a) let (res = f(x, i)) if (res !== null && res !== void 0)]
 
 I rather arbitrarily chose to accept both null and undefined here as a way to 
 say "no element" -- a reasonable alternative would be to accept *only* 
 undefined as "no element".
 
 The way I think of it is that, in analogy to NaN being the Number that
 represents "no number", null is the Object that represents "no object" -- in
 other words, a reasonable value to store to tell just that. The
 undefined value is by analogy the value that represents "no value", so it
 is the only value that should be a "no element".
 
 But that might be just my way of thinking about and distinguishing the
 not-a-something special cases.
 -- 
 David liorean Andersson


Re: module exports

2011-07-10 Thread David Herman
 According to the module grammar, the following is valid:
 
 module car {
   function startCar() {}
   module engine {
 function start() {}
   }
   export {start:startCar} from engine;
 }
 
 
 It seems like there would be issues with exporting module elements after the 
 module has been defined.

I don't see any conflicts with the code you wrote, but it does contain a 
linking error, because the car module doesn't have access to the unexported 
start function. Maybe you intended:

module car {
    export function startCar() { }
    module engine {
        export function start() { }
    }
    export { start: startCar } from engine;
}

In this case, you have a conflict, because the car module is attempting to 
create two different exports with the same name. This is an early error.

 Also, what is the behavior of aliasing over existing Identifiers? Would the 
 compiler fail or would behavior 
 be the 'last' Identifier wins?

Early error.

Dave



Re: Extending standard library of arrays

2011-07-11 Thread David Herman
 My point is that the map spec is a deterministic algorithm because 
 side-effects would be noticeable otherwise. However, this prevents 
 implementations where function calls would be done in parallel, for instance 
 (for better performance). In some cases (like the one I showed), the exact 
 order in which the function calls are performed does not matter, but I have 
 no way to tell the JS engine I don't need the execution order guarantee, 
 allowing it to do the calls in parallel. The addition of the functions I 
 suggested would be the way to say it.

We can't do this in general for JS, especially for the mutable Array data 
structure. It's not in general safe to run JS code in parallel. However, this 
would be much more appropriate for immutable and homogeneous datatypes. At 
least for callback code that could be proved to be safe to run 
non-deterministically, operations like map could be run in parallel. We have 
been working with some partners on this. Watch this space.

Dave



Re: module exports

2011-07-11 Thread David Herman
No, a module determines its exports by itself, and no one can override that. 
Notice that you missed *two* export declarations, car.startCar *and* 
car.engine.start. If the engine module doesn't export start, then the outer car 
module cannot access it.

Dave

On Jul 10, 2011, at 11:19 PM, Kam Kasravi wrote:

 Yes, thanks, my mistake on the unexported startCar function declaration. My 
 question is more about semantics, if the author of engine did not want to 
 export start, the grammar allows anyone importing the engine module to 
 override the original author's intent. 
 
 On Jul 10, 2011, at 8:11 PM, David Herman dher...@mozilla.com wrote:
 
 According to the module grammar, the following is valid:
 
  module car {
   function startCar() {}
   module engine {
 function start() {}
   }
   export {start:startCar} from engine;
 }
 
 
 It seems like there would be issues with exporting module elements after 
 the module has been defined.
 
 I don't see any conflicts with the code you wrote, but it does contain a 
 linking error, because the car module doesn't have access to the unexported 
 start function. Maybe you intended:
 
 module car {
 export function startCar() { }
 module engine {
 export function start()  { }
 }
 export { start: startCar } from engine;
 }
 
 In this case, you have a conflict, because the car module is attempting to 
 create two different exports with the same name. This is an early error.
 
 Also, what is the behavior of aliasing over existing Identifiers? Would the 
 compiler fail or would behavior 
 be the 'last' Identifier wins?
 
 Early error.
 
 Dave
 



Re: using Private name objects for declarative property definition.

2011-07-11 Thread David Herman
 Adding a non-enumerable Array.prototype method seems doable to me, if the 
 name is clear and not commonly used.
 
 
 We can probably still add Array.prototype.isArray if that would help to 
 establish the pattern. Document as being preferred over Array.isArray

This doesn't make sense to me. Let's say I have a variable x and it's bound to 
some value but I don't know what. That is, it has the type any. I could check 
to see that it's an object:

typeof x === "object"

and then that it has an isArray method:

typeof x.isArray === "function"

but how do I know that this isArray method has the semantics I intend? I could 
check that it's equal to Array.prototype.isArray, but that won't work across 
iframes.

IOW, I can't call x.isArray() to find out what I want it to tell me until I 
already know the information I want it to tell me.

Alternatively, if I expect isArray to be a boolean rather than a predicate, I 
still don't have any way of knowing whether x is of a type that has an entirely 
different semantics for the name isArray. It's anti-modular to claim a 
universal semantics for that name across all possible datatypes in any program 
ever.

These static predicates (which are just glorified module-globals) intuitively 
have the type:

any -> boolean:T

(I'm using notation from SamTH's research here.) This means a function that 
takes any value whatsoever and returns a boolean indicating whether the result 
is of the type T. Because they accept the type any, it doesn't make sense to 
put them in an inheritance hierarchy. It makes sense to have them as functions. 
Since globals are out of the question, in the past they've been made into 
statics. But with modules, we can actually make them functions in their 
appropriate modules.
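
A sketch of that pattern, using the module syntax under discussion (the toString check is just one iframe-safe way to write a predicate of type any):

module TypeTests {
    export function isArray(x) {
        return Object.prototype.toString.call(x) === "[object Array]";
    }
}

import { isArray } from TypeTests;
isArray([1, 2, 3]);       // true
isArray({ length: 3 });   // false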

Dave



Re: using Private name objects for declarative property definition.

2011-07-11 Thread David Herman
 I'm not so sure about this now. I was just reviewing with Dave how the design 
 evolved. We had Function.isGenerator, analogous to Array.isArray. For taskjs, 
 Dave had thought he had a use-case where the code has a function and wants to 
 know whether it's a generator. It turned out (IIUC) that what he really 
 wanted (this will warm your heart) was simply the duck-type does it 
 implement the iterator protocol test.

Right-- it was a poor initial implementation decision on my part; I shouldn't 
have done any test at all.

JJB had a different use case: for Firebug, he wanted the ability to check 
before calling a function whether it was going to have a different control-flow 
semantics. IOW, he wanted to be able to reliably reflect on values.

 On the other hand, code that wants to ask is this *value* a generator? may 
 not have a-priori knowledge that the value is a function, so a class method 
 such as Function.isGenerator wins.

Yeah, I think it's generally more convenient for type-distinguishing predicates 
to have input type 'any', so that clients don't have to do any other tests 
beforehand. That way, you can always simply do

isGenerator(x)

instead of

typeof x === "function" && x.isGenerator()

without knowing anything about x beforehand.

 So I think we took the wrong door here. Function.isGenerator by analogy to 
 Array.isArray, or an isGenerator export from @iter (equivalent 
 semantically), is the best way.

I agree.

Dave



Re: Design principles for extending ES object abstractions

2011-07-12 Thread David Herman
 My understanding of generators was naively that they are syntactic sugar for 
 defining an iterator.

Well, I think I understand what you're getting at: there's a sense in which 
generators don't add the ability to do something that's *absolutely impossible* 
to express in ES5.

OTOH, generators make it possible to express something that otherwise -- in 
general -- requires a deep translation (in particular, a CPS transformation of 
the entire function). This is more than syntactic sugar. This is a new 
expressiveness that wasn't there before. The slope between 
Felleisen-expressiveness (is it possible to do X without a deep 
transformation?) and Turing-expressiveness (is it possible to do X at all?) 
is a slippery one.

I don't want to quibble about philosophical pedantry, but that's really all I 
mean by my shyness about General Principles. They're strictly *harder* to 
decide on than the problem at hand. It's too easy to lose focus and waste time 
arguing abstractions. And then even when we agree on general principles, when 
we get into concrete situations it's too easy to start quibbling about exactly 
how to interpret and apply the principles for the problem at hand.

I prefer to work out from the middle of the maze, learning and refining 
principles as I go, rather than trying to decide on them all up front.

 Re-reading the generators proposal, I was concerned at first that somehow the 
 semantics of the syntactic desugaring might be taking dependencies on the 
 internal properties of the generator objects when consumed in a generator, 
 such as in a “yield* other”.  However, it looks like even there, the 
 semantics are in terms of the public API on the object, so that a user 
 defined object that provides next/send/throw/close can correctly 
 interoperate. 

Yup, nothing to worry about there. Fear not the yield*.

 I haven’t yet been able to intuit from the module_loaders page what is needed 
 to accomplish each of the above though.  For example, if it is the case that 
 loading the “@name” module required putting all my code in a callback passed 
 to SystemLoader.load, that feels like it might be too heavy.  Do you have 
 examples of what each of these would look like given the current proposal?

I need to have another go 'round at the module loaders API, and I will get 
there before too long. Sorry about that.

But it should not be necessary to have a callbacky version for builtin modules 
like "@name" -- the API should include a simple, synchronous way to test for 
the presence of modules in a loader. So it should be possible to do something 
like SystemLoader.getLoaded("@name").

 (1) why is a child loader needed?

Not needed, just the point of the example. Let me make the example more 
concrete:

You're writing an online code editor like CodeMirror or ACE, and you want to 
support the programmer running JS code from within the editor. So you want to 
run that code in a sandbox, so that it doesn't see the data structures that are 
implementing the IDE itself. So you create a child loader. Now, within that 
child loader, you might want the ability to construct some initial modules that 
make up the initial global environment that the user's code is running in. So 
you use buildModule to dynamically construct those modules.
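
A very rough sketch of that scenario; the names used here (Loader, buildModule, registerModule, eval) follow the wording of this thread and are assumptions about the eventual module loaders API, not its actual definition:

var sandbox = new Loader(System, { global: editorGlobal });   // hypothetical constructor call

var env = sandbox.buildModule({                                // dynamically built module instance
    log: function (msg) { appendToConsolePane(msg); }          // appendToConsolePane: hypothetical IDE helper
});
sandbox.registerModule("editor", env);                         // register it in this (or several) loaders

sandbox.eval('import { log } from "editor"; log("hello from user code");');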

 (2) any particular reason why the buildModule and registerModule are 
 separated?

Because you might want to build a single shared module instance that you 
register in multiple loaders. These are orthogonal primitives that can be 
composed. It may also make sense to have conveniences for common operations, 
layered on top of the primitives.

 (3) Would this allow declaring module dependencies for the new module?  As 
 one comparison, the require.js module definition syntax is simpler in terms 
 of APIs, but also requires an extra closure due to module dependencies, which 
 may also be needed in the model above:
  
 define("m", [], function() {
     return {
         x: 42,
         f: function() { … }
     };
 });

I think the more straightforward approach is just to pre-load (and pre-register 
in the loader, if appropriate) whatever dependencies are needed.

 ASIDE: It still feels a bit odd to see ES5 syntax running on ES.next runtime 
 referred to as ‘legacy’.

No pejorative overtones intended. We just don't yet have any decent terminology 
for distinguishing the full ES.next front end from the backwards-compatible one.

Dave



Re: Proxy.isProxy (Was: using Private name objects for declarative property definition.)

2011-07-13 Thread David Herman
Putting private properties on a proxy or storing it in a weak map are simple 
protocols you can use to keep track of proxies that you know about. You can 
hide or expose this information then without however many or few clients you 
like. If you want to give people access to knowledge about your proxy, you can 
share the private name object or weak map so that they can look it up, or even 
provide a similar predicate to isProxy.
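
A minimal sketch of such an opt-in protocol, tracking proxies you create yourself in a WeakMap (names here are hypothetical; Proxy.create per the current proxies API):

var knownProxies = new WeakMap();

function createTrackedProxy(handler, proto) {
    var p = Proxy.create(handler, proto);
    knownProxies.set(p, true);
    return p;
}

function isKnownProxy(obj) {
    return knownProxies.has(obj);   // only answers for proxies registered above
}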

By contrast, if you want to virtualize an object with a proxy and we provide 
isProxy, we've made it almost impossible to protect the abstraction. It becomes 
a universal on-off switch that you can turn off by hiding the isProxy predicate 
(via module loaders or deleting/mutating the function).

And to be even more concrete, if we want to use proxies for platform-level 
features, e.g. the DOM, then isProxy is something we *can't* turn off without 
violating the ECMAScript spec, so we're then *forced* to expose the 
implementation detail to anyone on the web who wants to look at it.

Dave

On Jul 13, 2011, at 8:31 AM, Allen Wirfs-Brock wrote:

 isProxy is definitely a meta-layer operation and you don't want it polluting 
 the application layer.  However, when you doing proxy level meta programming 
 I can imagine various situations where you might need to determine whether or 
 not an object is a proxy.  I think it should exist, but should exist in a 
 namespace that is clearly part of the meta-layer.  Arguably Proxy, itself, is 
 such a namespace.  But if there is fear that it is too close to the app layer 
 then we might hang it off Proxy.Handler or something else that is clearly on 
 the meta side.  Or have a Proxy support module.
 
 Allen
 
 
 
 
 
 On Jul 13, 2011, at 2:07 AM, David Bruant wrote:
 
 And in general, the main use case for proxies is to emulate host objects. If 
 there is a language construct that helps separating the two cases, we're 
 going against this use case.
 
 David
 
  On 13/07/2011 10:26, Andreas Gal wrote:
 
 
 I really don't think IsProxy is a good idea. It can lead to subtle bugs 
 depending on whether an object is a DOM node, or a wrapper around a DOM 
 node (or whether the embedding uses a proxy to implement DOM nodes or not). 
 In Firefox we plan on making some DOM nodes proxies for example, but not 
 others. I really don't think there is value in exposing this to programmers.
 
 Andreas
 
 On Jul 13, 2011, at 1:23 AM, Tom Van Cutsem wrote:
 
 Perhaps Proxy.isProxy was used merely as an example, but wasn't the 
 consensus that Proxy.isProxy is not needed? Dave pointed out that it 
 breaks transparent virtualization. Also, there is Object.isExtensible 
 which always returns |true| for (trapping) proxies. That means we already 
 have half of Proxy.isProxy without exposing proxies: if 
 !Object.isExtensible(obj), obj is guaranteed not to be a proxy.
 
 Cheers,
 Tom
 
 2011/7/9 Brendan Eich bren...@mozilla.com
 Also the Proxy.isTrapping, which in recent threads has been proposed to be 
 renamed to Proxy.isProxy or Object.isProxy.
 


Re: Feedback on Binary Data updates

2011-07-20 Thread David Herman
Hi Luke,

The idea is definitely to subsume typed arrays as completely as possible.

 * Array types of fixed length
 The current design fixes the length of an ArrayType instance as part of the 
 ArrayType definition, instead of as a parameter to the resulting constructor. 
  I'm not sure I understand the motivation for that.

The idea is that all Types have a known size, and all Data instances are 
allocated contiguously.

For example, if you could put unsized array types inside of struct types, it 
wouldn't be clear how to allocate an instance of the struct:

var MyStruct = new StructType({
    a: Uint8Array,
    b: Uint8Array
});
var s = new MyStruct; // ???

But you're right that this is inconsistent with typed arrays. Maybe this can be 
remedied by allowing both sized and unsized array types, and simply requiring 
nested types to be sized.
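
A sketch of that remedy (uint8/ArrayType/StructType as on the wiki page; the sized/unsized split is the suggestion, not the current spec):

var Bytes4 = new ArrayType(uint8, 4);     // sized: has a known size, allowed inside structs
var MyStruct = new StructType({
    a: Bytes4,
    b: Bytes4
});
var s = new MyStruct();                    // 8 contiguous bytes, no ambiguity

var Bytes = new ArrayType(uint8);          // unsized: top level only,
var buf = new Bytes(16);                   // length supplied at construction time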

 * Compatibility with Typed Arrays array objects
 There are a few divergences between Binary Data arrays and Typed Array 
 arrays, that look like they could be addressed:
 - The constructor difference mentioned above, including support for copy 
 constructors.

I don't know what you mean by copy constructors. Are you talking about being 
able to construct a type by wrapping it around an existing ArrayBuffer? That 
doesn't copy, but I do think we should support it, as I said in my preso at the 
f2f in San Bruno. That's something I intended to add to the wiki page but 
hadn't gotten to yet.

 - Lack of buffer, byteLength, byteOffset, BYTES_PER_ELEMENT.   I see these 
 are noted in TODO.

Yep.

I do think there's a case to be made for not exposing the ArrayBuffer for Data 
objects that were not explicitly constructed on top of an ArrayBuffer. This 
would hide architecture-specific data that is currently leaked by the Typed 
Arrays API. It also accommodates the two classes of usage scenario involving 
binary data:

Scenario 1: I/O

socket.readBuffer(1000, function(buf) {
    var s = new MyStruct(buf, 0); // also allow an optional endianness argument
    ... do some computation on s ...
});

Scenario 2: Pure computation

var s = new MyStruct({ x: 0, y: 0 });
... do some computation on s ...

Scenario 1 comes up when reading files, network sockets, etc; here you *have* 
to let the programmer control the endianness and layout/padding. The simplest 
way to do the latter is simply to assume zero padding, as with Data Views, and 
then the programmer would have to insert padding bytes where necessary.

Scenario 2 comes up when building internal data structures. Here the system 
should use whatever padding and endianness is going to be the most efficient 
for the architecture, but that detail should ideally not be exposed to the 
programmer. So in that case, we could make the .buffer field censored, by 
having it be null or an accessor that throws.

 - array.set(otherArr, offset) support on the Binary Data arrays

Good catch; looks unproblematic.

 - Conversions, see below
 - Different prototype chains, additional members like elementType on binary 
 data arrays.  
 
 The last item is one of the reasons why it would be nice to pull the Typed 
 Arrays objects into Binary Data, so that they could be augment to be fully 
 consistent - for example, to expose the elementType.

If we can pull them into the prototype hierarchy, that's cool, but we still 
have to see. In particular, if we want to close off some of the leaks I 
describe above, then we may have to retain some distinction.

 * Conversions
 The rules for conversions of argument values into the primitive value types 
 seem to be different than typical ES conversions and those used by 
 TypedArrays via WebIDL.  Why not use ToInt32 and friends for conversion?  
 Current rules appear to be quite strict - throwing on most type mismatches, 
 and also more permissive for some unexpected cases like 0x-prefixed strings.

Interesting question. I may have followed js-ctypes too blindly on this.

 * DataView integration with structs
 DataView is an important piece of Typed Arrays for reading from heterogenous 
 binary data sources like files and network protocols, and for controlling 
 endianness of data reads.  DataView would seem to benefit from structs, and 
 structs would benefit from DataView.  This is another reason to want to spec 
 DataView itself in ES.next.  I imagine an additional pair of functions on 
 DataView akin to the following would allow nice interop between DataView and 
 Binary Data Types/Data:
 
Data getData(Type type, unsigned long byteOffset, optional boolean 
 littleEndian);
void setData(Type type, unsigned long byteOffset, Data value, optional 
 boolean littleEndian);

I agree that this kind of use case is important, and I'm not opposed to 
DataViews, but I'm not sure the ArrayBuffer approach described above doesn't 
already handle this, e.g.:

new T(ArrayBuffer buffer, unsigned long byteOffset, optional boolean 
littleEndian);

 * Explicit inclusion 

Re: private name objects confusion

2011-07-27 Thread David Herman
 I've been exploring private name objects [1] and I'm a bit confused by a few 
 things in the proposal, especially the Reflection example...

The page was out of date, sorry. I've updated the page to reflect the agreement 
we came to in the last face-to-face, which was that private names should not be 
reflected anywhere except to proxy traps. This leaks less information than what 
was on the wiki. In particular, now you can't figure out how many private names 
an object is carrying around.

 Should this statement return [foo, fooName.public] or [foo, fooName]? If 
 the latter interpretation is correct, what advantage does a visible private 
 name have over a plain old non-enumerable property?

Guaranteed uniqueness. For example, multiple separately developed libraries can 
monkey-patch the same shared prototype object with new unique names and they're 
guaranteed not to conflict.

I've separated this out on the wiki page as a remaining open issue. I'm not 
sure if we've come to consensus about this case. (One concern was that 
Object.getOwnPropertyNames() and for...in no longer are guaranteed to produce 
strings, although this is mitigated by the toString() coercion of the name 
objects.)

 I also see no mention of what `str` should default to in Name.create, even 
 though it's defined as optional and is quite significant as the 
 name.public.toString return value. Is there something like a unique string 
 value planned for this? At the very least the proposal should hint at what 
 Name.create().public.toString() should return (assuming it's not undefined).

This hasn't been settled yet. IMO, I don't think it needs to be guaranteed to 
be unique. The uniqueness guarantee is about the *identity* of the object, and 
strings are forgeable, even if they're unique at the time they are created.

 My apologies if some of this has been discussed

Not at all; thanks for the feedback.

Dave



Re: private name objects confusion

2011-07-27 Thread David Herman
 Understood WRT the forgeability of strings -- I was more concerned with the 
 potential hazard of toStringing the values of an own-names array, only to 
 find out you have several keys with the string value "undefined". Sure you're 
 doing it wrong, but string keys are an es5 invariant -- it's bound to happen, 
 as you hinted at in your reply.

Yeah, this has come up several times in discussions. It's currently an 
invariant that for...in and getOPN produce a sequence of strings with no 
duplicates (though not in fully specified order), and exposing name objects to 
these operations would break that invariant.

 But now that you've clarified the reflection semantics it's clear that this 
 is not much of an non-issue for non-reflective private names,

ITYM not much of an issue, right?

 just another note for proxy authors. Of course, if the visibility flag 
 becomes harmonious (and I really hope it does!) it's still a bit of a problem 
 for these unique names. There's no need for unforgeability here so a unique 
 string could be used, but it's probably too much magical and too little help 
 -- if unintentionally used (the result of an own-names string coercion) it 
 papers over a deeper semantic bug. Better would be to throw, but there's no 
 sane way to do that.

Throw where, exactly? I don't quite follow you.

Another alternative for unique-but-public names is simply to have a unique 
string generator. It trivially maintains the invariants of getOPN and for...in 
while still having the same user experience for getting and setting. The 
downsides are that the string will probably be ugly and unpleasant for 
debugging, and that there's no efficient way to ensure that the string 
generated won't be one that's *never* been observed by the program so far. 
You'd probably have to use UUIDs, which are ugly.

 Ah well, this is great stuff. Thanks again, Dave...

Glad to hear it!

Dave



Re: private name objects confusion

2011-07-28 Thread David Herman
 Yep. Sorry, editing snafu -- I'd started to call it a non-issue when it 
 occurred to me that proxy authors would still have to know not to string 
 coerce keys. No big deal -- proxy authors should know better than to rely on 
 es5 invariants.

Agreed.

 Throw at the point where a unique name from a getOPN array is coerced to a 
 string. AFAICT this will always indicate a logic bug. One way to enforce this 
 would be to have getOPN (and enumerate?) hand out public objects with special 
 toStrings that throw. Meh. That's why I said there's no sane way :)

Yikes. :)

 
 Another alternative for unique-but-public names is simply to have a unique 
 string generator. It trivially maintains the invariants of getOPN and 
 for...in while still having the same user experience for getting and setting. 
 The downsides are that the string will probably be ugly and unpleasant for 
 debugging, and that there's no efficient way to ensure that the string 
 generated won't be one that's *never* been observed by the program so far. 
 You'd probably have to use UUIDs, which are ugly.
 
 Yes, this is precisely what I meant when I said a unique string could be 
 used. But again, I wonder if worth it. If a producer of unique name objects 
 can provide the internal string it's trivial to generate UUIDs (or any other 
 type of unique string) should they so choose.

Right, this is already something you can express in the language. I sense YAGNI.

 But from a spec standpoint is it a good idea to actively encourage the 
 toStringing of getOPN keys?

What I meant was that we could not provide the visibility flag -- i.e., not 
have a notion of unique names at all -- but simply provide a makeUUID() 
function that produces a string which can be used directly as a property name.

 A bigger problem: what happens when you have two unique name objects where 
 both have a foo internal string and you toString the result of getOPN? Bad 
 things, right?

Right. This issue is why I suggested using UUID strings instead. It wouldn't be 
possible to have two properties with different unique names that produce the 
same UUID result of toString().

 Another alternative would be to explicitly disallow custom internal strings 
 for unique name objects, and give them a consistent toString behavior (e.g. 
 always generate unique strings). But this smells too, partly for the reasons 
 you pointed out, and also because the semantics of the two name types start 
 to diverge.

Agreed.

 So I wonder: what does this particular reflection buy you that you can't just 
 as easily attain by explicitly exposing your visible private name objects?

The only thing is that you can't introspect on them conveniently. If you have 
access to a private name, it'd be nice to have a simple way of saying "give me 
all of the names, including the private names I know about". But I don't see a 
simple way of doing that other than something like a variant of getOPN:

Object.getOwnPropertyNames(o, p) : function(Object, WeakMap<name, any>?) -> [string | name]

This version would produce an array of all property names including any private 
names that are in the given table. (This is technically a conservative 
extension of the existing ES5 getOPN but it could alternatively be provided as 
a different function.)

Again, this would be implementable without providing it as a core API, so I'm 
not sure if it's worth it. Standardizing on this use of a map seems like it 
might be premature; you might want any number of different ways of representing 
"these are the private names I know about".

Dave



Re: Clean scope

2011-08-17 Thread David Herman
 Mozilla has evalInSandbox built-ins. We've talked about them, but no one has 
 produced a strawman based on this work. The module loader API:
 
 http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders
 
 provides enough functionality.

In fact, I think sandbox is a pretty good intuition for what a loader is. And 
it makes sense to think of loader.eval(str) as an OO version of evalInSandbox. 
But it should give you much more fine-grained control over exactly how you want 
to set up the sandbox than just the global scope.

Dave



Re: Object.{getPropertyDescriptor,getPropertyNames}

2011-09-02 Thread David Herman
Duly chastised. :)

Dave

On Sep 2, 2011, at 3:39 PM, Brendan Eich wrote:

 Already in harmony: namespace on the wiki:
 
 http://wiki.ecmascript.org/doku.php?id=harmony:extended_object_apis&s=object+getpropertydescriptor
 
 (note the s=... query part shows I found this by searching -- worked!)
 
 /be
 
 On Sep 2, 2011, at 3:33 PM, David Herman wrote:
 
 Object.getPropertyDescriptor
 



Re: Sep 27 meeting notes

2011-09-30 Thread David Herman
I pretty much agree with Axel's goals listed here. But I don't think Mark or 
Waldemar do. As Erik says, this seems to be the biggest sticking point.

As for IDEs, I'm with Allen: we don't need to bend over backwards. The worst 
offender I've seen was the design that involved uninitialized property 
declarations in the class body, only to assign the properties in the 
constructor body anyway. This just seems like extra make-work for no particular 
gain; I'd rather let the IDE do the work of inferring the properties from the 
body. (If you want to document type information about the properties in 
comments or something, put them with the class declaration or the constructor 
declaration.)

But when Waldemar said:

 This seems like a fundamental conflict with classes as sugar unless we 
 take the Object.defineProperty semantics as the salty long-hand to sugar.
 
 Without 2, 4, and 5, object initializers are close enough to make having an 
 extra class facility not carry its weight.

I disagree. The super patterns are really painful and easy to get wrong in 
existing JS, and the benefits of combining a prototype declaration and 
constructor declaration in a single form shouldn't be dismissed. It's 
noticeably more readable and it codifies and standardizes a common pattern.
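
For reference, the kind of ES5 boilerplate the class form would replace (ordinary ES5, shown only to illustrate the pain points):

function Base(x) { this.x = x; }
Base.prototype.describe = function () { return "x = " + this.x; };

function Derived(x, y) {
    Base.call(this, x);                                  // "super constructor" call by hand
    this.y = y;
}
Derived.prototype = Object.create(Base.prototype);       // prototype wiring by hand
Derived.prototype.constructor = Derived;
Derived.prototype.describe = function () {
    return Base.prototype.describe.call(this) + ", y = " + this.y;   // "super method" call by hand
};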

Dave

On Sep 30, 2011, at 2:49 PM, Axel Rauschmayer wrote:

 From: Waldemar Horwat walde...@google.com
 Subject: Re: Sep 27 meeting notes
 Date: September 30, 2011 23:17:04 GMT+02:00
 To: Brendan Eich bren...@mozilla.com
 Cc: es-discuss es-discuss@mozilla.org, Erik Arvidsson 
 erik.arvids...@gmail.com
 
 Without 2, 4, and 5, object initializers are close enough to make having an 
 extra class facility not carry its weight.
 
 
 Can you show code that backs up that assertion? (I’m curious, not dismissive.)
 
 Wasn’t it David Herman a while ago who listed a minimal feature list? For me 
 it would be:
 
 1. Super property references (mainly methods)
 2. Super constructor references
 3. Subclassing (mainly wiring the prototypes)
 4. Defining a class as compactly as possible (with subclassing, it is painful 
 that one has to assemble so many pieces).
 5. Having a standard construct that fosters IDE support. Currently there are 
 too many inheritance APIs out there, making IDE support nearly impossible.
 6. A platform on which to build future extensions (traits!).
 
 Allen’s object literal extensions give us #1 and #2. His prototype operator 
 gives us #3. #4 can be done via Allen’s pattern or by introducing the methods
 - Function.prototype.withPrototypeProperties(props)
 - Function.prototype.withClassProperties(props)
 
 I’m not sure about #5 (I’d consider class literals a plus here). #6 can be 
 postponed if we can get 1-5 by other means, but there will be a price to pay 
 if two competing ways of defining classes have to be used in ES.next.next.
 
 -- 
 Dr. Axel Rauschmayer
 
 a...@rauschma.de
 twitter.com/rauschma
 
 home: rauschma.de
 blog: 2ality.com
 
 
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Enums?

2011-10-03 Thread David Herman
A couple reactions:

- strings are already interned in current engines for symbol-like performance; 
there's no need to introduce symbols into the language

- private names are overkill for most uses of enums; just use string literals

- in SpiderMonkey I think you get better performance if your switch cases use 
known constants; for example:

const RED = "red", GREEN = "green", BLUE = "blue";
...
switch (color) {
  case RED: ...
  case GREEN: ...
  case BLUE: ...
}

- with modules, you would be able to define these consts and share them 
modularly (currently in SpiderMonkey the only way to share these definitions 
across modules as consts is either to make them global or to share an eval-able 
string that each module can locally eval as a const declaration -- blech)
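
For example, something along these lines (sketched with the in-progress modules 
syntax, so the details are only illustrative):

module Colors {
    export const RED = "red", GREEN = "green", BLUE = "blue";
}

// in another module:
import { RED, GREEN, BLUE } from Colors;
switch (color) {
  case RED: ...
  case GREEN: ...
  case BLUE: ...
}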

Dave

On Sep 30, 2011, at 7:13 PM, Axel Rauschmayer wrote:

 One language feature from Java that I miss is enums. Would it make 
 sense to have something similar for ECMAScript, e.g. via 
 Lisp-style/Smalltalk-style symbols plus type inference? If yes, has this been 
 discussed already? I feel strange when I simulate symbols with strings.
 
 -- 
 Dr. Axel Rauschmayer
 
 a...@rauschma.de
 twitter.com/rauschma
 
 home: rauschma.de
 blog: 2ality.com
 
 
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Bug: String.prototype.endsWith

2011-10-07 Thread David Herman
Fixed, thanks.

Dave, digging his way out of a massive backlog...

On Sep 23, 2011, at 12:18 PM, Axel Rauschmayer wrote:

 http://wiki.ecmascript.org/doku.php?id=harmony:string_extras
 I’ve found a small bug:
 
 String.prototype.endsWith = function(s) {
 var t = String(s);
 return this.lastIndexOf(t) === this.length - t.length;
 };
 Interaction:
 > "".endsWith("/")
 true
 > "#".endsWith("//")
 true
 > "##".endsWith("///")
 true
 
 Fix (e.g.):
 String.prototype.endsWith = function(s) {
     var t = String(s);
     var index = this.lastIndexOf(t);
     return index >= 0 && index === this.length - t.length;
 };
 
 
 -- 
 Dr. Axel Rauschmayer
 
 a...@rauschma.de
 twitter.com/rauschma
 
 home: rauschma.de
 blog: 2ality.com
 
 
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Grawlix

2011-10-07 Thread David Herman
On this particular issue, I'm inclined to agree -- I think we should be 
extremely sparing with how many new sigils, if any, we introduce into the 
language. You'll notice Brendan has repeatedly said similar things about | and 
.{ for example. Syntax matters.

But I feel like now might be a good time for a reminder (probably belongs in a 
FAQ!):

Design is a process. In the beginning, you consider lots of needs and use cases 
and you generate lots of ideas. As you go along, you iterate on these ideas to 
try to come up with the simplest, most parsimonious, most general-purpose, and 
most composable core elements you can find that address as many of these needs 
as possible. Later in the process, you start prioritizing, culling, and 
polishing.

Generally speaking, during this process, new needs and new issues arise. So 
typically this process is actually happening in many parallel strands, each of 
which is at a different phase of the design process. Of course, design is a 
holistic thing, so they often affect each other, sometimes forcing one thread 
that appeared to be stabilizing back into an earlier phase to iterate anew.

TC39 does our design work out in the open. That means everyone gets to see and 
participate in all parts of this process. The most common misunderstanding that 
arises is that TC39 is on the brink of standardizing on every single idea that 
has been considered. However, this has never been and will not be (at least as 
long as I'm part of this process) the case. But you can't short-cut the 
process. You can't pick the winners until you've let all the ideas run their 
course.

Dave

On Oct 6, 2011, at 3:56 PM, John J Barton wrote:

 JavaScript's original C-like syntax used symbols for limited purposes. 
 Consequently developers familiar with, for example C and Java, could read 
 most of the code and concentrate on the new things. This greatly lowered the 
 barrier to entry. While theoretically there is nothing magical about the 
 C-like syntax, practically it really helps. (Famous exceptions include of 
 course this, (looks like Java; isn't like Java), and highly nested function 
 definitions marching ever rightward).
 
 Recent syntax discussions head in a completely different direction, 
 introducing a seemingly large number of new symbols resulting in code that 
 isn't readable by current JS, Java, or C devs. Instead of JavaScript they 
 will be attempting to read GrawlixScript. I'm skeptical that this direction 
 will be welcomed by developers.
 
 jjb 
 
 On Thu, Oct 6, 2011 at 12:19 PM, Douglas Crockford doug...@crockford.com 
 wrote:
 On 11:59 AM, John J Barton wrote:
 GrawlixScript is the connection I guess.
 No, grawlix is a term of art that can be used to describe some the literal 
 syntax proposals.
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: {Weak|}{Map|Set}

2011-10-07 Thread David Herman
I mostly have a similar approach in mind for tail calls. Precise about the 
interface, imprecise/informative about the implementation requirements. For 
WeakMaps, that means a well-defined API with informal English describing the 
expectations about memory consumption. For tail calls, it means a well-defined 
spec for what is considered a tail call with, again, informal English 
describing the expectations about memory consumption.

Dave

On Sep 16, 2011, at 3:36 PM, Mark S. Miller wrote:

 On Fri, Sep 16, 2011 at 3:13 PM, Allen Wirfs-Brock al...@wirfs-brock.com 
 wrote:
 
 I'm not sure exactly how we are going to specify tail calls.  I know that 
 Dave Herman has ideas that I assume we will build upon .
 
 For weak maps I think that a non-normative note that make explicit the 
 doesn't leak expectation and points implementors towards an ephemeron based 
 implementation will suffice. 
 
 +1. At least until we see how Dave proposes specing tail calls to see if he 
 has any ideas we might adapt.
 
 -- 
 Cheers,
 --MarkM
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: holes in spread elements/arguments

2011-10-07 Thread David Herman
I don't think we can get away with repurposing _ as a pattern sigil, since it's 
already a valid identifier and used by popular libraries:

http://documentcloud.github.com/underscore/

In my strawman for pattern matching, I used * as the don't-care pattern:

http://wiki.ecmascript.org/doku.php?id=strawman:pattern_matching
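
Adapting the examples above to that strawman's wildcard would look like this 
(syntax illustrative, of course):

function f([*, *, *, x]) { ... }
function f(*, *, *, x) { ... }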

Dave

On Oct 6, 2011, at 2:04 AM, Andreas Rossberg wrote:

 On 5 October 2011 21:19, Sean Eagan seaneag...@gmail.com wrote:
 However, I don't see why variable declaration array destructuring
 binding and parameter lists should be different in any way.  The only
 current syntactic difference between them is elision:
 
 // allowed
 function f([,,x]){}
 // disallowed
 function f(,,x){}
 
 Only apropos of semantics, but I really don't like this syntax at all. It is 
 far, far too easy to overlook a hole. I think we should forbid this syntax in 
 Harmony.
  
 If we want to support holes in patterns -- and I'm all for it! -- then we 
 should do what all other languages with proper pattern matching do and 
 introduce explicit syntax for wildcards, namely _. That simplifies both 
 syntax and semantics (because it's more compositional) and increases 
 readability:
 
 function f([_, _, _, x]){}
 function f(_, _, _, x){}
 
 This has been suggested before, but I want to reinforce the point.
 
 (I'm far less convinced about allowing holes in expressions, but an argument 
 could be made that _ is simply syntax for undefined in expressions. No more 
 writing void 0.)
 
 /Andreas
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Harmony transpilers

2011-10-11 Thread David Herman
I have some thoughts about how to use Narcissus as a basis for a compiler to 
ES3 as well. It's obviously not necessary to do separately from Traceur, but it 
might be interesting to experiment with alternative implementation strategies. 
I haven't really done anything in earnest yet, including looking through 
Traceur's source, but during my vacation I played with an implementation 
strategy for generators that was pretty neat.

Dave

On Oct 11, 2011, at 6:41 AM, Juan Ignacio Dopazo wrote:

 Hi! Is there anyone working on a Harmony transpiler besides Traceur? It'd 
 be really useful to have a transpiler that justs desugars (what's possible to 
 desugar) without using a library like Closure, the way CoffeeScript is 
 working nowadays.
 
 Thanks,
 
 
 Juan
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Harmony transpilers

2011-10-11 Thread David Herman
There is a concept of expressiveness in programming languages that has to do 
with the notion of whether a feature is just sugar. A lightweight feature is 
one that can be described as a simple, local rewrite into some other feature. 
For example, the proposed method shorthand syntax:

{ f(x) { return x + 1 } }

can easily be translated to

{ f: function(x) { return x + 1 } }

By local rewrite I mean you can replace the AST node MethodInit(name, args, 
body) into an equivalent node PropertyInit(name, FunctionLiteral(args, body))) 
without modifying name, args, or body, and the program will behave identically.
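
As a sketch, such a rewrite over an AST is purely local (the node shapes here 
are made up, just to show the idea):

function desugarMethodInit(node) {
    if (node.type !== "MethodInit")
        return node;
    return {
        type: "PropertyInit",
        name: node.name,
        value: { type: "FunctionLiteral", args: node.args, body: node.body }
    };
}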

Intuitively, sugar doesn't increase the expressiveness of a language; it just 
makes for nice little syntactic abstractions for common patterns. This doesn't 
really change the *power* of the language because there's nothing you can't do 
with the new feature that you couldn't already do with only minor revisions to 
the code. Conversely, a feature that *can't* be implemented exclusively with 
local rewrites is one that can potentially replace massive amounts of 
boilerplate. This is a feature that can fundamentally increase the 
expressiveness of the language.

CoffeeScript is, by design, a language that is designed to have the same 
expressiveness as JavaScript -- all its features are defined as syntactic 
sugar. Harmony is, by design, a language that is designed to have *more* 
expressiveness than ES5 -- it contains some features that can't be defined as 
syntactic sugar (in addition to some that can).

But there are costs to adding expressiveness, such as the need to adapt tools 
like debuggers, as you mention, and the risk of features that are subject to 
abuse or unwieldy to program with. That's why we have been very conservative 
about only introducing a few such features, using concepts that are 
well-studied from other languages, and even then limiting their expressiveness. 
For example, generators are strictly less expressive than full coroutines or 
continuations. How so? You can expressive the former as sugar for the latter, 
but not vice versa.

Dave


On Oct 11, 2011, at 8:00 PM, John J Barton wrote:

 
 
 On Tue, Oct 11, 2011 at 6:17 PM, David Herman dher...@mozilla.com wrote:
  Traceur is very good! I'd just like to have something that compiles to ES5 
  without intermediate libraries, the way CoffeeScript works, so that it's 
  easier to debug and use in NodeJS.
 
 You aren't going to be able to match CoffeeScript's readability for many 
 features, especially generators, private names, and proxies. Those require 
 deep, whole-program compilation strategies.
 
 I'm unclear, but are you saying: some features translate directly to simple 
 JS but some features are more pervasive so their translation will not be as 
 readable? So we need to develop new strategies for debugging these features? 
 Or something else?
 
 jjb
 
  
 
 Dave
 
 

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


proxies: stratifying toString, valueOf

2011-10-16 Thread David Herman
If you want to create a clean-slate proxy object -- for example, a dictionary 
-- then you can't predefine toString or valueOf. But this means your object 
will always fail at the semantic operations [[ToString]] and [[ToPrimitive]]. 
For example:

> var obj = Proxy.create(myEmptyHandler, proto);
> String(obj)
TypeError: can't convert obj to string
> obj + ""
TypeError: can't convert obj to primitive type

If you actually instrument the proxy to watch which operations it's trying, 
you'll see:

> var obj = Proxy.create(myEmptyHandler, proto);
> String(obj)
trying: toString
trying: valueOf
TypeError: can't convert obj to string
> obj + ""
trying: valueOf
trying: toString
TypeError: can't convert obj to primitive type
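
(For reference, the instrumented handler can be as simple as the following 
sketch, written against the current Proxy.create API; the trap bodies are 
illustrative:)

var myEmptyHandler = {
    getPropertyDescriptor: function(name) { return undefined; },  // "no such property"
    get: function(receiver, name) {
        print("trying: " + name);   // logs the toString/valueOf probes
        return undefined;           // the dictionary is empty
    }
};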

Should we not offer derived traps for toString and valueOf (which, if not 
defined, default to looking up the toString and valueOf properties, 
respectively, on the receiver), to allow for stratified implementation of this 
reflective behavior, i.e., without polluting the object's properties?

Dave

PS SpiderMonkey also has the unstratified Object.prototype.toSource, which is 
used for the non-standard uneval function, for printing out values at the 
console. This is kind of unfortunate since it suggests a need for another proxy 
trap, but this time it's not for a standard functionality.

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: proxies: stratifying toString, valueOf

2011-10-16 Thread David Herman
Ugh, that formatted poorly, at least in my mail client. Here's the example 
again:

js> var obj = Proxy.create(myEmptyHandler, proto);
js> String(obj)
trying: toString
trying: valueOf
TypeError: can't convert obj to string
js> obj + ""
trying: valueOf
trying: toString
TypeError: can't convert obj to primitive type

Dave

On Oct 16, 2011, at 1:46 PM, David Herman wrote:

 If you want to create a clean-slate proxy object -- for example, a dictionary 
 -- then you can't predefine toString or valueOf. But this means your object 
 will always fail at the semantic operations [[ToString]] and [[ToPrimitive]]. 
 For example:
 
 var obj = Proxy.create(myEmptyHandler, proto);
 String(obj)
TypeError: can't convert obj to string
 obj + ""
TypeError: can't convert obj to primitive type
 
 If you actually instrument the proxy to watch which operations it's trying, 
 you'll see:
 
 var obj = Proxy.create(myEmptyHandler, proto);
 String(obj)
trying: toString
trying: valueOf
TypeError: can't convert obj to string
 obj + ""
trying: valueOf
trying: toString
TypeError: can't convert obj to primitive type
 
 Should we not offer derived traps for toString and valueOf (which, if not 
 defined, default to looking up the toString and valueOf properties, 
 respectively, on the receiver), to allow for stratified implementation of 
 this reflective behavior, i.e., without polluting the object's properties?
 
 Dave
 
 PS SpiderMonkey also has the unstratified Object.prototype.toSource, which is 
 used for the non-standard uneval function, for printing out values at the 
 console. This is kind of unfortunate since it suggests a need for another 
 proxy trap, but this time it's not for a standard functionality.
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


proxies: receiver argument and object maps

2011-10-16 Thread David Herman
Forgive me that I've not kept track of where we are in the discussion about the 
additional receiver argument.

I think I just found a pretty important use case for the receiver argument. Say 
you want to keep some information about a proxy object in a Map or a WeakMap, 
and you want the handler to be able to access that information. Then you're 
going to need the proxy object to do it.

I suppose you can close over the proxy value:

var proxy;
var handler = { ... proxy ... };
proxy = Proxy.create(handler);

But then you have to make a fresh handler for each instance.
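
Spelled out, the pattern looks something like this (a sketch; the metadata map 
is illustrative):

var metadata = new WeakMap();

function makeInstrumentedProxy() {
    var proxy;
    var handler = {
        get: function(receiver, name) {
            var info = metadata.get(proxy);   // needs the proxy's identity
            info.reads++;
            return undefined;
        }
    };
    proxy = Proxy.create(handler);
    metadata.set(proxy, { reads: 0 });
    return proxy;
}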

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: proxies: receiver argument and object maps

2011-10-16 Thread David Herman
D'oh -- of course, you're right. The use case I'm describing wants the proxy, 
not the receiver.

Thanks,
Dave

On Oct 16, 2011, at 2:44 PM, David Bruant wrote:

 Le 16/10/2011 23:02, David Herman a écrit :
 Forgive me that I've not kept track of where we are in the discussion about 
 the additional receiver argument.
 
 I think I just found a pretty important use case for the receiver argument. 
 Say you want to keep some information about a proxy object in a Map or a 
 WeakMap, and you want the handler to be able to access that information. 
 Then you're going to need the proxy object to do it.
 
 I suppose you can close over the proxy value:
 
var proxy;
var handler = { ... proxy ... };
proxy = Proxy.create(handler);
 
 But then you have to make a fresh handler for each instance.
 There are 2 different things:
 1) the receiver object.
 This one may only be useful in case of inheritance:
 -
 var p = Proxy.create(someHandler);
 
 var o1 = Object.create(p);
 var o2 = Object.create(p);
 
 o1.a;
 o2.a;
 -
 here, both o1 and o2 delegates their get to the proxy... so we
 thought. Sean Eagan started a thread which conclusion was based on the
 semantics of [[Get]], you never call the [[Get]] of the prototype, but
 rather its [[GetProperty]] internal. Consequently, you never need the
 receiver. Not even for the tricky case of getter/setter binding. See
 http://wiki.ecmascript.org/doku.php?id=strawman:proxy_drop_receiver
 
 But in my "event as property" experiment, I have found that I actually
 need the receiver object somewhere to properly implement
 inherited events. I'll post on es-discuss as soon as I'm done to make my
 case and argue back in favor of receiver.
 
 
 2) the proxy object
 It seems to be what you're describing
 Several arguments and experiments have been made proving that proxy as
 an argument was necessary for all traps. See
 http://wiki.ecmascript.org/doku.php?id=strawman:handler_access_to_proxy
 
 
 I think we're waiting for the November TC39 meeting for decisions to be
 made regarding proxies.
 
 David

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: proxies: receiver argument and object maps

2011-10-16 Thread David Herman
Yeah, actually, that's roughly what I just ended up doing. I was just playing 
with the idea of creating pollution-free dictionaries using proxies, and came 
up with this:

https://github.com/dherman/dictjs

The biggest issue, though, is the toString/valueOf one I describe in the other 
message I sent this afternoon.

(Well, that and performance, which quite possibly sucks. So this may not be a 
viable idea. It was an interesting experiment, anyway.)

Dave

On Oct 16, 2011, at 2:49 PM, David Bruant wrote:

 Le 16/10/2011 23:02, David Herman a écrit :
 Forgive me that I've not kept track of where we are in the discussion about 
 the additional receiver argument.
 
 I think I just found a pretty important use case for the receiver argument. 
 Say you want to keep some information about a proxy object in a Map or a 
 WeakMap, and you want the handler to be able to access that information. 
 Then you're going to need the proxy object to do it.
 
 I suppose you can close over the proxy value:
 
var proxy;
var handler = { ... proxy ... };
proxy = Proxy.create(handler);
 
 But then you have to make a fresh handler for each instance.
 Also, a temporary solution to have access to the proxy without is being
 an argument is to put it as a property of the handler. Example:
 https://github.com/DavidBruant/HarmonyProxyLab/blob/master/LazyReadCopy/LazyReadCopy.js#L84
 
 Actually, an object is created with the handler as prototype. this
 object gets an own proxy property and this is the object that is used
 as handler. I think that Tom gets credit for this idea.
 
 David

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: proxies: stratifying toString, valueOf

2011-10-16 Thread David Herman
 If you want to stratify toString/valueOf in general and for all objects, I 
 would very much support that.

I'm not sure I understand what you mean. Do you mean something like:

js> var obj = Object.create(null, {});
js> String(obj)
TypeError: can't convert obj to string
js> Object.setMetaToStringBehavior(obj, function() { return "hello world" });
js> String(obj)
hello world

for normal objects?

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: proxies: stratifying toString, valueOf

2011-10-16 Thread David Herman
 I agree with Andreas. The implicitly-called base level methods are not 
 meta-methods or (spec language) internal methods. They do not need their 
 own traps. They are base-level property accesses.

Well, certainly that's the way the language currently works. But the way it 
currently works is problematic, because you can't define an object that is 
convertible to string without using the toString name.

 Allowing proxies to trap implicit calls to base-level methods but not 
 explicit calls seems weird. If a Dict should not pollute its pseudo-property 
 namespace with 'toString' and 'valueOf', then it can still delegate those to 
 standard methods on its prototype chain.

This doesn't work. It's a pigeonhole problem. If you allow toString to be 
inherited from the prototype, then it pollutes dictionary lookup. If you don't, 
then string conversion fails. It's not possible to have both.

 And if someone does enter 'toString' or 'valueOf' into a Dict, let the chips 
 fall where they may.

Well of course someone should be allowed to enter 'toString' into a Dict. But 
why shouldn't it be possible to define robust string conversion for an object 
without having to pollute the object's namespace?

 Anything else is more like a new type (typeof type, data type) that's not an 
 object.

I disagree. If you inherit from Object.prototype, then yes, 'toString' has a 
reserved meaning and a prescribed behavior. But since ES5 it's possible to 
create an object that does not inherit from Object.prototype (via 
Object.create), and I see no reason why we should insist on pre-reserving 
meaning for *any* properties on such an object. If you define your own 
prototype hierarchy root, you should be able to give whatever meaning you want 
to whatever names, or even no meaning to any names whatsoever.

 That was proposed separately:
 
 wiki.ecmascript.org/doku.php?id=strawman:dicts

Yup, I remember writing it. ;-) But ES5 pretty much already gives you the 
ability to define such an object, with the exception of the result of typeof 
and the fact that [[ToString]] and [[ToPrimitive]] break. What I'm proposing is 
that we clean up these places in the semantics that currently look for concrete 
names in favor of something stratified.

One possibility might be to create some private names, à la |iterate|, which 
you could use in place to 'toString' and 'valueOf':

js> import { toString, valueOf } from "@meta";
js> var obj = Object.create(null, {});
js> String(obj)
TypeError: cannot convert obj to string
js> obj[toString] = function() { return "hello world" };
js> String(obj)
hello world

This isn't exactly stratified like proxies, but it at least leaves room to 
unreserve the meaning of 'toString' and 'valueOf'.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Direct proxies strawman

2011-10-17 Thread David Herman
Hi Tom, this looks very promising. Some comments below; quoting the wiki page 
inline.

 * target is the object which the direct proxy wraps

Just checking: presumably this proposal doesn't allow for target to be a 
primitive, right? (Other than the special case of null you mention later.) 
I.e., this is still just a spec for virtualized objects, not for virtualized 
primitives.

 Unlike in the original proxy proposal, any non-configurable properties 
 reported by the handler of a direct proxy are also stored on the target 
 object. In doing so, the proxy ensures that the target object always 
 “matches” the proxy as far as non-configurability of properties is concerned.

I'm confused about this. Do you mean that, if the proxy provides a property 
descriptor that said .foo is non-configurable, the semantics automatically adds 
the property to the target and from that point on, all operations involving 
.foo go through the target instead of through the handler? Does this let you 
faithfully virtualize the .length property of arrays? How? You still want to 
intercept gets and sets, but you want getOwnPropertyDescriptor to say that it's 
a data property, not an accessor property.

 When a direct proxy is made non-extensible, so is its target. Once a direct 
 proxy is non-extensible, all properties reported by the handler are stored on 
 the target, regardless of their configurability. This ensures that the 
 handler cannot report any “new” properties, since the target object will now 
 reject any attempt to add new properties.

I'm still confused about when operations go through the handler and when they 
go through the target. If they can still go through the handler after making 
the proxy non-extensible, then what's to stop the handler from making it look 
like new properties are appearing?

 A direct proxy may acquire some of its internal properties from its target 
 object. This includes the [[Class]] and [[Prototype]] internal properties:

This is awesome.

 * typeof aProxy is equal to typeof target.


To make sure I'm following along correctly: the typeof result can only be 
object or function, right?

 We could even allow for direct proxies to acquire non-standard internal 
 properties from their target object. This could be a useful principle when 
 wrapping host objects.

This seems important in order to make host methods work, e.g., the ones that 
access the [[Value]] property. I guess you could code around it by proxying 
those methods as well?

 For Direct Proxies, rather than adopting the handler_access_to_proxy strawman 
 that adds the proxy itself as an additional argument to all traps, we propose 
 to instead add the target as an additional, last, argument to all traps. That 
 allows the handler to interact with the target that it implicitly wraps.

You still might want access to the identity of the proxy itself, e.g., to store 
the object in a WeakMap. But as you guys have pointed out, you can store this 
in the handler object, which can still inherit its traps from prototype 
methods. So I guess this isn't critical.

 The protect trap no longer needs to return a property descriptor map...

This seems like a big deal to me. The property descriptor map could potentially 
be quite large.

 Proxy.stopTrapping()

This one makes me a little queasy. I'm sure you guys already thought of and 
dismissed the possibility of having Proxy.for(...) return a pair of a proxy and 
a stopTrapping() thunk that's tied to the one proxy. That's obviously got 
wretched ergonomics. But I'm not crazy about the idea of drive-by 
deproxification. Just my initial reaction, anyway.

 Both the call and new trap are optional and default to forwarding the 
 operation to the wrapped target function.

Nicely done! Much cleaner than Proxy.create and Proxy.createFunction.

 Proxy.startTrapping() (a.k.a. Proxy.attach)

I don't fully understand how this one works. Is it essentially a Smalltalk 
become, in the sense that the existing object is turned into a proxy, and its 
guts (aka brain) now become a different object that the proxy uses as its 
target?

So, this has obvious appeal; for example, it addresses the data binding use 
cases.

But I have some serious reservations about it. For one, tying the notion of 
becomeability to extensibility seems sub-optimal. I'm not sure you always 
want an object to be non-extensible when you want it to be non-becomeable. And 
a serious practical issue is whether host objects could be becomeable. I'm 
pretty sure that's going to be a serious problem for implementations.

 It’s still as easy to create such “virtual” proxies: just pass a fresh empty 
 object (or perhaps even null?)

Please, make it null. So much more pleasant (and avoids needless allocation).

(The only downside of allowing null to mean no target would be if you wanted 
to future-proof for virtualizable primitives, including a virtualizable null.)

   Proxy.create = function(handler, proto) {
 return 

Re: decoupling [ ] and property access for collections

2011-10-17 Thread David Herman
 (Dave Herman has another way to say this: [ ]  and . can be viewed as 
 operating on two separate property name spaces, but for legacy/normal ES 
 objects those namespaces are collapsed into a single shared namespace.)

Lest the above be construed as a tacit approval on my part... ;)

 IMHO the single property name space of es-current is a feature, not a bug.

I tend to agree. There are expressibility problems, too. For example, if you 
have an object that uses . for its API and [] for some other data, then what do 
you do when you want to dynamically compute an API name? I would hope not

eval("obj." + computeName())

But I don't see any obvious ways out of this that aren't pretty convoluted.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Rationale for dicts?

2011-10-17 Thread David Herman
 I do not yet fully understand the rationale behind dicts.

Nothing fancy, really. Just an idiomatic way to create a non-polluted 
string-to-value map. In ES5 you can use Object.create(null), which is not bad 
but still kind of a hack. Wouldn't it be nice to have sweet literal syntax for 
a dictionary that you know is not going to pull in any of the complexities of 
the object semantics?
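
For comparison, the ES5 workaround looks like this (no new syntax, but you have 
to know the trick):

var dirty = {};
"toString" in dirty;              // true -- inherited from Object.prototype
var clean = Object.create(null);
"toString" in clean;              // false -- nothing inherited
clean["anything"] = 1;            // just an ordinary entry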

My main issues with the proposal at this point are 1) the cost of new typeof 
types, and 2) the syntax doesn't work for the case of empty dictionaries.

 - Why does it need to use the same mechanism for looking up keys as objects? 
 Couldn’t methods be introduced for this? Then a dict could have other 
 methods, too (even if they come from a wrapper type). For example, a size() 
 method.

The point was for it not to be an object. A much flatter kind of data 
structure. You could think of it as formalizing the internal concept of an 
object's own-properties table.

 - Why not allow any kind of key? Why restrict keys to strings? That seems 
 arbitrary.

As I say, what inspired me to explore the idea was exposing the internal 
concept of an own-property table as a first-class data structure. It's just a 
simpler primitive to work with than an object.

We might want to allow name objects as keys too, to continue tracking the 
notion that a dictionary is a mapping from property names to values.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: decoupling [ ] and property access for collections

2011-10-17 Thread David Herman
I suspect it's not nearly so rare as you think. For example, it just showed up 
in Tom and Mark's new proxy proposal:

 The protect trap is called on Object.{freeze,seal,preventExtensions}(aProxy). 
 The operation argument is a string identifying the corresponding operation 
 (“freeze”, “seal”, “preventExtensions”). This makes it easy for a handler to 
 forward this operation, by e.g. performing Object[operation](obj).


A simple way to abstract out "do this concrete operation" to "do one of the 
following set of possible concrete operations" is to pass the name as a string. 
Yes, there's a certain aesthetic that says that's icky, but JS makes it so 
convenient that it's the obvious thing to do.
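
Concretely, the forwarding idiom they describe is something like this (a 
sketch; the exact trap signature in the strawman may differ):

var handler = {
    protect: function(operation, target) {
        // operation is "freeze", "seal", or "preventExtensions"
        Object[operation](target);
        return true;
    }
};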

Dave

On Oct 17, 2011, at 4:15 PM, Allen Wirfs-Brock wrote:

 
 On Oct 17, 2011, at 3:34 PM, David Herman wrote:
 
 
 IMHO the single property name space of es-current is a feature, not a bug.
 
 I tend to agree. There are expressibility problems, too. For example, if you 
 have an object that uses . for its API and [] for some other data, then what 
 do you do when you want to dynamically compute an API name?
 
 In most languages, this would fall into the realm of the reflection API.
 
 What is the actual frequency of such driven API member selection.  If it is 
 high (particularly, higher than the utility of good collections) then we may be 
 exposing other problems we need to look at more closely.
 
 
 I would hope not
 
 eval("obj." + computeName())
 
 But I don't see any obvious ways out of this that aren't pretty convoluted.
 
 
 I'll give you four:
 
 1) Object.getOwnPropertyDescriptor(obj,name).value
 etc.
 
 2) two new reflection functions:
  Object.getProperty(obj,name)
  Object.setProperty(obj,name,value)
 
 3) build upon the possible alternative private name property syntax
  let foo='foo';
  obj@foo
 or perhaps
  obj@('foo')
 
 4)  a switch statement:
switch (computedAPIName) {
   case 'property1':
  obj.property1(/*args */);
  break;
   case 'property2':
  obj.property2(/*args */);
  break;
   case 'property3':
  // etc.
 }
 
 All of these start from the perspective that this sort of reflective API 
 access should be quite rare.  
 
 Allen
 
 
 
 
 

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Direct proxies strawman

2011-10-18 Thread David Herman
There are other alternatives, such as supporting both alternatives with two 
different entry points (con: API proliferation), taking an optional boolean 
flag indicating to return the pair (con: too dynamic a type), taking an 
optional outparam object (con: what is this? C?). OK, so most of those 
suggestions suck. :) But there are bigger questions to settle before we need to 
settle this one.

Dave

On Oct 18, 2011, at 10:54 AM, Mark S. Miller wrote:

 On Tue, Oct 18, 2011 at 9:51 AM, Andreas Rossberg rossb...@google.com wrote:
  Good point. Yet another reason why I prefer the alternate Proxy.temporaryFor
  API I sketched in reply to Dave Herman. That API does not necessarily suffer
  from this issue.
 
 Yes, I think that interface, while less slick, is the right one.
 
 Interesting. Naming aside, I also like the Proxy.temporaryFor API better. But 
 when Tom raised it, I argued against it for the reason he mentions: I thought 
 it would run into more resistance. If no one feels strongly against 
 Proxy.temporaryFor, I was wrong to anticipate trouble and we should do that 
 instead.
 
  
 -- 
 Cheers,
 --MarkM
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Direct proxies strawman

2011-10-18 Thread David Herman
 
 We could even allow for direct proxies to acquire non-standard internal 
 properties from their target object. This could be a useful principle when 
 wrapping host objects.
 
 This seems important in order to make host methods work, e.g., the ones that 
 access the [[Value]] property. I guess you could code around it by proxying 
 those methods as well?
 
 What's the [[Value]] property? I'm not sure I understand.

Er, sorry, [[PrimitiveValue]]. (It was called [[Value]] in Edition 3.) Section 
8.6.2, Table 9.

 But I have some serious reservations about it. For one, tying the notion of 
 becomeability to extensibility seems sub-optimal. I'm not sure you always 
 want an object to be non-extensible when you want it to be non-becomeable. 
 And a serious practical issue is whether host objects could be becomeable. 
 I'm pretty sure that's going to be a serious problem for implementations.
 
 I agree in principle that attachability or becomeability is distinct from 
 extensibility. But from a usability POV, what's the alternative? To introduce 
 yet another bit, and to put the onus on defensive objects by requiring them 
 to do Object.freeze(Object.cantTrap(myObject))? To me, that seems worse than 
 tying non-extensibility to non-becomeability.

One alternative is not to include Proxy.attach. :)

But I need to think about this more; I'm not sure those are the only options. 
Maybe they are. You've had more time to think about this than I have. :)

 It’s still as easy to create such “virtual” proxies: just pass a fresh empty 
 object (or perhaps even null?)
 
 Please, make it null. So much more pleasant (and avoids needless allocation).
 
 (The only downside of allowing null to mean no target would be if you 
 wanted to future-proof for virtualizable primitives, including a 
 virtualizable null.)
 
 Avoiding needless allocation is why I proposed null, indeed. But there are 
 some unresolved issues here: if the target is null, how should the proxy 
 determine its typeof, [[Class]] and [[Prototype]]?

"object", "Object", and null. :)

 This needs more thought.

Sure, I can believe that. But as long as there are reasonable defaults for all 
these things, I would much prefer to allow null.

 IMHO, if we would buy into direct proxies, I see no need to continue 
 supporting Proxy.create{Function}.

Agreed.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Direct proxies strawman

2011-10-19 Thread David Herman
These are all good points. I'm not sure (1) is worth bringing back in all the 
"we won't let you say things you can't enforce" complexity, but (2) is maybe 
non-obvious enough not to be worth it. I'm backing off my "please make it null" 
position now. :) It actually seems pretty reasonable just to require an object, 
and this also leaves the door open to potentially expanding the API to allow 
primitive value proxies at some point down the road.

Dave

On Oct 19, 2011, at 3:30 AM, Tom Van Cutsem wrote:

 2011/10/19 David Bruant bruan...@gmail.com
 Le 19/10/2011 10:57, Andreas Rossberg a écrit :
 If I understand the proposal correctly, you cannot avoid the
 allocation, because the target is used as a backing store for fixed
 properties.
 Indeed. The target is merged with the fixedProps of the FixedHandler 
 proposal prototype implementation [1]
 
 That was in the old proposal. In the latest design [2], you don't necessarily 
 need to think of the target as a backing store, it's more as if the proxy 
 wants to keep the target in sync with the handler.
 
 If one would provide null as a target, then either:
 (1) the proxy could throw when it would otherwise need to access the target 
 (e.g. to check for non-configurable properties).
 (2) the proxy implicitly does create an empty target object, to be used as a 
 backing store for fixed properties, so to speak.
 
 I would much prefer (1): if you then want to create a fully virtual object 
 that only exposes configurable properties, there is no allocation overhead. 
 If you want (2), just call Proxy.for(Object.create(null), handler).
  
 This is a reason why a lot of checks that appear in Tom's Proxy.for are 
 actually not necessary (and won't be a performance burden) since they are 
 performed by some native code implementing internal methods (ES5.1 - 8.12 for 
 native objects, but also the custom [[DefineOwnProperty]] for arrays if the 
 target is one, etc.)
 
 Not sure what you mean by not necessary: it's true that I expect most 
 checks to be fast since they are very similar to checks that need to be 
 performed by existing built-ins, but that's not the same as stating that the 
 checks are not necessary.
 
 Cheers,
 Tom
 
 [2] 
 http://code.google.com/p/es-lab/source/browse/trunk/src/proxies/DirectProxies.js
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: decoupling [ ] and property access for collections

2011-10-20 Thread David Herman
 [1] http://wiki.ecmascript.org/doku.php?id=strawman:dicts [D.H. already 
 mentioned that this proposal does not reflect his current thinking, so beware]

FWIW, I don't really know what my current thinking is. :)

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: yield and Promises

2011-10-21 Thread David Herman
You can disagree with anything if you're allowed to change the terms of the 
discussion. :)

Brendan said JS is run-to-completion, which means that if you call a function 
and control returns to you, no intervening threads of control have executed in 
the meantime. But then you changed his example to this:

 //#1
 assert_invariants();
 function callBack () {
 assert_invariants(); // perhaps yes, perhaps no. There's no guarantee.
 };
 setTimeout(callBack, 1e3);
 return;

No matter how you line up the whitespace, the semantics of a function does not 
guarantee that the function will be called right now. When a programmer 
explicitly puts something in a function (the function callBack here), they are 
saying here is some code that can be run at any arbitrary time. They are 
expressing that *explicitly*. Whereas in a semantics with 
fibers/coroutines/call/cc:

 //#2
 assert_invariants();
 f(); //might suspend execution
 assert_invariants(); // perhaps yes, perhaps no. There's no guarantee either.
 return;

the mere *calling* of any function is *implicitly* giving permission to suspend 
the *entire continuation* (of the current event queue turn) and continue it at 
any point later on, after any other threads of control may have executed.

If you want to claim these two things are equivalent, I feel pretty confident 
predicting this conversation will quickly descend into the Turing tarpit...

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: yield and Promises

2011-10-21 Thread David Herman
Hi Kris,

Your proposal has a lot of similarities to

http://wiki.ecmascript.org/doku.php?id=strawman:deferred_functions

which was proposed this past spring.

I'm not sure I follow what's top-down vs bottom-up about the two different 
approaches. Let me suggest some terminology that has emerged in the proposal 
process: I'll use generators to mean any single-frame, one-shot continuation 
feature that's independent of the host event queue, and deferred functions to 
mean any single-frame, one-shot continuation feature that is tied to the host 
event queue by means of being automatically scheduled.

 Generators directly solve a problem that is much less significant in normal 
 JS coding. While it is exciting that generators coupled with libraries give 
 us a much better tool for asynchronous use cases (the above can be coded with 
 libraryFunction(function*(){...}), my concern is that the majority use case 
 is the one that requires libraries rather than the minority case, and does 
 not promote interoperability. 

It's true that generators require libraries in order to use them for writing 
asynchronous code in direct style. And I agree with you and Alex and Arv that 
there is a cost to not standardizing on those libraries. There are different 
frameworks with similar but incompatible idioms for Deferred objects, Promises, 
and the like, and they could be standardized.
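
For example, with a scheduler library in the task.js style, asynchronous code 
in direct style looks roughly like this (spawn and fetchJSON are illustrative 
library functions, not proposed built-ins):

spawn(function*() {
    try {
        var user  = yield fetchJSON("/user");             // suspend until the promise settles
        var posts = yield fetchJSON("/posts?u=" + user.id);
        render(user, posts);
    } catch (e) {
        reportError(e);                                   // rejections surface as exceptions
    }
});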

 A couple years later, I believe the landscape has dramatically changed, and 
 we indeed do have significant convergence on a promise API with the 
 thenable interface. From Dojo, to jQuery, to several server side libraries, 
 and apparently even Windows 8's JS APIs (from what I understand) all share an 
 intersection of APIs that include a then() method as a method to define a 
 promise and register a callback for when a promise is fulfilled (or fails). 
 This is an unusual level of convergence for a JS community that is so 
 diverse. I believe this gives evidence of well substantiated and tested 
 interface that can be used for top-controlled single-frame continuations that 
 can easily be specified, understood and used by developers.

But there's more to it than just the interface. You fix a particular scheduling 
semantics when you put deferred functions into the language. I'm still learning 
about the difference between the Deferred pattern and the Promises pattern, but 
the former seems much more stateful than the latter: you enqueue another 
listener onto an internal mutable queue. I'm not sure how much state can be 
avoided with listeners (at the end of the day, callbacks have to be invoked in 
some particular order), but that concerned me when I saw the deferred functions 
proposal. I can't prove to you that that scheduling policy isn't the right one, 
but I'm not ready to say it is.

So I'm not sure all scheduling policies are created equal. And with generators, 
at least people have the freedom to try out different ones. I'm currently 
trying one with task.js, and I hope others will try to come up with their own. 
(There's also the added benefit that by writing the scheduler in JS, you can 
instrument and build cool tools like record-and-replay debugging.)

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Minimalist Classes

2011-10-31 Thread David Herman
Hi Jeremy,

Thanks for the proposal. I've been advocating a minimalist approach to classes 
for a while now; I think it's a good goal. A few of us sketched out something 
similar on a whiteboard in the last face-to-face meeting; at least, it used the 
object literal body. We hadn't thought of two of your ideas:

1) allowing class expressions to be anonymous
2) allowing the RHS to be an arbitrary expression

I like #1; we have a lot of agreement around classes being first-class, and 
that just fits right in.

But with #2 I'm not clear on the intended semantics. You say this could be 
desugared but don't provide the details of the desugaring. The RHS is an 
arbitrary expression that evaluates to the prototype object (call it P), but 
the extends clause is meant to determine P's [[Prototype]], right? Do you 
intend to mutate P's [[Prototype]]? I wouldn't agree to that semantics.

Another thing that this doesn't address is super(). Right now in ES5 and 
earlier, it's pretty painful to call your superclass's constructor:

class Fox extends Animal {
    constructor: function(stuff) {
        Animal.call(this, stuff);
        ...
    }
}

In general, I think your arbitrary-expression-RHS design is incompatible with 
the super keyword, which needs to be lexically bound to its class-construction 
site. I'll have to think about it, though.

However, I think the approach of an object literal body (and only an object 
literal body) works well. I suspect you could still implement all your examples 
of dynamically-computed classes, although probably with a little more work in 
some cases.

Dave

On Oct 31, 2011, at 6:57 PM, Jeremy Ashkenas wrote:

 'Evening, ES-Discuss.
 
 After poking a stick in the bees' nest this morning (apologies, Allen), and 
 in the spirit of loyal opposition, it's only fair that I throw my hat in the 
 ring. 
 
 Here is a proposal for minimalist JavaScript classes that enable behavior 
 that JavaScripters today desire (as evidenced by libraries and languages 
 galore), without adding any new semantics beyond what already exists in ES3.
 
 https://gist.github.com/1329619
 
 Let me know what you think, and feel free to fork.
 
 Cheers,
 Jeremy Ashkenas
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Minimalist Classes

2011-10-31 Thread David Herman
 class Fox extends Animal {
   dig: function() {}
 }
 
 Fox becomes a constructor function with a `.prototype` that is set to an 
 instance of Animal that has been constructed without calling the Animal() 
 constructor. (The usual temporary-constructor-to-hold-a-prototype two step 
 shuffle). All of the own properties from the RHS are merged into the empty 
 prototype. The only mutation here is of a brand new object.

OK.

 Animal is not a prototype here, it's a class (constructor function) in its 
 own right.

Sure, I just didn't realize you wanted copying. I thought you were doing 
something like

P.__proto__ = SuperClass.prototype;

But IIUC, you're proposing a semantics where you construct a brand new object P 
whose __proto__ is SuperClass.prototype and then copy all the own-properties of 
the RHS into P. That's fine as far as it goes, but super still doesn't work 
(see below).

 Another thing that this doesn't address is super(). Right now in ES5 and 
 earlier, it's pretty painful to call your superclass's constructor:
 
 In general, I think your arbitrary-expression-RHS design is incompatible with 
 the super keyword, which needs to be lexically bound to its 
 class-construction site. I'll have to think about it, though.
 
 super() is a separate (but very much needed) issue -- that should still make 
 it in to JS.next regardless of if a new class syntax does. Likewise, super() 
 calls should not be limited only to objects that have been build with our new 
 classes, they should work with any object that has a prototype (a __proto__).

I wish it were possible, but Allen has convinced me that you can't make super 
work via a purely dynamic definition. It has to be hard-wired to the context in 
which it was created. Let me see if I can make the argument concisely.

Semantics #1: super(...args) ~=~ this.__proto__.METHODNAME.call(this.__proto__, 
...args)

This semantics is just wrong. You want to preserve the same |this| as you move 
up the chain of super calls, so that all this-references get the most derived 
version of all other properties.

Semantics #2: super(...args) ~=~ this.__proto__.METHODNAME.call(this, ...args)

This semantics is just wrong. It correctly preserves the most derived |this|, 
but if it tries to continue going up the super chain, it'll start right back 
from the bottom; you'll get an infinite recursion of the first super object's 
version of the method calling itself over and over again.
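
Here's a small sketch of that recursion (illustrative names; pretend super() 
expands as in #2):

var Base = {
    describe: function() { return "base"; }
};
var Derived = Object.create(Base);
Derived.describe = function() {
    // super() under semantics #2:
    return this.__proto__.describe.call(this) + "+derived";
};

var obj = Object.create(Derived);
obj.describe();
// obj.__proto__ is Derived, so the "super" call lands right back on
// Derived.describe with the same |this| -- infinite recursion.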

Semantics #3: super(...args) ~=~ this.__proto__.METHODNAME.callForSuper(this, 
this.__proto__, ...args)

This semantics introduces a new implicit where did I leave off in the super 
chain? argument into the call semantics for every function in the language. 
It's sort of the correct dynamic semantics, but it introduces an unacceptable 
cost to the language. JS engine implementors will not accept it.

Semantics #4: super(...args) ~=~ LEXICALLYBOUNDPROTO.call(this, ...args)

This semantics properly goes up the super chain via lexical references only, so 
it avoids the infinite recursion of #2. It doesn't introduce any new cost to 
the call semantics of JS, so it doesn't have the cost of #3.

 Depending on whether you implement it as a reference to the prototype, or a 
 reference to the prototype's implementation of the current function -- an 
 approach that I would prefer -- the general scheme is:
 
 Take the current object's __proto__ and apply the method of the same name 
 there against the current object. If you know the name of the property, and 
 you have the reference to the prototype, I don't see why this would preclude 
 dynamic definitions. 

I thought the exact same thing when I first thought about super. But sadly, as 
Allen taught me, this is semantics #2, which leads to infinite super-call 
recursion.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Minimalist Classes

2011-10-31 Thread David Herman
 But IIUC, you're proposing a semantics where you construct a brand new object 
 P whose __proto__ is SuperClass.prototype and then copy all the 
 own-properties of the RHS into P.
 
 Not quite. P is a constructor function (class object), SuperClass is a 
 constructor function. Unless I'm confused, P's .prototype is an instance of 
 SuperClass,

Oh sorry, we're just miscommunicating about what's labeled what -- I was using 
P to name the .prototype of the subclass constructor. Let me be concrete:

class SubClass extends SuperClass RHS

I was using P as the name of the new object created and stored as 
SubClass.prototype. Then P.__proto__ is SuperClass.prototype, and the 
own-properties of the object that RHS evaluates to are copied into P. Instances 
O of SubClass have O.__proto__ === P.

 and therefore instances of P have an RHS own-property-filled __proto__ object 
 that itself has a __proto__ object pointing at SuperClass.prototype. Fun 
 stuff.

Wh...

 Indeed, super() is tricky. For what it's worth, CoffeeScript's class syntax 
 requires literal (non-expression) class body definitions in part to make 
 Semantics #4 possible, with purely lexical super calls. Your example's 
 LEXICALLYBOUNDPROTO is CoffeeScript's ClassObject.__super__.
 
 Fortunately, y'all have the ability to bend the runtime to your will. To 
 solve the super() problem, you can simply have the JavaScript engine keep 
 track of when a function has called through the `super()` boundary, and from 
 that point on downwards in that method's future call stack, add an extra 
 `__proto__` lookup to each super resolution. When the inner `super()` is then 
 called in the context of the outer `this`, the result will be a variant of #2:
 
 this.__proto__.__proto__.METHODNAME.call(this, ...args)
 
 ... and it should work.

This doesn't sound right to me. What happens if you call the same method on 
another object while the super-resolution is still active for the first call? 
IOW, this sounds like it has similar problems to dynamic scope; the behavior of 
a function becomes sensitive to the context in which it's called, which is 
unmodular.

 There may be something wrong with the above -- but dynamic super() should be 
 a solveable problem for JS.next, even if not entirely desugar-able into ES3 
 terms.

The problem isn't so much whether it's possible to come up with a semantics by 
changing the runtime; I'm sure we could do that. The problem is finding a way 
to get the semantics you want without taxing the performance of all other 
function calls in the language. (Also known as a "pay-as-you-go" feature: if 
you don't use the feature, it shouldn't cost you anything.) We don't know how 
to do that 
for super().

So I guess in theory I agree it'd be nice if super() and class could be 
designed completely orthogonally, but in practice they affect each other. But 
at the same time, I think a class syntax where the body is restricted to be 
declarative is actually a nice sweet spot anyway. You can still dynamically 
create classes just like always, but the declarative form gives you a sweet and 
simple syntax for the most common case.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Minimalist Classes

2011-11-01 Thread David Herman
 I think one piece of this is worth reiterating: As long as JS.next classes 
 are mostly sugar for prototypes, and prototypes aren't going to be deprecated 
 or removed in the next version of JavaScript (two propositions that I think 
 most of us can get behind) ... then it's very important that super() and new 
 class syntax *aren't* coupled. An ES6 super() that fails to work at all with 
 regular prototypes would be a serious problem. It would make interoperating 
 between vanilla prototypes and prototypes-built-by-classes much more 
 difficult than necessary, and the feel of the language as a whole much more 
 fragmented.

I agree that we should avoid introducing things you can do with classes that 
you can't do with prototypes.

 If you agree, then a super() that resolves dynamically is the way forward.

But I disagree with this claim. Allen's triangle operator (which is 
semantically well-designed but I believe needs a better syntax) gives you just 
this ability to do super() with prototypes and without classes. And classes are 
still sugar.

 I don't think that an efficient, pay-as-you-go dynamic super() will be easy, 
 but with the technical chops of TC39 at your disposal, it should be possible. 
 Expanding the rough sketch from earlier messages:
 
   * If a function doesn't use super(), there is no cost, and no change in 
 semantics.
   * The first-level super() call is easy, just use the method of the same 
 name on the __proto__.
   * When passing into a super(), add a record to the call stack that contains 
 [the current object, the name of the method, and the next level __proto__].
   * When returning from a super(), pop the record from the call stack.
   * When making a super() call, check the call stack for a record about the 
 current object and method name, and use the provided __proto__ instead of 
 this.__proto__ if one exists.

This approach still gets confused if you recursively call the same method on 
the same object in the middle of a super-dispatch. Bottom line: it's not 
equivalent to the behavior where every function call receives an extra argument 
(and which no one in TC39 knows how to implement in a pay-as-you-go manner).

 So I guess in theory I agree it'd be nice if super() and class could be 
 designed completely orthogonally, but in practice they affect each other. But 
 at the same time, I think a class syntax where the body is restricted to be 
 declarative is actually a nice sweet spot anyway. You can still dynamically 
 create classes just like always, but the declarative form gives you a sweet 
 and simple syntax for the most common case.
 
 It's definitely the most common case, but a JavaScript class syntax that is 
 only able to create instances with a static shape would be severely limited 
 compared to current prototypes.

Not at all! Current prototypes have to be created as objects. How do you create 
an object? Either with an object literal, which has a static shape, or via a 
constructor call; anything else you do with assignments. With literal classes 
you have exactly the same options at your disposal.

 Many existing libraries and applications would be unable to switch to such a 
 syntax. One familiar example off the top of my head is Underscore.js:
 
 http://documentcloud.github.com/underscore/docs/underscore.html#section-127

Concrete examples are *super duper awesome* -- thank you so much for bringing 
this into the discussion.

 ...with the minimalist class proposal in this thread, switching this library 
 over to use them would be simple (if not terribly pretty):
 
 class wrapper _.extend({
   constructor: function(obj) {
 this._wrapped = obj;
   }
 }, _)

It's true that with literal classes you wouldn't be able to put all the 
initialization inside the class body, but you could still declare it with a 
class.

class wrapper {
    constructor: function(obj) {
        this._wrapped = obj;
    }
}

_.extend(wrapper.prototype, /* whatever you like */);

The rest you could do as-is; the difference here is minimal, and IMO it's a 
good thing to distinguish the fixed structure from the dynamically computed 
structure. Anyway, JS has plenty of precedent for that.

Dave

PS I also think the extend pattern could be done with a cleaner 
generalization of monocle-mustache:

wrapper.prototype .= _;

But that's fodder for another thread...

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Globalization API working draft

2011-11-02 Thread David Herman
 I think we probably have an interesting question for Dave and Sam about how 
 to support version evolution of modules.  Is there a module equivalent of 
 monkey patching. What if we have an implementation that exposes a V1 module 
 (particularly a built-in module) and code that depends upon upon a V2 of that 
 same module that has an expanded export list.  Is there anyway for that code 
 to patch the module to add the extra exported APIs it would like to use?

ES6 modules are not extensible, for a number of reasons including compile-time 
variable checking. But of course API evolution is critical, and it works; it 
just works differently. Monkey-patching says "let the polyfill add the module 
exports by mutation", e.g.:

// mypolyfill.js
...
if (!SomeBuiltinModule.newFeature) {
    load("someotherlib.js", function(x) {
        SomeBuiltinModule.newFeature = x;
    });
}

you instead say "let the polyfill provide the exports", e.g.:

// mypolyfill.js
...
export let newFeature = SomeBuiltinModule.newFeature;
if (!newFeature) {
    load("someotherlib.js", function(x) {
        newFeature = x;
    });
}

The difference is that clients import from the polyfill instead of importing 
from the builtin module. I'm not 100% satisfied with this, but it's not any 
more code than monkey-patching.
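
Concretely, client code would then look something like this (sketch only; the 
file name is made up):

import { newFeature } from "mypolyfill.js";   // rather than from the builtin module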

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Globalization API working draft

2011-11-03 Thread David Herman
Yes, good point about loaders. I would like a standard HTML way of specifying a 
loader to use, so you could simply say:

<meta loader="polyfill.js"/>

and from then on your clients don't have to change a thing.

Dave

On Nov 3, 2011, at 2:00 AM, Andreas Rossberg wrote:

 On 3 November 2011 01:12, David Herman dher...@mozilla.com wrote:
 ES6 modules are not extensible, for a number of reasons including 
 compile-time variable checking. But of course API evolution is critical, and 
 it works; it just works differently. Monkey-patching says let the polyfill 
 add the module exports by mutation, e.g.:
 
// mypolyfill.js
...
if (!SomeBuiltinModule.newFeature) {
load(someotherlib.js, function(x) {
SomeBuiltinModule.newFeature = x;
});
}
 
 you instead say let the polyfill provide the exports, e.g.:
 
// mypolyfill.js
...
export let newFeature = SomeBuiltinModule.newFeature;
if (!newFeature) {
load(someotherlib.js, function(x) {
newFeature = x;
});
}
 
 The difference is that clients import from the polyfill instead of importing 
 from the builtin module. I'm not 100% satisfied with this, but it's not any 
 more code than monkey-patching.
 
 I believe the more modular and more convenient solution (for clients)
 is to create an adapter module, and let clients who care about new
 features import that instead of the original builtin. With module
 loaders, you should even be able to abstract that idiom away entirely,
 i.e. the importing code doesn't need to know the difference. It is
 easy to maintain such adaptors as a library.
 
 This is a common approach in module-based languages. It is a more
 robust solution than monkey patching, because different clients can
 simply import different adapters if they have conflicting assumptions
 (or, respectively, have a different loader set up for them).
 
 One issue perhaps is that the modules proposal doesn't yet provide a
 convenient way to wrap an entire module. Something akin to include
 in ML, which is a bit of a two-edged sword, but perhaps too useful
 occasionally to ignore entirely.
 
 /Andreas

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Lecture series on SES and capability-based security by Mark Miller

2011-11-04 Thread David Herman
This is the only one I've seen that seems like it should work, but it depends 
on whether SES/Caja/etc have some sort of way of neutering __proto__. Just from 
hacking around, I don't see much way of censoring it in SpiderMonkey.

MarkM, do you have any tricks for censoring __proto__?

Dave

On Nov 4, 2011, at 10:51 AM, Jorge wrote:

 On 03/11/2011, at 23:55, Mark S. Miller wrote:
 3) Although SES is *formally* an object-capability language, i.e., it has 
 all the formal properties required by the object-capability model, it has 
 bad usability properties for writing defensive abstractions, and therefore 
 bad usability properties for use as an object-capability language or for 
 serious software engineering. One example:
 
 In a SES environment, or, for present purposes, an ES5/strict environment in 
 which all primordial built-in objects are transitively frozen, say Alice 
 uses the following abstraction:
 
function makeTable() {
  var array = [];
  return Object.freeze({
add: function(v) { array.push(v); },
store: function(i, v) { array[i] = v; },
get: function(i) { return array[i]; }
  });
}
 
 Say she uses it to make a table instance with three methods: add, store, 
 and get. She gives this instance to Bob. Alice and Bob are mutually 
 suspicious. All of us as programmers, looking at this code, can tell that 
 Alice intended the table abstraction to encapsulate the array. Given just a 
 table instance, can Bob nevertheless obtain direct access to the underlying 
 array?
 
 Yes, this:
 
 function makeTable() {
  var array = [];
  return Object.freeze({
add: function(v) { array.push(v); },
store: function(i, v) { array[i] = v; },
get: function(i) { return array[i]; }
  });
 }
 
 o= makeTable();
 o.add(1);
 o.add(2);
 o.add(3);
 o.add('Yay!');
 
 o.store('__proto__', {push:function () { console.log(this) }});
 o.add();
 
 Gives:
 
 [ 1, 2, 3, 'Yay!' ]
 -- 
 Jorge.
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Lecture series on SES and capability-based security by Mark Miller

2011-11-08 Thread David Herman
 Perhaps __proto__ should not be writeable in use strict?
 
 That's a great idea! This never occurred to me, and I have not heard anyone 
 suggest this. Thanks!

Doesn't work.

obj[(function(__){return __ + "proto" + __})("__")]   // builds the string "__proto__" at runtime

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: for own(...) loop (spin-off from Re: for..in, hasOwnProperty(), and inheritance)

2011-11-08 Thread David Herman
 Still there, but write it out fully, to compare to the cited text:
 
   import keys from @iter;
   for (i of keys(o)) {
 body
   }
 
 Unless we default-import a standard prelude,

I think we should.

 this is a bit much compared to add own as a modifier after for in for/in (not 
 for/of) loops.

For-of semantics is one of the places where we've bought into a "do-over" for a 
JS form. It muddies the waters to say "for-of is the new for-in" but also 
halfway reform for-in at the same time. I don't really think

for (i of keys(o)) {
/body/
}

is such a burden (two characters, as you say).

Instead of taking a hard-to-use-right form like for-in and partly taming it, 
I'd rather suggest people simply move to for-of, and have the default keys 
iterator Do The Right Thing and only iterate over own, enumerable property 
names (thanks to Yehuda and Arv for straightening us out on this point 
recently).
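
To be concrete about what the default keys iterator would give you -- 
approximated here with plain ES5, not the actual @iter implementation:

function keys(o) {
    return Object.keys(o);            // own, enumerable property names only
}
var o = Object.create({ inherited: 1 });
o.a = 1;
o.b = 2;
keys(o);                              // ["a", "b"] -- the inherited property never shows up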

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Standard Prelude (was: for own(...) loop (spin-off from Re: for..in, hasOwnProperty(), and inheritance))

2011-11-08 Thread David Herman
Let's answer this once we have the module-ized version of the standard library. 
Which I've been promising for far too long (mea culpa). Will get started on 
this tonight.

Dave

On Nov 8, 2011, at 9:04 PM, Brendan Eich wrote:

 On Nov 8, 2011, at 8:39 PM, David Herman wrote:
 
 Instead of taking a hard-to-use-right form like for-in and partly taming it, 
 I'd rather suggest people simply move to for-of, and have the default keys 
 iterator Do The Right Thing and only iterate over own, enumerable property 
 names (thanks to Yehuda and Arv for straightening us out on this point 
 recently).
 
 I'm with you -- for-of is the new for-in, let is the new var.
 
 So, what is imported as part of the standard prelude when one opts into 
 ES.next?
 
 module Name from @name;
 import {iterator, keys, values, items} from @iter;
 
 ?
 
 /be

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: for own(...) loop (spin-off from Re: for..in, hasOwnProperty(), and inheritance)

2011-11-09 Thread David Herman
On Nov 9, 2011, at 1:33 PM, Quildreen Motta wrote:

 On 09/11/11 19:20, Brendan Eich wrote:
 
 And if you need to break out of forEach, just, umm, don't use forEach. It's 
 the wrong tool for the job.
 
 Clearly people like the forEach array extra in conjunction with Object.keys. 
 With block-lambdas they could have their cake and break from it too (and the 
 call would be paren-free to boot).
 
 That sounds like something to look forward to.

I agree! :)

 Though, did TC39 reach a consensus on having or not block-lambdas or just a 
 shorter function syntax?

It's still a topic of discussion, not on the ES6 plate but ongoing work.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: (Almost) everything is expression

2011-11-10 Thread David Herman
 Brendan and Dave mention explicit semicolon. Yes, it's seems so by the 
 grammar (though, have to check more precisely), but it can be acceptable 
 price.

It's a serious price, though. Today if I write:

if (q) { ... }
else { ... }
(f())

then ASI kicks in after the else body. If we make if-statements into 
expressions, then either the above becomes a single expression, which is a 
serious and subtle backwards-incompatible change, or we define lookahead 
restrictions on ExpressionStatement, and introduce a refactoring hazard:

x = if (q) { ... }
    else { ... }
    (f())        // oops, this is now a parameter list on the RHS of the assignment!

I'm not positive, but that seems like a serious issue to me.

 Though, it can be visually _really_ ambiguous with object initialisers in 
 case of using labels inside the block:

A JS grammar needs to be formally unambiguous, so it requires very careful 
specification. Syntax design for JS is very tricky.

 Nope, have to think more on this...

You might want to take a look at this:

http://wiki.ecmascript.org/doku.php?id=strawman:block_vs_object_literal

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: (Almost) everything is expression

2011-11-11 Thread David Herman
On Nov 11, 2011, at 3:48 AM, François REMY wrote:

 I think you strongly underestimate the distinction problem. ... It's 
 completelty unclear to me. If there's no way to tell what the return 
 statement of the block is, there's no way to implement your proposal.

It's actually quite easy to implement Dmitry's proposal, because it's already 
specified by the ECMAScript semantics! All statements can produce completion 
values. You can test this yourself: take any statement, quote it as a string, 
and put it in eval, and you'll get a value (if it doesn't loop infinitely, exit 
the program, or throw an exception, of course).
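
For instance, these run in any ES5 engine today:

eval("if (true) { 42; } else { 0; }");        // 42
eval("for (var i = 0; i < 3; i++) { i; }");   // 2 -- the last value the body produced
eval("while (false) {}");                     // undefined -- no value was ever produced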

As Andreas said, there's a subtler issue of whether there's a simple structure 
to value-producing substatements, and there's a few problematic cases, but 
there's already a plan to clean that up at

http://wiki.ecmascript.org/doku.php?id=harmony:completion_reform

But that's mostly a secondary issue, to keep things more regular and to be more 
compatible with tail calls.

To answer your specific example:

   let a = if (foo) {
  print('a is foo');
  foo;
   } else {
  // do some longer stuff
   };
 
 How do you know foo is an expression that should be assigned to a and 
 that print('a...') is not?

Because the then-branch of the if is a block, and the completion value of a 
block is specified in ECMAScript to be the last completion value produced by 
its body statements.
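
You can check that claim with eval, which exposes completion values:

eval("{ 'a is foo'; 'foo'; }");   // 'foo' -- the block's last completion value wins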

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: (Almost) everything is expression

2011-11-11 Thread David Herman
On Nov 11, 2011, at 8:19 AM, Mark S. Miller wrote:

 On Fri, Nov 11, 2011 at 7:40 AM, gaz Heyes gazhe...@gmail.com wrote:
 On 11 November 2011 15:33, Mark S. Miller erig...@google.com wrote:
 let a = ({
 
  print('doing stuff');
  100;
 });
 
 How do you know the difference between a blank block statement and a object 
 literal? Surely it becomes an expression once an assignment occurs anyway. 
 
 Doh! Sorry, I completely mis-thought that. Nevermind.

Your idea of mandatory parens is still valid (if, IMO, a bit unsatisfyingly 
verbose) for most statement forms. It's only the block-statement-expression 
that doesn't work. Hence my do-expressions:

http://wiki.ecmascript.org/doku.php?id=strawman:do_expressions

or Brendan's subtly-disambiguated-block-statement-expressions:

http://wiki.ecmascript.org/doku.php?id=strawman:block_vs_object_literal

If Brendan's idea can be made to work, and it's not too confusing, I'm pretty 
sure I'd prefer it over do-expressions. You could simply write:

let a = {
    print('doing stuff');
    100
};

How gorgeous is that?

But I suspect as we work on evolving the syntax of object literals, it'll get 
harder to keep them disambiguated. For example, is this:

let a = {
    foo(x)
    {
        alert(x)
    }
}

...equivalent to this?

let a = {
    foo: function(x)
    {
        alert(x);
    }
};

...or this?

let a = {
    foo(x);
    {
        alert(x);
    }
};

So I just don't know if it's feasible.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: (Almost) everything is expression

2011-11-11 Thread David Herman
On Nov 11, 2011, at 9:50 AM, Allen Wirfs-Brock wrote:

 do-expression is a very good solution

Why thank you! ;-)

 How gorgeous is that?
 
 not very...
 
 but if I see any of:
   let a = do {...
   let a = {|| ...
   let a  = { ... 
 I immediately know what follows. That is gorgeous...

I'm not sure I buy that `let x = do { f(); 12 }` is *prettier* than `let x = { 
f(); 12 }` but I do agree that it's less subtle, given the existence of object 
literals. And `do` is as short a keyword as you can get in JS, so it's a pretty 
minimal amount of extra noise.

 So I just don't know if it's feasible.
 
 And being technically feasible does not make it desirable.

Of course. I didn't just mean technically feasible, I meant I don't know if 
it's feasible from a design perspective.

 Just remember the phrase: readability ambiguity

Fair enough. Personally, I think there are situations where a disambiguation 
that's *technically* subtle can actually be totally intuitive to the human eye 
and doesn't cause readability problems in practice. But I agree with you that 
this is not likely one of those cases. There's just too much overlap between 
what can go inside an object literal and what can go inside block statements.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: (Almost) everything is expression

2011-11-11 Thread David Herman
 How gorgeous is that?
 
 It's normal and consistent with other blocks, I'd say.

Sorry, that was an (American?) English colloquialism -- a rhetorical question 
meaning that's gorgeous!

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Using monocle mustache for chaining.

2011-11-11 Thread David Herman
On Nov 11, 2011, at 10:50 AM, Erik Arvidsson wrote:

 We've all looked at jQuery code and envied the conciseness of its chaining 
 APIs. Most of us also looked at it and thought; Yuk, making everything a 
 method of jQuery and always return the jQuery object is ugly.

Beauty is in the eye of the beholder... jQuery made combinator libraries cool, 
and for that this functional programmer is permanently indebted to jresig.

But really, there's a world of difference between the chaining done by 
libraries like jQuery and the chaining you get from this operator. The former 
is pure, the latter is a mutation. The .{ syntax obscures this fact, and I find 
that inexcusable.

I'm not saying mutation is bad -- JS is no pure functional language, nor should 
it be. But creating an assignment operator that doesn't suggest syntactically 
that it's an assignment, and worse, advertising it as a chaining operator as 
if it were the same thing as what jQuery does, makes the same mistake of 
masking mutation that so many imperative programming languages make. Languages 
should be up front when they use mutation and not try to hide it or confuse 
programmers into not knowing when they're doing it.

Just for contrast, here's what your example might look like with a more 
explicit assignment syntax:

document.querySelector('#my-element') .= {
    style .= {
        'color': 'red',
        'padding': '5px'
    },
    textContent: 'Hello'
};

But that nested thingy smells awfully funny to me. This reminds me of excessive 
uses of point-free style in Haskell, where people do back-flips just to avoid 
creating variable names for intermediate results. Variables aren't evil! 
Sometimes it's just cleaner to use a local variable:

let elt = document.querySelector('#my-element');
elt.style .= {
    'color': 'red',
    'padding': '5px'
};
elt.textContent = 'Hello';

Worse, notice that you have provided *only* an ability to do deep mutation 
here, and no way to do a functional update (such as a copy with the specified 
changes, or a prototype extension with the changes) on nested structure. My 
spidey-sense was already tingling with monocle-mustache, but this nesting is 
another turn for the worse.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: (Almost) everything is expression

2011-11-11 Thread David Herman
I would translate How X is that? as that is very X! :)

Dave

On Nov 11, 2011, at 12:26 PM, Dmitry Soshnikov wrote:

 On 11.11.2011 23:44, David Herman wrote:
 How gorgeous is that?
 It's normal and consistent with other blocks, I'd say.
 Sorry, that was an (American?) English colloquialism -- a rhetorical 
 question meaning that's gorgeous!
 
 offtopic
 
 And what does it mean? :) I translated it as how do you like it, ah? or 
 isn't it just nice?. On what I say, yeah it's nice and consistent with 
 other blocks.
 
 Anyway, thanks for noticing, basically I know English (American ;)), though 
 not such interesting colloquialisms.
 
 /offtopic
 
 Dmitry.

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: An array destructing specification choice

2011-11-11 Thread David Herman
Late to the party, but I've brought more booze.

On Nov 5, 2011, at 2:41 PM, Brendan Eich wrote:

 We have:
 
 1. Should an array pattern always query 'length'?
 
 2. If the answer to (1) is no, then should ... in an array pattern query 
 'length'?
 
 On reflection and at this point in the thread, with your reply in mind, my 
 prefs in order: [no, yes], [no, no]. In no case do I favor [yes]. I'm 
 refutably matching [no, _] :-P.

I feel strongly that the appropriate semantics is [no, yes].

Here's my reasoning. Arrays are a multi-purpose data structure in JS. Sometimes 
they are used for fixed-size tuples, and sometimes they are used for dynamic 
length arrays. (Similarly, objects are used both for fixed-size records and for 
dynamic size dictionaries.)

When you use a fixed-length tuple in JS, you do not query the .length property. 
When you use a dynamic-length array, you do.

When you use a fixed-size record in JS, you do not use object enumeration. When 
you use a dynamic-size dictionary in JS, you do.

Destructuring is meant to provide elegant syntax for all of these use cases. 
The syntax of [] destructuring is for fixed-length tuples if there is no 
ellipsis, and for dynamic-length arrays if there is an ellipsis. That's what 
the ellipsis is good for; distinguishing the case where you know statically how 
many elements you expect from the case where you don't.

More concretely, here's the rough desugaring I expect. I'll use 〰〰 as 
meta-ellipsis (thanks, Unicode!). I'll just specify the special case where each 
element is an identifier. It's straightforward to generalize to arbitrary 
nested destructuring patterns and hole patterns.

A pattern of the form

[a0, a1, 〰〰, ak]

desugars to

a0 = %v[0];
a1 = %v[1];
〰〰
ak = %v[k];

A pattern of the form

[a0, a1, 〰〰, ak, ...r]

desugars to

a0 = %v[0];
a1 = %v[1];
〰〰
ak = %v[k];
let %length = %v.length;
r = [ %v[i] for i of [k+1, 〰〰, %length - 1] if (i in %v) ];

This can be generalized further to allow a fixed number of patterns *after* the 
ellipsis as well:

A pattern of the form

[a0, a1, 〰〰, ak, ...r, bn, bn-1, 〰〰, b0]

desugars to

a0 = %v[0];
a1 = %v[1];
〰〰
ak = %v[k];
let %length = %v.length;
r = [ %v[i] for i of [k+1, 〰〰, %length - n - 2] if (i in %v) ];
bn = %v[%length - n - 1];
bn-1 = %v[%length - (n - 1) - 1];
〰〰
b0 = %v[%length - 0 - 1];
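
Working the [a0, a1, ...r, b0] case through by hand (plain ES5, made-up values, 
and ignoring holes, which the `if (i in %v)` test above takes care of):

var v = [1, 2, 3, 4, 5];
var a0 = v[0], a1 = v[1];           // 1, 2
var length = v.length;
var r = v.slice(2, length - 1);     // [3, 4]
var b0 = v[length - 1];             // 5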

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: (Almost) everything is expression

2011-11-11 Thread David Herman
On Nov 11, 2011, at 2:51 PM, Mike Samuel wrote:

 If statements as expressions goes forward, we should look into
 tweaking completion values.
 
 IMHO, a code maintainer who sees
 
resource = ..., foo(resource)
 
 would expect to be able to wrap the use of resource in a try finally thus
 
resource = ..., (try { foo(resource) } finally { release(resource) })
 
 without changing the completion value of the expression.

Good catch! (no pun intended)

I'll add this to

http://wiki.ecmascript.org/doku.php?id=harmony:completion_reform

Thanks,
Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: An array destructing specification choice

2011-11-11 Thread David Herman
On Nov 11, 2011, at 3:36 PM, Allen Wirfs-Brock wrote:

 On Nov 11, 2011, at 3:17 PM, David Herman wrote:
 
 A pattern of the form
 
   [a0, a1, 〰〰, ak, ...r]
 
 desugars to
 
   a0 = %v[0];
   a1 = %v[1];
   〰〰
   ak = %v[k];
   let %length = %v.length;
 
 do we sample the length here or at the very beginning? It presumably only 
 matter if a %v[n]  is an accessor with side-effects that modify %v.  
 Generally, the array functions sample length at the beginning before 
 processing any elements.

Beginning seems fine to me.

 This can be generalized further to allow a fixed number of patterns *after* 
 the ellipsis as well:
 
 A pattern of the form
 
   [a0, a1, 〰〰, ak, ...r, bn, bn-1, 〰〰, b0]
 
 We currently haven't specified this syntactic form.  I'm not sure if it adds 
 enough value to justify the added conceptual complexity.

I think it's a pretty big win, and I'd argue it's totally intuitive. The great 
thing about destructuring is that you can intuit the semantics without actually 
having to understand the details of the desugaring/semantics.

Also: we'll definitely want to allow it for splicing, so the grammar will have 
to allow it already, and symmetry/consistency argue for allowing it in 
destructuring too. Likewise for function formals and actuals.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: An array destructing specification choice

2011-11-11 Thread David Herman
On Nov 11, 2011, at 4:23 PM, Axel Rauschmayer wrote:

 It would be nice if r was optional:
 [..., b0, b1, b2] = arr

Agreed. Pure win, no downside.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Using monocle mustache for chaining.

2011-11-11 Thread David Herman
On Nov 11, 2011, at 4:18 PM, Rick Waldron wrote:

 Dave, if nesting were out of the question and monocle-mustache operator 
 always looked like an object literal as they currently exist, would it still 
 be as vile? With that form, I'm a big fan.

I'm of multiple minds (a condition I'm gradually getting accustomed to).

1) I'm really against the current syntax, because it hides the fact that it's a 
mutation. Assignments masquerading as declarative forms bring out Angry Dave 
(you wouldn't like me when I'm angry...)

2) I like it better if it includes = in it, e.g.:

obj .= { foo: 1, bar: 2 };

3) But that makes me really wish for a more general expression form, so I could 
also do e.g.:

obj1 .= obj2;

I've even written up a strawman for it:

http://wiki.ecmascript.org/doku.php?id=strawman:batch_assignment_operator

4) Allen has pointed out that this is problematic for private names. For many 
use cases, it'd be fine if .= just didn't copy any private names -- done. But 
Allen's class pattern wanted to be able to specify private names. So he 
restricted the RHS to look like a literal.

5) Still, I have to say, the restricted RHS just seems ad hoc and unsatisfying.

Anyway, I'm still mulling.
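
For concreteness on (3), roughly the behavior I have in mind, sketched as a 
made-up helper (the strawman has the real details, and private names per (4) 
are simply ignored here):

function batchAssign(target, source) {
    Object.keys(source).forEach(function (k) { target[k] = source[k]; });
    return target;
}

// so `obj1 .= obj2` would act like:
batchAssign(obj1, obj2);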

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: An array destructing specification choice

2011-11-11 Thread David Herman
On Nov 11, 2011, at 5:31 PM, Axel Rauschmayer wrote:

 Also: we'll definitely want to allow it for splicing, so the grammar will 
 have to allow it already, and symmetry/consistency argue for allowing it in 
 destructuring too. Likewise for function formals and actuals.
 
 
 Using it for splicing suggests a construction analog:
 
 let r = [2,3,4]
 let arr = [0,1,..r, 5, 6, 7]

That's what I meant by splicing.

 The grammar seems to support this, but I’ve never seen it in an example.
 
 I might also be useful in parameter lists:
 
 function foo(first, ...middle, last) {
 } 

That's what I meant by function formals and actuals.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Alternative syntax for <|

2011-11-16 Thread David Herman
On Nov 16, 2011, at 12:11 PM, Dmitry Soshnikov wrote:

 Yes, I understand, but it doesn't answer the question -- why do we need 
 _additional_ keyword

Infix operators can be conditional keywords. That's the current plan for is, 
IINM.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-16 Thread David Herman
<|

On Nov 16, 2011, at 11:27 PM, Mark S. Miller wrote:

 
 
 On Wed, Nov 16, 2011 at 11:24 PM, David Herman dher...@mozilla.com wrote:
 Someone who shall remain nameless shot this down when I floated it privately. 
 But I just have to throw this out there, because I kind of can't stop myself 
 falling in love with it...
 
 We used to have this (mis-)feature for dynamically extending scope chains, 
 and despite being ill-conceived, it did have this elegant syntax spelled 
 with. In ES5 strict, we banned that feature, and it's not coming back for 
 ES6, or ever.
 
 Now we want a (good) feature for dynamically extending prototype chains. And 
 here's this old keyword, just lying around unused...
 
obj with { foo: 12 } with { bar: 13 } with { baz: 17 }
 
 I don't get it yet. What do you mean by dynamically extending prototype 
 chains? What does the above expression do and evaluate to?
 
  
 
 So? Who's with me?
 
 Dave
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
 
 
 
 -- 
 Cheers,
 --MarkM

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-16 Thread David Herman
On Nov 16, 2011, at 11:28 PM, Dmitry Soshnikov wrote:

 However, we nevertheless had/have the semantics for `with', and it may cause 
 confusion.

Right, that's the natural objection. But... with-statements are dead, long live 
with-expressions!

 Moreover, you need to specify that [noNewLineHere] should be inserted

I don't think there's any need -- you'd only get with-expressions in ES6, and 
with-statements don't exist.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-16 Thread David Herman
On Nov 16, 2011, at 11:27 PM, Mark S. Miller wrote:

 On Wed, Nov 16, 2011 at 11:24 PM, David Herman dher...@mozilla.com wrote:
obj with { foo: 12 } with { bar: 13 } with { baz: 17 }
 
 I don't get it yet. What do you mean by dynamically extending prototype 
 chains? What does the above expression do and evaluate to?

My first answer was glib, sorry. I'm proposing `with' as a replacement syntax 
for <|. So the above expression evaluates to the same as

obj <| { foo: 12 } <| { bar: 13 } <| { baz: 17 }

which in turn, if I've got this right, would be equivalent to

Object.create(Object.create(Object.create(obj,
                                          { foo: { value: 12,
                                                   enumerable: true,
                                                   configurable: true,
                                                   writable: true } }),
                            { bar: { value: 13,
                                     enumerable: true,
                                     configurable: true,
                                     writable: true } }),
              { baz: { value: 17,
                       enumerable: true,
                       configurable: true,
                       writable: true } })

since in this example I only used the object literal variant. (The function, 
array, etc variants do things that Object.create can't do.)

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 3:53 AM, Dmitry Soshnikov wrote:

 Once again, it's absolutely the same approach which I showed yesterday with 
 using `extends' 
 (https://mail.mozilla.org/pipermail/es-discuss/2011-November/018478.html).

My point has absolutely nothing to do with semantics and everything to do with 
syntax. And `extends` fails completely as the syntax. It's backwards -- the 
prototype doesn't extend the own-properties!

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 12:10 AM, Russell Leggett wrote:

 since in this example I only used the object literal variant. (The function, 
 array, etc variants do things that Object.create can't do.)
 
 I think this is ultimately the downfall of 'with' as a complete replacement 
 for <| or extends. It works pretty well on objects but no others.
 
SomeFunc with function(){...}
 
 Does not read nearly as well.

Interesting. I don't think it reads badly, but I can see it not being as 
intuitive as the object literal form. But lots of operators would look 
confusing if you didn't know what they mean (e.g., || or  or ^ or %). Once 
you know that `with` simply means prototype extension, I don't think it reads 
that badly. Subjective, I guess.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 3:17 AM, Axel Rauschmayer wrote:

obj with { foo: 12 } with { bar: 13 } with { baz: 17 }
 
 I like the idea! As it is syntactically different in this role, errors should 
 be easy to spot.
 
 But I think `with` “points in the wrong direction” (object `obj` *with* 
 prototype `proto`). That is, to me, it suggests a pointer going from 
 prototype to object.

Well, ultimately the directionality is arbitrary. In my proposal it's 
prototype `obj` *with* instance `blah`. This results in LTR whereas the other 
way ends up RTL. JS is built around English, so LTR seems more appropriate 
(with all due sympathy to our Hebrew-speaking programmers!).

 The above example demonstrates just how well the <| operator works. The main 
 objection to it is that it looks wrong in some fonts? Unless there is a 
 general grawlix objection, something arrow-y would be great.

Every font I ever see it in looks terrible, and there is a general grawlix 
objection. This is largely an aesthetic thing, but aesthetics matter and people 
react very strongly against <| or funky Unicode symbols. (The latter just ain't 
gonna happen.)

 Is there a list of symbols that have already been rejected? Seeing the 
 preposition “with”, I feel like suggesting “of” (prototype `proto` *of* 
 object `obj`), but I think that has been rejected before (and is taken by the 
 for loop).
 
  obj of { foo: 12 } of { bar: 13 } of { baz: 17 }

We're already using `of` for a different purpose (for-of), and it just reads 
wrong here.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 5:13 AM, Russell Leggett wrote:

 Look closer - it is being used as a prefix operator, not an infix operator.
 
 extends Proto {...}

There have been a few alternatives discussed in the previous thread. IMO, in 
each one of them, `extends` is awkward. The one you're talking about:

extends p { foo: 1, bar: 2, baz: 3 }

is stilted -- English is SVO (subject-verb-object) not VSO. We don't say "loves 
JavaScript Russ" but rather "Russ loves JavaScript". Plus we don't have any 
binary prefix expression operators in JS so that sticks out, too.

If it's infix, you have to make `extends' RTL:

{ foo: 1, bar: 2, baz: 3 } extends p

As I said in response to Axel, LTR is preferable in JS.

But finally, there's the grammatical issue: `extends` makes sense as part of 
either a declaration or a boolean operator. In a declaration we'd be declaring 
*that* X extends Y. In a boolean operator we'd be asking *whether* X extends Y. 
But what we're talking about here is an operator for constructing a new object: 
the result of X extended *with* Y. Grammatically, "extendedwith" or "extendedby" 
would fit better, but of course look terrible.

 This works well with any of the proposal class as operator
 
 class extends Proto {...}
 
 class Point2d extends Point {...} 

I'm not tying my suggestion to classes at all. I like classes fine, and don't 
think they need to be built up from tiny lego pieces. We can use `extends` as 
part of the class syntax without having to have `extends` mean something on its 
own.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 5:16 AM, Dmitry Soshnikov wrote:

 And `extends` fails completely as the syntax.
 
 This is why it's so wide-spread in other languages for inheritance, right? ;)

In other languages it's not a stand-alone operator but a part of class syntax. 
(I don't know Ruby, so maybe you'll correct me there.)

 `extends' is here for years in many langs for exactly to chain an object with 
 its prototype or class (the langs which first come in mind are: Java, PHP, 
 Scala, CoffeeScript, Dart, others). It's not used to copy own properties!
 
 As well, as it's not used to copy own properties in Ruby (in case if you want 
 to argue with that Object.extend copies own properties in JS libs)! -- 
 exactly from there was borrowed Object.extend to Prototype.js. In Ruby it 
 just chains an object with yet another prototype (hidden meta-class which 
 inherits from mixin module).
 
 So `extends' is for inheritance. And was even reserved in ES.

I've never had a problem with using `extends` as part of a `class` syntax. 
There's no need for `extends` to stand on its on.

 Hope I made it more clear.

Sorry I misunderstood which syntax you were promoting (prefix rather than 
infix). I explained in my reply to Russ why I think it doesn't work.

 Once again (sorry, Axel Rauschmayer, you may skip this phrase ;)), I'm not 
 against `with', I'm just worry about that it had/has diff. semantics. But. We 
 may re-use it for mixins.

I'm not sure how much I agree with your argument about confusion with the old 
semantics. Re: once again -- I *did* reply to that point last night, and you 
didn't reply to that. But maybe my answer was too cute. To elaborate:

On the one hand: for JS programmers who already know the language, I don't 
think it's that confusing to learn that `with` statements were banned and now 
there's a new `with` expression. For newbies, they arguably don't have to learn 
the old form, so there's nothing to confuse.

On the other hand: there is the fact that ES3 and non-strict ES5 code will 
exist indefinitely into the future, so when reading code, people would probably 
end up having to disambiguate what they're looking at based on the language 
version. That's a valid concern.

I also agree that `with` fits very well as future syntax for mixins or traits, 
and that's a direction we ought to work towards (post-ES6). And it might also 
be confusing to use `with` as both a part of the class syntax *and* an 
expression operator.

As I say, I'm not sure how much I agree, but they're valid concerns.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 5:37 AM, Axel Rauschmayer wrote:

 [cc-ing es-discuss again]
 
 On Nov 17, 2011, at 14:20 , Russell Leggett wrote:
 
 If <| changed to allow non-literal RHS values, I could see it getting more 
 use
 
 obj <| comparable <| enumerable <| {...}
 
 but right now, that has a big hurdle and I've yet to see anybody but me 
 propose a solution.

Allen's semantics for <| depends on the RHS being a literal, because it infers 
the [[Class]] and such from the literal, and because it takes any private names 
from the object literal form.

 True, that’s the catch. Then it works for composing an inheritance hierarchy 
 (as in mixins as abstract subclasses).
 
 Another idea for `extends` (if there is more than one object that is being 
 extended):
 
  extends(comparable, enumerable, foo, bar) { ... }

I'm not sure what the semantics of this would be. Are you inventing 
multiple-prototype inheritance? That's not going to happen.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 6:20 AM, Axel Rauschmayer wrote:

 I'm not sure what the semantics of this would be. Are you inventing 
 multiple-prototype inheritance? That's not going to happen.
 
 
 Single inheritance, a prototype chain composed from the given objects, in the 
 given order. An infix operator is probably better for this, though.

That would have to mutate the prototypes of existing objects. We're not going 
to add new ways to mutate prototypes.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 6:26 AM, Mike Samuel wrote:

 2011/11/17 David Herman dher...@mozilla.com:
obj with { foo: 12 } with { bar: 13 } with { baz: 17 }
 
 Does the below fit your syntax and isn't it lexically ambiguous with
 the old with?
 
 obj
 with ({ foo: 12 })
 {}

This was discussed above; there's no ambiguity if the new language doesn't have 
with statements. Quildreen and Dmitry have both objected to the confusion to 
the reader caused by the ambiguity between language versions, though.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Alternative syntax for <|

2011-11-17 Thread David Herman
On Nov 17, 2011, at 8:00 AM, Jason Orendorff wrote:

 On Wed, Nov 16, 2011 at 1:12 PM, Erik Arvidsson
 erik.arvids...@gmail.com wrote:
 One thing that all of these discussions are missing is the hoisting
 property of function and any possible future classes. If we use let
 Point = ... we lose all hoisting and the order of your declarations
 starts to matter and we will end up in the C mess where forward
 references do not work.
 
 Can you give sample code where this is really a problem?

People take advantage of the fact that they can define their functions wherever 
they want in JS and they'll already be initialized. It's perfectly reasonable 
to have a bunch of classes and want to group them together thematically, and 
not have to sort them in order of initialization.

 I think it's a problem in C/C++ because of early binding and because
 the C/C++ parser has to recognize typenames.
 
 In ES, the scope of let Point = ... is the enclosing block, right?
 Forward references should work fine.

This isn't about scope, it's about at what point they're initialized. If you 
write:

let x = new C();
let C = class /* whatever */;

you won't get a scope error but a runtime initialization error. Whereas if you 
write:

let x = new C();
class C { ... }

it'll work fine.

I'm with Arv 150% on this.
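
The habit we'd be protecting is the one people already rely on with function 
declarations (plain ES5, runs as-is; names are made up):

var p = makePoint(1, 2);   // fine: used before its textual definition
function makePoint(x, y) { return { x: x, y: y }; }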

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 6:41 AM, Dmitry Soshnikov wrote:

 And uses `.extend' instance method (inherited from Object) for imperative 
 delegation-based mixing.

Sure, so that's just a method then, not an `extends` keyword.

 OK, though, I'd like again to notice Scala:
 
 object foo extends bar {
   ...
 }
 
 class Foo extends bar {
   ...
 }

Right; in Scala it's still only part of a declaration syntax. Just in terms of 
being readable English, I don't think `extends` works as an operator. I think 
it only works as part of a declaration (I declare that X extends Y), or it 
would work as a boolean operator (does X extend Y?), but it doesn't work as 
an operator that creates a new object. You could do something like

extend X with Y

but that would require a new reserved work `extend`. And it would be awkward to 
chain:

extend (extend X with Y) with Z

 the same I proposed to ES (to avoid new keyword `object' I used simple `let' 
 or `var'):
 
 let foo extends bar {
   x: 100
 }

There are a number of things I don't like about this, but primarily the fact 
that it changes the uniform syntax of `let` or `var`. Notice how Scala always 
uses a special prefix keyword that is built to work with `extends`.

 Thus, answering your mail (sorry for not answered before), I can't say, 
 whether `extends' is infix or prefix. I don't completely understood on can 
 be conditional keywords, but what prevents `extends' to be the same?

Oops, late night brain freeze. I meant to say "contextual keywords". When 
there's a grammar context that doesn't allow arbitrary identifiers, we can 
allow specific identifiers with custom meanings without them actually being 
reserved words. That's what we're doing for "of" in the for-of loop, and that's 
what we're doing for the "is" and "isnt" operators. They aren't reserved words.
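
A quick illustration: outside those grammar positions they're ordinary 
identifiers (legal today),

var of = 1, is = 2, isnt = 3;

while `for (x of expr)` and `a is b` (per the proposals) recognize them purely 
by position.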

But it doesn't work for prefix operators unless you use a reserved word. We 
have `extends` as a reserved word, so that works. But I don't like it for 
reasons of English, as I explained above (and in an earlier email).

Dave


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 12:56 PM, Brendan Eich wrote:

 This would require migration through two steps. One to ES5 strict to get rid 
 of the with above (which relies on ASI). The second to ES.next or whatever 
 retasks 'with'.

I don't understand this-- that's already the case, since there's no 
with-statement in ES6.

 Also, using 'with' around object literals makes me want functional record 
 update. IIRC we've talked about that before.

That's one of the things I like about `with` for this: prototype extension is 
already a great mechanism for functional update on objects.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Alternative syntax for <|

2011-11-17 Thread David Herman
On Nov 17, 2011, at 10:17 AM, Jason Orendorff wrote:

 I'm with Allen. If ES classes can contain any initialization code, I
 think it should run in program order, interleaved with top-level
 statements. Anything else is just confusing.

This is a great point, which I'd overlooked (not sure if Allen already said 
that and I missed it). But I'm not sure whether it argues for no declarative 
classes, for no syntax for statics, or for a restricted syntax for statics 
(e.g., only static methods).

 Note that classdefs in Ruby and Python aren't hoisted, and nobody
 complains. In those languages classdefs very often contain procedural
 code, for many purposes.

Good comparison, thanks. More grist...

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 1:30 PM, Brendan Eich wrote:

 If I have code of the kind Mike Samuel showed:
 
 obj
 with ({ foo: 12 })
 {}
 
 and I migrate directly into ES-whatever with 'with' as you propose (instead 
 of <|), then I do not get an early error.

Understood.

 Also, using 'with' around object literals makes me want functional record 
 update. IIRC we've talked about that before.
 
 That's one of the things I like about `with` for this: prototype extension 
 is already a great mechanism for functional update on objects.
 
 Prototype extension or delegation is not the same as FRU at all -- the 
 delegating object can shadow proto-properties,

That's precisely what makes it analogous to FRU. You functionally update (i.e., 
update without mutating) by shadowing.

 the chaining is observable many ways (methods on the prototype, not only 
 reflection APIs), there are two objects not one.

Yes it's observable, but it's a very natural fit. It's how I do FRU today, 
using Object.create.
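
Concretely (plain ES5, made-up record):

var point = { x: 1, y: 2, z: 3 };
var moved = Object.create(point);
moved.x = 10;     // "update" by shadowing; `point` itself is untouched
moved.x;          // 10
moved.y;          // 2 -- delegated to point
point.x;          // 1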

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: with

2011-11-17 Thread David Herman
On Nov 17, 2011, at 2:08 PM, David Herman wrote:

 On Nov 17, 2011, at 1:30 PM, Brendan Eich wrote:
 
 If I have code of the kind Mike Samuel showed:
 
 obj
 with ({ foo: 12 })
 {}
 
 and I migrate directly into ES-whatever with 'with' as you propose (instead 
 of <|), then I do not get an early error.
 
 Understood.

So I'm ready to give up, because of this issue and because of the fact of 
pre-ES6 and post-ES6 code co-existing indefinitely into the future (and the 
corresponding confusion that could ensue for reading code).

Shame, because I think it's gorgeous.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Globlization and Object.system.load

2011-11-17 Thread David Herman
We intend to have a synchronous API for accessing the built-in modules (those 
beginning with @ in their URL), as well as a synchronous way to access 
modules that have already been loaded. This went by briefly in July:

https://mail.mozilla.org/pipermail/es-discuss/2011-July/015929.html

I'm behind in work on fleshing out the API. I'll get back on this ASAP and 
update the list.

Dave

On Nov 17, 2011, at 3:23 PM, Brendan Eich wrote:

 I didn't make up load from whole cloth, there's a proposal at
 
 http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders
 
 Module people should weigh in.
 
 /be
 
 
 On Nov 17, 2011, at 3:08 PM, Roozbeh Pournader wrote:
 
 I was talking with Nebojša about the Object.system.load interface for
 loading globalization, thinking from the user side.
 
 Brendan's email suggested something like this:
 
 Object.system.load = function(name, callback) {
 if (name === '@g11n') {
   callback(v8Locale);
 }
 };
 
 That would make something like this the minimum code needed to use the 
 module:
 
 var g11n;
 Object.system.load(@g11n, function (g11n_module) {
  g11n = g11n_module;
 });
 
 What if we define load to be something like the following?
 
 Object.system.load = function(name, callback) {
 if (name === '@g11n') {
   if (callback) {
 callback(v8Locale);
   } else {
 return v8Locale;
   }
 }
 };
 
 That way, a user who can afford the delay, or know that this is
 immediate in Chrome, can simply do:
 
 var g11n = Object.system.load(@g11n);
 
 While the users who want the callback, can call it using the proper method.
 
 Roozbeh
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Globlization and Object.system.load

2011-11-17 Thread David Herman
On Nov 17, 2011, at 3:48 PM, Roozbeh Pournader wrote:

 On Thu, Nov 17, 2011 at 3:08 PM, Roozbeh Pournader rooz...@google.com wrote:
 That would make something like this the minimum code needed to use the 
 module:
 
 var g11n;
 Object.system.load(@g11n, function (g11n_module) {
   g11n = g11n_module;
 });
 
 I guess I was wrong about the minimum code. The minimum code is
 something like this:
 
 Object.system.load(@g11n, function (g11n) {
  // put all the code that uses g11n in here
 });
 
 I actually like this. I think I should just take back my comment...

I agreed with you before. :) When you know for sure that it can't block, you 
should have a synchronous API. Something like:

var g11n = Object.system.loaded["@g11n"];

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Nov 17 meeting notes

2011-11-17 Thread David Herman
On Nov 17, 2011, at 5:20 PM, Rick Waldron wrote:

 On Thu, Nov 17, 2011 at 7:40 PM, Waldemar Horwat walde...@google.com wrote:
 Array.from(a) is superfluous because it's expressed even simpler as
 [... a].  DaveH withdrew it.
 
 Perhaps Array.from() was either misunderstood or miscommunicated. I had 
 prepared a complete step-by-step production of the function's semantics and 
 documented them here:

It turns out that [...arrayLikeThingy] does exactly the same thing; it 
constructs a new Array from the contents of any array-like object.

 https://gist.github.com/1074126
 
 These steps support back compat to older JS (and DOM) implementations for 
 converting _any_ array looking object (arguments, DOM NodeLists, DOMTokenList 
 (classList), typed arrays... etc.) into a new instance of a real array. 
 
 This is a real problem, in real JavaScript, in the real world. Considering 
 the positive response from actual developers in the JS community, I'd like to 
 ask that it be reconsidered.

The reason why we decided to table the statics was that we had some serious 
questions about inheritance of statics and how they should behave, which is 
part of the ongoing discussions about classes. Given that spread (the ... 
syntax) gives you exactly the behavior you want, and it's actually very clear 
and even more concise than Array.from, it didn't seem worth taking more time 
discussing it now.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Globalization API discussion

2011-11-19 Thread David Herman
On Nov 19, 2011, at 5:50 PM, Brendan Eich wrote:

 On Nov 19, 2011, at 2:20 PM, Rick Waldron wrote:
 
 Q. We don't use option parameter like that in JS (see previous point for 
 actual example)
 
 Using an object-as-option parameter is a very common API design pattern in 
 real-world JavaScript today - why anyone would say otherwise is confounding. 
 
 Right. For example, ES5's property descriptor and property descriptor map 
 parameters.

It was me. I didn't say JS doesn't use options objects. I said the G11n library 
was using them wrong. They were doing:

if (!ops) {
    ops = { foo: defFoo, bar: defBar, baz: defBaz };
}

instead of e.g.:

if (!ops)
    ops = {};
if (typeof ops.foo === "undefined")
    ops.foo = defFoo;
if (typeof ops.bar === "undefined")
    ops.bar = defBar;
if (typeof ops.baz === "undefined")
    ops.baz = defBaz;

IOW, it shouldn't be all or nothing, but rather each property of the options 
object is separately optional.
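
Or equivalently, data-driven (just a sketch, with the same defFoo/defBar/defBaz 
as above):

var defaults = { foo: defFoo, bar: defBar, baz: defBaz };
ops = ops || {};
Object.keys(defaults).forEach(function (k) {
    if (ops[k] === undefined) ops[k] = defaults[k];
});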

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Globalization API discussion

2011-11-20 Thread David Herman
On Nov 20, 2011, at 2:24 PM, Brendan Eich wrote:

 On Nov 20, 2011, at 11:16 AM, Allen Wirfs-Brock wrote:
 
 Actually, I think you would want to say:
 
   function frob(arg1, arg2, {foo = defFoo, bar = defBar, baz = defBaz}={}) {
 
 Thanks.
 
 
 It may be that for destructuring, in general,  we want to treat a 
 null/undefined RHS as { }.  Eg:
 
 let {a=1,b=2,c=3} = undefined;
 //should this throw or should this be the same as:
 let {a=1,b=2,c=3} = { };
 
 I would not add more implicit magic to JS. E4X had junk like this in it, 
 which only ever concealed bugs.

I'm of two minds about this. In the abstract, I agree with Brendan; fail-soft 
conceals bugs. But in reality, our destructuring logic is incredible fail-soft. 
Hardly anything in destructuring is treated as an error. And the syntax really 
*wants* to match the common pattern. So I'm torn.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

