Re: On dropping @names

2012-12-29 Thread Andreas Rossberg
On 28 December 2012 20:53, David Herman  wrote:
> On Dec 28, 2012, at 11:47 AM, Andreas Rossberg  wrote:
>> That seems clean, useful, consistent, and fairly easy to understand. 
>> Introducing extra rules for 'let'? Not so much.
>
> But TDZ does introduce extra rules! Especially with disallowing assignment 
> temporally before initialization.

I have to disagree, see my other reply.

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: On dropping @names

2012-12-28 Thread Brendan Eich

David Herman wrote:

>> For the first group (function, module), there is no problem. For the second
>> (let, const, class, private -- although TBH, I forgot the reason why 'class'
>> is in this group), we have temporal dead zone, where accessing a variable
>> before its initialization is an error.


> The class's `extends` clause has to be evaluated and can have arbitrary user
> code, side effects, etc.


Oops, forgot that! Good point.


> Similar for possible future clauses like computed property value expressions.

And static fields with initializers.


>> That seems clean, useful, consistent, and fairly easy to understand.
>> Introducing extra rules for 'let'? Not so much.


> But TDZ does introduce extra rules! Especially with disallowing assignment
> temporally before initialization.


Wait, Andreas was stipulating TDZ with its extra rules but noting they 
apply to non-function, non-module binding forms. So apples to apples, he 
is noting that if let lacks TDZ, then we have something extra.


You could counter-argue that this something extra is already in the pot 
due to var.


I'm still OK with TDZ, but apprehensive of performance-fault myths and
truths based on early implementations of it that are not sufficiently
optimized (if only for gamed benchmarks).


/be


Re: On dropping @names

2012-12-28 Thread Brendan Eich

Andreas Rossberg wrote:
> For the second (let, const, class, private -- although TBH, I forgot
> the reason why 'class' is in this group),


To future-proof against class static syntax that can be effectful.

/be


Re: On dropping @names

2012-12-28 Thread Brendan Eich

Andreas Rossberg wrote:
> 1. Those where initialization can be performed at the start of the
> scope (which is what I meant by "hoisted" above),


The h-word will die hard. I think most people use it to mean this, i.e.,
function hoisting. In the "var" context it means binding hoisting but not
initialization hoisting, and that adds confusion.


Personal position: I'm not going to drop the word "hoist" and variants, 
but I'll try to specify binding vs. init (which implies binding), and to 
what scope, in the future.


/be


Re: On dropping @names

2012-12-28 Thread David Herman
On Dec 28, 2012, at 11:47 AM, Andreas Rossberg  wrote:

> We can identify two classes of lexical declarations:
> 
> 1. Those where initialization can be performed at the start of the scope 
> (which is what I meant by "hoisted" above), and the bound variable can safely 
> be accessed throughout the entire scope.

Thanks for the clarification.

> 2. Those where the initialization cannot happen before the respective 
> declaration has been reached (because it may depend on effectful 
> computations).
> 
> For the first group (function, module), there is no problem. For the second 
> (let, const, class, private -- although TBH, I forgot the reason why 'class' 
> is in this group), we have temporal dead zone, where accessing a variable 
> before its initialization is an error.

The class's `extends` clause has to be evaluated and can have arbitrary user 
code, side effects, etc. Similar for possible future clauses like computed 
property value expressions.
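The effectful `extends` clause can be sketched like this (a minimal illustration; the helper name `makeBase` is made up):

```javascript
// Sketch: the expression in an `extends` clause is arbitrary user code that
// runs when the class declaration is evaluated, so the class binding cannot
// be initialized before that point. `makeBase` is an illustrative helper.
const log = [];
function makeBase() {
  log.push("extends clause evaluated"); // observable side effect
  return class {};
}
class C extends makeBase() {}
console.log(log.length); // 1 -- the side effect ran at the declaration site
```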

> That seems clean, useful, consistent, and fairly easy to understand. 
> Introducing extra rules for 'let'? Not so much.

But TDZ does introduce extra rules! Especially with disallowing assignment 
temporally before initialization.

Dave



Re: On dropping @names

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 19:54, David Herman  wrote:

> On Dec 28, 2012, at 2:29 AM, Andreas Rossberg  wrote:
>
> >> Can these "plenty" be enumerated? Apart from const, which ones have
> TDZs?
> >
> > All declarations whose initialization cannot be hoisted. My
> understanding is that that would be 'const', 'class' and 'private',
> although we have just dropped the latter from ES6. There might potentially
> be additional ones in future versions.
>
> Wait, can you specify which meaning of "hoist" you mean here? All of
> const, class, and private would still be scoped to their entire containing
> block, right? They just wouldn't be pre-initialized before the block starts
> executing.
>

Yes, sorry for using the confusing term again. Maybe we should just ban it.

I think we all agree that all lexical declarations should have the same
lexical visibility -- which is the entire block. But there will be
differences in lifetime, i.e. when initialization takes place.

We can identify two classes of lexical declarations:

1. Those where initialization can be performed at the start of the scope
(which is what I meant by "hoisted" above), and the bound variable can
safely be accessed throughout the entire scope.

2. Those where the initialization cannot happen before the respective
declaration has been reached (because it may depend on effectful
computations).

For the first group (function, module), there is no problem. For the second
(let, const, class, private -- although TBH, I forgot the reason why
'class' is in this group), we have temporal dead zone, where accessing a
variable before its initialization is an error.
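The temporal dead zone can be sketched as follows (the function name `tdzProbe` is illustrative):

```javascript
// Sketch: a `let` binding covers its whole block, but reading it before the
// declaration has executed throws a ReferenceError (the temporal dead zone).
function tdzProbe() {
  try {
    return x; // x is in its temporal dead zone here
  } catch (e) {
    return e.name; // "ReferenceError"
  }
  let x = 1; // never reached, but its binding spans the whole function body
}
console.log(tdzProbe()); // "ReferenceError"
```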

That seems clean, useful, consistent, and fairly easy to understand.
Introducing extra rules for 'let'? Not so much.

/Andreas


Re: On dropping @names

2012-12-28 Thread David Herman
On Dec 28, 2012, at 2:29 AM, Andreas Rossberg  wrote:

>> Can these "plenty" be enumerated? Apart from const, which ones have TDZs?
> 
> All declarations whose initialization cannot be hoisted. My understanding is 
> that that would be 'const', 'class' and 'private', although we have just 
> dropped the latter from ES6. There might potentially be additional ones in 
> future versions.

Wait, can you specify which meaning of "hoist" you mean here? All of const, 
class, and private would still be scoped to their entire containing block, 
right? They just wouldn't be pre-initialized before the block starts executing.
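The distinction (scoped to the whole containing block, but not pre-initialized) shows up with shadowing; a sketch:

```javascript
// Sketch: the inner `let x` is visible for the entire inner block, so the
// early read hits the uninitialized inner binding and throws, rather than
// seeing the outer "outer" value.
let x = "outer";
let seen;
{
  try {
    seen = x; // refers to the inner x declared below, still uninitialized
  } catch (e) {
    seen = e.name; // "ReferenceError"
  }
  let x = "inner";
}
console.log(seen); // "ReferenceError"
```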

Dave



Re: On dropping @names

2012-12-28 Thread David Herman
On Dec 28, 2012, at 2:11 AM, Andreas Rossberg  wrote:

> 
>> That doesn't prove that it was a *bug*. That's a question about the 
>> programmer's intention. In fact, I don't think you can. For example, I 
>> mentioned let-binding at the bottom:
>> 
>> {
>> console.log(x);
>> let x;
>> }
>> 
>> If the programmer intended that to print undefined, then TDZ would break the 
>> program. Before you accuse me of circularity, it's *TDZ* that doesn't have 
>> JavaScript historical precedent on its side. *You're* the one claiming that 
>> programs that ran without error would always be buggy.
> 
> Hold on. First of all, that example is in neither of the two forms whose 
> equivalence you were asking about. Second, all I was claiming in reply is 
> that one of those two forms is necessarily buggy in all cases where the 
> equivalence does not hold. So the above is no counter example to that.

Oh, okay, then I just misread your claim about what would always be buggy.
Sorry about that.

> Instead, it falls into the "weird use case" category that I acknowledged will 
> always exist, unless you make 'let' _exactly_ like 'var'.

OK, this is progress. So we agree that there will be styles that will break, 
and your position is that those styles don't need to be supported. I want to be 
clear that I think that's a totally reasonable position, I'm just concerned 
about risk.

I have an additional concern about UBI vs RBA but I'll start a separate thread 
on that.

> Your line of argument is that 'let' is not like 'var', and therefore people
> will probably reject it. While I understand your concern, I do not see any
> evidence that TDZ specifically will tip that off. So far, I've only heard the
> opposite reaction.

It's that if they do something benign and are confused by the error they get, 
they'll abandon it and say "I couldn't figure out how to make let work, so I 
went back to var." Or that people will have to learn "you should use let in all 
your new code, but now you have to learn these additional rules." It's easy to 
say in the abstract "yeah I'd prefer a version of let that catches my bugs," 
but what will happen when the error reporting produces false positives?

> Moreover, if you drive that argument to its logical conclusion then 'let' 
> should just be 'var'. Don't you think that you are drawing a somewhat 
> arbitrary line to define what you consider 'var'-like enough?

Well, I guess I'm still trying to figure out where we should draw that line. I 
would like to believe we can find a place that catches more bugs, but I'm not 
convinced we're there yet. Bear with me? More in a new thread in a few 
minutes...

Dave



Re: On dropping @names

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 11:22, Brendan Eich  wrote:

> Andreas Rossberg wrote:
>
>> As for TDZ precedent, ES6 will have plenty of "precedent" of other
>> lexical declaration forms that uniformly have TDZ and would not allow an
>> example like the above.
>>
>
> Can these "plenty" be enumerated? Apart from const, which ones have TDZs?


All declarations whose initialization cannot be hoisted. My understanding
is that that would be 'const', 'class' and 'private', although we have just
dropped the latter from ES6. There might potentially be additional ones in
future versions.

But actually, what I perhaps should have said is that there is no other
declaration that allows uninitialized access. That holds for all lexical
declarations.
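For contrast, the first group never allows uninitialized access because those bindings are initialized before the surrounding code runs (an illustrative sketch):

```javascript
// Sketch: a function declaration's binding is initialized at function entry,
// so a forward call succeeds (no dead zone).
function demo() {
  return answer(); // forward reference works
  function answer() { return 42; }
}
console.log(demo()); // 42
```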

/Andreas


Re: On dropping @names

2012-12-28 Thread Brendan Eich

Andreas Rossberg wrote:
> As for TDZ precedent, ES6 will have plenty of "precedent" of other
> lexical declaration forms that uniformly have TDZ and would not allow
> an example like the above.


Can these "plenty" be enumerated? Apart from const, which ones have TDZs?

/be


Re: On dropping @names

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 07:10, David Herman  wrote:

> On Dec 27, 2012, at 2:13 PM, Andreas Rossberg  wrote:
>
> >>> It's true that with TDZ, there is a difference between the two forms
> above, but that is irrelevant, because that difference can only be observed
> for erroneous programs (i.e. where the first version throws, because 'x' is
> used by 'stmt').
> >>
> >> Can you prove this? (Informally is fine, of course!) I mean, can you
> prove that it can only affect buggy programs?
> >
> > Well, I think it's fairly obvious. Clearly, once the
> assignment/initialization "x = e" has been (successfully) executed, there
> is no observable difference in the remainder of the program. Before that
> (including while evaluating e itself), accessing x always leads to a TDZ
> exception in the first form. So the only way it can not throw is if stmt
> and e do not access x, in which case both forms are equivalent.
>
> That doesn't prove that it was a *bug*. That's a question about the
> programmer's intention. In fact, I don't think you can. For example, I
> mentioned let-binding at the bottom:
>
> {
> console.log(x);
> let x;
> }
>

> If the programmer intended that to print undefined, then TDZ would break
> the program. Before you accuse me of circularity, it's *TDZ* that doesn't
> have JavaScript historical precedent on its side. *You're* the one claiming
> that programs that ran without error would always be buggy.
>

Hold on. First of all, that example is in neither of the two forms whose
equivalence you were asking about. Second, all I was claiming in reply is
that one of those two forms is necessarily buggy in all cases where the
equivalence does not hold. So the above is no counter example to that.
Instead, it falls into the "weird use case" category that I acknowledged
will always exist, unless you make 'let' _exactly_ like 'var'.

As for TDZ precedent, ES6 will have plenty of "precedent" of other lexical
declaration forms that uniformly have TDZ and would not allow an example
like the above. I think it will be rather difficult to make a convincing
argument that having 'let' behave completely differently from all other
lexical declarations is less harmful and confusing than behaving
differently from 'var' -- which is not a lexical declaration at all, so
does not raise the same expectations.

> Here's what it comes down to. Above all, I want let to succeed. The
> absolute, #1, by-far-most-important feature of let is that it's block
> scoped.


I think introducing 'let' would actually be rather questionable if it was
(1) almost as error-prone as 'var', and at the same time, (2) had a
semantics that is inconsistent with _both_ 'var' and all other lexical
declarations (which is what you are proposing). (Not to mention
future-proofness.)

Your line of argument is that 'let' is not like 'var', and therefore people
will probably reject it. While I understand your concern, I do not see any
evidence that TDZ specifically will tip that off. So far, I've only heard
the opposite reaction.

Moreover, if you drive that argument to its logical conclusion then 'let'
should just be 'var'. Don't you think that you are drawing a somewhat
arbitrary line to define what you consider 'var'-like enough?

/Andreas


whither TDZ for 'let' (was: On dropping @names)

2012-12-27 Thread Brendan Eich
This thread needs a new subject to be a spin-off on the very important 
topic of to TDZ or not to TDZ 'let'. IMHO you neatly list the risks below.


I had swallowed TDZ in the face of these risks. I'm still willing to do 
so, for Harmony and for greater error catching in practice. I strongly 
suspect declare-at-the-bottom and other odd styles possible with 'var' 
won't be a problem for 'let' adoption. However, we need implementors to 
optimize 'let' *now* and dispel the first item below.


/be

David Herman wrote:

> Here's what it comes down to. Above all, I want let to succeed. The absolute,
> #1, by-far-most-important feature of let is that it's block scoped. TDZ, while
> clearly adding the bonus of helping catch bugs, adds several risks:
>
> - possible performance issues
>
> - possibly rejecting non-buggy programs based on existing JavaScript
> programming styles
>
> Are those risks worth taking? Can we prove that they won't sink let? "It's
> fairly obvious" doesn't give me a lot of confidence, I'm afraid.



Re: On dropping @names

2012-12-27 Thread David Herman
On Dec 27, 2012, at 2:13 PM, Andreas Rossberg  wrote:

>>> It's true that with TDZ, there is a difference between the two forms above, 
>>> but that is irrelevant, because that difference can only be observed for 
>>> erroneous programs (i.e. where the first version throws, because 'x' is 
>>> used by 'stmt').
>> 
>> Can you prove this? (Informally is fine, of course!) I mean, can you prove 
>> that it can only affect buggy programs?
> 
> Well, I think it's fairly obvious. Clearly, once the 
> assignment/initialization "x = e" has been (successfully) executed, there is 
> no observable difference in the remainder of the program. Before that 
> (including while evaluating e itself), accessing x always leads to a TDZ 
> exception in the first form. So the only way it can not throw is if stmt and 
> e do not access x, in which case both forms are equivalent.

That doesn't prove that it was a *bug*. That's a question about the 
programmer's intention. In fact, I don't think you can. For example, I 
mentioned let-binding at the bottom:

{
console.log(x);
let x;
}

If the programmer intended that to print undefined, then TDZ would break the 
program. Before you accuse me of circularity, it's *TDZ* that doesn't have 
JavaScript historical precedent on its side. *You're* the one claiming that 
programs that ran without error would always be buggy.
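The "declaration at the bottom" style works with `var` because the binding is initialized to undefined at function entry; a sketch (the name `bottomVar` is illustrative):

```javascript
// Sketch: with `var`, the binding exists and reads as undefined before the
// declaration line executes, so this style runs without error.
function bottomVar() {
  const early = x; // undefined, not a ReferenceError
  var x = 1;
  return early;
}
console.log(bottomVar() === undefined); // true
```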

Here's what it comes down to. Above all, I want let to succeed. The absolute, 
#1, by-far-most-important feature of let is that it's block scoped. TDZ, while 
clearly adding the bonus of helping catch bugs, adds several risks:

- possible performance issues

- possibly rejecting non-buggy programs based on existing JavaScript 
programming styles

Are those risks worth taking? Can we prove that they won't sink let? "It's 
fairly obvious" doesn't give me a lot of confidence, I'm afraid.

Dave



Re: On dropping @names

2012-12-27 Thread Brendan Eich

David Herman wrote:

> ES1 in 1995


"JS" if you please! No "ES" till 1996 November at earliest, really till 
June 1997.


/be


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 23:38, Andreas Rossberg  wrote:

> I don't feel qualified to talk for Scheme, but all OCaml I've ever
> seen (SML uses more verbose 'let' syntax anyway) formatted the above as
>
> let sq = x * x in
>> print ("sq: " ^ toString sq ^ "\n");
>>
>> let y = sq / 2 in
>> print ("y: " ^ toString y ^ "\n")
>
>
> Similarly, in Haskell you would write
>
> do
>
>let sq = x * x
>>putStr ("sq: " ++ show sq ++ "\n")
>>
>>let y = sq / 2
>>putStr ("y: " ++ show y ++ "\n")
>
>
Don't know where the empty lines in the middle of both examples are coming
from, weird Gmail quote-editing glitch that didn't show up in the edit box.
Assume them absent. :)

/Andreas


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 21:23, David Herman  wrote:

> On Dec 27, 2012, at 1:51 AM, Andreas Rossberg  wrote:
>
> > I think hoisting can mean different things, which kind of makes this
> debate a bit confused.
>
> Yep. Sometimes people mean "the scope extends to a region before the
> syntactic position where the declaration appears," sometimes they mean "the
> scope extends to the function body," and sometimes they mean "function
> declaration bindings are dynamically initialized before the containing
> function body or script begins executing."
>

Maybe we shouldn't speak of hoisting for anything but the var case. As
I mentioned elsewhere, I rather like to think of it as recursive (i.e.
letrec-style) block scoping. :)


> > There is var-style hoisting. Contrary to what Rick said, I don't think
> anybody can seriously defend that as an "excellent" feature. First, because
> it hoists over binders, but also second, because it allows access to an
> uninitialized variable without causing an error (and this being bad is
> where Dave seems to disagree).
>
> Are you implying that my arguments are not serious? :-(
>

You are not defending the first part, are you? ;)


> > Then there is the other kind of "hoisting" that merely defines what the
> lexical scope of a declaration is. The reason we need this
> backwards-extended scope is because we do not have an explicit let-rec or
> something similar that would allow expressing mutual recursion otherwise --
> as you mention. But it does by no means imply that the uninitialized
> binding has to be (or should be) accessible.
>
> No, it doesn't. I'm not interested in arguments about the "one true way"
> of programming languages. I think both designs are perfectly defensible.
> All things being equal, I'd prefer to have my bugs caught for me. But in
> some design contexts, you might not want to incur the dynamic cost of the
> read(/write) barriers -- for example, a Scheme implementation might not be
> willing/able to perform the same kinds of optimizations that JS engines do.
> In our context, I think the feedback we're getting is that the cost is
> either negligible or optimizable, so hopefully that isn't an issue.
>

Right, from our implementation experience in V8 I'm confident that it isn't
an issue in almost any practically relevant case -- although we haven't fully
optimised 'let', and consequently, it currently _is_ slower, so admittedly
there is no proof yet.

But the other issue, which I worry you dismiss too casually, is that of
> precedent in the language you're evolving. We aren't designing ES1 in 1995,
> we're designing ES6 in 2012 (soon to be 2013, yikes!). People use the
> features they have available to them. Even if the vast majority of
> read-before-initialization cases are bugs, if there are some cases where
> people actually have functioning programs or idioms that will cease to
> work, they'll turn on `let`.
>
> So here's one example: variable declarations at the bottom. I certainly
> don't use it, but do others use it? I don't know.
>

Well, clearly, 'let' differs from 'var' by design, so no matter what,
you'll probably always be able to dig up some weird use cases that it does
not support. I don't know what to say to that except that if you want 'var'
in all its beauty then you know where to find it. :)

>> - It binds variables without any rightward drift, unlike functional
> programming languages.
> >
> > I totally don't get that point. Why would a rightward drift be inherent
> to declarations in "functional programming languages" (which ones, anyway?).
>
> Scheme:
>
> (let ([sq (* x x)])
>   (printf "sq: ~a~n" sq)
>   (let ([y (/ sq 2)])
> (printf "y: ~a~n" y)))
>
> ML:
>
> let sq = x * x in
>   print ("sq: " ^ (toString sq) ^ "\n");
>   let y = sq / 2 in
> print ("y: " ^ (toString y) ^ "\n")
>

I don't feel qualified to talk for Scheme, but all OCaml I've ever
seen (SML uses more verbose 'let' syntax anyway) formatted the above as

let sq = x * x in
print ("sq: " ^ toString sq ^ "\n");
let y = sq / 2 in
print ("y: " ^ toString y ^ "\n")


Similarly, in Haskell you would write

do
  let sq = x * x
  putStr ("sq: " ++ show sq ++ "\n")
  let y = sq / 2
  putStr ("y: " ++ show y ++ "\n")

/Andreas


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 21:08, David Herman  wrote:

> On Dec 27, 2012, at 1:23 AM, Andreas Rossberg  wrote:
> >> var x;
> >> if (...) { x = ... }
> >> if (x === undefined) { ... }
> >>
> >> If you want to use let instead, the === if-condition will throw. You
> would instead have to write:
> >>
> >> let x = undefined;
> >> if (...) { x = ... }
> >> if (x === undefined) { ... }
> >
> > That is not actually true, because AFAICT, "let x" was always understood
> to be equivalent to "let x = undefined".
>
> Well that's TDZ-UBI. It *is* true for TDZ-RBA. Maybe I was the only person
> who thought that was a plausible semantics being considered, but my claim
> (P => Q) is true. Your argument is ~P. Anyway, one way or another hopefully
> everyone agrees that TDZ-RBA is a non-starter.
>

Even with TDZ-RBA you can have that meaning for "let x" (and that semantics
would be closest to 'var'). What TDZ-RBA gives you, then, is the
possibility to also assign to x _before_ the declaration.

But anyway, I think we agree that this is not a desirable semantics, so it
doesn't really matter.
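Under the TDZ-UBI semantics discussed here, a bare `let x;` initializes x to undefined when the declaration executes, so from then on it behaves like `let x = undefined;` (an illustrative sketch):

```javascript
// Sketch: once `let x;` executes, x is initialized to undefined; subsequent
// reads and assignments are ordinary.
function ubiDemo() {
  let x;                       // initialized to undefined here
  const wasUndefined = x === undefined;
  x = 5;                       // assignment after initialization is fine
  return [wasUndefined, x];
}
console.log(ubiDemo()); // [ true, 5 ]
```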

> It's true that with TDZ, there is a difference between the two forms
> above, but that is irrelevant, because that difference can only be observed
> for erroneous programs (i.e. where the first version throws, because 'x' is
> used by 'stmt').
>
> Can you prove this? (Informally is fine, of course!) I mean, can you prove
> that it can only affect buggy programs?
>

Well, I think it's fairly obvious. Clearly, once the
assignment/initialization "x = e" has been (successfully) executed, there
is no observable difference in the remainder of the program. Before that
(including while evaluating e itself), accessing x always leads to a TDZ
exception in the first form. So the only way it can not throw is if stmt
and e do not access x, in which case both forms are equivalent.

/Andreas


Re: On dropping @names

2012-12-27 Thread David Herman
On Dec 27, 2012, at 1:51 AM, Andreas Rossberg  wrote:

> I think hoisting can mean different things, which kind of makes this debate a 
> bit confused.

Yep. Sometimes people mean "the scope extends to a region before the syntactic 
position where the declaration appears," sometimes they mean "the scope extends 
to the function body," and sometimes they mean "function declaration bindings 
are dynamically initialized before the containing function body or script 
begins executing."

> There is var-style hoisting. Contrary to what Rick said, I don't think 
> anybody can seriously defend that as an "excellent" feature. First, because 
> it hoists over binders, but also second, because it allows access to an 
> uninitialized variable without causing an error (and this being bad is where 
> Dave seems to disagree).

Are you implying that my arguments are not serious? :-(

> Then there is the other kind of "hoisting" that merely defines what the 
> lexical scope of a declaration is. The reason we need this backwards-extended 
> scope is because we do not have an explicit let-rec or something similar that 
> would allow expressing mutual recursion otherwise -- as you mention. But it 
> does by no means imply that the uninitialized binding has to be (or should 
> be) accessible.

No, it doesn't. I'm not interested in arguments about the "one true way" of 
programming languages. I think both designs are perfectly defensible. All 
things being equal, I'd prefer to have my bugs caught for me. But in some 
design contexts, you might not want to incur the dynamic cost of the 
read(/write) barriers -- for example, a Scheme implementation might not be 
willing/able to perform the same kinds of optimizations that JS engines do. In 
our context, I think the feedback we're getting is that the cost is either 
negligible or optimizable, so hopefully that isn't an issue.

But the other issue, which I worry you dismiss too casually, is that of 
precedent in the language you're evolving. We aren't designing ES1 in 1995, 
we're designing ES6 in 2012 (soon to be 2013, yikes!). People use the features 
they have available to them. Even if the vast majority of 
read-before-initialization cases are bugs, if there are some cases where people 
actually have functioning programs or idioms that will cease to work, they'll 
turn on `let`.

So here's one example: variable declarations at the bottom. I certainly don't 
use it, but do others use it? I don't know.

>> - It automatically makes forward references work, so you can:
>> * order your definitions however it best "tells the story of your code," 
>> rather than being forced to topologically sort them by scope dependency
>> * use (mutual) recursion
> 
> Right, but that is perfectly well supported, and more safely so, with TDZ.

My point here was just about hoisting (perhaps a bit OT, but the question came 
up whether hoisting is bad) -- specifically, of having declarations bind 
variables in a scope that extends to a surrounding region that can cover 
expressions that occur syntactically earlier than the declaration itself. TDZ 
is orthogonal.

>> - It binds variables without any rightward drift, unlike functional 
>> programming languages.
> 
> I totally don't get that point. Why would a rightward drift be inherent to 
> declarations in "functional programming languages" (which ones, anyway?).

Scheme:

(let ([sq (* x x)])
  (printf "sq: ~a~n" sq)
  (let ([y (/ sq 2)])
(printf "y: ~a~n" y)))

ML:

let sq = x * x in
  print ("sq: " ^ (toString sq) ^ "\n");
  let y = sq / 2 in
print ("y: " ^ (toString y) ^ "\n")

ES6:

let sq = x * x;
console.log("sq: " + sq);
let y = sq / 2;
console.log("y: " + y);

Obviously functional programming languages can do similar things to what ES6 
does here; I'm not saying "functional programming sucks." You know me. :)

Dave



Re: On dropping @names

2012-12-27 Thread David Herman
On Dec 27, 2012, at 1:23 AM, Andreas Rossberg  wrote:

>> Let's start with TDZ-RBA. This semantics is *totally untenable* because it 
>> goes against existing practice. Today, you can create a variable that starts 
>> out undefined and use that on purpose:
> 
> I think nobody ever proposed going for this semantics, so we can put that 
> aside quickly. However:

OK, well, it wasn't clear to me.

>> var x;
>> if (...) { x = ... }
>> if (x === undefined) { ... }
>> 
>> If you want to use let instead, the === if-condition will throw. You would 
>> instead have to write:
>> 
>> let x = undefined;
>> if (...) { x = ... }
>> if (x === undefined) { ... }
> 
> That is not actually true, because AFAICT, "let x" was always understood to 
> be equivalent to "let x = undefined".

Well that's TDZ-UBI. It *is* true for TDZ-RBA. Maybe I was the only person who 
thought that was a plausible semantics being considered, but my claim (P => Q) 
is true. Your argument is ~P. Anyway, one way or another hopefully everyone 
agrees that TDZ-RBA is a non-starter.

>> This is an assumption that has always existed for `var` (mutatis mutandis 
>> for the function scope vs block scope). You can move your declarations 
>> around by hand and you can write code transformation tools that move 
>> declarations around.
> 
> As Dominic has pointed out already, this is kind of a circular argument. The 
> only reason you care about this for 'var' is because 'var' is doing this 
> implicitly already. So programmers want to make it explicit for the sake of 
> clarity. TDZ, on the other hand, does not have this implicit widening of
> lifetime, so no need to make anything explicit.

OK, I'll accept that Crock's manual-hoisting style only matters for `var`. I 
just want to be confident that there are no other existing benefits that people 
get from the equivalence (either in programming patterns or 
refactoring/transformation patterns) that will break.

> It's true that with TDZ, there is a difference between the two forms above, 
> but that is irrelevant, because that difference can only be observed for 
> erroneous programs (i.e. where the first version throws, because 'x' is used 
> by 'stmt').

Can you prove this? (Informally is fine, of course!) I mean, can you prove that 
it can only affect buggy programs?

Dave



Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 05:36, David Herman  wrote:

> On Dec 26, 2012, at 7:40 PM, Brendan Eich  wrote:
>
> >> Many also believe that hoisting is an excellent feature, not a
> weirdness.
> >
> > For functions, I can defend hoisting, although if I had had more time, I
> might have done a let ... in ... or BCPL'ish equivalent form that groups
> the recursive bindings. For vars hoisting is pretty much an implementation
> abstraction leak in JS1 :-P.
>
> I absolutely can defend hoisting. (It's hoisting to *function scope*
> that's the issue, not hoisting itself.)


I think hoisting can mean different things, which kind of makes this debate
a bit confused.

There is var-style hoisting. Contrary to what Rick said, I don't think
anybody can seriously defend that as an "excellent" feature. First, because
it hoists over binders; second, because it allows access to an
uninitialized variable without causing an error (and this being bad is
where Dave seems to disagree).

Then there is the other kind of "hoisting" that merely defines what the
lexical scope of a declaration is. The reason we need this
backwards-extended scope is because we do not have an explicit let-rec or
something similar that would allow expressing mutual recursion otherwise --
as you mention. But it does by no means imply that the uninitialized
binding has to be (or should be) accessible.

Here's the rationale:
>
> - JS is dynamically scoped, so having an implicit dummy value isn't a
> problem for the type system.
>

I think that's a red herring. An implicit dummy is bad regardless of types.
It rather speaks for types that they would prevent it. :)

- It automatically makes forward references work, so you can:
> * order your definitions however it best "tells the story of your code,"
> rather than being forced to topologically sort them by scope dependency
> * use (mutual) recursion
>

Right, but that is perfectly well supported, and more safely so, with TDZ.
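A quick sketch of that point (using the TDZ semantics ES6 eventually shipped; `even`/`odd` are illustrative names): the forward reference is legal because the dead binding is only *called*, not read, before initialisation.

```javascript
// Mutual recursion under TDZ: `odd` appears textually before its
// declaration, but it is only read when `even` is invoked — by which
// time both bindings have been initialised.
let even = (n) => (n === 0 ? true : odd(n - 1));
let odd  = (n) => (n === 0 ? false : even(n - 1));
console.log(even(4)); // true
```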

- It binds variables without any rightward drift, unlike functional
> programming languages.
>

I totally don't get that point. Why would a rightward drift be inherent to
declarations in "functional programming languages" (which ones, anyway?).

So yes, you are right, we need hoisting. Nobody seems to disagree with
that. We only seem to disagree about what kind of hoisting we mean by that.
You have argued for a more var-like hoisting, but I honestly still cannot
see why you would consider that a good thing.

/Andreas


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 01:50, David Herman  wrote:

> On Dec 11, 2012, at 2:45 AM, Andreas Rossberg  wrote:
> > The question, then, boils down to what the observation should be: a
> > runtime error (aka temporal dead zone) or 'undefined'. Given that
> > choice, the former is superior in almost every way, because the latter
> > prevents subtle initialisation errors from being caught early, and is
> > not an option for most binding forms anyway.
>
> You only listed good things (which I agree are good) about TDZ, but you
> don't list its drawbacks. I believe the drawbacks are insurmountable.
>

> Let's start with TDZ-RBA. This semantics is *totally untenable* because it
> goes against existing practice. Today, you can create a variable that
> starts out undefined and use that on purpose:
>

I think nobody ever proposed going for this semantics, so we can put that
aside quickly. However:


> var x;
> if (...) { x = ... }
> if (x === undefined) { ... }
>
> If you want to use let instead, the === if-condition will throw. You would
> instead have to write:
>
> let x = undefined;
> if (...) { x = ... }
> if (x === undefined) { ... }
>

That is not actually true, because AFAICT, "let x" was always understood to
be equivalent to "let x = undefined".


OK, so now let's consider TDZ-UBI. This now means that an initializer is
> different from an assignment, as you say:
>
> > They are initialisations, not assignments. The difference, which is
> > present in other popular languages as well, is somewhat important,
> > especially wrt immutable bindings.
>
> For `const`, I agree that some form of TDZ is necessary. But `let` is the
> important, common case. Immutable bindings (`const`) should not be driving
> the design of `let`. Consistency with `var` is far more important than
> consistency with `const`.
>

There is not just 'let' and 'const' in ES6, but more than a handful of
declaration forms. Even with everything else not mattering, I think it
would be rather confusing if 'let' had semantics completely different
from all the rest.

And for `let`, making initializers different from assignments breaks a
> basic assumption about hoisting. For example, it breaks the equivalence
> between
>
> { stmt ... let x = e; stmt' ... }
>
> and
>
> { let x; stmt ... x = e; stmt' ... }
>
> This is an assumption that has always existed for `var` (mutatis mutandis
> for the function scope vs block scope). You can move your declarations
> around by hand and you can write code transformation tools that move
> declarations around.
>

As Dominic has pointed out already, this is kind of a circular argument.
The only reason you care about this for 'var' is because 'var' is doing
this implicitly already. So programmers want to make it explicit for the
sake of clarity. TDZ, on the other hand, does not have this implicit
widening of life time, so no need to make anything explicit.

It's true that with TDZ, there is a difference between the two forms above,
but that is irrelevant, because that difference can only be observed for
erroneous programs (i.e. where the first version throws, because 'x' is
used by 'stmt').
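A concrete sketch of that claim (hindsight example using shipped TDZ semantics; `form1`/`form2` are illustrative names): the two forms only diverge when `stmt` actually touches `x` before the declaration, i.e. when the program is already buggy.

```javascript
// Form 1: { stmt; let x = e; } where stmt reads x — throws under TDZ.
function form1() {
  { void x; let x = 1; } // ReferenceError: x read in its dead zone
}
// Form 2: { let x; stmt; x = e; } — the manually hoisted variant runs,
// but silently reads undefined, masking the same bug.
function form2() {
  { let x; void x; x = 1; }
}
let threw = false;
try { form1(); } catch (e) { threw = e instanceof ReferenceError; }
console.log(threw); // true
form2();            // completes without error
```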

/Andreas


Re: On dropping @names

2012-12-26 Thread David Herman
On Dec 26, 2012, at 8:36 PM, David Herman  wrote:

> - JS is dynamically scoped, so having an implicit dummy value isn't a problem 
> for the type system.

Holy misnomers, batman. I meant dynamically typed. May Bob Harper have mercy on 
my soul. [1]

Dave

[1] He won't. He'll still be mad at me after my correction.



Re: On dropping @names

2012-12-26 Thread David Herman
On Dec 26, 2012, at 7:40 PM, Brendan Eich  wrote:

>> Many also believe that hoisting is an excellent feature, not a weirdness.
> 
> For functions, I can defend hoisting, although if I had had more time, I 
> might have done a let ... in ... or BCPL'ish equivalent form that groups the 
> recursive bindings. For vars hoisting is pretty much an implementation 
> abstraction leak in JS1 :-P.

I absolutely can defend hoisting. (It's hoisting to *function scope* that's the 
issue, not hoisting itself.) Here's the rationale:

- JS is dynamically scoped, so having an implicit dummy value isn't a problem 
for the type system.

- It automatically makes forward references work, so you can:
* order your definitions however it best "tells the story of your code," rather 
than being forced to topologically sort them by scope dependency
* use (mutual) recursion

- It binds variables without any rightward drift, unlike functional programming 
languages.

This is such a simple, practical, and elegant win that Scheme goes halfway 
towards the JS hoisting semantics by having nested definitions:

(lambda ()   ;; function() {
  (define (f) (g))   ;; let f = function() { return g() }
  (define (g) (f))   ;; let g = function() { return f() }
  (f))   ;; return f() }

and last I heard, Racket in fact has gone the rest of the way -- it's moved 
*towards* the hoisting semantics by allowing definitions (i.e., declarations) 
and expressions to intermingle:

(lambda ()   ;; function () {
  (define (f) (g))   ;; let f = function() { return g() }
  (printf "hello world~n")   ;; console.log("hello world");
  (define (g) (f))   ;; let g = function() { return f() }
  (f))   ;; return f() }

Yes, that's right, JS beat those hoity-toity Schemers to it by over a decade! 
And in fact, the initial binding of a variable in Racket is the #<undefined> 
value! Sound familiar? :)

To be fair, I haven't kept up with the changes in Racket in recent years, so I 
don't know if there are some cases where it raises a dynamic error instead of 
returning #<undefined>. But my point here is just that JS isn't alone in doing 
hoisting. It's actually a very sensible design -- it falls out naturally in a 
non-lazy, dynamically typed language with any kind of mutually recursive 
bindings.

Dave



Re: On dropping @names

2012-12-26 Thread Brendan Eich

Rick Waldron wrote:

On Wednesday, December 26, 2012, Brandon Benvie wrote:

I guess to sum up what I think Domenic was saying: people hoist
var declarations so that their code acts the way the engine is
going to execute it in order to prevent a mismatch between
expectations and result. If there wasn't a reason to do that (AKA
TDZ-UBI) then it wouldn't be done, because it's not otherwise
desirable to do.


Conversely, many also believe there is benefit in having a single 
place in a function to locate all of the formal parameter names 
and initialized identifiers. Assignment close to use is also embraced by 
this pattern.


Well, vacuously if you force initialization in the declaration, but 
otherwise, manually hoisting can move a var very far from its uses.


Subjectively, this makes it easier to identify free vars used in the 
function.


I've heard that too. It's a bit of a chore to hoist manually, though, 
and I've noticed that those who say they do it tend to grow unhoisted 
declarations in their code over time.



Many also believe that hoisting is an excellent feature, not a weirdness.


For functions, I can defend hoisting, although if I had had more time, I 
might have done a let ... in ... or BCPL'ish equivalent form that groups 
the recursive bindings. For vars hoisting is pretty much an 
implementation abstraction leak in JS1 :-P.


/be


Re: On dropping @names

2012-12-26 Thread Brandon Benvie
When you use var... well, we know the result. I think we should basically
name using hoisted var as "Crockford's JS", because he has a patent on
hoisting var declarations, and if you're using var and not hoisting you're
"doing it wrong" [citation]. If you are, in ES6, using var and hoisting,
you're doing what Crockford says but you're definitely doing it wrong, and
now you're just "behind the times".


On Wed, Dec 26, 2012 at 9:11 PM, Rick Waldron wrote:

>
>
> On Wednesday, December 26, 2012, Brandon Benvie wrote:
>
>> I guess to sum up what I think Domenic was saying: people hoist var
>> declarations so that their code acts the way the engine is going to execute
>> it in order to prevent a mismatch between expectations and result. If there
>> wasn't a reason to do that (AKA TDZ-UBI) then it wouldn't be done, because
>> it's not otherwise desirable to do.
>>
>>
>>
> Conversely, many also believe there is benefit in having a single place in
> a function to locate all of the formal parameter names and initialized
> identifiers. Assignment close to use is also embraced by this pattern.
> Subjectively, this makes it easier to identify free vars used in the
> function.
>
> Many also believe that hoisting is an excellent feature, not a weirdness.
>
> Rick
>
>
>


Re: On dropping @names

2012-12-26 Thread Rick Waldron
On Wednesday, December 26, 2012, Brandon Benvie wrote:

> I guess to sum up what I think Domenic was saying: people hoist var
> declarations so that their code acts the way the engine is going to execute
> it in order to prevent a mismatch between expectations and result. If there
> wasn't a reason to do that (AKA TDZ-UBI) then it wouldn't be done, because
> it's not otherwise desirable to do.
>
>
>
Conversely, many also believe there is benefit in having a single place in
a function to locate all of the formal parameter names and initialized
identifiers. Assignment close to use is also embraced by this pattern.
Subjectively, this makes it easier to identify free vars used in the
function.

Many also believe that hoisting is an excellent feature, not a weirdness.

Rick


Re: On dropping @names

2012-12-26 Thread Brandon Benvie
I guess to sum up what I think Domenic was saying: people hoist var
declarations so that their code acts the way the engine is going to execute
it in order to prevent a mismatch between expectations and result. If there
wasn't a reason to do that (AKA TDZ-UBI) then it wouldn't be done, because
it's not otherwise desirable to do.


RE: On dropping @names

2012-12-26 Thread Domenic Denicola
> From: es-discuss-boun...@mozilla.org [mailto:es-discuss-boun...@mozilla.org] 
> On Behalf Of David Herman
> Sent: Wednesday, December 26, 2012 19:50
 
> I imagine your reply is: don't do that transformation; place your `let` 
> declarations as late as possible before they are going to be used. I guess I 
> need to be convinced that the equivalence is of no value.

I would agree with this putative reply, and state that the equivalence is 
indeed of no value. What follows is my subjective feelings on why that is; I 
hope they are helpful, but I certainly don't claim this perspective is 
objectively better.

My perspective is as someone coming from other languages. Hoisting is, in my 
experience, a uniquely JavaScript weirdness. (Admittedly I don't have 
experience with too many other languages.) It often makes the top 3 list of 
JavaScript gotchas, hardest language features to understand, interview 
questions, etc. Having semantics where UBI is prohibited would bring back some 
sanity to the proceedings, and in an "always use let" world, eliminates 
hoisting almost entirely. (The remaining case is function declarations, which 
are much less confusing since there is no assignment involved.)

The fact that var declarations are manually hoisted to the top of the function 
by some is a direct consequence of the equivalence you mention; it's 
programmers trying to "do what the computer would do anyway." I would guess 
that what most programmers *want* is the ability to declare variables as close 
as possible to their actual use. They manually hoist, instead, because the 
language does not support the semantics they desire; in other words, 
declaration-close-to-use is "deceptive" in that your code looks like it's doing 
something, but the computer will hoist, making the code do something slightly 
different. If we gave them semantics that supported declare-close-to-use, viz. 
TDZ-UBI semantics, everything would be happy and there would be rejoicing in 
the streets. If we just hoisted to the top of the block, then the ability to 
declare close to use while maintaining parallel semantics to those the computer 
will do anyway is lost (in many cases).

That said, `let` has a lot going for it even without TDZ-UBI. E.g. the ability 
to use blocks to prevent scope pollution, or the sane per-loop bindings. So I'm 
a fairly-happy clam as-is. Just wanted to put in a word in favor of TDZ-UBI.


Re: On dropping @names

2012-12-26 Thread David Herman
On Dec 11, 2012, at 2:45 AM, Andreas Rossberg  wrote:

Late to the party, sorry. First, let's be clear there are a couple possible 
semantics that could be called "temporal dead zone." At this point I'm not sure 
what various people had in mind; I suspect different people may have understood 
it differently.

* Read-before-assignment error (TDZ-RBA):

Any read of the variable before it has been assigned throws an error. It can be 
assigned anywhere, either in the declaration's initializer or in any other 
assignment expression. If control executes a declaration without an 
initializer, it leaves the variable in the uninitialized state.

* Use-before-initialization error (TDZ-UBI)

Any read or write of the variable before it has been assigned by its 
initializer throws an error. It can only be initially assigned by its 
initializer. If control executes a declaration without an initializer, it 
leaves the variable initialized to `undefined`.

> The question, then, boils down to what the observation should be: a
> runtime error (aka temporal dead zone) or 'undefined'. Given that
> choice, the former is superior in almost every way, because the latter
> prevents subtle initialisation errors from being caught early, and is
> not an option for most binding forms anyway.

You only listed good things (which I agree are good) about TDZ, but you don't 
list its drawbacks. I believe the drawbacks are insurmountable.

Let's start with TDZ-RBA. This semantics is *totally untenable* because it goes 
against existing practice. Today, you can create a variable that starts out 
undefined and use that on purpose:

var x;
if (...) { x = ... }
if (x === undefined) { ... }

If you want to use let instead, the === if-condition will throw. You would 
instead have to write:

let x = undefined;
if (...) { x = ... }
if (x === undefined) { ... }

Not only does that look superfluous to existing JavaScript programmers, since 
they never had to write that out before, but *their code will be rejected by 
JSHint*. That's actually flagged as bad practice. We cannot and must not 
introduce new constructs that *require* programmers to use idioms that have 
already been rejected by the community.

OK, so now let's consider TDZ-UBI. This now means that an initializer is 
different from an assignment, as you say:

> They are initialisations, not assignments. The difference, which is
> present in other popular languages as well, is somewhat important,
> especially wrt immutable bindings.

For `const`, I agree that some form of TDZ is necessary. But `let` is the 
important, common case. Immutable bindings (`const`) should not be driving the 
design of `let`. Consistency with `var` is far more important than consistency 
with `const`.

And for `let`, making initializers different from assignments breaks a basic 
assumption about hoisting. For example, it breaks the equivalence between

{ stmt ... let x = e; stmt' ... }

and

{ let x; stmt ... x = e; stmt' ... }

This is an assumption that has always existed for `var` (mutatis mutandis for 
the function scope vs block scope). You can move your declarations around by 
hand and you can write code transformation tools that move declarations around.

It's certainly how I understand hoisting in JavaScript, and it's how I describe 
it in my book:

http://gyazo.com/df93d0944dff0d9487b81c3cf6802e92

In fact, Doug has taught countless programmers to hoist their declarations to 
the beginning of their function, and many (perhaps including Doug?) will 
probably do the analogous thing with let, manually hoisting them to the 
beginning of the block. This transformation will actually defeat the error 
checking, since the variables will then become initialized to `undefined`.
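A sketch of that failure mode (hindsight example; `original`, `hoisted`, and `config` are illustrative names — shipped ES6 engines behave exactly this way):

```javascript
// Declaration placed at point of use: the premature read is caught.
function original() {
  return config;   // TDZ: throws ReferenceError — the bug is caught
  let config = {}; // unreachable, but still binds `config` in this scope
}
// Manually hoisted to the top of the block: the same bug passes silently.
function hoisted() {
  let config;      // now initialised to undefined
  return config;   // no error — just undefined
}
let caught = false;
try { original(); } catch (e) { caught = e instanceof ReferenceError; }
console.log(caught, hoisted()); // true undefined
```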

I imagine your reply is: don't do that transformation; place your `let` 
declarations as late as possible before they are going to be used. I guess I 
need to be convinced that the equivalence is of no value.

Dave



Re: On dropping @names

2012-12-11 Thread Andreas Rossberg
On 10 December 2012 21:59, Claus Reinke  wrote:
>> Second, it doesn't eliminate the need for temporal dead zones at all.
>
> You could well be right, and I might have been misinterpreting what
> "temporal dead zone" (tdz) means.
> For a letrec, I expect stepwise-refinement-starting-from-undefined
> semantics, so I can use a binding anywhere in scope but may or may
> not get a value for it. While the tdz seems to stipulate that a binding for
> a variable in scope doesn't really exist and may not be accessed until its
> binding (explicit or implicitly undefined) statement is evaluated.

Not sure what you mean by
"stepwise-refinement-starting-from-undefined". JavaScript is both
eager and impure, and there is no tradition of imposing syntactic
restrictions on recursive bindings. Consequently, any binding can have
effects, and the semantics must be sequentialised according to textual
order. Short of sophisticated static analysis (which we can't afford
in a jitted language), there is no way to prevent erroneous forward
accesses from being observable at runtime.

The question, then, boils down to what the observation should be: a
runtime error (aka temporal dead zone) or 'undefined'. Given that
choice, the former is superior in almost every way, because the latter
prevents subtle initialisation errors from being caught early, and is
not an option for most binding forms anyway.

>> So what does it gain? The model we have now simply is that every scope is
>> a letrec (which is how JavaScript has always worked, albeit
>> with a less felicitous notion of scope).
>
> That is a good way of looking at it. So if there are any statements
> mixed in between the definitions, we simply interpret them as
> definitions (with side-effecting values) of unused bindings, and
>
> { let x = 0;
>  let z = [x,y]; // (*)
>  x++;
> let y = x;
> let __ = console.log(z);
> }
>
> is interpreted as
>
> { let x = 0;
>  let z = [x,y]; // (*)
>  let _ = x++;
>  let y = x;
>  let __ = console.log(z);
> }

Exactly. At least that's my preferred way of looking at it.

> What does it mean here that y is *dead* at (*), *dynamically*?
> Is it just that y at (*) is undefined, or does the whole construct throw a
> ReferenceError, or what?

Throw, see above.

> If tdz is just a form of saying that y is undefined at (*), then I can
> read the whole block as a letrec construct. If y cannot be used until its
> binding initializer statement has been executed, then I seem to have a
> sequence of statements instead.

It inevitably is an _impure_ letrec, which is where the problems come in.

> Of course, letrec in a call-by-value language with side-effects is tricky.
> And I assume that tdz is an attempt to guard against unwanted surprises. But
> for me it is a surprise that not only can side-effects on the right-hand
> sides modify bindings (x++), but that bindings are interpreted as
> assignments that bring in variables from the dead.

They are initialisations, not assignments. The difference, which is
present in other popular languages as well, is somewhat important,
especially wrt immutable bindings. Furthermore, temporal dead zone
also applies to assignments. So at least, side effects (which cannot
easily be disallowed) can only modify bindings after they have been
initialised.
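A sketch of that last point (hindsight example under shipped semantics; `write` is an illustrative name): a side effect that runs before the declaration cannot even *write* to the binding.

```javascript
// TDZ covers assignments too: writing to `x` during its dead zone throws,
// so effects can only touch the binding after initialisation.
let sawWriteError = false;
try {
  {
    const write = () => { x = 99; }; // an assignment, not a read
    write();                         // runs while `x` is still uninitialised
    let x = 1;
  }
} catch (e) {
  sawWriteError = e instanceof ReferenceError;
}
console.log(sawWriteError); // true
```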

None of these problems would go away by having explicit recursion.
Unless you impose far more severe restrictions.

/Andreas


Re: On dropping @names

2012-12-10 Thread Claus Reinke

   let lhs = rhs; statements
   // non-recursive, scope is statements

   let { declarations }; statements    // recursive, scope is
                                       // declarations and statements


   let { // group of mutually recursive bindings, *no statements*

   [x,y] = [42,Math.PI]; // initialization, not assignment

   even(n) { .. odd(n-1) .. } // using short method form
   odd(n) { .. even(n-1) .. } // for non-hoisting functions

   class X { .. }
   class C extends S { .. new X( odd(x) ) .. }
   class S { }
   };
   if (even(2)) console.log(  new C() );


First of all, this requires whole new syntax for the let body. 


Yes and no - I'm borrowing definition syntax from other parts of
the language. Part of the appeal of having a declarations-only block
was to be able to use things like short method form there. The main 
appeal was to have no statements or hoisted constructs between 
declarations in a "letrec".


[by separating recursive and non-recursive forms, the non-recursive
form would have no rhs-undefineds for the ids being defined, which
would circumvent the separate, lexical form of dead zone]

Second, it doesn't eliminate the need for temporal dead zones at all. 


You could well be right, and I might have been misinterpreting what
"temporal dead zone" (tdz) means. 

For a letrec, I expect stepwise-refinement-starting-from-undefined 
semantics, so I can use a binding anywhere in scope but may or may
not get a value for it. While the tdz seems to stipulate that a binding 
for a variable in scope doesn't really exist and may not be accessed 
until its binding (explicit or implicitly undefined) statement is evaluated.


So what does it gain? The model we have now simply is that every 
scope is a letrec (which is how JavaScript has always worked, albeit
with a less felicitous notion of scope).


That is a good way of looking at it. So if there are any statements
mixed in between the definitions, we simply interpret them as
definitions (with side-effecting values) of unused bindings, and

{ let x = 0;
 let z = [x,y]; // (*)
 x++;
 let y = x;
 let __ = console.log(z);
}

is interpreted as

{ let x = 0;
 let z = [x,y]; // (*)
 let _ = x++;
 let y = x;
 let __ = console.log(z);
}

What does it mean here that y is *dead* at (*), *dynamically*?
Is it just that y at (*) is undefined, or does the whole construct 
throw a ReferenceError, or what? 


If tdz is just a form of saying that y is undefined at (*), then I can
read the whole block as a letrec construct. If y cannot be used 
until its binding initializer statement has been executed, then I 
seem to have a sequence of statements instead.


Of course, letrec in a call-by-value language with side-effects is 
tricky. And I assume that tdz is an attempt to guard against 
unwanted surprises. But for me it is a surprise that not only can 
side-effects on the right-hand sides modify bindings (x++), but 
that bindings are interpreted as assignments that bring in 
variables from the dead.


The discussion of dead zone varieties in

https://mail.mozilla.org/pipermail/es-discuss/2008-October/007807.html

was driven by the interplay of old-style, hoisted, definitions with
initialization desugaring to assignment. The former mimics a letrec,
with parallel definitions, the latter means a block of sequential
assignments.

So I was trying to get the old-style hoisting and initialization by
assignment out of the picture, leaving a block of recursive
definitions that has a chance of being a real letrec. Perhaps
nothing is gained wrt temporal dead zones. But perhaps this is a 
way to clean up the statement/definition mix, profit from short 
definition forms and provide for non-recursive let without
lexical dead zone.

Claus



Re: On dropping @names

2012-12-07 Thread Andreas Rossberg
On 6 December 2012 22:26, Claus Reinke  wrote:
>>> I was hoping for something roughly like
>>>
>>>let lhs = rhs; statements
>>>// non-recursive, scope is statements
>>>
>>>let { declarations }; statements    // recursive, scope is
>>>                                    // declarations and statements
>>
>> Problem is that you need mutual recursion between different binding forms,
>> not just 'let' itself.
>
> Leaving legacy var/function out of it, is there a problem with
> allowing mutually recursive new declaration forms in there?
>
>let { // group of mutually recursive bindings
>
>[x,y] = [42,Math.PI]; // initialization, not assignment
>
>even(n) { .. odd(n-1) .. } // using short method form
>odd(n) { .. even(n-1) .. } // for non-hoisting functions
>
>class X { .. }
>class C extends S { .. new X( odd(x) ) .. }
>class S { }
>};
>if (even(2)) console.log(  new C() );

First of all, this requires whole new syntax for the let body. Second,
it doesn't eliminate the need for temporal dead zones at all. So what
does it gain? The model we have now simply is that every scope is a
letrec (which is how JavaScript has always worked, albeit with a less
felicitous notion of scope).

/Andreas


Re: On dropping @names

2012-12-06 Thread Claus Reinke

I was hoping for something roughly like

   let lhs = rhs; statements
   // non-recursive, scope is statements

   let { declarations }; statements    // recursive, scope is
                                       // declarations and statements


Problem is that you need mutual recursion between different 
binding forms, not just 'let' itself.


Leaving legacy var/function out of it, is there a problem with
allowing mutually recursive new declaration forms in there?

   let { // group of mutually recursive bindings

   [x,y] = [42,Math.PI]; // initialization, not assignment

   even(n) { .. odd(n-1) .. } // using short method form
   odd(n) { .. even(n-1) .. } // for non-hoisting functions

   class X { .. }
   class C extends S { .. new X( odd(x) ) .. }
   class S { }
   };
   if (even(2)) console.log(  new C() );

Or did I misunderstand your objection?
Claus



Re: On dropping @names

2012-12-06 Thread Brendan Eich

Claus Reinke wrote:

Perhaps it helps the meeting participants with their concerns.


Sorry, it didn't. We are going without symbol usability in ES6, but 
discussion might still rescue it -- but probably not at this late date. 
We can at least get it in shape for ES7, and implementations can 
prototype it. We should not wait to discuss it, so thanks for stirring 
the pot.


/be


Re: On dropping @names

2012-12-06 Thread Andreas Rossberg
On 6 December 2012 17:25, Claus Reinke  wrote:
> I was hoping for something roughly like
>
>let lhs = rhs; statements
>// non-recursive, scope is statements
>
>let { declarations }; statements    // recursive, scope is
>                                    // declarations and statements

Problem is that you need mutual recursion between different binding
forms, not just 'let' itself.

/Andreas


Re: On dropping @names

2012-12-06 Thread Claus Reinke

I would have preferred if let had not been modeled after var so much, but
that is another topic.


It is as clean as it can get given JS. 


I was hoping for something roughly like

   let lhs = rhs; statements
   // non-recursive, scope is statements

   let { declarations }; statements 
   // recursive, scope is declarations and statements


No hoisting needed to support recursion, no temporal deadzones,
no problem with referring to old x when defining x non-recursively.
And less mixing of declarations and statements.

And you may be surprised to hear that there are some voices who 
actually would have preferred a _more_ var-like behaviour.


Well, in the beginning let was meant to replace var, so it had to
be more or less like it for an easy transition. Later, even that transition
was considered too hard, so var and let coexist, giving more freedom
for let design. At least, that is my impression.


The program equivalences are the same, up to annoying additional
congruences you need to deal with for nu-binders, which complicate
matters. Once you actually try to formalise semantic reasoning (think
e.g. logical relations), it turns out that a representation with a
separate store is significantly _easier_ to handle. Been there, done
that.


Hmm, I used to find reasoning at term level quite useful (a very long
time ago, I was working on a functional logic language, which had 
something like nu-binders for logic variables). Perhaps it depends on
whether one reasons about concrete programs (program development)
or classes of programs (language-level proofs).


gensym is more "imperative" in terms of the simplest implementation:
create a globally unused symbol.


Which also happens to be the simplest way of implementing
alpha-conversion. Seriously, the closer you look, the more it all
boils down to the same thing.


Yep. Which is why I thought to speak up when I saw those concerns
in the meeting notes;-)


Not under lambda-binders, but under nu-binders - they have to.

I was explaining that the static/dynamic differences that seem to make
some meeting attendees uncomfortable are not specific to nu-scoped
variables, but to implementation strategies. For lambda-binders, one can get
far without reducing below them, but if one lifts that restriction,
lambda-bound variables appear as runtime constructs, too, just as for
nu-binders and nu-bound variables (gensym-ed names).


Not sure what you're getting at precisely, but I don't think anybody
would seriously claim that nu-binders are useful as an actual
implementation strategy.


More as a user-level representation of whatever implementation
strategy is used behind the scenes, just as lambda-binders are a
user-level representation of efficient implementations. 


But to clarify the point:

Consider something like: 


   (\x. (\y. [y, y]) x)

Most implementations won't reduce under the \x., nor will they
bother to produce any detailed result, other than 'function'. So
those x and y are purely static constructs.

However, an implementation that does reduce under the \x.
will need to deal with x as a dynamic construct, passing it to
\y. to deliver the result (\x. [x,x]).

Now, the same happens with nu-binders, or private names:
after bringing them in scope, computation continues under
the nu-binder, so there is a dynamic representation (the
generated symbol) of the variable.

My point is that there isn't anything worrying about variables
appearing at dynamic constructs, nor is it specific to private
names - normal variables appearing to be static is just a
consequence of limited implementations. What is static is
the binding/scope structure, not the variables.

Since we mostly agree, I'll leave this here. Perhaps it helps
the meeting participants with their concerns.

Claus



Re: On dropping @names

2012-12-06 Thread Andreas Rossberg
On 5 December 2012 19:19, Claus Reinke  wrote:
>> their operational generativity perhaps being a mismatch with their
>> seemingly static meaning in certain syntactic forms,
>
> This appears to be ungrounded. See below.

Personally, I also consider that a non-issue, but it was concern that
was raised.

>>> Implicit scoping in a language with nested scopes has never been a
>>> good idea (even the implicit var/let scopes in JS are not its strongest
>>> point). Prolog got away with it because it had a flat program structure
>>> in the beginning, and even that fell down when integrating Prolog-like
>>> languages into functional ones, or when adding local sets of answers.
>>
>> Indeed. (Although I don't think we have implicit let-scopes in JS.)
>
> There are few enough cases (scope to nearest enclosing block unless there is
> an intervening conditional or loop construct,

If you mean something like

  if (bla) let x;

then that is not actually legal.
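For reference, the illegality is observable directly; a sketch (the parse-only probe via `new Function` is just one way to surface the SyntaxError):

```javascript
// Sketch: a lexical declaration may not be the lone statement of an `if`.
let threw = false;
try {
  new Function('if (true) let x;'); // parsed here, never executed
} catch (e) {
  threw = e instanceof SyntaxError;
}
console.log(threw); // true

if (true) { let x = 1; }            // legal: `let` inside an explicit block
```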

> to nearest for loop body if it
> appears in the loop header, to the right in a comprehension) that the
> difference might not matter.
> I would have preferred if let had not been modeled after var so much, but
> that is another topic.

It is as clean as it can get given JS. And you may be surprised to
hear that there are some voices who actually would have preferred a
_more_ var-like behaviour.

>>> So I'm not sure how your concerns are being addressed by
>>> merely replacing a declarative scoping construct by an explicitly
>>> imperative gensym construct?
>>
>> We have the gensym construct anyway, @-names were intended to be merely
>> syntactic sugar on top of that.
>
> Yes, so my question was how removing the sugar while keeping
> the semantics is going to address the concerns voiced in the meeting
> notes.

The concern was that the sugar has issues, not symbol semantics as such.


>> Scope extrusion semantics actually is equivalent to an allocation
>> semantics. The only difference is that the store is part of your term
>> syntax instead of being a separate runtime environment, but it does
>> not actually make it more declarative in any deeper technical sense.
>> Name generation is still an impure effect, albeit a benign one.
>
> For me, as a fan of reduction semantics, having all of the semantics
> explainable in the term syntax is an advantage!-) While it is simple to map
> between the two approaches, the nu-binders are more "declarative" in terms
> of simpler program equivalences: for gensym,
> one needs to abstract over generated symbols and record sharing
> of symbols, effectively reintroducing what nu-binders model directly.

The program equivalences are the same, up to annoying additional
congruences you need to deal with for nu-binders, which complicate
matters. Once you actually try to formalise semantic reasoning (think
e.g. logical relations), it turns out that a representation with a
separate store is significantly _easier_ to handle. Been there, done
that.

> gensym is more "imperative" in terms of the simplest implementation:
> create a globally unused symbol.

Which also happens to be the simplest way of implementing
alpha-conversion. Seriously, the closer you look, the more it all
boils down to the same thing.

>>> As Brendon mentions, nu-scoped variables aren't all that different
>>> from lambda-scoped variables. It's just that most implementations
>>> do not support computations under a lambda binder, so lambda
>>> variables do not appear to be dynamic constructs to most people,
>>> while nu binders rely on computations under the binders, so a
>>> static-only view is too limited.
>>
>> I think you are confusing something. All the classical name calculi
>> like pi-calculus or nu-calculus don't reduce/extrude name binders
>> under abstraction either.
>
> Not under lambda-binders, but under nu-binders - they have to.
>
> I was explaining that the static/dynamic differences that seem to make
> some meeting attendees uncomfortable are not specific to nu-scoped
> variables, but to implementation strategies. For lambda-binders, one can get
> far without reducing below them, but if one lifts that restriction,
> lambda-bound variables appear as runtime constructs, too, just as for
> nu-binders and nu-bound variables (gensym-ed names).

Not sure what you're getting at precisely, but I don't think anybody
would seriously claim that nu-binders are useful as an actual
implementation strategy.

/Andreas


Re: On dropping @names

2012-12-05 Thread Rick Waldron
On Tue, Dec 4, 2012 at 8:28 AM, Claus Reinke wrote:

> Could you please document the current state of concerns, pros and
> cons that have emerged from your discussions so far?



Nov 28 TC39 Meeting Notes:
https://github.com/rwldrn/tc39-notes/blob/master/es6/2012-11/nov-28.md#syntactic-support-for-private-names

These were also posted to this list. I just read through them right now and
I'm confident I captured the issues raised in that discussion as they apply
to this thread.

Rick


Re: On dropping @names

2012-12-05 Thread Claus Reinke

There were various mixed concerns, like perhaps requiring implicit
scoping of @-names to be practical in classes, 


Like implicitly scoping this, super, and arguments, this would cause
problems with nested scopes. Unless the name of the class was made
part of the implicitly named scope reference?

their operational generativity perhaps being a mismatch with their 
seemingly static meaning in certain syntactic forms, 


This appears to be ungrounded. See below.

potential ambiguities with what @x actually denotes in certain 
contexts. And probably more. Most of that should be in the meeting 
minutes.


Can't say about ambiguities. And I started asking because I couldn't
find (valid) reasons in the minutes;-)


Implicit scoping in a language with nested scopes has never been a
good idea (even the implicit var/let scopes in JS are not its strongest
point). Prolog got away with it because it had a flat program structure
in the beginning, and even that fell down when integrating Prolog-like
languages into functional ones, or when adding local sets of answers.


Indeed. (Although I don't think we have implicit let-scopes in JS.)


There are few enough cases (scope to nearest enclosing block unless 
there is an intervening conditional or loop construct, to nearest for 
loop body if it appears in the loop header, to the right in a 
comprehension) that the difference might not matter. 

I would have preferred if let had not been modeled after var so 
much, but that is another topic.



Symbols will definitely still be usable as property names, that's
their main purpose.

The main technical reason that arbitrary objects cannot be used indeed
is backwards compatibility. The main moral reason is that using
general objects only for their identity seems like overkill, and you
want to have a more targeted and lightweight feature.


Having specific name objects sounds like the right approach.


So I'm not sure how your concerns are being addressed by
merely replacing a declarative scoping construct by an explicitly
imperative gensym construct?


We have the gensym construct anyway, @-names were intended 
to be merely syntactic sugar on top of that.


Yes, so my question was how removing the sugar while keeping
the semantics is going to address the concerns voiced in the meeting
notes.


- explicit scopes (this is the difference to gensym)
- scope extrusion (this is the difference to lambda scoping)


Scope extrusion semantics actually is equivalent to an allocation
semantics. The only difference is that the store is part of your term
syntax instead of being a separate runtime environment, but it does
not actually make it more declarative in any deeper technical sense.
Name generation is still an impure effect, albeit a benign one.


For me, as a fan of reduction semantics, having all of the semantics 
explainable in the term syntax is an advantage!-) While it is simple 
to map between the two approaches, the nu-binders are more 
"declarative" in terms of simpler program equivalences: for gensym,
one needs to abstract over generated symbols and record sharing
of symbols, effectively reintroducing what nu-binders model directly.

gensym is more "imperative" in terms of the simplest implementation:
create a globally unused symbol.


As Brendan mentions, nu-scoped variables aren't all that different
from lambda-scoped variables. It's just that most implementations
do not support computations under a lambda binder, so lambda
variables do not appear to be dynamic constructs to most people,
while nu binders rely on computations under the binders, so a
static-only view is too limited.


I think you are confusing something. All the classical name calculi
like pi-calculus or nu-calculus don't reduce/extrude name binders
under abstraction either.


Not under lambda-binders, but under nu-binders - they have to.

I was explaining that the static/dynamic differences that seem to make
some meeting attendees uncomfortable are not specific to nu-scoped 
variables, but to implementation strategies. For lambda-binders, one 
can get far without reducing below them, but if one lifts that restriction,
lambda-bound variables appear as runtime constructs, too, just as for 
nu-binders and nu-bound variables (gensym-ed names).


Claus



Re: On dropping @names

2012-12-04 Thread Brendan Eich

Andreas Rossberg wrote:

Indeed. (Although I don't think we have implicit let-scopes in JS.)


Only in comprehensions and generator expressions, which have an explicit 
outer syntax.


/be


Re: On dropping @names

2012-12-04 Thread Andreas Rossberg
On 4 December 2012 14:28, Claus Reinke  wrote:
> Could you please document the current state of concerns, pros and
> cons that have emerged from your discussions so far? You don't
> want to have to search for these useful clarifications when this topic
> comes up again (be it in tc39 or in ES6 users asking "where is private?").

There were various mixed concerns, like perhaps requiring implicit
scoping of @-names to be practical in classes, their operational
generativity perhaps being a mismatch with their seemingly static
meaning in certain syntactic forms, potential ambiguities with what @x
actually denotes in certain contexts. And probably more. Most of that
should be in the meeting minutes.

> Implicit scoping in a language with nested scopes has never been a
> good idea (even the implicit var/let scopes in JS are not its strongest
> point). Prolog got away with it because it had a flat program structure
> in the beginning, and even that fell down when integrating Prolog-like
> languages into functional ones, or when adding local sets of answers.

Indeed. (Although I don't think we have implicit let-scopes in JS.)

> This leaves the "generativity" concerns - I assume they refer to
> "gensym"-style interpretations? ES5 already has gensym, in the
> form of Object References (eg, Object.create(null)), and Maps
> will allow to use those as keys, right?
>
> The only thing keeping us from using objects as property names
> is the conversion to strings, and allowing Name objects as property
> names is still on the table (as is the dual approach of using a
> WeakMap as private key representation, putting the object in the
> key instead of the key in the object).

Symbols will definitely still be usable as property names, that's
their main purpose.

The main technical reason that arbitrary objects cannot be used indeed
is backwards compatibility. The main moral reason is that using
general objects only for their identity seems like overkill, and you
want to have a more targeted and lightweight feature.
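As symbols eventually shipped, that identity-only use looks like this (a minimal sketch; the description string `'secret'` is purely illustrative):

```javascript
// Sketch: a symbol is used only for its identity as a property key,
// without the weight of a full object.
const key = Symbol('secret');
const obj = {};
obj[key] = 42;

console.log(obj[key]);                                 // 42
console.log(Object.keys(obj).length);                  // 0 (no string keys)
console.log(Object.getOwnPropertySymbols(obj).length); // 1
```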

> So I'm not sure how your concerns are being addressed by
> merely replacing a declarative scoping construct by an explicitly
> imperative gensym construct?

We have the gensym construct anyway, @-names were intended to be
merely syntactic sugar on top of that.

> There is a long history of declarative interpretations of gensym-
> like constructs, starting with declarative accounts of logic variables,
> over name calculi (often as nu- or lambda/nu-calculi, with greek
> letter nu for "new names"), all the way to pi-calculi (where names
> are communication channels between processes). Some of these
> calculi support name equality, some support other name features.
>
> The main steps towards a non-imperative account tend to be:
>
> - explicit scopes (this is the difference to gensym)
> - scope extrusion (this is the difference to lambda scoping)

Scope extrusion semantics actually is equivalent to an allocation
semantics. The only difference is that the store is part of your term
syntax instead of being a separate runtime environment, but it does
not actually make it more declarative in any deeper technical sense.
Name generation is still an impure effect, albeit a benign one.

Likewise, scoped name bindings are equivalent to a gensym operator
when names are first-class objects anyway (which they are in
JavaScript).

> As Brendan mentions, nu-scoped variables aren't all that different
> from lambda-scoped variables. It's just that most implementations
> do not support computations under a lambda binder, so lambda
> variables do not appear to be dynamic constructs to most people,
> while nu binders rely on computations under the binders, so a
> static-only view is too limited.

I think you are confusing something. All the classical name calculi
like pi-calculus or nu-calculus don't reduce/extrude name binders
under abstraction either.

/Andreas


Re: On dropping @names

2012-12-04 Thread Claus Reinke
Recall the main objection was not the generativity of @names mixed with the obj.@foo pun 
(after-dot). It was the usability tax of having to declare


 private @foo;

before defining/assigning

 obj.@foo = foo;

(in a constructor, typically).


Good clarification, thanks. Yes, the more important issue is the tension between having to 
predeclare all the @names in a scope and the danger of implicit scoping. (That said, the 
generativity does worry me. It's a smell.)


| Just to be super-sure we grok one another, it's not the generativity by
| itself (since nested function declarations are the same, as I mentioned
| in the meeting). It is the generativity combined with the obj.@foo pun
| on Good Old obj.foo where 'foo' is a singleton identifier equated to a
| string property name. Right?

Could you please document the current state of concerns, pros and
cons that have emerged from your discussions so far? You don't
want to have to search for these useful clarifications when this topic
comes up again (be it in tc39 or in ES6 users asking "where is private?").

Implicit scoping in a language with nested scopes has never been a
good idea (even the implicit var/let scopes in JS are not its strongest
point). Prolog got away with it because it had a flat program structure
in the beginning, and even that fell down when integrating Prolog-like
languages into functional ones, or when adding local sets of answers.

So starting with explicit scoping, adding shortcuts if necessary
(and only after careful consideration), seems the obvious route
suggested by language design history.

This leaves the "generativity" concerns - I assume they refer to
"gensym"-style interpretations? ES5 already has gensym, in the
form of Object References (eg, Object.create(null)), and Maps
will allow to use those as keys, right?

The only thing keeping us from using objects as property names
is the conversion to strings, and allowing Name objects as property
names is still on the table (as is the dual approach of using a
WeakMap as private key representation, putting the object in the
key instead of the key in the object).
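The duality described here can be sketched side by side (the names `makeAccountA` and `makeAccountB` are invented for the example):

```javascript
// (a) key in the object: a symbol-keyed property on the instance.
const secretKey = Symbol('balance');
function makeAccountA(initial) {
  const acct = {};
  acct[secretKey] = initial;
  return acct;
}

// (b) object in the key: a WeakMap from instance to private state.
const balances = new WeakMap();
function makeAccountB(initial) {
  const acct = {};
  balances.set(acct, initial);
  return acct;
}

console.log(makeAccountA(100)[secretKey]);    // 100
console.log(balances.get(makeAccountB(200))); // 200
```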

So I'm not sure how your concerns are being addressed by
merely replacing a declarative scoping construct by an explicitly
imperative gensym construct?

There is a long history of declarative interpretations of gensym-
like constructs, starting with declarative accounts of logic variables,
over name calculi (often as nu- or lambda/nu-calculi, with greek
letter nu for "new names"), all the way to pi-calculi (where names
are communication channels between processes). Some of these
calculi support name equality, some support other name features.

The main steps towards a non-imperative account tend to be:

- explicit scopes (this is the difference to gensym)
- scope extrusion (this is the difference to lambda scoping)

the former allows to put limits on who can mention/co-create a
name in a program, the latter allows to pass names around, once
created. With gensym, there is only one creator, all sharing comes
from passing the symbol around while expanding its scope (think:
"do { private @name; @name }").
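In symbol terms, that "sharing comes from passing the symbol around" can be sketched as follows (names invented for the example):

```javascript
// Sketch: the symbol's effective scope widens exactly as far as the value
// flows -- here it escapes makeCounter only through the closure in `tick`.
function makeCounter() {
  const count = Symbol('count');     // "gensym": a fresh, unforgeable key
  const obj = { [count]: 0 };
  return {
    obj,
    tick: () => ++obj[count],
  };
}

const c = makeCounter();
c.tick();
c.tick();
console.log(c.obj[Object.getOwnPropertySymbols(c.obj)[0]]); // 2
```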

As Brendan mentions, nu-scoped variables aren't all that different
from lambda-scoped variables. It's just that most implementations
do not support computations under a lambda binder, so lambda
variables do not appear to be dynamic constructs to most people,
while nu binders rely on computations under the binders, so a
static-only view is too limited.

I'm not saying that @names are necessary or have the best
form already - just that I would like to understand the concerns
and how they are addressed by the decisions made.

Claus




Re: On dropping @names

2012-12-03 Thread Dave Herman
yes, exactly. symbols themselves are, by design, *gen*sym. :)

Dave

- Original Message -
From: Brendan Eich 
To: David Herman 
Cc: Brandon Benvie , es-discuss discussion 

Sent: Mon, 03 Dec 2012 19:47:24 -0800 (PST)
Subject: Re: On dropping @names

David Herman wrote:
> (That said, the generativity does worry me. It's a smell.)

Just to be super-sure we grok one another, it's not the generativity by 
itself (since nested function declarations are the same, as I mentioned 
in the meeting). It is the generativity combined with the obj.@foo pun 
on Good Old obj.foo where 'foo' is a singleton identifier equated to a 
string property name. Right?

/be



Re: On dropping @names

2012-12-03 Thread Brendan Eich

David Herman wrote:

(That said, the generativity does worry me. It's a smell.)


Just to be super-sure we grok one another, it's not the generativity by 
itself (since nested function declarations are the same, as I mentioned 
in the meeting). It is the generativity combined with the obj.@foo pun 
on Good Old obj.foo where 'foo' is a singleton identifier equated to a 
string property name. Right?


/be


Re: On dropping @names

2012-12-03 Thread David Herman
On Dec 3, 2012, at 6:35 PM, Brendan Eich  wrote:

> Recall the main objection was not the generativity of @names mixed with the 
> obj.@foo pun (after-dot). It was the usability tax of having to declare
> 
>  private @foo;
> 
> before defining/assigning
> 
>  obj.@foo = foo;
> 
> (in a constructor, typically).

Good clarification, thanks. Yes, the more important issue is the tension 
between having to predeclare all the @names in a scope and the danger of 
implicit scoping. (That said, the generativity does worry me. It's a smell.)

Dave



Re: On dropping @names

2012-12-03 Thread Brendan Eich

Brandon Benvie wrote:
I understand that there's limitations on what can be packed into this 
release and in particular this proposal pushes the limits. But I don't 
buy the ES7-is-around-the-corner wager for two reasons.


Neither ES6 (+1 year) nor ES7 (+4) is "around the corner".

You are forgetting that we draft specs and *prototype-implement* well in 
advance of dotting ISO i's and crossing Ecma t's. Same as for "HTML5" 
and other modern standards.


So please do help get @-names back on track. Recall the main objection 
was not the generativity of @names mixed with the obj.@foo pun 
(after-dot). It was the usability tax of having to declare


  private @foo;

before defining/assigning

  obj.@foo = foo;

(in a constructor, typically).

/be
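Without the sugar, the pattern described above comes out roughly as follows (a sketch; `Point` and `_foo` are invented names, and the symbol form is how ES6 eventually shipped):

```javascript
// `private @foo;` becomes an explicit symbol binding...
const _foo = Symbol('foo');

class Point {
  constructor(foo) {
    this[_foo] = foo;               // ...and `obj.@foo = foo;` becomes
  }                                 // computed access with that symbol.
  getFoo() { return this[_foo]; }
}

console.log(new Point(7).getFoo()); // 7
```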



Re: On dropping @names

2012-12-03 Thread David Herman
On Dec 3, 2012, at 6:14 PM, Brandon Benvie  wrote:

> I understand that there's limitations on what can be packed into this release 
> and in particular this proposal pushes the limits.

Look, that's all it comes down to. All the various @names proposals have 
problems. No alternative we've talked about is workable, and none of them has 
had enough scrutiny. Syntax seems to be one of the most expensive things to add 
to a language in terms of perceived complexity.

I feel for you. When I first proposed private names (in 2008, I think?) it had 
accompanying syntax. But syntax design is really, really hard. And so very easy 
to screw up. Meanwhile, it was proposed a *year-and-a-half* after the deadline, 
and we have lots of other work to do.

Dave



Re: On dropping @names

2012-12-03 Thread Mark S. Miller
Ah. Gotcha. Yes, that's a good point. But even though such changes are
costly and less needed by ES7, I expect ES7 will nevertheless have
some further syntax enhancements that don't work on ES6. The main
example is the minimality of min-max classes; these really need
to be made more usable in ES7.

On Mon, Dec 3, 2012 at 6:28 PM, David Herman  wrote:
> On Dec 3, 2012, at 6:27 PM, "Mark S. Miller"  wrote:
>
>> On Mon, Dec 3, 2012 at 6:14 PM, Brandon Benvie
>>  wrote:
>>> I understand that there's limitations on what can be packed into this
>>> release and in particular this proposal pushes the limits. But I don't buy
>>> the ES7-is-around-the-corner wager for two reasons.
>>>
>>> The first reason is that I believe it's likely going to be a lot harder to
>>> get syntax changes into ES7 than for ES6. ES6 is basically the cruise boat
>>> to new syntax and once that boat sets sail it's going to be another decade
>>> before anyone wants to screw around with breaking syntax changes.
>>
>> What are the breaking syntax changes in ES6?
>
> He's not using the term the way you and I do. :) He means using new syntax 
> breaks when run on old browsers.
>
> Dave
>



-- 
Cheers,
--MarkM


Re: On dropping @names

2012-12-03 Thread Brandon Benvie
Sorry, didn't realize that had another meaning. I mean anything that causes
a syntax error when attempting to run it.


On Mon, Dec 3, 2012 at 9:28 PM, David Herman  wrote:

> On Dec 3, 2012, at 6:27 PM, "Mark S. Miller"  wrote:
>
> > On Mon, Dec 3, 2012 at 6:14 PM, Brandon Benvie
> >  wrote:
> >> I understand that there's limitations on what can be packed into this
> >> release and in particular this proposal pushes the limits. But I don't
> buy
> >> the ES7-is-around-the-corner wager for two reasons.
> >>
> >> The first reason is that I believe it's likely going to be a lot harder
> to
> >> get syntax changes into ES7 than for ES6. ES6 is basically the cruise
> boat
> >> to new syntax and once that boat sets sail it's going to be another
> decade
> >> before anyone wants to screw around with breaking syntax changes.
> >
> > What are the breaking syntax changes in ES6?
>
> He's not using the term the way you and I do. :) He means using new syntax
> breaks when run on old browsers.
>
> Dave
>
>


Re: On dropping @names

2012-12-03 Thread Brendan Eich

Axel Rauschmayer wrote:

let iterable = { *[iterator]() { yield 5; } };

Presented without comment...


I'm sorry, but I reject this kind of argument. That code is simply 
more concise than:


   let iterable = { [iterator]: function*() { yield 5 } };


Given that the concise notation means that ': function' is omitted, 
wouldn’t it be better to write:



let iterable = { [iterator]*() { yield 5; } };


Maybe, but I think the star should come first. The problem is that 
function* iterator() { yield 5; } is the named generator function form. 
* after function and before name. * before name preserved in the 
property definition case means *[iterator]() { yield 5; }.
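A runnable sketch of the star-first form, using the well-known `Symbol.iterator` as it eventually shipped (the local `iterator` binding stands in for the imported name used in the thread):

```javascript
const iterator = Symbol.iterator;   // the well-known iteration key

// Star-first concise generator method with a computed property name.
let iterable = { *[iterator]() { yield 5; } };

console.log([...iterable]); // [5]
```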


/be


Re: On dropping @names

2012-12-03 Thread David Herman
On Dec 3, 2012, at 6:27 PM, "Mark S. Miller"  wrote:

> On Mon, Dec 3, 2012 at 6:14 PM, Brandon Benvie
>  wrote:
>> I understand that there's limitations on what can be packed into this
>> release and in particular this proposal pushes the limits. But I don't buy
>> the ES7-is-around-the-corner wager for two reasons.
>> 
>> The first reason is that I believe it's likely going to be a lot harder to
>> get syntax changes into ES7 than for ES6. ES6 is basically the cruise boat
>> to new syntax and once that boat sets sail it's going to be another decade
>> before anyone wants to screw around with breaking syntax changes.
> 
> What are the breaking syntax changes in ES6?

He's not using the term the way you and I do. :) He means using new syntax 
breaks when run on old browsers.

Dave



Re: On dropping @names

2012-12-03 Thread Mark S. Miller
On Mon, Dec 3, 2012 at 6:14 PM, Brandon Benvie
 wrote:
> I understand that there's limitations on what can be packed into this
> release and in particular this proposal pushes the limits. But I don't buy
> the ES7-is-around-the-corner wager for two reasons.
>
> The first reason is that I believe it's likely going to be a lot harder to
> get syntax changes into ES7 than for ES6. ES6 is basically the cruise boat
> to new syntax and once that boat sets sail it's going to be another decade
> before anyone wants to screw around with breaking syntax changes.

What are the breaking syntax changes in ES6?



> Introducing breaking syntax changes in every release just isn't going to be
> feasible with no language support for gracefully handling unknown syntax.
>
> The second reason is that even if my prediction for syntax changes in ES7
> proves false, anything that does live in ES7 can expect to be unusable for
> practical purposes until ES7 is released. By the time that happens people
> will either have decided symbols with no syntax support aren't worth the
> trouble, or the world will have just that much more ugly code in it.
>
> Syntactic support for Symbols, of all the things on the table that are not
> sure things, is the one that *needs* to be in ES6.
>
>
> On Mon, Dec 3, 2012 at 8:47 PM, Axel Rauschmayer  wrote:
>>
>> let iterable = { *[iterator]() { yield 5; } };
>>
>> Presented without comment...
>>
>>
>> I'm sorry, but I reject this kind of argument. That code is simply more
>> concise than:
>>
>>let iterable = { [iterator]: function*() { yield 5 } };
>>
>>
>> Given that the concise notation means that ': function' is omitted,
>> wouldn’t it be better to write:
>>
>> let iterable = { [iterator]*() { yield 5; } };
>>
>>
>> --
>> Dr. Axel Rauschmayer
>> a...@rauschma.de
>>
>> home: rauschma.de
>> twitter: twitter.com/rauschma
>> blog: 2ality.com
>>
>>
>>
>
>
>



--
Cheers,
--MarkM


Re: On dropping @names

2012-12-03 Thread Brandon Benvie
I understand that there's limitations on what can be packed into this
release and in particular this proposal pushes the limits. But I don't buy
the ES7-is-around-the-corner wager for two reasons.

The first reason is that I believe it's likely going to be a lot harder to
get syntax changes into ES7 than for ES6. ES6 is basically the cruise boat
to new syntax and once that boat sets sail it's going to be another decade
before anyone wants to screw around with breaking syntax changes.
Introducing breaking syntax changes in every release just isn't going to be
feasible with no language support for gracefully handling unknown syntax.

The second reason is that even if my prediction for syntax changes in ES7
proves false, anything that does live in ES7 can expect to be unusable for
practical purposes until ES7 is released. By the time that happens people
will either have decided symbols with no syntax support aren't worth the
trouble, or the world will have just that much more ugly code in it.

Syntactic support for Symbols, of all the things on the table that are not
sure things, is the one that *needs* to be in ES6.


On Mon, Dec 3, 2012 at 8:47 PM, Axel Rauschmayer  wrote:

> let iterable = { *[iterator]() { yield 5; } };
>
> Presented without comment...
>
>
> I'm sorry, but I reject this kind of argument. That code is simply more
> concise than:
>
>let iterable = { [iterator]: function*() { yield 5 } };
>
>
> Given that the concise notation means that ': function' is omitted,
> wouldn’t it be better to write:
>
> let iterable = { [iterator]*() { yield 5; } };
>


Re: On dropping @names

2012-12-03 Thread Axel Rauschmayer
>> let iterable = { *[iterator]() { yield 5; } };
>> 
>> Presented without comment...
> 
> I'm sorry, but I reject this kind of argument. That code is simply more 
> concise than:
> 
>let iterable = { [iterator]: function*() { yield 5 } };


Given that the concise notation means that ': function' is omitted, wouldn’t it 
be better to write:

>> let iterable = { [iterator]*() { yield 5; } };
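For comparison, a minimal runnable sketch of the prefix-star form used elsewhere in the thread (and ultimately specified by ES6), using the built-in `Symbol.iterator` in place of the thread's imported `iterator` binding; the `plain` method name is made up for illustration:

```javascript
// Generator methods with the `*` before the (possibly computed) property
// name, mirroring `function*`. Symbol.iterator stands in for the thread's
// imported `iterator`; the `plain` name is hypothetical.
const obj = {
  *plain() { yield 1; },             // generator method, string key
  *[Symbol.iterator]() { yield 2; }  // generator method, computed symbol key
};

console.log([...obj.plain()]); // [ 1 ]
console.log([...obj]);         // [ 2 ]  (spread consults Symbol.iterator)
```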


-- 
Dr. Axel Rauschmayer
a...@rauschma.de

home: rauschma.de
twitter: twitter.com/rauschma
blog: 2ality.com



Re: On dropping @names

2012-12-03 Thread David Herman
On Dec 3, 2012, at 4:38 PM, Domenic Denicola  
wrote:

> On the subject of ugly code, I believe the killing of @names and the 
> reintroduction of computed properties means that the typical iterator form 
> will be something like:
> 
> let iterable = { *[iterator]() { yield 5; } };
> 
> Presented without comment...

I'm sorry, but I reject this kind of argument. That code is simply more concise 
than:

let iterable = { [iterator]: function*() { yield 5 } };

or:

let iterable = {};
iterable[iterator] = function*() { yield 5 };

So... concise code is concise -- film at 11. :) Sure, arranging ASCII symbols 
next to each other in ways you couldn't do before can be confusing. Remember 
what regular expressions looked like when you first learned them?
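The equivalence being argued here can be checked directly; a sketch using the built-in `Symbol.iterator` rather than the thread's imported `iterator` binding:

```javascript
// The three spellings from the message above, written with the built-in
// Symbol.iterator (the thread assumes `iterator` arrives via a module import).
const concise = { *[Symbol.iterator]() { yield 5; } };

const longhand = { [Symbol.iterator]: function* () { yield 5; } };

const assigned = {};
assigned[Symbol.iterator] = function* () { yield 5; };

// All three objects produce the same iteration behavior.
console.log([...concise]);  // [ 5 ]
console.log([...longhand]); // [ 5 ]
console.log([...assigned]); // [ 5 ]
```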

Dave



Re: On dropping @names

2012-12-03 Thread Mark S. Miller
Until ES7. If we try to solve all problems in ES6, it might not ship
earlier than ES7 anyway.

On Mon, Dec 3, 2012 at 4:38 PM, Domenic Denicola
 wrote:
> On the subject of ugly code, I believe the killing of @names and the
> reintroduction of computed properties means that the typical iterator form
> will be something like:
>
> let iterable = { *[iterator]() { yield 5; } };
>
> Presented without comment...



-- 
Cheers,
--MarkM


Re: On dropping @names

2012-12-03 Thread David Herman
I think the important thing to recognize is that @names was an *extremely* late 
proposal, and it's just way too late for ES6, but that doesn't at all mean it's 
out for good. The deadline for new features was in the spring of *2011* after 
all! Now, the fact that we considered it at all was due to the fact that we 
were concerned about the same things you're concerned about here. But we have a 
lot to get done for ES6, and there are just too many questions related to 
@names to put them in at this late stage.

In other words, they're not rejected, they're just not happening for ES6.

Dave

On Dec 3, 2012, at 2:59 PM, Brandon Benvie  wrote:

> From the meeting notes it appears that support for @names is out, which I 
> believe is quite unfortunate. I'd like to either propose a way to resolve the 
> issues that caused them to be dropped, or come to an understanding of what 
> the reason is that they can't be resurrected.
> [...]



RE: On dropping @names

2012-12-03 Thread Domenic Denicola
On the subject of ugly code, I believe the killing of @names and the 
reintroduction of computed properties means that the typical iterator form will 
be something like:

let iterable = { *[iterator]() { yield 5; } };

Presented without comment...

From: Brandon Benvie<mailto:bran...@brandonbenvie.com>
Sent: 12/3/2012 18:00
To: es-discuss discussion<mailto:es-discuss@mozilla.org>
Subject: On dropping @names

[...]


On dropping @names

2012-12-03 Thread Brandon Benvie
From the meeting notes it appears that support for @names is out, which I
believe is quite unfortunate. I'd like to either propose a way to resolve
the issues that caused them to be dropped, or come to an understanding of
what the reason is that they can't be resurrected.

First, I wanted to touch on the hypothetical "do I have to look up in scope
to know what prop means in { prop: val }". Without @names Symbols are
"eternally computed" despite actually being specific values that are
statically resolvable. From the debugging standpoint, there's not actually
a way to usefully represent properties with symbol keys. Furthermore,
there's no way to look at source code and definitively determine whether a
property has a string key or a symbol key. The only way you can is if
you can trace back to the original `import Symbol from '@symbol'` used to
create the value associated with a given variable.  This is the same
problem, but even worse because without the '@' it can't even be determined
if something is a string or a symbol, not just what symbol it is.

Unless I'm not interpreting the correct meaning from the notes, the
assertion was made that @names aren't static. My reading of the proposal
indicates that you can only declare an @name once in a given scope and that
you can only assign to it in the declaration. The only hazard that this
creates is that it's possible to end up with one Symbol that's assigned to
more than one @name in a given scope. In all other cases they behave as
expected. Having @name declarations shadow outer ones is the same behavior
as any other declaration and that's expected behavior.

To address the problem with one symbol initializing multiple @names, @name
initialization should be limited to the bare minimum. The main (only?)
reason @name declarations needed an initializer is to facilitate importing
them. If @name initialization were limited to import declarations, then a
duplicate check during module loading results in the desired static runtime
semantics.

Using the MultiMap functional spec from Tab Atkins earlier today, I created
a side by side comparison with and without @names.
https://gist.github.com/4198745. Not having syntactic support for Symbols
is a tough pill to swallow. Using Symbols as computed values with no
special identifying characteristics results in significantly reduced code
readability. Really ugly code, in fact.
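The readability cost described here can be sketched without the full gist; a hypothetical `Stack` with a symbol-keyed `size` field, where every access site needs the computed form (with @names each would read `this.@size`):

```javascript
// Hypothetical sketch of the complaint: a symbol-keyed "private" property
// must use computed member syntax at every access site.
const size = Symbol("size"); // with @names: private @size;

class Stack {
  constructor() {
    this[size] = 0;   // with @names: this.@size = 0;
    this.items = [];
  }
  push(v) {
    this.items.push(v);
    this[size]++;     // with @names: this.@size++;
  }
  get length() {
    return this[size];
  }
}

const s = new Stack();
s.push("a");
s.push("b");
console.log(s.length); // 2
```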