Re: short-circuiting Array.prototype.reduce

2015-03-26 Thread Kyle Simpson
> Um, that's not exactly what reduction is meant for.

There's lots of different ways `reduce(..)` gets used in the wild; I can list 
several entirely distinct but common idioms right off the top of my head. Just 
because it's not the original intent doesn't mean it's invalid to do so.

To the point of the earlier question I was addressing, I was just giving *an* 
example of real code, in a very specific circumstance, where I would have liked 
early-exit. It was not a broad endorsement of the presented idiom as a general 
pattern.


> The reduce method is designed so that the return values and the accumulator 
> argument do have the same type.

In my observation, there's nothing at all that requires that. This is certainly 
not the only time that I've made effective use of mixing/toggling types during 
reduction.


> In your example, you have somehow mixed an expected boolean result with the 
> item type of the array.

If by "expected boolean result" you mean what `filter(..)` expects to receive, 
actually it doesn't require a strict boolean. It expects a truthy/falsy value 
(check the spec). I like coercion. I use it liberally. The values in my `inner` 
arrays were all truthy values (see below) and `filter(..)` works perfectly fine 
receiving such.


> This leads to several bugs in your implementation, which doesn't work

None of those are "bugs" in my implementation, because none of those can happen 
within the constraints of the problem. If you re-read the stated setup for the 
problem I was solving, you'll see the constraints I'm referring to.

BTW, since you brought it up, for the empty `inner` array case to be supported 
(I didn't need it, but...), all I would need to do is `inner.reduce( 
function.., undefined )` (or `false` if you prefer) if I wanted empty arrays 
filtered out, or `inner.reduce( function.., true )` if I wanted empty arrays 
preserved. Easy.
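
Concretely, a sketch of that variation (the same reducer from my original snippet; the explicit initial value is the only change):

```js
// empty `inner` arrays filtered out: reducing [] with an explicit initial
// value just returns that value, and `undefined` is falsy
[].reduce( function reducer(prev,current){
  if (prev === false || prev === current) return false;
  return current;
}, undefined );   // -> undefined

// empty `inner` arrays preserved: seed with `true` instead (safe here, since
// the stated constraints exclude boolean values inside `inner`)
[].reduce( function reducer(prev,current){
  if (prev === false || prev === current) return false;
  return current;
}, true );        // -> true
```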


> all operations that return an absorbing element...would benefit

My `false` value trigger on finding an `inner` that should be filtered out is 
conceptually that. From then on in the reduction, all other values are 
"absorbed" (aka ignored, aka overriden) by the `false`. :)
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: short-circuiting Array.prototype.reduce

2015-03-26 Thread Bergi

Kyle Simpson schrieb:


Since `reduce(..)` conveniently can compare two adjacent elements if you always 
return the current value, I decided to model the inner check as a `reduce(..)` 
that reduces from the original array value to either a `false` or a truthy 
value (the last element of the inner array).


Um, that's not exactly what reduction is meant for. The reduce method is 
designed so that the return values and the accumulator argument do have 
the same type. In your example, you have somehow mixed an expected 
boolean result with the item type of the array.

This leads to several bugs in your implementation, which doesn't work
* with lists of booleans
* with empty arrays
* with arrays whose last element is falsy

> The example code isn't very compelling either; something more real-world
> would be good


Well, basically all operations that return an absorbing element would benefit 
from this.


 Bergi
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: short-circuiting Array.prototype.reduce

2015-03-26 Thread Kyle Simpson
> The example code isn't very compelling either; something more real-world 
> would be good

I recently ran across a usage of `reduce(..)` that could have benefitted from 
an early return. Figured I'd just drop it here for posterity's sake, in case 
anything ever comes of this idea.

I had an array of arrays (2-dimensional array), where each inner array had a 
list of simple string/number elements (none of them falsy) that could have some 
repetition within the inner array. I wanted to filter the outer array (the list 
of the inner arrays) based on whether an inner array had any adjacent 
duplicates. That is, `[1,4,2,4]` is fine to keep but `[2,4,4,5]` should be 
filtered out.

Since `reduce(..)` conveniently can compare two adjacent elements if you always 
return the current value, I decided to model the inner check as a `reduce(..)` 
that reduces from the original array value to either a `false` or a truthy 
value (the last element of the inner array). This reduction result then 
is how `filter(..)` decides to keep or discard. The reason an early exit would 
be nice is that as soon as you run across an adjacency-duplication, no more 
reduction is necessary -- you can immediately "reduce" to `false`.

Here's how I did it, which worked but which was slightly less appealing:

```js
var outer = [
  // [1,2,1,3,4,2]
  // ["foo","bar","bar",10,"foo"]
  // ..
];

outer = outer.filter(function filterer(inner){
  return inner.reduce(function reducer(prev,current){
    if (prev === false || prev === current) return false;
    return current;
  });
});
```

The reduction initial-value is omitted, so the first `reducer(..)` call just gets 
`inner[0]` and `inner[1]` as `prev` and `current`; the adjacent comparisons start 
right at the front of the array.

The `prev === false` check is the way that I fake the "early exit", by which 
once the reduction value is tripped to `false`, that's always the result for 
the rest of the reduction.

There's lots of other ways to "slice" that problem, I know. But reduction was 
attractive except for its lack of early exit.
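
For comparison, here's a sketch of one of those other slicings; `some(..)` gives 
the short-circuit, just not in reduction form (it needs the index rather than an 
accumulator):

```js
outer = outer.filter(function filterer(inner){
  // `some(..)` stops at the first adjacent duplicate it finds
  return !inner.some(function hasAdjacentDupe(current,idx){
    return idx > 0 && inner[idx - 1] === current;
  });
});
```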
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Converting strings to template strings

2015-03-26 Thread Kyle Simpson
> What have you been calling the MemberExpression TemplateLiteral and 
> CallExpression TemplateLiteral forms?

Those are two variations of the "Tagged String Literals" form.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Forwarding `return()` in generators

2015-03-26 Thread Axel Rauschmayer
>> It’d be great if all iterables were indeed the same in this regard.
> 
> What do you suggest, that array iterators should not be continuable? So we'd 
> have `.return()` methods on ArrayIterators, StringIterators, MapIterators, 
> and SetIterators, which set the respective [[Iterated*]] internal property 
> to `undefined`?

Maybe I overstated, maybe documenting whether an iterable produces continuable 
iterators is enough, but it is something to be aware of, something that has to 
be explained in conjunction with the iteration protocol. Similarly: whether an 
iterable restarts iteration every time you call `[Symbol.iterator]()` 
(generators don’t restart, arrays do).
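
A quick sketch of that restart difference:

```js
let arr = ['a', 'b'];
[...arr[Symbol.iterator]()];   // ['a', 'b']
[...arr[Symbol.iterator]()];   // ['a', 'b'] -- arrays hand out a fresh iterator each time

function* gen() { yield 'a'; yield 'b'; }
let g = gen();
[...g];                        // ['a', 'b']
[...g];                        // [] -- the generator object returns itself and stays exhausted
```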

-- 
Dr. Axel Rauschmayer
a...@rauschma.de
rauschma.de



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Forwarding `return()` in generators

2015-03-26 Thread Bergi

Axel Rauschmayer wrote:

a difference between iterators that you have to be aware of and that needs to 
be documented per iterable.


Alternatively, just test `if (typeof iterator.return == "function")` :-)
But yes, awareness is required. Maybe we should dub "closeable 
iterators" as a subtype of the iterator interface?



It’d be great if all iterables were indeed the same in this regard.


What do you suggest, that array iterators should not be continuable? So 
we'd have `.return()` methods on ArrayIterators, StringIterators, 
MapIterators, and SetIterators, which set the respective [[Iterated*]] 
internal property to `undefined`?


 Bergi
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Forwarding `return()` in generators

2015-03-26 Thread Axel Rauschmayer
> We want `return` (Python 2.5+ close) to be optional, though. So an iterator 
> whether implemented by a generator function or otherwise sees no difference 
> -- provided in the generator function implementation you do not yield in a 
> try with a finally. Forcing return from a not-exhausted generator parked at 
> yield other than in try-with-finally does not run any more code in the 
> generator function's body.

`return()` being optional is true for arrays:

```js
function twoLoops(iterable) {
let iterator = iterable[Symbol.iterator]();
for (let x of iterator) {
console.log(x);
break;
}
for (let x of iterator) {
console.log(x);
break;
}
}

twoLoops(['a', 'b', 'c']);
// Output:
// a
// b
```

But it is not true for generators:

```js
function* elements() {
yield 'a';
yield 'b';
yield 'c';
}

twoLoops(elements());
// Output:
// a
```

That is a difference between iterators that you have to be aware of and that 
needs to be documented per iterable. It’d be great if all iterables were indeed 
the same in this regard.
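
For what it's worth, a workaround sketch that papers over the difference by hiding 
`return()` from the for-of loop, so `twoLoops()` sees a non-closeable iterator:

```js
function keepOpen(iterator) {
    return {
        [Symbol.iterator]() { return this; },
        next(v) { return iterator.next(v); }
        // deliberately no `return()` method, so `break` does not close the generator
    };
}

twoLoops(keepOpen(elements()));
// Output:
// a
// b
```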

-- 
Dr. Axel Rauschmayer
a...@rauschma.de
rauschma.de



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Converting strings to template strings

2015-03-26 Thread caridy
In general, if you want to have control over those arguments and to reuse the 
template without the wrapping function, you will have to provide a way to apply 
logic inside the template string, and that complicates things a lot.
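
To make it concrete, the wrapping-function pattern discussed below is just this 
sketch:

```js
// reusable "template": a function whose parameters feed the embedded expressions
var greeting = (name) => `Hello ${name} !`;

greeting('there'); // "Hello there !"
```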

As for i18n, ICU messages can solve that part of the puzzle by providing structure 
and logic for complex messages. FYI, we are working on a proposal for ECMA-402 
to introduce `Intl.MessageFormat()` that will address the i18n aspect of this.

/caridy

> On Mar 23, 2015, at 4:52 PM, Andrea Giammarchi wrote:
> 
> Even a function wouldn't scale that well for i18n purposes, 'cause you should 
> write either a function per language or a per-language switch statement per 
> function.
> 
> I see current ES6 string templates as good for debugging purposes only, and not 
> much else ... maybe English-centric developer tools, so a few definitively good 
> use cases, but nothing that useful or powerful for the known Web.
> 
> ```
> 
> var template = ``Hello ${name} !``;
> 
> template({name: 'there'}); // Hello there !
> 
> ```
> 
> But I know, double back-tick might look too much like double rainbow: "OMG what 
> does it mean" ??!
> 
> Regards
> 
> 
> 
> On Mon, Mar 23, 2015 at 9:29 PM, Brendan Eich wrote:
> Jason Orendorff wrote:
> But from the few data points I have, approximately 100% of web
> developers, when they first hear "template strings are in ES6", think
> that means something like Mustache in the standard library. Some
> initially try to use the feature that way and get frustrated. I expect
> widespread confusion on this point.
> 
> This.
> 
> A function wrapped around a template string, where the function's parameters 
> occur in embedded expressions, goes a long way. But you have to write the 
> function, after teaching people the basics and apologizing for misleading 
> them with the t-word.
> 
> /be
> 
> ___
> es-discuss mailing list
> es-discuss@mozilla.org 
> https://mail.mozilla.org/listinfo/es-discuss 
> 
> 
> ___
> es-discuss mailing list
> es-discuss@mozilla.org
> https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Supporting feature tests directly

2015-03-26 Thread Bill Frantz

On 3/26/15 at 8:51 AM, get...@gmail.com (Kyle Simpson) wrote:

As I mentioned near the beginning of this thread, 
`Reflect.parse(..)` would generally suit the proposed use-case, 
except it does a lot of extra work (creating and returning a 
tree -- a value that I'd then be throwing away, creating 
unnecessary GC) that feature testing itself doesn't need. It's 
unclear that `Reflect.parse(..)` would provide any additional 
performance gains over the current `eval` / `Function` 
approach, and could even be potentially worse.


I don't see a real need for high performance in these tests. 
AFAICS, they occur once, probably at load time. A smart JS 
implementation might even parse the Reflect.parse() string at 
the same time it is parsing the main set of JS code. As such, 
the extra overhead for CPU and GC will probably be swamped by 
the communication CPU and transmission times.


Not using eval makes it more likely that you will be able to 
perform the tests in "safe" subsets of JS.


Cheers - Bill

---
Bill Frantz        | Privacy is dead, get over | Periwinkle
(408)356-8506      | it.                       | 16345 Englewood Ave
www.pwpconsult.com |   - Scott McNealy         | Los Gatos, CA 95032


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Converting strings to template strings

2015-03-26 Thread Michael Ficarra
What have you been calling the MemberExpression TemplateLiteral and
CallExpression TemplateLiteral forms?

On Wed, Mar 25, 2015 at 10:59 PM, Kyle Simpson  wrote:

> I've dubbed them "Interpolated String Literals" (or "Interpolated
> Strings") in my writings and materials going forward. I mention "Template
> Literals" and then explain why it's a bad name.
> ___
> es-discuss mailing list
> es-discuss@mozilla.org
> https://mail.mozilla.org/listinfo/es-discuss
>



-- 
Shape Security is hiring outstanding individuals. Check us out at
https://shapesecurity.com/jobs/
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Are ES6 modules in browsers going to get loaded level-by-level?

2015-03-26 Thread caridy
The issue you're describing exists today, and it is the main reason why we do 
bundling (a la webpack, browserify, etc.).

ES6 modules are not introducing any new restriction here, it is just the way 
things work. HTTP2 is supposed to help in this regard, but only a little bit. 
In my experience, loading the modules is not even the biggest issue here, but 
executing 500 modules is, because it will require at least an order of magnitude 
more promises to be resolved, plus all the other normalization logic. This is 
where "folding" might help us. Assuming you have a huge tree of modules, we 
could analyze that tree and fold it into a few modules that are key for the 
application to function; then fetching and executing those modules should not 
be a big deal. This is very similar to bundling as we know it today, but without 
sacrificing module semantics and the loader's advanced functionality.

In any case, this should not prevent us from writing modules using ES6 import 
and export declarations today.

/caridy

> On Mar 26, 2015, at 3:08 PM, Wizek <123.wi...@gmail.com> wrote:
> 
> *I've been redirected from here: 
> https://github.com/tc39/ecma262/issues/27#issuecomment-84474257 . Not sure 
> if this is a good place to ask this question. If not, I'm sorry for the 
> noise. Could you then point me elsewhere perhaps?*
> 
> I've just read this post here: 
> http://www.2ality.com/2014/09/es6-modules-final.html 
> 
> Which claims that the module system will support both sync and async loading. 
> Which I like. But it made me wonder if/how well async loading would work for 
> deeper dependency trees. E.g. if I had a project with 20 level deep 
> dependency tree (at its deepest point) and my server would take on average 
> 200ms to respond, then it would take about 4000ms minimum to execute any/all 
> of my scripts, right? Or is there something I am missing?
> 
> If I interpret the situation correctly, what is the conceptual response to 
> this scenario? Try to limit the tree depth? Concat everything just like it 
> happens often with ES5? Something else?
> ___
> es-discuss mailing list
> es-discuss@mozilla.org
> https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Are ES6 modules in browsers going to get loaded level-by-level?

2015-03-26 Thread John Barton
An async load will call for a module that knows--synchronously--its
imports. The server can respond with the entire tree in one response.

The problem comes in avoiding duplicate transmission of modules shared by
multiple async loads. The client will need to signal the server about the
previously loaded modules.

If you search this list back to posts by Ian Hickson you can read about his
HTTP/2 based discussions.

Broadly this subject is considered by "professional" web developers as just
a moot point.  They consider dynamic loading with decisions made in the
client to be undesirable.  They do support what might be called "optional"
loading, where only part of the JS is loaded initially and more JS is
loaded later. But the dependency decisions are all baked into the server
and thus the tree of layers known only in the client does not arise.

HTH,
jjb

On Thu, Mar 26, 2015 at 12:08 PM, Wizek <123.wi...@gmail.com> wrote:

> *I've been redirected from here:
> https://github.com/tc39/ecma262/issues/27#issuecomment-84474257 . Not
> sure if this is a good place to ask this question. If not, I'm sorry for
> the noise. Could you then point me elsewhere perhaps?*
>
> I've just read this post here:
> http://www.2ality.com/2014/09/es6-modules-final.html
> Which claims that the module system will support both sync and async
> loading. Which I like. But it made me wonder if/how well async loading
> would work for deeper dependency trees. E.g. if I had a project with 20
> level deep dependency tree (at its deepest point) and my server would take
> on average 200ms to respond, then it would take about 4000ms minimum to
> execute any/all of my scripts, right? Or is there something I am missing?
>
> If I interpret the situation correctly, what is the conceptual response to
> this scenario? Try to limit the tree depth? Concat everything just like it
> happens often with ES5? Something else?
>
> ___
> es-discuss mailing list
> es-discuss@mozilla.org
> https://mail.mozilla.org/listinfo/es-discuss
>
>
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Fwd: Are ES6 modules in browsers going to get loaded level-by-level?

2015-03-26 Thread Wizek
*I've been redirected from here:
https://github.com/tc39/ecma262/issues/27#issuecomment-84474257 . Not sure
if this is a good place to ask this question. If not, I'm sorry for the
noise. Could you then point me elsewhere perhaps?*

I've just read this post here:
http://www.2ality.com/2014/09/es6-modules-final.html
Which claims that the module system will support both sync and async
loading. Which I like. But it made me wonder if/how well async loading
would work for deeper dependency trees. E.g. if I had a project with 20
level deep dependency tree (at its deepest point) and my server would take
on average 200ms to respond, then it would take about 4000ms minimum to
execute any/all of my scripts, right? Or is there something I am missing?

If I interpret the situation correctly, what is the conceptual response to
this scenario? Try to limit the tree depth? Concat everything just like it
happens often with ES5? Something else?
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Supporting feature tests directly

2015-03-26 Thread Kyle Simpson
> doesn't yet solve my use cases, although I can't speak for Kyle.

It would not support my use-case. At least, in the sense that it's an 
all-or-nothing which is counter to what I'm looking for. It's also going to be 
way more processing intensive than just doing an `eval` / `Function` test, 
which defeats the entire point of the proposal.


> a feature that was specifically designed to enable non-conforming 
> implementations

That's not at all the intent of this feature. More below.


> This sort of feature testing is inherently a short term need. Within a few 
> years, all implementations will support all major features

Within a few years, all implementations will be ES6 compliant, sure. But 
they'll never all be entirely up to date on ES2016, ES2017, ES2018, … as they 
roll out.

This feature testing mechanism is intended to be a rolling window of FT's for 
the gap between when something is standardized (to the point that developers 
could rely on polyfills/transpiles for it) and when it's fully implemented in 
all browsers that your app is running on. This gap could be as short as 6-12 
months and (considering mobile) as long as several years.

On an app-by-app, need-by-need basis, there will *always* be such a gap, and 
FT's let you know what you have available at that moment in that specific 
browser.

This is directly analogous to all other classes of FT's, such as modernizr 
(focused more on HTML/CSS, with JS only as it related to one of those).


> For example, I’m sure nobody today has a need to test 
> Reflect.supports(Symbol.functionExpression) or 
> Reflect.supports(Symbol.tryCatch).

No, they don't. Exactly my point with the rolling window. And exactly why I 
stated that the intent of this feature is *not* about ES6 (or ES5) features, 
but rather about new stuff in ES2016+. It would be my hope that the feature 
testing API proposed could be one of the first things browsers could land 
post-ES6, which would mean devs could soon'ish start using those tests to 
track/cope with the gap between the ES2016 stamp of approval and when all those 
ES2016 features land. And of course the same for ES2017 and beyond.

And since what I'm asking for is stuff that, largely, can already be tested, 
just less efficiently, we could very quickly polyfill `Reflect.supports` to let 
devs use it even earlier.
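
Rough sketch of what that polyfill could look like, for the string-syntax form 
only (it assumes `Reflect` itself exists, and the `Reflect.supports(..)` shape 
from my gist):

```js
if (!Reflect.supports) {
  Reflect.supports = function supports(src) {
    try {
      // `Function(..)` compiles (and discards) a whole function object, which is
      // exactly the overhead a native parse-only check would avoid
      new Function(src);
      return true;
    }
    catch (err) {
      return false;
    }
  };
}

Reflect.supports("(()=>{})");   // true in an ES6+ engine
Reflect.supports("let x");      // true in an ES6+ engine
```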


> would be throw-away work that within a few years would just be legacy baggage

My design intent with my proposal, supporting the string syntax form, is to not 
have a huge table of lookup values that are legacy baggage and thrown away, but 
a general feature that is flexible and continues to be useful going forward.

The few exception cases, if any, like for example a `Symbol.TCO` test or 
whatever, would be very small, and their "burden" of legacy would be quite low 
once we're past the window of them being useful.


> a feature such as Reflect.parse which has other uses 

As I mentioned near the beginning of this thread, `Reflect.parse(..)` would 
generally suit the proposed use-case, except it does a lot of extra work 
(creating and returning a tree -- a value that I'd then be throwing away, 
creating unnecessary GC) that feature testing itself doesn't need. It's unclear 
that `Reflect.parse(..)` would provide any additional performance gains over 
the current `eval` / `Function` approach, and could even be potentially worse.

It's also unclear that `Reflect.parse(..)` would ever have any reasonable 
answer for the "hard" tests we've briefly touched on, such as exposing 
semantics like TCO or any other sorts of things we invent which can't 
reasonably be tested by syntax checks or pragmatically tested via runtime code. 
At least `Reflect.supports(..)` *could* have an answer for that.



___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Forwarding `return()` in generators

2015-03-26 Thread Brendan Eich

Axel Rauschmayer wrote:
The way that `return()` is handled in generators (via try-catch) means 
that the clean-up action is called in both cases: generator exhaustion 
and generator closure. If you don’t use a generator then a clean-up 
action in `return()` is only called if the iterator is closed, not if 
it is exhausted. Obviously that is a minor technical detail and easy 
to fix (call the clean-up action when you return `{ done: true }`), 
but it’s still an inconsistency (which wouldn’t exist if `return()` 
was called in either case).


We want `return` (Python 2.5+ close) to be optional, though. So an 
iterator whether implemented by a generator function or otherwise sees 
no difference -- provided in the generator function implementation you 
do not yield in a try with a finally. Forcing return from a 
not-exhausted generator parked at yield other than in try-with-finally 
does not run any more code in the generator function's body.


If you, the implementor of an iterator, want to handle close 
(pre-exhaustion break from a for-of construct), implement `return`. If 
you as implementor choose a generator function to implement your 
iterator, you'll want that try-finally.


In this sense, return for generator function, while provided as a method 
of generator iterators, is encoded in the body of the generator function 
at the implementor's discretion (via try-yield-finally -- arguably one 
big try-finally with the guts that use yield in the try block).
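
Sketch of that pattern (the resource-handling helpers are hypothetical):

```js
function* lines(file) {
  var handle = open(file);        // hypothetical resource acquisition
  try {
    while (!handle.eof) {
      yield handle.readLine();    // hypothetical read
    }
  } finally {
    handle.close();               // runs on exhaustion *and* on an early return()
  }
}

for (var line of lines("data.txt")) {
  if (line.length === 0) break;   // break calls the iterator's return(), which runs the finally
}
```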


HTH,

/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Supporting feature tests directly

2015-03-26 Thread Allen Wirfs-Brock

> On Mar 22, 2015, at 2:00 AM, Kyle Simpson  wrote:
> 
>> I think you're referring to the `eval` function?
> 
> Actually, I'm referring to proposing something new that would substitute for 
> having to hack feature tests with `eval`.
> 
> These are the initial details of my idea, a `Reflect.supports(..)` method: 
> https://gist.github.com/getify/1aac6cacec9cb6861706
> 
> Summary: `Reflect.supports( "(()=>{})" )` or `Reflect.supports( "let x" )` 
> could test **just** for the ability to parse, as opposed to the 
> compilation/execution that `eval(..)` does. It'd be much closer to `new 
> Function(..)` except without the overhead of needing to actually produce the 
> function object (and then have it be thrown away for GC).
> 
> This is inspired by 
> https://developer.mozilla.org/en-US/docs/Mozilla/Projects/SpiderMonkey/Parser_API,
>  where FF has a `Reflect.parse(..)` method that is somewhat like what I'm 
> suggesting, except that for feature tests we don't need the parse tree, just 
> a true/false of if it succeeded.
> 
> An alternate form would be `Reflect.supports( Symbol.arrowFunction )`, where 
> the engine is just specifically saying "yes" I support that feature by 
> recognizing it by its unique built-in symbol name.

I’m pretty skeptical about including this sort of feature testing feature as 
part of standard ECMAScript.  Here are some of the reasons:

ECMAScript doesn’t include the concept of language subsets/supersets (other 
than the Annex B features). A conforming implementation of an Ecma TC39 
standard is expected to implement all of the features of the current standard. 
Given that perspective, it isn’t clear why we would want to provide a feature 
that was specifically designed to enable non-conforming implementations. 
test262 is TC39’s support for testing the standards conformance of an 
implementation. 

This sort of feature testing is inherently a short-term need. Within a few 
years, all implementations will support all major features, so work that goes 
into incorporating specific feature detection into the ES standard (for example 
defining something like Symbol.arrowFunction) would be throw-away work that 
within a few years would just be legacy baggage that could never be removed 
from the language.  For example, I’m sure nobody today has a need to test 
Reflect.supports(Symbol.functionExpression) or 
Reflect.supports(Symbol.tryCatch).

On the other hand, a feature such as Reflect.parse which has other uses but 
which also has a potential applicability for feature detection seems reasonable.

Allen

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss