Re: Re: Array.prototype.toObjectByProperty( element=>element.property )

2017-08-07 Thread Jussi Kalliokoski
On Mon, Aug 7, 2017 at 10:24 PM, Naveen Chawla 
wrote:

> However, if we are going straight from iterable to Object, I don't think
> we should be propagating the "[key, value] as an array" pattern.
>

Consistency is good, and especially better than, for example, adding new
syntax. Subjectively, I'm not a big fan of single-pair computed-key
objects either: `return { [key]: value }` vs `return [key, value]`. Arrays
also have a better set of standard library functions available, which makes
preprocessing easier - for example reversing keys and values.


> It's unclean partly because it's 2 elements max and IDEs can't inherently
> show warnings if you accidentally added more, unless it has some
> intelligence about the method itself. Partly also because the arrays are
> not immediately readable unless you already know the that the arrays mean
> [key, value].
>

You're probably not referring to hand-writing lists of key-value pairs (in
which case you should just use an object literal to begin with), but yes,
text editors inherently don't help much with code validity; however, type
systems such as Flow make it pretty easy [1] to catch this sort of
thing:

```javascript
/* @flow */

function objectFrom<KeyType, ValueType>(
  iterable : Iterable<[KeyType, ValueType]>
) : {
  [key : KeyType]: ValueType
} {
  const object : { [key : KeyType]: ValueType } = {};

  for (const [key, value] of iterable) {
    object[key] = value;
  }

  return object;
}

objectFrom([["key", 2], [null, "value"], [1, 3]]) // <- all good
objectFrom([["key", "value", "other"]]) // <- ERROR: Tuple arity mismatch. This tuple has 3 elements and cannot flow to the 2 elements of...
objectFrom([1, 2, 3, 4, 5].map(x => [x, x])) // <- all good
objectFrom([1, 2, 3, 4, 5].map(x => Array(x))) // <- ERROR: Tuple arity mismatch...
```

IMO it's generally a good idea to use stronger correctness guarantees than
editor syntax highlighting / validation anyway.

Regarding `Map.from(...)`, I'm not sure how that would be useful since the
Map constructor already accepts an iterable. Maybe when passing to something
that expects a function, but then added arity (for example a mapper
function) becomes a hazard. The partial application proposal might help
with this though: `x.map(Map.from(?))`.

I'd be happy to see something like `Object.from(iterable)` and maybe even
`Map.from(iterable)` in the core library - simple, generic and common. Not
so convinced about the mapper functions - if an engine doesn't bother
optimizing away the extra allocation in `Object.from(x.map(fn))` I don't
see why it would bother optimizing `Object.from(x, fn)` either. Please
correct me if and how I'm wrong though.
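For illustration, a user-land sketch of what `Object.from(iterable)` could do (`objectFrom` is a hypothetical name; engines have since shipped `Object.fromEntries` with essentially this behavior):

```javascript
// Sketch: build a plain object from any iterable of [key, value] pairs.
function objectFrom(iterable) {
  const object = {};
  for (const [key, value] of iterable) {
    object[key] = value;
  }
  return object;
}

objectFrom([["a", 1], ["b", 2]]);        // { a: 1, b: 2 }
objectFrom(new Map([["k", "v"]]));       // { k: "v" } - works for any iterable
```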

[1]:
https://flow.org/try/#0PQKgBAAgZgNg9gdzCYAoVUCuA7AxgFwEs5sw4AjAKwFMCAxAJzgFsAeAaWoE8AVLgB2oAaMADUAhjEzU+ggHwAKVGDCF81BuPIxqYAFxgAkus3bqrANqdeA4WMnTZ1ALpzUASn1gA3srAWAa24vaydnAwkpGVtUAF8fP1wSAGd8MipaNINvfyCuEO4wiIdowTB4gF4fWIBudBUoOAYwBSTsVNzuEQA3EucyKFUTLR1PXxUVChoCQO5+qt6oupVY+rAGanxMBlIpzLrV1D36JmYFCwsAIjzLkQAmZxELbEwYGBFLxelLx-8ARhEAGZnM5PMBgGBWABaMCSGBgADmcDgABMjhkTixzlcbh8vtRbmBLnB8AALDQ-UFgcGQmEAUQASgyAPIMgw8TD8HSwhhqfLMQjJZjifC4UkAOjAPFJgrAWy5ulJ4mSYEBYGoOmY1Gw+BV4mwKLAuH12BJYFgiDlcDl5LAd3Vmu1uoG4td6Om+EYWIsALuQkBQgALEIAKzOcXC-gKAAeYAqcn80ZE0dBYIh0Nhb0RyLRx09p3Ovv9QdD4cjMbjCYAggxNFwY+5G9T0zCaRnGSy2VLOdzxLz8PzBcLRRLXUA
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Deterministic Proposal

2017-06-21 Thread Jussi Kalliokoski
> deterministic function sum(a, b) { return a + b; }
>

Ironically, having the engine decide when to memoize would make the
"deterministic" functions non-deterministic:

```JS
deterministic function foo(a, b) { return { a, b }; }
foo(1, 2) === foo(1, 2) // may or may not be true
```
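To illustrate: a user-land memoizer (a sketch; the key derivation assumes primitive arguments) makes exactly this identity difference observable:

```javascript
// Sketch: memoization changes observable result identity for functions
// that return objects - which is why engine-chosen memoization would
// make "deterministic" functions behave non-deterministically.
function memoize(fn) {
  const cache = new Map();
  return function (a, b) {
    const key = a + "\u0000" + b; // simplistic key; assumes primitive args
    if (!cache.has(key)) cache.set(key, fn(a, b));
    return cache.get(key);
  };
}

const foo = (a, b) => ({ a, b });
const memoFoo = memoize(foo);

foo(1, 2) === foo(1, 2);         // false: a fresh object on every call
memoFoo(1, 2) === memoFoo(1, 2); // true: the cached object is returned
```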


Incorporate URL spec into EcmaScript

2017-03-19 Thread Jussi Kalliokoski
I've been thinking that the WHATWG URL spec [1] seems like it should be
part of the ES language, now that the spec is really solid and has been
implemented by both browsers and Node. After all, URLs are a universal
concept in applications of all sorts, especially in contexts where JS is
used; I can't think of a single platform that runs JS and would not benefit
from having the URL standard implemented.
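For reference, the API in question, as shipped in browsers and Node:

```javascript
// The WHATWG URL API: structured parsing and query-string handling.
const url = new URL("https://example.com/path?foo=1#frag");

url.hostname;                // "example.com"
url.pathname;                // "/path"
url.searchParams.get("foo"); // "1"
```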

WDYT?

[1]: https://url.spec.whatwg.org


Re: Re: Weak Graph

2015-11-08 Thread Jussi Kalliokoski
On Sun, 8 Nov 2015 at 13:29 Nelo Mitranim  wrote:

>
> I was just recently working on the same problem as Jussy:
> subscribe/unsubscribe pattern for view components, and immutable data tree
> for efficient updates.
>
> (Working demo [here](https://github.com/Mitranim/chat); the subs/unsubs
> are completely implicit.)
>
> Concluded that weakrefs offer no help with this whatsoever. In fact,
> they're a red herring. You always want to unsubscribe your listeners
> deterministically, exactly when the corresponding view is destroyed.
> Letting them hang around and attempt to update nonexistent views until the
> next GC phase is no good. In fact, React will give you warnings when you
> try to update unmounted components. It also provides the "will mount" /
> "will unmount" lifecycle methods to clean things up.
>
> Pretty sure weakrefs are harmful rather than helpful in this particular
> use case. But I may have missed the point of the sentiment.
>

I think I may have miscommunicated my meaning here. In the idea I've been
working on, the view has no subscription (except at top level) that can
push data to it, so there's nothing to unsubscribe from. The idea being
that the data structures are modeled purely synchronously (current version
+ diff to previous version). The reason for needing reference counting is
different from FRP streams: making sure resources can be collected instead
of leaving an infinite trail. So basically FRP in this comparison is a push
model that needs reference counting to free the leaf nodes whereas the
model I have in mind is pull where the reference counting is needed to free
the root nodes.

- Jussi


P.S. Thanks for the demo, will check it out!




Re: Weak Graph

2015-11-08 Thread Jussi Kalliokoski
On Fri, 6 Nov 2015 at 17:31 Jason Orendorff <jason.orendo...@gmail.com>
wrote:

> On Wed, Nov 4, 2015 at 10:09 AM, Jussi Kalliokoski
> <jussi.kallioko...@gmail.com> wrote:
> > I'm trying to come up with a solution to the problem of rendering lists
> [...]
> > My idea for a solution is that the lists are immutable, contain a
> reference
> > to their parent and a changeset / diff compared to their parent. [...]
>
> Good problem, interesting idea.
>
> > The biggest problem is that this will leak memory like crazy; every
> revision
> > of the list will be preserved.
>
> OK. Perhaps obviously, the only way around this is to mutate the list,
> breaking the chain at a point where nobody cares about the rest of it
> anymore.
>
> The approach you've outlined is to have the GC tell you when to do the
> mutation, but why is that a good idea? You can do it deterministically
> in getLineage().
>

I'm not sure I follow.


> Maybe the concepts here would be clearer if we limited the graph to a
> single linked list.


Perhaps "graph" is too broad a concept for expressing this, but it
certainly is not a linked list either. It may look like one in some cases,
when there's only one lineage and no branching. However, that is not the use
case here: when application state gets transformed into a view
representation, it may have various transformations applied to it, such as
sorting, mapping or filtering.


> Then it looks a lot like a stream, in the
> functional reactive programming sense. Let the user (in this case, the
> renderer) buffer the diffs as needed; it knows when to reset the list.
> And no need for fancy data structures: it could just be an Array.
>

That misses the point, to say the least. The idea of React is that you can
represent the UI as a declarative function of a snapshot of the state, so
the view doesn't have to care about async. With FRP you
subscribe/unsubscribe to asynchronous streams which, not unlike the idea
here, can be transformed just like normal data structures and forked (to
fork an immutable data structure is to just pass the reference). The
difference is that streams are an inherently async structure, while what
I'm trying to do is not. The idea is not only that the easy case of
insert without transforms is O(1), but that almost every use case can be
optimized further by knowing the previous state of the data structure.

Consider this: You have a large list of items as an array, unsorted, as
your state. The view is a paged listing of the items sorted by different
criteria. So basically:

```JS
list
  .sort(ascendingBy("something"))
  .slice(FIRST_INDEX_ON_PAGE, LAST_INDEX_ON_PAGE)
  .map(item => <Item item={item} />) // JSX element stripped by the list archive
```
Now something gets added to the middle of the list. Let's look at what we
can do with this information at each stage, if we know the previous list
and the diff to that:
* We can perform the sort in linear time because we just have to find where
the added item belongs in the list.
* The slice now has a different diff (only the insert position is
different), and
 - if the added item is in view, we can make the insertion index relative
to our view and add a remove for the last item in the list.
 - if the added item is before the view, we return an insert for the item
coming to view and a remove for the item leaving the view.
 - if the added item is after the view, the diff is empty, so we can stop
here.
* We can map just the diffs at the last stage.
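As a rough sketch of the first bullet (the helper name and diff shape are my own, hypothetical):

```javascript
// Sketch: applying a single-item insert diff to an already-sorted list
// in linear time, instead of re-sorting the whole list (O(n log n)).
function applySortedInsert(sortedList, item, compare) {
  const result = [];
  let inserted = false;
  for (const existing of sortedList) {
    // Walk the old sorted list once; drop the new item in at the
    // first position where it compares <= the existing element.
    if (!inserted && compare(item, existing) <= 0) {
      result.push(item);
      inserted = true;
    }
    result.push(existing);
  }
  if (!inserted) result.push(item); // belongs at the end
  return result;
}

applySortedInsert([1, 3, 5], 4, (a, b) => a - b); // [1, 3, 4, 5]
```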

You can implement this with streams, but that would just be an unnecessary
abstraction level offering no simplification whatsoever, while making the
concept needlessly async. Another significant difference between this and
FRP is that streams require imperative subscribe / unsubscribe, which is
basically just sophisticated reference counting with the same issues
(use after free -> update after unmount, leaks). What I have in mind can
also be implemented using reference counting, and in fact will be in my
initial version, but having a WeakGraph data structure would make this
nasty artifact and source of easy bugs (use after free -> use after
unsubscribe, leaks) go away, just like WeakMap and WeakSet are designed to
do for certain other cases.

- Jussi


>
> -j
>


Weak Graph

2015-11-04 Thread Jussi Kalliokoski
Usually my first assumption when I think I need weak references is that I
should use WeakMaps instead. Today I came across an interesting use case
and was wrong for the first time. However it wasn't weak references I
needed either.

I'm trying to come up with a solution to the problem of rendering lists
that's often used as a counter argument for using a framework / view
library instead of direct DOM manipulation, where - even with React -
updating (even appending to) a list is O(n) at best.

My idea for a solution is that the lists are immutable, contain a reference
to their parent and a changeset / diff compared to their parent. This would
allow rendering the whole list initially, then just applying the diffs on
subsequent renders by walking the graph up to the last known ancestor and
combining the changesets. This would make subsequent renders O(1) which is
a great improvement even with small lists.

The biggest problem is that this will leak memory like crazy; every
revision of the list will be preserved. Let's say we have the following
implementation:

```JS
function WeakGraph () {
  const parentByNode = new WeakMap();

  this.getLineage = function (node, ancestor) {
    const lineage = [];

    let currentNode = node;
    do {
      lineage.push(currentNode);
      if ( !parentByNode.has(currentNode) ) {
        throw new Error("node is not a descendant of ancestor");
      }
      currentNode = parentByNode.get(currentNode);
    } while ( currentNode !== ancestor );

    return lineage;
  };

  this.addNode = function (node, parent) {
    parentByNode.set(node, parent);
  };
}
```

It provides the needed interface and the unused child revisions get cleaned
up properly. However:

* This is a complete nightmare for GC performance because of cyclical weak
references.
* Any reference to a child will maintain references to all its parents.

However this doesn't necessarily need to be the case because the stored
ancestry is not observable to anything that creates a WeakGraph, except to
the oldest ancestor that has a reference elsewhere.
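To make the intended behavior concrete, here's a self-contained usage sketch of the WeakGraph above (the node values are placeholders):

```javascript
// Self-contained copy of the WeakGraph sketch, plus a usage example.
function WeakGraph() {
  const parentByNode = new WeakMap();

  this.getLineage = function (node, ancestor) {
    const lineage = [];
    let currentNode = node;
    do {
      lineage.push(currentNode);
      if (!parentByNode.has(currentNode)) {
        throw new Error("node is not a descendant of ancestor");
      }
      currentNode = parentByNode.get(currentNode);
    } while (currentNode !== ancestor);
    return lineage;
  };

  this.addNode = function (node, parent) {
    parentByNode.set(node, parent);
  };
}

// Three revisions of a list; each holds a diff to its parent in practice.
const v1 = {}, v2 = {}, v3 = {};
const graph = new WeakGraph();
graph.addNode(v2, v1);
graph.addNode(v3, v2);

graph.getLineage(v3, v1); // [v3, v2]: the revisions whose diffs to combine
```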

I'm not sure if this use case alone warrants adding a new feature to the
language, or if I'm just missing something and it can be implemented with
existing constructs or if there should be some other lower level primitive
that would allow building a WeakGraph on the user level.

- Jussi


Re: Weak Graph

2015-11-04 Thread Jussi Kalliokoski
On Wed, Nov 4, 2015 at 6:19 PM, Isiah Meadows <isiahmead...@gmail.com>
wrote:

> Would this be possible with a mixture of weak references and weak
> collections?
>

I don't think so - the only potential implementation I can think of would
make a weak reference to the parent, which would allow the parent to be
GCed; but then all the nodes between the latest revision and the ancestor
would require strong references somewhere in order to be maintained, making
the whole thing kind of pointless because you couldn't rely on it working.
Also, this use case doesn't require making GC observable, which I think is
pretty much the only feature of weak references that WeakMaps don't provide.

- Jussi


>
> On Wed, Nov 4, 2015, 11:09 Jussi Kalliokoski <jussi.kallioko...@gmail.com>
> wrote:
>
>> Usually my first assumption when I think I need weak references is that I
>> should use WeakMaps instead. Today I came across an interesting use case
>> and was wrong for the first time. However it wasn't weak references I
>> needed either.
>>
>> I'm trying to come up with a solution to the problem of rendering lists
>> that's often used as a counter argument for using a framework / view
>> library instead of direct DOM manipulation, where - even with React -
>> updating (even appending to) a list is O(n) at best.
>>
>> My idea for a solution is that the lists are immutable, contain a
>> reference to their parent and a changeset / diff compared to their parent.
>> This would allow rendering the whole list initally, then just applying the
>> diffs on subsequent renders by walking the graph up to the last known
>> ancestor and combining the changesets. This would make subsequent renders
>> O(1) which is a great improvement even with small lists.
>>
>> The biggest problem is that this will leak memory like crazy; every
>> revision of the list will be preserved. Let's say we have the following
>> implementation:
>>
>> ```JS
>> function WeakGraph () {
>> const parentByNode = new WeakMap();
>>
>> this.getLineage = function (node, ancestor) {
>> const lineage = [];
>>
>> let currentNode = node;
>> do {
>> lineage.push(currentNode);
>> if ( !parentByNode.has(currentNode) ) { throw new Error("node
>> is not a descendant of ancestor"); }
>> currentNode = parentByNode.get(currentNode);
>> } while ( currentNode !== ancestor );
>>
>> return lineage;
>> };
>>
>> this.addNode = function (node, parent) {
>> parentByNode.set(node, parent);
>> };
>> }
>> ```
>>
>> It provides the needed interface and the unused child revisions get
>> cleaned up properly. However:
>>
>> * This is a complete nightmare for GC performance because of cyclical
>> weak references.
>> * Any reference to a child will maintain references to all its parents.
>>
>> However this doesn't necessarily need to be the case because the stored
>> ancestry is not observable to anything that creates a WeakGraph, except to
>> the oldest ancestor that has a reference elsewhere.
>>
>> I'm not sure if this use case alone warrants adding a new feature to the
>> language, or if I'm just missing something and it can be implemented with
>> existing constructs or if there should be some other lower level primitive
>> that would allow building a WeakGraph on the user level.
>>
>> - Jussi


Re: self functions

2015-07-28 Thread Jussi Kalliokoski
For classes, you can use decorators [1], if they eventually get into
the language (just one extra symbol):

```JS
function self (object, name, descriptor) {
  var original = descriptor.value;
  return {
    ...descriptor,
    value (...args) {
      original.apply(this, args);
      return this;
    },
  };
}

class Foo {
  constructor () {
    this.x = 0;
  }

  @self
  method () {
    this.x++;
  }
}

console.log(new Foo().method().x) // 1
```

for ES5 style classes, you can just use functions (2 extra symbols):

```JS
function self (original) {
  return function (...args) {
    original.apply(this, args);
    return this;
  };
}

function Foo () {
  this.x = 0;
}

Foo.prototype.method = self(function () {
  this.x++;
});

console.log(new Foo().method().x) // 1
```

[1]: https://github.com/wycats/javascript-decorators

On Tue, Jul 28, 2015 at 5:06 AM, Bucaran jbuca...@me.com wrote:

 Add a `self` decorator to functions that makes them return `this` by
 default.

 export function self myMethod () {
 // return this by default
 }

 This decorator could be used to make any function bound to the current
 scope `this` as well so:

 func(function self () { // this is bound to my dad’s scope
 })

 Would be roughly equivalent to:

 func(() => { // this is bound to my dad’s scope })

 Although I personally would favor the arrow function syntax, the `self`
 decorator could be
 used to optimize binding generators, so instead of:

 func(function* () { }.bind(this))

 One could write:

 func(function self* () { })

 Similary it could be used in promise handlers, so instead of:

 new Promise(function (resolve, reject) { }.bind(this))

 One could write:

 new Promise(function self (resolve, reject) { })

 It would be even sweeter if you didn’t need to specify the keyword
 function when writing `self` functions:

 new Promise(self (resolve, reject) { })


Re: Named `this` and `this` destructuring

2015-06-18 Thread Jussi Kalliokoski
On Wed, Jun 17, 2015 at 11:38 PM, Allen Wirfs-Brock al...@wirfs-brock.com
wrote:


 On Jun 17, 2015, at 10:01 AM, Jussi Kalliokoski wrote:
 ...
 
  More examples of the power of the bind syntax can be found in the links,
 but the bind syntax combined with my proposal would for example allow this:
 
  ```JS
  function add (a, b) { return a + b; }
 
  2::add(3) // 5
  ```
 
 and why that better than:
 ```js
 function add(a,b) {return a+b}

 add(2,3);
 ```


Poor example, sorry.

Because it enables chaining in a way that preserves the natural reading
order of JS and fits in well with the builtins:

```JS

flatten(
  items // <- data here
    .filter(isOk)
    .map(toOtherType)
).join(" ");
```

vs.

```JS

items // <- data here
  .filter(isOk)
  .map(toOtherType)
  ::flatten()
  .join(" ");

```


As for fitting in (composing) well with the builtins, for example Trine
allows you to do this:

```JS
[["a", "b", "c"], ["e", "f", "g"]]::map([].slice); // yields copies of the arrays
["foo", "bar"]::map("".repeat::partial(3)) // yields "foofoofoo", "barbarbar"
```

because all the functions take data in the `this` slot, as most of the
builtins and framework methods do too.



 Every new feature increases the conceptual complexity of a language and to
 justify that it needs to provide a big pay back.


I wholeheartedly agree on this, which is why I stated that it might be too
early for my proposal.


 This doesn't seem to have much of a pay back.


Have no comments to that for my proposal. As for Kevin's proposal, I
disagree; the payback seems great in that one.


 Adding the  and :: doesn't eliminate the need for JS programmer to learn
 about `this` in functions for the various already existing ways to call a
 function with an explicit `this` value.  It just add more new syntax that
 needs to be learned and remembered are move feature interactions that have
 to be understood.


Agreed, however not having to learn `this` isn't the goal of either of
the proposals mentioned in this thread.



 JS doesn't need more syntax and semantics piled on to `this`.


Arguably JS hasn't needed anything since it became Turing complete. That
doesn't mean that adding some of the things added since then was a bad idea.
It's always a tradeoff, but that's nothing specific to this proposal.


 Ideally some would be taken away.  However, the latter is not possible.


Agree on this too. (As a tangent I'd be curious to know what you'd take
away, were it possible?)



 Allen




Re: Named `this` and `this` destructuring

2015-06-17 Thread Jussi Kalliokoski
On Wed, Jun 17, 2015 at 10:35 PM, Andrea Giammarchi 
andrea.giammar...@gmail.com wrote:

 OK **that one** I've no idea what supposes to improve exactly ... I should
 have tried to realize your proposal better, apologies.

 After seeing that, I probably agree with Allen at this point we don't
 really need that kind of syntax around JS (still IMHO, of course)


To each their own. :) I personally really like the bind syntax and have
received a tremendously positive feedback on it - the Trine project alone
has received over 1000 stars on GitHub, in under a week since release (last
Thursday), and it's just showcasing a part of the power of the proposed
syntax.



 Best Regards

 On Wed, Jun 17, 2015 at 6:01 PM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:



 On Wed, Jun 17, 2015 at 7:13 PM, Allen Wirfs-Brock al...@wirfs-brock.com
  wrote:


 On Jun 17, 2015, at 8:09 AM, Andrea Giammarchi wrote:

 Mostly every Array extra in ES5 would work with those functions, e.g.

 ```js
 function multiplyPoints (_p2) {
   var { x1: x, y1: y } = this;
   var { x2: x, y2: y } = _p2;
   return { x: x1 * x2, y: y1 * y2 };
 }

 var multiplied = manyPoints.map(multiplyPoints, centralPoint);
 ```

 It's not that common pattern but it gives you the ability to recycle
 functions as both methods or filters or mappers or forEachers and
 vice-versa.

 I personally use those kind of functions quite a lot to be honest, most
 developers keep ignoring Array extra second parameter as context though,
 they probably use a wrapped fat arrow within an IFI with call(context) :D


 It seems to me that  we already can quite nicely express in ES6 the use
 of a function as a method:

 ```js
 function multiplyPoints({x1, y1}, {x2,y2}) {
 return { x: x1 * x2, y: y1 * y2 }
 }

 class Point {
multiply(p2) {return multiplyPoints(this, p2)}
 }
 ```

 or, perhaps a bit more OO

 ```js
 class Point {
static multiply({x1, y1}, {x2,y2}) {
   return new Point(x1 * x2, y1 * y2 )  //or new this(...) if you
 care about subclassing Point
}

multiply(p2) {return Point.multiply(this, p2)}

constructor(x,y) {
   this.x = x;
   this.x = y;
}
 }
 ```

 Regardless of how you express it, if you want the same function to be
 used both as a standalone function and as an method, you are going to have
 to have a line or two of code to install the function as a method.  To me,
 the one-line method definitions used above are about as concise and much
 clearer in intent than `Point.prototype.multiply=multiplyPoints;` or
 whatever other expression you would use to install such a function as a
 method.  And I would expect any high perf JIT to use inlining to completely
 eliminate the indirection so, where it matters, there probably wound't be
 any performance difference.

 Many JS programmers have historically been confused about the JS
 semantics of `this` because it is over-exposed in non-method functions.
 Things like the current proposal increases rather than mitigates the
 potential for such confusion. if you are programming in a functional style,
 don't write functions that use `this`.  If you need to transition from
 to/from OO and functional styles, be explicit as shown above.

 `this` is an OO concept.  FP people, `this` is not for you;  don't use
 it, don't try to fix it.


 But I already am [1][1], and it allows for a much nicer syntax than
 functions that don't use `this`, and also composes well with built-ins
 (other than Object.*) This proposal is building on the proposed function
 bind syntax [2][2].

 More examples of the power of the bind syntax can be found in the links,
 but the bind syntax combined with my proposal would for example allow this:

 ```JS
 function add (a, b) { return a + b; }

 2::add(3) // 5
 ```

 [1]: https://github.com/jussi-kalliokoski/trine
 [2]: https://github.com/zenparsing/es-function-bind


 Allen





Re: Named `this` and `this` destructuring

2015-06-17 Thread Jussi Kalliokoski
On Wed, Jun 17, 2015 at 10:45 PM, Andrea Giammarchi 
andrea.giammar...@gmail.com wrote:

 the ::bind syntax is OK (I don't really like that double colon 'cause it's
 not semantic at all with the single colon meaning but I can live with it)
 but having potential bitwise-like operators around to adress a possible
 context ... well, I wouldn't probably use/need that in the short, or even
 long, term.

 Again, just my opinion, listen to others ;-)


Ah sorry, my bad, I misunderstood you. :) To clarify, I've only heard
positive feedback from people of the bind syntax; as for this proposal,
this thread is the first time I hear feedback and it doesn't seem overtly
positive. :P



 On Wed, Jun 17, 2015 at 8:40 PM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:

 On Wed, Jun 17, 2015 at 10:35 PM, Andrea Giammarchi 
 andrea.giammar...@gmail.com wrote:

 OK **that one** I've no idea what supposes to improve exactly ... I
 should have tried to realize your proposal better, apologies.

 After seeing that, I probably agree with Allen at this point we don't
 really need that kind of syntax around JS (still IMHO, of course)


 To each their own. :) I personally really like the bind syntax and have
 received a tremendously positive feedback on it - the Trine project alone
 has received over 1000 stars on GitHub, in under a week since release (last
 Thursday), and it's just showcasing a part of the power of the proposed
 syntax.



 Best Regards

 On Wed, Jun 17, 2015 at 6:01 PM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:



 On Wed, Jun 17, 2015 at 7:13 PM, Allen Wirfs-Brock 
 al...@wirfs-brock.com wrote:


 On Jun 17, 2015, at 8:09 AM, Andrea Giammarchi wrote:

 Mostly every Array extra in ES5 would work with those functions, e.g.

 ```js
 function multiplyPoints (_p2) {
   var { x1: x, y1: y } = this;
   var { x2: x, y2: y } = _p2;
   return { x: x1 * x2, y: y1 * y2 };
 }

 var multiplied = manyPoints.map(multiplyPoints, centralPoint);
 ```

 It's not that common pattern but it gives you the ability to recycle
 functions as both methods or filters or mappers or forEachers and
 vice-versa.

 I personally use those kind of functions quite a lot to be honest,
 most developers keep ignoring Array extra second parameter as context
 though, they probably use a wrapped fat arrow within an IFI with
 call(context) :D


 It seems to me that  we already can quite nicely express in ES6 the
 use of a function as a method:

 ```js
 function multiplyPoints({x1, y1}, {x2,y2}) {
 return { x: x1 * x2, y: y1 * y2 }
 }

 class Point {
multiply(p2) {return multiplyPoints(this, p2)}
 }
 ```

 or, perhaps a bit more OO

 ```js
 class Point {
static multiply({x1, y1}, {x2,y2}) {
   return new Point(x1 * x2, y1 * y2 )  //or new this(...) if you
 care about subclassing Point
}

multiply(p2) {return Point.multiply(this, p2)}

constructor(x,y) {
   this.x = x;
   this.x = y;
}
 }
 ```

 Regardless of how you express it, if you want the same function to be
 used both as a standalone function and as an method, you are going to have
 to have a line or two of code to install the function as a method.  To me,
 the one-line method definitions used above are about as concise and much
 clearer in intent than `Point.prototype.multiply=multiplyPoints;` or
 whatever other expression you would use to install such a function as a
 method.  And I would expect any high perf JIT to use inlining to 
 completely
 eliminate the indirection so, where it matters, there probably wound't be
 any performance difference.

 Many JS programmers have historically been confused about the JS
 semantics of `this` because it is over-exposed in non-method functions.
 Things like the current proposal increases rather than mitigates the
 potential for such confusion. if you are programming in a functional 
 style,
 don't write functions that use `this`.  If you need to transition from
 to/from OO and functional styles, be explicit as shown above.

 `this` is an OO concept.  FP people, `this` is not for you;  don't use
 it, don't try to fix it.


 But I already am [1][1], and it allows for a much nicer syntax than
 functions that don't use `this`, and also composes well with built-ins
 (other than Object.*) This proposal is building on the proposed function
 bind syntax [2][2].

 More examples of the power of the bind syntax can be found in the
 links, but the bind syntax combined with my proposal would for example
 allow this:

 ```JS
 function add (a, b) { return a + b; }

 2::add(3) // 5
 ```

 [1]: https://github.com/jussi-kalliokoski/trine
 [2]: https://github.com/zenparsing/es-function-bind


 Allen





Named `this` and `this` destructuring

2015-06-17 Thread Jussi Kalliokoski
It's probably a bit early for this, but I figured I'd put it out there (I
already proposed this as a tangent in the function bind syntax thread).
This syntax proposal is purely about convenience and subjective
expressiveness (like any feature addition to a Turing complete language).

As I've been building Trine, I noticed that the `this` as data pattern is
extremely powerful and expressive, however the code in the functions
doesn't convey the intent very clearly. For example:

```JS
function add (b) { return this + b; }
function * map (fn) { for ( let item of this ) { yield item::fn(); } }
```

vs.

```JS
function add (a, b) { return a + b; }
function * map (iterator, fn) { for ( let item of iterator ) { yield item::fn(); } }
```

Also, currently neither Flow nor TypeScript supports type-annotating `this`.
There's discussion [1] [2] in both projects about allowing `this` to be
specified as a parameter to allow annotating it, e.g.

```JS
function add (this : number, b : number) : number { return this + b; }
```

This leads to my current proposal, i.e. being able to make the first
parameter of the function an alias for `this` by using a special prefix
(). This would not only allow aliasing `this`, but also destructuring and
default values (as well as type annotation in language extensions).

The different forms and their desugarings:

function add (a, b) {
  return a + b;
}

// would desugar to

function add (b) {
  var a = this;
  return a + b;
}


function multiplyTuple ([a, b], multiplier) {
  return [a * multiplier, b * multiplier];
}

// would desugar to
function multiplyTuple (multiplier) {
  var [a, b] = this;
  return [a * multiplier, b * multiplier];
}


function multiplyPoints ({ x: x1, y: y1 }, { x: x2, y: y2 }) {
  return { x: x1 * x2, y: y1 * y2 };
}

// would desugar to
function multiplyPoints (_p2) {
  var { x: x1, y: y1 } = this;
  var { x: x2, y: y2 } = _p2;
  return { x: x1 * x2, y: y1 * y2 };
}


// allow passing the element for mocking in tests
function isQsaSupported (dummyElement = document) {
  return typeof dummyElement.querySelectorAll !== "undefined";
}
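The desugared forms above already run today; a minimal runnable sketch of the `multiplyTuple` desugaring, with `this` supplied explicitly at the call site via `.call()`:

```javascript
// Runnable version of the desugaring above: the role of the aliased
// first parameter is played by `this`, bound at the call site.
function multiplyTuple(multiplier) {
  var [a, b] = this;
  return [a * multiplier, b * multiplier];
}

multiplyTuple.call([2, 3], 10); // → [20, 30]
```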


This proposal would also be consistent with the type annotation proposals
for `this` mentioned earlier.

WDYT?

[1] https://github.com/facebook/flow/issues/452
[2] https://github.com/Microsoft/TypeScript/issues/1985


Re: Example of real world usage of function bind syntax

2015-06-12 Thread Jussi Kalliokoski
On Fri, Jun 12, 2015 at 8:35 AM, Jussi Kalliokoski 
jussi.kallioko...@gmail.com wrote:

 On Thu, Jun 11, 2015 at 5:31 PM, Kevin Smith zenpars...@gmail.com wrote:

 I'm not entirely sure if it's appropriate, but I just published a library
 called Trine[1] that takes advantage of and displays the power of the proposed
 function bind syntax. Consider this my upvote for the proposal. :)


 It's definitely appropriate, as long as it's clear to users that the `::`
 syntax is experimental, non-standard and subject to change.  This is
 actually the kind of usage feedback we were hoping to get : )


 Great!


 Has there been any discussion of anyone championing this proposal for
 ES2016? I would very much like to see it land, especially given that I've
 already been using it extensively in production via Babel. :P


 Not sure you should use it in production, just yet...


 It's okay, it's used in a very small product and the worst that can happen
 is that it will bind (heh) us to a specific version of babel until the
 syntax is removed from the codebase.


 I'm the champion for this proposal, and I plan on pursuing it.  I'm not
 sure that it will fit into the ES2016 timeline though, given the time
 remaining and other priorities (like async/await).  To be honest, I'm not
 overly worried about which train it leaves on.

 Since you brought it up...

 I've been considering reducing the scope of the proposal, focusing on the
 immediate apply aspect of the operator, while leaving open the option of
 adding the other features at a later time.  Specifically,

 - Remove the prefix form of the operator.
 - Restrict the syntax such that an argument list is required after the
 right operand.

 In other words, these forms would no longer be valid under the proposal
 (although they could be re-introduced in another proposal):

 let bf1 = ::obj.foo.bar;
 let bf2 = obj::foo;

 But this would still be OK:

 obj::foo(bar);

 Given your experience with the operator and your use cases, would you
 still be in favor of such a minimal proposal?


 I see the most benefits in the immediate invocation forms of the proposal,
 and have used those more extensively than the other forms. However, working
 with React, I've found that the prefix form is also a very nice thing to have
 for all the ::this.onClick, etc. things. So yes, I would still be in favor,
 but to me the prefix form is still a pretty cool nice-to-have.

 Looking at the discussion about partial application being thought of as a
 blocker for the syntax, I'd prefer to keep the two separate - for example
 the proposal I made for partial application [1] is compatible with or
 without the bind syntax: `foo::bar(qoo, ???)` would be the same as doing
 `bar.bind(foo, qoo)`.


Forgot to mention that Trine actually implements the placeholder syntax I
proposed: parseInt::partial(_) // returns a unary version of parseInt



 However, if there's even the remote possibility of getting the
 non-immediate forms of the bind syntax to do a light binding (i.e. always
 returning the referentially same function), that would be IMO worth
 blocking the non-immediate forms over to see if it could lead anywhere.
 Currently as far as I understand, for example React considers all my event
 listeners changed on every render and removes the listeners and adds them
 again, because they're bound at render-time.

 As a slight offtrack, the slim arrow function would be very handy with
 this style of programming. ;) Also, one thing I noticed while writing this
 library is that being able to name `this` might be interesting. Currently
 even in syntax extensions such as flow, there's no natural way to type
 annotate `this` in standalone functions. However, say we had a syntax like
 this:

 function (a, b) {
   return a + b;
 }

 Where the `this` argument would be accessible in the a parameter, it could
 be simply type annotated as

 function (a : number, b : number) {
   return a + b;
 }

 Regarding polymorphism, I don't think the bind syntax changes things much,
 it just (possibly) makes polymorphism happen more at the `this` slot. Also,
 I wonder if in the case of the bind operator the engines could actually
 hold an inline cache in the shape of the object instead of the function
 itself. I'm also uncertain if the engines currently consider things such as

 function getFromUnknownType (key, value, has, get) {
   if ( has(key) ) { return get(key); }
   return null;
 }

 polymorphic or if they're able to statically verify that actually the
 types in there remain the same regardless of the type of the `key` and
 `value` passed in (unless the passed functions are inlined of course).

 I wouldn't be *too* worried about polymorphism though, since the
 advantages of the syntax allow us to add methods to iterables, which
 means we can do

 products
   ::quickSort(function (b) { return this.price - b.price; })
   ::head(k);

 versus

 products
   .sort(function (a, b) { return a.price - b.price; })
   .slice(0, k);

Example of real world usage of function bind syntax

2015-06-11 Thread Jussi Kalliokoski
I'm not entirely sure if it's appropriate, but I just published a library
called Trine[1] that takes advantage of and displays the power of the proposed
function bind syntax. Consider this my upvote for the proposal. :)

Has there been any discussion of anyone championing this proposal for
ES2016? I would very much like to see it land, especially given that I've
already been using it extensively in production via Babel. :P

[1] https://github.com/jussi-kalliokoski/trine


Re: Example of real world usage of function bind syntax

2015-06-11 Thread Jussi Kalliokoski
On Thu, Jun 11, 2015 at 5:31 PM, Kevin Smith zenpars...@gmail.com wrote:

 I'm not entirely sure if it's appropriate, but I just published a library
 called Trine[1] that takes advantage of and displays the power of the proposed
 function bind syntax. Consider this my upvote for the proposal. :)


 It's definitely appropriate, as long as it's clear to users that the `::`
 syntax is experimental, non-standard and subject to change.  This is
 actually the kind of usage feedback we were hoping to get : )


Great!


 Has there been any discussion of anyone championing this proposal for
 ES2016? I would very much like to see it land, especially given that I've
 already been using it extensively in production via Babel. :P


 Not sure you should use it in production, just yet...


It's okay, it's used in a very small product and the worst that can happen
is that it will bind (heh) us to a specific version of babel until the
syntax is removed from the codebase.


 I'm the champion for this proposal, and I plan on pursuing it.  I'm not
 sure that it will fit into the ES2016 timeline though, given the time
 remaining and other priorities (like async/await).  To be honest, I'm not
 overly worried about which train it leaves on.

 Since you brought it up...

 I've been considering reducing the scope of the proposal, focusing on the
 immediate apply aspect of the operator, while leaving open the option of
 adding the other features at a later time.  Specifically,

 - Remove the prefix form of the operator.
 - Restrict the syntax such that an argument list is required after the
 right operand.

 In other words, these forms would no longer be valid under the proposal
 (although they could be re-introduced in another proposal):

 let bf1 = ::obj.foo.bar;
 let bf2 = obj::foo;

 But this would still be OK:

 obj::foo(bar);

 Given your experience with the operator and your use cases, would you
 still be in favor of such a minimal proposal?


I see the most benefits in the immediate invocation forms of the proposal,
and have used those more extensively than the other forms. However, working
with React, I've found that the prefix form is also a very nice thing to have
for all the ::this.onClick, etc. things. So yes, I would still be in favor,
but to me the prefix form is still a pretty cool nice-to-have.

Looking at the discussion about partial application being thought of as a
blocker for the syntax, I'd prefer to keep the two separate - for example the
proposal I made for partial application [1] is compatible with or without
the bind syntax: `foo::bar(qoo, ???)` would be the same as doing
`bar.bind(foo, qoo)`.
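The equivalence claimed above can be checked with plain `Function.prototype.bind`; a small sketch with hypothetical `foo`/`bar` stand-ins:

```javascript
// `foo::bar(qoo, ?)` under the partial-application proposal would behave
// like `bar.bind(foo, qoo)`: `this` fixed to foo, first argument fixed.
function bar(first, second) {
  return this.label + ":" + first + ":" + second;
}
const foo = { label: "ctx" };

const partiallyApplied = bar.bind(foo, "qoo");
partiallyApplied("rest"); // → "ctx:qoo:rest"
```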

However, if there's even the remote possibility of getting the
non-immediate forms of the bind syntax to do a light binding (i.e. always
returning the referentially same function), that would be IMO worth
blocking the non-immediate forms over to see if it could lead anywhere.
Currently as far as I understand, for example React considers all my event
listeners changed on every render and removes the listeners and adds them
again, because they're bound at render-time.
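The "light binding" idea can be approximated in userland with a memoizing helper; `lightBind` and its WeakMap cache are my own illustrative names, not part of any proposal:

```javascript
// `fn.bind(obj)` returns a brand new function on every call, which is why
// render-time-bound listeners look "changed" on every render. A memoized
// bind returns the referentially same function for the same
// (receiver, function) pair.
const bindCache = new WeakMap();

function lightBind(fn, receiver) {
  let perFn = bindCache.get(receiver);
  if (!perFn) bindCache.set(receiver, (perFn = new WeakMap()));
  let bound = perFn.get(fn);
  if (!bound) perFn.set(fn, (bound = fn.bind(receiver)));
  return bound;
}

function onClick() { return this.name; }
const component = { name: "button" };

onClick.bind(component) === onClick.bind(component);             // false
lightBind(onClick, component) === lightBind(onClick, component); // true
```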

As a slight offtrack, the slim arrow function would be very handy with this
style of programming. ;) Also, one thing I noticed while writing this
library is that being able to name `this` might be interesting. Currently
even in syntax extensions such as flow, there's no natural way to type
annotate `this` in standalone functions. However, say we had a syntax like
this:

function (a, b) {
  return a + b;
}

Where the `this` argument would be accessible in the a parameter, it could
be simply type annotated as

function (a : number, b : number) {
  return a + b;
}

Regarding polymorphism, I don't think the bind syntax changes things much,
it just (possibly) makes polymorphism happen more at the `this` slot. Also,
I wonder if in the case of the bind operator the engines could actually
hold an inline cache in the shape of the object instead of the function
itself. I'm also uncertain if the engines currently consider things such as

function getFromUnknownType (key, value, has, get) {
  if ( has(key) ) { return get(key); }
  return null;
}

polymorphic or if they're able to statically verify that actually the types
in there remain the same regardless of the type of the `key` and `value`
passed in (unless the passed functions are inlined of course).

I wouldn't be *too* worried about polymorphism though, since the advantages
of the syntax allow us to add methods to iterables, which means we can do

products
  ::quickSort(function (b) { return this.price - b.price; })
  ::head(k);

versus

products
  .sort(function (a, b) { return a.price - b.price; })
  .slice(0, k);

which is a difference of O(kn) versus O(n²) in worst case time complexity.
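The complexity point can be illustrated without the bind syntax; a sketch of a selection-based `head` using a Lomuto partition (the name `headBySelection` is hypothetical, and Trine's actual implementation may differ):

```javascript
// Take the k smallest elements without fully sorting: quickselect
// partitions until the first k slots hold the k smallest items, then
// sorts only those. Average O(n + k log k) vs. O(n log n) for sort+slice.
function headBySelection(array, k, compare) {
  const items = array.slice();
  let lo = 0, hi = items.length - 1;
  while (lo < hi) {
    const pivot = items[hi]; // Lomuto partition around the last element
    let store = lo;
    for (let i = lo; i < hi; i++) {
      if (compare(items[i], pivot) < 0) {
        [items[store], items[i]] = [items[i], items[store]];
        store++;
      }
    }
    [items[store], items[hi]] = [items[hi], items[store]];
    if (store === k) break;
    else if (store < k) lo = store + 1;
    else hi = store - 1;
  }
  return items.slice(0, k).sort(compare);
}

headBySelection([5, 3, 1, 4, 2], 2, (a, b) => a - b); // → [1, 2]
```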

- Jussi


[1] https://esdiscuss.org/topic/syntax-sugar-for-partial-application



 Thanks!


Re: Trailing commas in arguments list, imports and destructuring

2015-04-25 Thread Jussi Kalliokoski
Thanks! Had completely missed that GH repo's existence. :)

Cool that this is moving forward!

Thanks to Sebastian for the explanation as well!

On Thu, 23 Apr 2015 18:46 Michael Ficarra mfica...@shapesecurity.com
wrote:

 See https://github.com/tc39/ecma262. This proposal is currently at stage
 one. To find out more about what that means, read the process document
 https://docs.google.com/document/d/1QbEE0BsO4lvl7NFTn5WXWeiEIBfaVUF7Dk0hpPpPDzU
 .

 On Wed, Apr 22, 2015 at 8:15 AM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:

 I just noticed that Babel supports trailing commas in function arguments
 lists, imports and destructuring as well:


 http://babeljs.io/repl/#?experimental=trueevaluate=trueloose=falsespec=falsecode=import%20%7B%0A%20%20voo%2C%0A%20%20doo%2C%0A%7D%20from%20%22.%2Fdat.js%22%3B%0A%0Alet%20%7B%0A%20%20x%2C%0A%20%20y%2C%0A%7D%20%3D%20voo%3B%0A%0Alet%20%5B%0A%20%20z%2C%0A%20%20m%2C%0A%5D%20%3D%20doo%3B%0A%0Afunction%20qoo%20(%0A%20%20x%2C%0A%20%20y%2C%0A)%20%7B%7D

 Is this correct behavior? I'm not

 FWIW as I already use trailing commas in object and array literals for
 better diffs, I really like this feature as it comes in handy especially in
 function signatures where you define types (TypeScript/flow style
 annotations), for example:

 function sort<T> (
   array : Array<T>,
   compareFn : ((left: T, right: T) => number),
 ) : Array<T> {
   ...
 }

 as well as import statements for modules that declare constants:

 import {
   BRAND_COLOR,
   DEFAULT_TEXT_COLOR,
   DARK_GRAY,
   LIGHT_GRAY,
 } from "./constants/COLORS";

 not to mention options object style function signatures:

 class Person {
   constructor ({
     firstName,
     lastName,
     birthDate,
     country,
     city,
     zipCode,
   }) {
     this.firstName = firstName;
     this.lastName = lastName;
     this.birthDate = birthDate;
     this.country = country;
     this.city = city;
     this.zipCode = zipCode;
   }
 }

 To me, the spec language (as per Jason's HTML version) looks like this is
 supported at least for destructuring, but I can't read the spec as allowing
 trailing commas in function signatures. At least this doesn't seem to be
 incorporated into the spec:

 https://esdiscuss.org/notes/2014-09/trailing_comma_proposal.pdf

 Is the proposal still on track for ES7 and am I correct in my reading of
 the destructuring allowing trailing commas?





  --
  Shape Security is hiring outstanding individuals. Check us out at
  https://shapesecurity.com/jobs/



Trailing commas in arguments list, imports and destructuring

2015-04-22 Thread Jussi Kalliokoski
I just noticed that Babel supports trailing commas in function arguments
lists, imports and destructuring as well:

http://babeljs.io/repl/#?experimental=trueevaluate=trueloose=falsespec=falsecode=import%20%7B%0A%20%20voo%2C%0A%20%20doo%2C%0A%7D%20from%20%22.%2Fdat.js%22%3B%0A%0Alet%20%7B%0A%20%20x%2C%0A%20%20y%2C%0A%7D%20%3D%20voo%3B%0A%0Alet%20%5B%0A%20%20z%2C%0A%20%20m%2C%0A%5D%20%3D%20doo%3B%0A%0Afunction%20qoo%20(%0A%20%20x%2C%0A%20%20y%2C%0A)%20%7B%7D

Is this correct behavior? I'm not

FWIW as I already use trailing commas in object and array literals for better
diffs, I really like this feature as it comes in handy especially in
function signatures where you define types (TypeScript/flow style
annotations), for example:

function sort<T> (
  array : Array<T>,
  compareFn : ((left: T, right: T) => number),
) : Array<T> {
  ...
}
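Trailing commas in parameter and argument lists were later standardized in ES2017, so a plain-JS (annotation-free) version of the signature above runs in current engines:

```javascript
// Trailing commas in parameter and argument lists (standardized in ES2017)
// keep diffs small when a parameter is added or removed.
function sort(
  array,
  compareFn,
) {
  return array.slice().sort(compareFn);
}

sort(
  [3, 1, 2],
  (left, right) => left - right,
); // → [1, 2, 3]
```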

as well as import statements for modules that declare constants:

import {
  BRAND_COLOR,
  DEFAULT_TEXT_COLOR,
  DARK_GRAY,
  LIGHT_GRAY,
} from "./constants/COLORS";

not to mention options object style function signatures:

class Person {
  constructor ({
    firstName,
    lastName,
    birthDate,
    country,
    city,
    zipCode,
  }) {
    this.firstName = firstName;
    this.lastName = lastName;
    this.birthDate = birthDate;
    this.country = country;
    this.city = city;
    this.zipCode = zipCode;
  }
}

To me, the spec language (as per Jason's HTML version) looks like this is
supported at least for destructuring, but I can't read the spec as allowing
trailing commas in function signatures. At least this doesn't seem to be
incorporated into the spec:

https://esdiscuss.org/notes/2014-09/trailing_comma_proposal.pdf

Is the proposal still on track for ES7 and am I correct in my reading of
the destructuring allowing trailing commas?


Re: Syntax sugar for partial application

2015-04-13 Thread Jussi Kalliokoski
On Mon, Apr 13, 2015 at 12:15 AM, liorean lior...@gmail.com wrote:

 On 12 April 2015 at 17:39, Jussi Kalliokoski
 jussi.kallioko...@gmail.com wrote:
  No, «this» is lexically bound to be that of the enclosing lexical
  scope in arrow functions, so it would be whatever that is. But that
  doesn't really matter as the function call to «foo» doesn't use the
  «this» of the arrow function.
 
  Exactly why you get `null` as `this`.

 Which makes the behaviour identical to that of the code as you wrote it.


If you looked at the gist I made, the placeholder syntax creates functions
where `this` remains untouched, so it can be separately bound, unlike in
your examples. See [1] for example of how the syntax desugars to ES6.


  Now, if we were to say your «foo» were actually «foo.bar», and you did
  the same replacement in the arrow function, the «this» value of the
  «bar» call would be «foo», so that's pretty much what is wanted as
  well. The case where this breaks is if you were to replace only the
  «bar» method with the arrow function, in which case it would use the
  lexical «this» instead of «foo», but that's obviously not the right
  transformation to use.
 
   This might not seem like such a big deal until you consider it in
   combination with the proposed bind syntax [1].
  
   Also in your examples, redefining `foo` will lead to different
 results.
   The
   placeholder syntax has a lot more room for optimization in the JIT
   compiler
   (the partially applied result is guaranteed to have no side effects
 for
   example, so the compiler can create a version of the original function
   where
   it can inline the specified arguments; less moving parts, easier to
   optimize).
 
  Yeah, it's susceptible to that problem, yes. Do you want me to fix
  that for you if you really want it?
 
  Your «foo(1, ?, 2);» is equivalent to «((f,a)=>f(1,a,2))(foo)».
 
  Your «foo(?, 1, ???);» is equivalent to
 «((f,a,...b)=>f(a,1,...b))(foo)».
  Your «foo(1, ???, 2);» is equivalent to
  «((f,...a)=>f(...[1,...a,2]))(foo)».
 
 
  Your new examples directly execute the function instead of creating a new
  function. :) Which goes to show how it would be nice to have specific
 syntax
  for this to make it more obvious what's happening.

 Oops. I needed to actually add that extra argument as a separate fat arrow,
 «(f=>(...a)=>f(...[1,...a,2]))(foo)» etc.

  I write my code pretty much the same way. However, it's hard for the
  compiler to trust that you're not changing things, regardless of style.

 Guess it'd be hard for it unless it has the knowledge of whether
 functions are pure or not, yes.

 I'd love for a compiler that can tell that I don't modify my arguments
 and thus optimises code like
 «
 let map= // Usage: map(function)(...array)
 (f,...acc)=>(head,...tail)=>(
 undefined===head
 ?acc
 :map(f,...acc,f(head))(...tail));
 »

 So that it doesn't actually create the «tail» array every recursion,
 just a narrower and narrower subarray of the same actual array, and
 likewise that the only thing that is done with «acc» is the production
 of an array that is identical to it with an addition of one element at
 its end, so doesn't break it down and rebuild it every recursion. And
 of course tail call optimisation on it, because that code is horrid
 without those optimisations.


Me too, yet while nothing is impossible, this sort of optimization (tail
call optimization aside) would be super difficult to implement, and
expensive too, because the compiler would have to check that the invariants
weren't violated on every call. However, there is hope in the light of
immutable collections proposals where the compiler has solid guarantee that
the collection isn't morphed mid-iteration and can safely use a cursor to a
sub-collection.

[1]
https://gist.github.com/anonymous/5c4f6ea07ad3017d61be#leading-placeholder


 --
 David liorean Andersson



Re: Syntax sugar for partial application

2015-04-12 Thread Jussi Kalliokoski
On Sat, Apr 11, 2015 at 4:54 AM, liorean lior...@gmail.com wrote:

 On 9 April 2015 at 16:11, Jussi Kalliokoski jussi.kallioko...@gmail.com
 wrote:
  On Thu, Apr 9, 2015 at 4:04 PM, liorean lior...@gmail.com wrote:
 
  Do we really need it?
   Your «foo(1, ?, 2);» is equivalent to «a=>foo(1,a,2)».
   Your «foo(?, 1, ???);» is equivalent to «(a,...b)=>foo(a,1,...b)».
   Your «foo(1, ???, 2);» is equivalent to «(...a)=>foo(...[1,...a,2])».
 
 
  Not exactly. Using the placeholder syntax, `this` remains context
 dependent,
  whereas with your examples you get `null` as `this`.

 No, «this» is lexically bound to be that of the enclosing lexical
 scope in arrow functions, so it would be whatever that is. But that
 doesn't really matter as the function call to «foo» doesn't use the
 «this» of the arrow function.


Exactly why you get `null` as `this`.


 Now, if we were to say your «foo» were actually «foo.bar», and you did
 the same replacement in the arrow function, the «this» value of the
 «bar» call would be «foo», so that's pretty much what is wanted as
 well. The case where this breaks is if you were to replace only the
 «bar» method with the arrow function, in which case it would use the
 lexical «this» instead of «foo», but that's obviously not the right
 transformation to use.

  This might not seem like such a big deal until you consider it in
  combination with the proposed bind syntax [1].
 
  Also in your examples, redefining `foo` will lead to different results.
 The
  placeholder syntax has a lot more room for optimization in the JIT
 compiler
  (the partially applied result is guaranteed to have no side effects for
  example, so the compiler can create a version of the original function
 where
  it can inline the specified arguments; less moving parts, easier to
  optimize).

 Yeah, it's susceptible to that problem, yes. Do you want me to fix
 that for you if you really want it?

  Your «foo(1, ?, 2);» is equivalent to «((f,a)=>f(1,a,2))(foo)».
 
 Your «foo(?, 1, ???);» is equivalent to «((f,a,...b)=>f(a,1,...b))(foo)».
  Your «foo(1, ???, 2);» is equivalent to
  «((f,...a)=>f(...[1,...a,2]))(foo)».


Your new examples directly execute the function instead of creating a new
function. :) Which goes to show how it would be nice to have specific
syntax for this to make it more obvious what's happening.



 I guess I didn't think of these cases though, because I only use
 explicit arguments to my functions these days, I never use the «this»
 keyword. If I want a function to operate on an object, I pass that
 object into the function.
 I also try to not reuse my variables unless they are part of an
 iteration, in which case they are always local variables that are only
 handled in the iteration process itself. But that's a side issue, as
 it's about my code rather than precepts of the language.


I write my code pretty much the same way. However, it's hard for the
compiler to trust that you're not changing things, regardless of style.

Also because most of the standard library of the language operates on
`this` instead of a separate argument, combining standard library methods
with methods that have their data as an explicit argument often lead to
awkward reading order issues, e.g.

foo(x
  .filter(...)
  .map(...)
  .reduce(...)
)

whereas with the bind operator you get

x
  .filter(...)
  .map(...)
  .reduce(...)
  ::foo()


Which is where this proposal shines, if foo is a partially applied function.
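The reading-order point can also be made today with a plain `pipe` helper (an illustrative utility, not part of any proposal): each step is a unary function, so the final operation reads at the end of the chain instead of wrapping it.

```javascript
// Left-to-right chaining without the bind operator: the value flows
// through each function in order, so there is no inside-out reading.
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);

const double = (xs) => xs.map((x) => x * 2);
const sum = (xs) => xs.reduce((a, b) => a + b, 0);

pipe([1, 2, 3], double, sum); // → 12
```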

But anyway, seems that this is not something people want, at least yet, so
I'll rest my case. :)


 --
 David liorean Andersson



Re: Syntax sugar for partial application

2015-04-09 Thread Jussi Kalliokoski
On Thu, Apr 9, 2015 at 4:04 PM, liorean lior...@gmail.com wrote:

 Do we really need it?
 Your «foo(1, ?, 2);» is equivalent to «a=>foo(1,a,2)».
 Your «foo(?, 1, ???);» is equivalent to «(a,...b)=>foo(a,1,...b)».
 Your «foo(1, ???, 2);» is equivalent to «(...a)=>foo(...[1,...a,2])».


Not exactly. Using the placeholder syntax, `this` remains context
dependent, whereas with your examples you get `null` as `this`.

This might not seem like such a big deal until you consider it in
combination with the proposed bind syntax [1].
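The `this` distinction can be sketched in userland; `HOLE` and `partial` below are hypothetical names approximating the proposed placeholder semantics, and the key point is that the produced function forwards its own `this` (unlike an arrow function, whose `this` is lexical):

```javascript
// Userland sketch of the proposed placeholder semantics: the resulting
// function keeps `this` context dependent, so it can still be bound later.
const HOLE = Symbol("?"); // stands in for the `?` placeholder token

function partial(fn, ...args) {
  return function (...rest) {
    const filled = args.map((a) => (a === HOLE ? rest.shift() : a));
    return fn.apply(this, filled); // `this` is forwarded, not fixed
  };
}

function describe(prefix, suffix) {
  return prefix + this.name + suffix;
}

const greet = partial(describe, "Hello, ", HOLE);
greet.call({ name: "Ada" }, "!"); // → "Hello, Ada!"
```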

Also in your examples, redefining `foo` will lead to different results. The
placeholder syntax has a lot more room for optimization in the JIT compiler
(the partially applied result is guaranteed to have no side effects for
example, so the compiler can create a version of the original function
where it can inline the specified arguments; less moving parts, easier to
optimize).

[1] http://wiki.ecmascript.org/doku.php?id=strawman:bind_operator



 Also, the ? token is already taken by the ternary conditional
 operator. Do we really want to overload it here for a nullary
 operator/special form, when we have as low overhead syntax as we
 already do in fat arrows for doing the exact same thing?
 --
 David liorean Andersson



Re: async/await improvements

2014-11-12 Thread Jussi Kalliokoski
On Wed, Nov 12, 2014 at 6:17 PM, C. Scott Ananian ecmascr...@cscott.net
wrote:

 On Wed, Nov 12, 2014 at 11:08 AM, Axel Rauschmayer a...@rauschma.de
 wrote:
  Is that true, though? Couldn’t a finalizer or something similar check
  (before a promise is garbage collected) whether all errors have been
  handled?

 A finalizer can do this check.  This will flag some uncaught
 exceptions, but not promptly.  And as I wrote above, that's only part
 of the issue -- promises can also be kept alive for an indefinite
 period of time, but never end up either handling their exceptions or
 becoming unreachable.  This could also be an error.

 That is, liveness is one way to tell that an exception will never be
 handled, but it is only an approximation.

 And it's not necessarily an error to not handle an exception --
 `Promise.race()` is expected to have this behavior as a matter of
 course, for example.

 We've been through this discussion many times before.  Eventually
 there may be a `Promise#done`.  But the consensus was that the first
 step was to give the devtools folks time to make good UI for showing
 the dynamic unhandled async exception state of a program, and see
 how well that worked.
   --scott



Actually that already works, at least in Chrome, if you execute

(function () {
  return new Promise(function (resolve, reject) {
    reject(new Error("foo"));
  });
}());


that shows up as an uncaught exception in the console.



 ps. some of the discussed language features threaten to release zalgo.
 but i'll not open up that can of worms.



Immutable collection values

2014-11-09 Thread Jussi Kalliokoski
I figured I'd throw an idea out there, now that immutable data is starting
to gain mainstream attention with JS and cowpaths are being paved. I've
recently been playing around with the idea of introducing immutable
collections as value types (as opposed to, say, instances of something).

So at the core there would be three new value types added:

* ImmutableMap.
* ImmutableArray.
* ImmutableSet.

In the spirit of functional programming and simplicity, these types have no
prototype chain (i.e. inherit from null). Instead, all the built-in
functions that deal with these are accessible via respective utility
modules or like with Array and Object, available as static methods of the
constructors. I don't really have a preference in this.

We could also introduce nice syntactic sugar, such as:

var objectKey = {};

var map = {:
  [objectKey]: "foo",
  "bar": "baz",
}; // ImmutableMap [ [objectKey, "foo"], ["bar", "baz"] ]

var array = [:
  1,
  1,
  2,
  3,
]; // ImmutableArray [ 1, 1, 2, 3 ]

var set = <:
  1,
  2,
  3,
>; // ImmutableSet [ 1, 2, 3 ]

Being values, there could be nice syntax for common operations too:

{: "foo": "bar" } === {: "foo": "bar" } // true
{: "foo": "bar", "qoo": 1 } + {: "qoo": 2, "baz": "qooxdoo" } // ImmutableMap [
["foo", "bar"], ["qoo", 2], ["baz", "qooxdoo"] ]
<: 1, 2, 3 > + <: 3, 4, 5 > // ImmutableSet [ 1, 2, 3, 4, 5 ]
<: 1, 2, 3 > - <: 2, 4 > // ImmutableSet [ 1, 3 ]
[: 1, 2, 3 ] + [: 3, 4, 5 ] // ImmutableArray [ 1, 2, 3, 3, 4, 5 ]
"foo" in {: "foo": "bar" } // true
"bar" of {: "foo": "bar" } // true
2 of <: 1, 2, 3 > // true
2 of [: 1, 2, 3 ] // true
var x = {}; x in {: [x]: 1 } // true

Having no prototype chain combined with being a value also enables nice
access syntax and errors:

var map = {: "foo": "bar" }; map.foo // "bar"
map.foo = "baz"; // TypeError: cannot assign to an immutable value.

The syntax suggestions are up to debate of course, but I think the key
takeaway from this proposal should be that the immutable collection types
would be values and have an empty prototype chain.
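For comparison, the closest userland approximation available today is `Object.freeze`, which gives the assignment-throws behavior (in strict mode) but not the value semantics the proposal asks for; a small sketch:

```javascript
"use strict";

// Freezing gives immutability: assignment throws in strict mode.
const map = Object.freeze({ foo: "bar" });

let threw = false;
try {
  map.foo = "baz"; // TypeError: cannot assign to a frozen object's property
} catch (e) {
  threw = e instanceof TypeError;
}

// But frozen objects still compare by reference, not structurally,
// which is the key limitation versus real value types.
Object.freeze({ foo: "bar" }) === Object.freeze({ foo: "bar" }); // false
```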

I think this would make a worthwhile addition to the language, especially
considering functional compile-to-JS languages. With the syntactic sugar,
it would probably even render a lot of their features irrelevant because
the core of JS could provide a viable platform for functional programming
(of course one might still be happier using abstraction layers that provide
immutable APIs to the underlying platforms, such as DOM, but then that's
not a problem in the JS' domain anymore).


Re: Immutable collection values

2014-11-09 Thread Jussi Kalliokoski
On Sun, Nov 9, 2014 at 5:39 PM, David Bruant bruan...@gmail.com wrote:

 Le 09/11/2014 15:07, Jussi Kalliokoski a écrit :

 I figured I'd throw an idea out there, now that immutable data is
 starting to gain mainstream attention with JS and cowpaths are being paved.
 I've recently been playing around with the idea of introducing immutable
 collections as value types (as opposed to, say, instances of something).

 So at the core there would be three new value types added:

 * ImmutableMap.
 * ImmutableArray.
 * ImmutableSet.

 Why would both Array and Set be needed?


Because sometimes you want lists of unique values (e.g. the list of doors
opened) and sometimes you want to have duplicates (e.g. the durabilities of
all doors).



  We could also introduce nice syntactic sugar, such as:

 var objectKey = {};

  var map = {:
    [objectKey]: "foo",
    "bar": "baz",
  }; // ImmutableMap [ [objectKey, "foo"], ["bar", "baz"] ]
 
  var array = [:
    1,
    1,
    2,
    3,
  ]; // ImmutableArray [ 1, 1, 2, 3 ]
 
  var set = <:
    1,
    2,
    3,
  >; // ImmutableSet [ 1, 2, 3 ]

 The syntax suggestions are up to debate of course, but I think the key
 takeaway from this proposal should be that the immutable collection types
 would be values and have an empty prototype chain.

 I find : too discrete for readability purposes. What about # ?
 That's what was proposed for records and tuples (which are pretty much the
 same thing as ImmutableMap and ImmutableSet respectively)
 http://wiki.ecmascript.org/doku.php?id=strawman:records
 http://wiki.ecmascript.org/doku.php?id=strawman:tuples
 #SyntaxBikeshed


Like I said, I don't have a strong preference on the syntax. I chose :
instead of # purely because # is often suggested for many other things.
Also : makes happy beginnings and sad endings. :]



  I think this would make a worthwhile addition to the language, especially
 considering functional compile-to-JS languages. With the syntactic sugar,
 it would probably even render a lot of their features irrelevant because
 the core of JS could provide a viable platform for functional programming
 (of course one might still be happier using abstraction layers that provide
 immutable APIs to the underlying platforms, such as DOM, but then that's
 not a problem in the JS' domain anymore).

 It would also open the possibility of a new class of postMessage sharing
 (across iframes or WebWorkers) that allows parallel reading of a complex
 data structure without copying.

 A use case that would benefit a lot from this would be computation of a
 force-layout algorithm with real-time rendering of the graph.


Good points, agreed!




 David



Re: typed array filling convenience AND performance

2014-10-30 Thread Jussi Kalliokoski
+1 - especially the lack of something like the proposed memset() has been a
massive headache for me.

Semantics (I'm quite nitpicky on them)... I'd prefer the argument order for
memcpy() to be (src, dstOffset, srcOffset, size) to be consistent with
set(). As for memset(), I'd prefer (value, dstOffset, size) because I've so
far actually never needed offset and size. I'd also prefer memcpy to be
named closer to set(), because it's basically the same thing with two extra
arguments. I'm not sure what the performance implications would be to
actually just add the extra two parameters to set as optional, so maybe
someone smarter than me can chime in on whether it's better to add a
separate method or overload the existing set()?

It's worth noting that set() currently gets relatively faster the bigger
the data is, and the last I tested (also confirmed by Jukka Jylänki), at
around 4k (surprisingly many) elements it becomes faster than manually
assigning values.
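As a sketch of that observation: a userland helper could pick the strategy by size. The ~4k crossover is an empirical measurement from this thread, not a spec constant — the threshold below is an assumption to tune per engine.

```javascript
const SET_THRESHOLD = 4096; // assumed crossover point, measure per engine

// Copy `src` into `dst` starting at `offset`, choosing the faster strategy.
function fastCopy(dst, src, offset) {
  if (src.length >= SET_THRESHOLD) {
    dst.set(src, offset); // set() wins for large copies
  } else {
    for (let i = 0; i < src.length; i++) {
      dst[offset + i] = src[i]; // manual assignment wins for small copies
    }
  }
}

const dst = new Float32Array(8);
fastCopy(dst, new Float32Array([1, 2, 3]), 2);
```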

On Thu, Oct 30, 2014 at 10:29 AM, Florian Bösch pya...@gmail.com wrote:

 The usecases:

 *1) Filling with custom data*

 When writing WebGL or physics or many other things todo with large
 collections of data, it's not unusual to have to fill arrays with custom
 data of some kind.

 someArray.set([
 x0, y0, 1, 0, 0,
 x1, y0, 0, 1, 0,
 x1, y1, 0, 0, 1,
   x0, y0, 1, 0, 0,
   x1, y1, 0, 1, 0,
   x0, y1, 0, 0, 1
 ], i);


 *2) Copying in data from another array*

 Some data resides in another array and needs to be copied in. A feature
 frequently in use by emscripten.

 someArray.set(otherArray.subarray(srcOffset, srcSize), dstOffset)


 *3) Initializing an existing array with a repeated numerical value*

 For audio processing, physics and a range of other tasks it's important to
 initialize an array with the same data.

 for(var i=0; i &lt; size; i++){ someArray[i] = 0; }


 *The problem:* Doing all of these things is slow and/or unsuitable for
 realtime code.

1. someArray.set from a new list is slow due to set being slow, and
constructing the list is slow. It's not realtime friendly because it'll
construct a new list, which will have to be GCed.
2. someArray.set is slow due to the new array view construction and
it's not realtime friendly due to GCing.
3. Filling an array one element at a time is slow.

 *The test: *http://jsperf.com/typed-array-fast-filling/4 (screenshot here
 http://codeflow.org/pictures/typed-array-test.png and attached)

 *The status quo:*

 The fastest way to fill an array with custom data across browsers is:

 r[i] = x0;
 r[i + 1] = y0;
 r[i + 2] = 1;
 r[i + 3] = 0;
 r[i + 4] = 0;
 r[i + 5] = x1;
 r[i + 6] = y0;


 *Things that are not faster: *

- pushing to a list: ~93% slower
- a helper function filling from a list: 57-70% slower
- array.set: ~73% slower
- a helper function filling from arguments: 65% - 93% slower
- asm.js: 69-81% slower (even in firefox)

 *Suggestions:*

1. Browser engines should get a lot better at arguments handling so
that non sized arguments can be quickly iterated by native code. Firefox is
already pretty good at unboxing a specified argument list (chrome not so
much), but I think that test shows that there's ample room for improvement.
2. *someArray.memcpy*: Add a method to typed arrays that can shuffle
bytes from array A to array B like so: dst.memcpy(dstOffset, src,
srcOffset, size). This is to avoid having to allocate an object to do the
job.
3. *someArray.memset*: Add a method to typed arrays that can
initialize them with a value like so: dst.memset(dstOffset, value, size)
4. *someArray.argcpy*: Add a (fast) method to typed arrays that can
copy the arguments like so: dst.argcpy(dstOffset, 1, 2, 3, 4)
5. Drastically improve the set method.

 (naming and semantics don't matter to me, as long as the methods do what's
 suggested efficiently, conveniently and fast).

 *Related discussion:*

- https://bugzilla.mozilla.org/show_bug.cgi?id=936168
-

 https://www.khronos.org/webgl/public-mailing-list/archives/1410/msg00105.html

 *Consequence of failure to rectify:*

 Fast code will be unreadable and unmaintainable. Sophisticated and speed
 requiring code will not be written in ecmascript. Emscripten and asm.js
 with its hermetic nature will crowd out ecmascript driven developments.
 Other alternatives such as on GPU transformfeedback and compute shaders
 will be preferred to solve the problem.





Re: typed array filling convenience AND performance

2014-10-30 Thread Jussi Kalliokoski
On Thu, Oct 30, 2014 at 3:14 PM, Adrian Perez de Castro ape...@igalia.com
wrote:

 On Thu, 30 Oct 2014 09:29:36 +0100, Florian Bösch pya...@gmail.com
 wrote:

  The usecases:
 
  [...]
 
  *3) Initializing an existing array with a repeated numerical value*
 
  For audio processing, physics and a range of other tasks it's important
 to
  initialize an array with the same data.
 
  for(var i=0; i &lt; size; i++){ someArray[i] = 0; }

 For this use case there is %TypedArray%.prototype.fill(), see:


Oh, nice! Had completely missed fill() before. Hope to see this land soon.
:)
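For the record, the ES6 typed array methods end up covering both the memset- and the within-one-array memcpy-style use cases, with set() still handling cross-array copies:

```javascript
const a = new Float32Array(8);

// memset-style: fill(value, start, end)
a.fill(1, 2, 6); // a is now [0, 0, 1, 1, 1, 1, 0, 0]

// memcpy-style within one array: copyWithin(target, start, end)
a.copyWithin(0, 2, 4); // copies a[2..4) to a[0..2)

// Cross-array copy still needs set(), plus a subarray view for offsets:
const b = new Float32Array(8);
b.set(a.subarray(0, 4), 4);
```

Neither fill() nor copyWithin() allocates, which addresses the GC-pressure concern from the original post; set() with subarray() still allocates the temporary view.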



 http://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.fill

 JavaScript engines are expected to implement it at some point. For example
 I am implementing this in V8, along with other new typed array methods. The
 engines should be able to generate quite good code for uses of this
 function
 and/or provide optimized versions relying on knowledge of the underlying
 element type of the typed array they are applied to.

 Cheers,

 --
  ☺ Adrián




Re: RegExps that don't modify global state?

2014-09-17 Thread Jussi Kalliokoski
On Wed, Sep 17, 2014 at 8:35 AM, Steve Fink sph...@gmail.com wrote:

  On 09/16/2014 10:13 PM, Jussi Kalliokoski wrote:

 On Wed, Sep 17, 2014 at 3:21 AM, Alex Kocharin a...@kocharin.ru wrote:


 What's the advantage of `re.test(str); RegExp.$1` over `let
 m=re.match(str); m[1]`?


  Nothing. However, with control structures it removes a lot of
 awkwardness:

 * `if ( /foo:(\d+)/.test(str) && parseInt(RegExp.$1, 10) > 15 ) { ...`
  * `if ( /name: (\w+)/.test(str) ) { var name = RegExp.$1; ...`


 Is

   if ((m = /foo:(\d+)/.exec(str)) && parseInt(m[1], 10) > 15) { ... }

 so bad? JS assignment is an expression; make use of it. Much better than
 Python's refusal to allow such a thing, requiring indentation trees of doom
 or hacky workarounds when you just want to case-match a string against a
 couple of regexes.


Looks pretty confusing, and my linter agrees (assignment in an if statement
is most likely a bug). Also that doesn't do the same thing, it assigns to
global m, unless you var it before the if(), so more noise, especially when
this if() is an else if() in a set of if() statements.

But this boils down to taste in linter rules and other bias for what is
pretty and what is not, which is not a very interesting discussion. My main
point was that the /u flag shouldn't disable this feature as a side effect.
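For comparison, a state-free version of the same check is also possible by destructuring the result of exec() — a sketch, not a claim that it reads better than either style above:

```javascript
const str = "foo:42";

// exec() returns null or a match array, so destructuring with a
// default keeps the whole thing local and expression-friendly:
const [, count = ""] = /foo:(\d+)/.exec(str) || [];
if (count && parseInt(count, 10) > 15) {
  // ... no RegExp.$1 global state involved
}
console.log(count); // "42"
```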


 The global state *is* bad, and you don't need turns or parallelism to be
 bitten by it.

 function f(s) {
   if (/foo(\d+)/.test(s)) {
 print("Found in " + formatted(s));
 return RegExp.$1; // Oops! formatted() does a match internally.
   }
 }

 Global variables are bad. They halfway made sense in Perl, but even the
 Perl folks wish they'd been lexical all along.


No argument here, I have no use for the RegExp.$ things being global. I'd
much rather have them lexical, e.g. `if ( /name: (\w+)/.test(str) ) { let
name = $1; ...` but that ship's sailed and even if we wanted to introduce
that now as a part of the disable global state modification flag (which
would be awesome), it would have a lot of things that need to be thought
through to make it happen and I doubt anyone's willing to champion that
effort.
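As it happens, named capture groups (added later, in ES2018) ended up providing exactly this kind of local binding without any RegExp.$n statics:

```javascript
// Named capture groups expose matches on a per-match `groups` object,
// so the binding is local to the match result rather than global:
const m = /name: (?<name>\w+)/.exec("name: Jussi");
if (m) {
  const { name } = m.groups;
  console.log(name); // "Jussi"
}
```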





  I personally find this functionality very useful and would be saddened
 if /u, which I want to use, all of a sudden broke this feature. Say what
 you mean. Unicode flag disabling features to enable parallelism is another
 footnote for WTFJS.



  I assume RegExp["$'"] and RegExp["$`"] are nice to have, I remember them
 from perl, but never actually used them in javascript.


 16.09.2014, 23:03, Andrea Giammarchi andrea.giammar...@gmail.com:

 I personally find the `re.test(str)` case a good reason to keep further
 access to `RegExp.$1` and others available instead of needing to test
 **and** grab eventually a match (redundant, slower, etc)

 As mentioned already `/u` will be used by default as soon as supported;
 having this implicit opt-out feels very wrong to me since `/u` meaning is
 completely different.

 Moreover, AFAIK JavaScript is single threaded per each EventLoop so I
 don't see conflicts possible if parallel execution is performed elsewhere,
 where also globals will (will they?) be a part, as every
 sandbox/iframe/worker has worked until now.

 I'd personally +1 an explicit opt-out and indifferently accept a re-opt
 as modifier such `/us` where `s` would mean stateful (or any other char
 would do as long as `RegExp.prototype.test` won't lose its purpose and
 power).

 Regards

  P.S. there's no such thing as RegExp.$0 but RegExp['$&'] will provide the
 (probably) intended result
   P.S. to know more about RegExp and these properties my old slides from
 BerlinJS event should do:
 http://webreflection.blogspot.co.uk/2012/02/berlin-js-regexp-slides.html

 On Tue, Sep 16, 2014 at 7:35 PM, Allen Wirfs-Brock al...@wirfs-brock.com
  wrote:


 On Sep 16, 2014, at 11:16 AM, Domenic Denicola wrote:

  I had a conversation with Jaswanth at JSConf EU that revealed that
 RegExps cannot be used in parallel JS because they modify global state,
 i.e. `RegExp.$0` and friends.
 
  We were thinking it would be nice to find some way of getting rid of
 this wart. One idea would be to bundle the don't-modify-global-state
 behavior with the `/u` flag. Another would be to introduce a new flag to
 opt-out. The former is a bit more attractive since people will probably
 want to use `/u` all the time anyway. I imagine there might be other
 possibilities others can think of.
 
  I also noticed today that the static `RegExp` properties are not
 specced, which seems at odds with our new mandate to at least Annex B-ify
 the required-for-web-compat stuff.

 Yes, they should be in Annex B.  But that means that somebody needs to
 write a spec. that defines their behavior.

 We could then add that extension to clause 16.1 as being forbidden for
 RegExps created with the /u flag.

 Allen


Re: RegExps that don't modify global state?

2014-09-16 Thread Jussi Kalliokoski
On Wed, Sep 17, 2014 at 3:21 AM, Alex Kocharin a...@kocharin.ru wrote:


 What's the advantage of `re.test(str); RegExp.$1` over `let
 m=re.match(str); m[1]`?


Nothing. However, with control structures it removes a lot of awkwardness:

* `if ( /foo:(\d+)/.test(str) && parseInt(RegExp.$1, 10) > 15 ) { ...`
* `if ( /name: (\w+)/.test(str) ) { var name = RegExp.$1; ...`

I personally find this functionality very useful and would be saddened if
/u, which I want to use, all of a sudden broke this feature. Say what you
mean. Unicode flag disabling features to enable parallelism is another
footnote for WTFJS.



 I assume RegExp["$'"] and RegExp["$`"] are nice to have, I remember them
 from perl, but never actually used them in javascript.


 16.09.2014, 23:03, Andrea Giammarchi andrea.giammar...@gmail.com:

 I personally find the `re.test(str)` case a good reason to keep further
 access to `RegExp.$1` and others available instead of needing to test
 **and** grab eventually a match (redundant, slower, etc)

 As mentioned already `/u` will be used by default as soon as supported;
 having this implicit opt-out feels very wrong to me since `/u` meaning is
 completely different.

 Moreover, AFAIK JavaScript is single threaded per each EventLoop so I
 don't see conflicts possible if parallel execution is performed elsewhere,
 where also globals will (will they?) be a part, as every
 sandbox/iframe/worker has worked until now.

 I'd personally +1 an explicit opt-out and indifferently accept a re-opt as
 modifier such `/us` where `s` would mean stateful (or any other char would
 do as long as `RegExp.prototype.test` won't lose its purpose and power).

 Regards

 P.S. there's no such thing as RegExp.$0 but RegExp['$&'] will provide the
 (probably) intended result
 P.S. to know more about RegExp and these properties my old slides from
 BerlinJS event should do:
 http://webreflection.blogspot.co.uk/2012/02/berlin-js-regexp-slides.html

 On Tue, Sep 16, 2014 at 7:35 PM, Allen Wirfs-Brock al...@wirfs-brock.com
 wrote:


 On Sep 16, 2014, at 11:16 AM, Domenic Denicola wrote:

  I had a conversation with Jaswanth at JSConf EU that revealed that
 RegExps cannot be used in parallel JS because they modify global state,
 i.e. `RegExp.$0` and friends.
 
  We were thinking it would be nice to find some way of getting rid of
 this wart. One idea would be to bundle the don't-modify-global-state
 behavior with the `/u` flag. Another would be to introduce a new flag to
 opt-out. The former is a bit more attractive since people will probably
 want to use `/u` all the time anyway. I imagine there might be other
 possibilities others can think of.
 
  I also noticed today that the static `RegExp` properties are not
 specced, which seems at odds with our new mandate to at least Annex B-ify
 the required-for-web-compat stuff.

 Yes, they should be in Annex B.  But that means that somebody needs to
 write a spec. that defines their behavior.

 We could then add that extension to clause 16.1 as being forbidden for
 RegExps created with the /u flag.

 Allen





Re: import script -- .esm

2014-09-11 Thread Jussi Kalliokoski
On Thu, Sep 11, 2014 at 12:56 AM, Mark S. Miller erig...@google.com wrote:

 If there are no objections to recommending .js vs .jsm in this informal
 way, I propose that we place it there.



FWIW, the .jsm extension is currently used as a convention in XUL for denoting
JavaScript modules (not the same thing as ES6 modules):
https://developer.mozilla.org/en-US/docs/Mozilla/JavaScript_code_modules/Using

For other associations of the .jsm extension, see
http://filext.com/file-extension/JSM


Re: Asynchronous Module Initialize

2014-07-10 Thread Jussi Kalliokoski
On Thu, Jul 10, 2014 at 6:40 PM, John Barton johnjbar...@google.com wrote:


 On Wed, Jul 9, 2014 at 1:57 PM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:
 ...

  I proposed (it was less of a proposal though, more an idea or an example
 to spur better ideas) that we had a single dynamic exportable per each
 module, and that could be an object, function, undefined for side effects
 or anything. But, the important part was that it could also be a Promise of
 what you want to export, allowing asynchronous module initialization.

 The use cases addressed include:

 * Optional dependencies (required for porting large amounts of existing
 code to use ES6 modules).
 * Async feature detection.
 * Dependencies on things other than JS, such as stylesheets, images,
 templates or configuration (e.g. a default language pack).
 * Waiting on something to be ready, for example something like jQuery
 could wait for DOM ready so that the API consumer doesn't have to.

 All of these can be done with the current design, however you cannot
 defer the module being ready to be imported. So if you depend on these use
 cases, you have to provide async APIs for things that are possibly
 synchronous otherwise, not only imposing a performance penalty, but also a
 convenience downer on the consumers of your API.

 ...

 If I understand your question here, I think the current solution as
 adequate support for these cases.

 The current module loading solution mixes imperative and declarative
 specification of the JS execution order. It's a bit of a layer cake:
 imperative layer trigger declarative layers trigger imperative layers.

 The top imperative layer (eg System.import()) loads the root of the first
 dependency tree and parses it, entering a declarative specification layer,
 the import declarations. These declarations are then processed with Loader
 callbacks, in effect more imperative code, that can result in parsing and
 more declarative analysis.

 By design the declarative layers prevent all of the things you seek. This
 layer is synchronous, unconditional, wired to JS exclusively.

 The imperative layers support all of the use cases you outline, though to
 be sure some of this is more by benign neglect than design.

 By providing a custom Loader one can configure the module registry to
 contain optional, feature-detected modules or non-JS code. The Loader can
 also delay loading modules until some condition is fulfilled.   I expect
 that multiple custom loaders will emerge optimized for different use cases,
 with their own configuration settings to make the process simpler for devs.
  Guy Bedford's systemjs already supports bundling for example.


Interesting, thank you! I like this in the sense that the goal seems to not
be the ultimate solution, but the tool for building one (or many). So, do
you have any examples of how having optional dependencies would look from
the API providers' perspective, versus e.g. the examples I showed earlier:

// foo.js
export System.import("optional-better-foo-implementation")
  .catch(() => System.import("worse-but-always-there-foo-implementation"));

Does the solution provided by the custom loaders defer the responsibility of taking
care of the optional dependencies to the API consumer, e.g. by dictating
which module loader to use for loading the module at hand? That might not
be ideal, especially if your code base is built on features of one loader
and then want to employ a third party library that is built on the
assumption of another loader. But maybe the future will show it to be a
worthy compromise.
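For illustration, here is the optional-dependency pattern from above, parameterized over a loader function so it can run without real modules. With today's dynamic import() (standardized much later, in ES2020) the loader would be `spec => import(spec)`; the module names are the placeholder names used earlier in the thread.

```javascript
// Try the better implementation first, fall back if it isn't available.
async function loadFoo(load) {
  try {
    return await load("optional-better-foo-implementation");
  } catch (e) {
    return await load("worse-but-always-there-foo-implementation");
  }
}

// Simulated loader: the optional implementation is "not installed".
const fakeLoad = spec =>
  spec.startsWith("optional-")
    ? Promise.reject(new Error("not installed"))
    : Promise.resolve({ name: spec });

loadFoo(fakeLoad).then(mod => console.log(mod.name));
// logs "worse-but-always-there-foo-implementation"
```

Note that the consumer still receives a Promise either way — which is precisely the asynchrony-leaks-into-the-API concern raised in this thread.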

- Jussi


 This approach concentrates the configuration activity in the code
 preceding the load of a dependency tree (and hopefully immediately before
 it). This seems like a better design than say commonjs where any module at
 any level can manipulate the configuration.

 The only unfortunate issue in this result is the decision to embed the
 custom loader in the global environment. This means that a tree of
 interdependent modules can issue Loader calls expecting a particular Loader
 to be in System so a custom loader will have to set/unset the global while
 loading the tree. Maybe we can experiment with modular loaders some time.


 jjb



Asynchronous Module Initialize

2014-07-09 Thread Jussi Kalliokoski
On the ModuleImport thread, I pretty much derailed [1] the conversation
with poor argumentation which led to the discussion drying out. But the
more I think about it, the more important I feel my concern was so I
figured I'd have another shot at it.

The problem I was describing is that, as with any problem space complex
enough, there is a conflict of interest between different sets of use
cases. As Brendan replied [2] to my post, the goals of the current design
of the module system include:

* Static vs. dynamic imports and exports. This enables read-time import
resolving, better (or at least easier to build) tooling, import:export
mismatches as early errors, leaves the door open for static metaprogramming
patterns like macros and types and probably other benefits. Also the
transitive cyclic dependencies I unwarrantedly focused on.
* Paving the cowpaths of existing patterns, such as named exports and
default exports.

Like I stated in the other thread, I'm a big fan of the static imports and
the read-time import resolving that it brings to the table. However, the
use case incompatibilities arise from the static exports side of things. I
proposed (it was less of a proposal though, more an idea or an example to
spur better ideas) that we had a single dynamic exportable per each module,
and that could be an object, function, undefined for side effects or
anything. But, the important part was that it could also be a Promise of
what you want to export, allowing asynchronous module initialization.

The use cases addressed include:

* Optional dependencies (required for porting large amounts of existing
code to use ES6 modules).
* Async feature detection.
* Dependencies on things other than JS, such as stylesheets, images,
templates or configuration (e.g. a default language pack).
* Waiting on something to be ready, for example something like jQuery could
wait for DOM ready so that the API consumer doesn't have to.

All of these can be done with the current design, however you cannot defer
the module being ready to be imported. So if you depend on these use cases,
you have to provide async APIs for things that are possibly synchronous
otherwise, not only imposing a performance penalty, but also a convenience
downer on the consumers of your API.

I'm quite skeptical of this being possible to retrofit to the current design
without sacrificing static exports, but I'd be more than happy to see
myself proven wrong and we have a lot of smart minds gathered here so
mayhaps.

I really like macros (all good things in moderation of course) and sweet.js
and use it occasionally even for production code. Before that I sometimes
even used GCC's preprocessor to get macros in JS. I also really like types
- in moderation as well, i.e. declaring the types of inputs and outputs of
functions. This is to say, I'm definitely not just happily trying to close
the metaprogramming door here. But I think things like macros and types are
something that can be done and is being done with tooling, at least to some
extent, whereas optional dependencies for example, not really, at least my
imagination is too limited to see how.

Of course, the ES6 modules ship is already overdue, there's already tooling
made for it and probably browsers are prototyping the current design as
well, so maybe these use cases are something we don't want to or can't
(anymore) afford to consider. Or maybe they are less important than the use
cases that would be excluded if we included async module initializing. And
that's all fine, as long as it's a conscious choice made with these
implications considered. (Maybe not personally fine for me, but I've
survived with the status quo so I can probably survive with using the
existing solutions if I have a use case that's not elegantly solved by ES6
modules).

[1] http://esdiscuss.org/topic/moduleimport#content-181
[2] For some reason, I could not find the reply on esdiscuss.org.


Re: Trailing comma for function arguments and call parameters

2014-07-08 Thread Jussi Kalliokoski
On Tue, Jul 8, 2014 at 1:40 AM, Dmitry Soshnikov dmitry.soshni...@gmail.com
 wrote:

 On Sun, Jul 6, 2014 at 10:36 PM, Isiah Meadows impinb...@gmail.com
 wrote:

 My responses are inline.

  From: Alex Kocharin a...@kocharin.ru
  To: Oliver Hunt oli...@apple.com, Dmitry Soshnikov 
 dmitry.soshni...@gmail.com
  Cc: es-discuss es-discuss@mozilla.org
  Date: Sun, 06 Jul 2014 12:07:09 +0400
  Subject: Re: Trailing comma for function arguments and call parameters

 
  In fact, how about the same syntax for arrays and function calls? With
 trailing commas and elisions?
 
  So foo(1,,3,,) would be an alias for foo(1,undefined,3,undefined) ?
 
 
  06.07.2014, 11:57, Alex Kocharin a...@kocharin.ru:
   Unless you use leading comma style, trailing commas are very good to
 have for anything that has variable amount of items/keys/arguments.
  
   This is a classic example I use to show why JSON is a bad idea:
 https://github.com/npm/npm/commit/20439b21e103f6c1e8dcf2938ebaffce394bf23d#diff-6
  
   I believe the same thing applies for javascript functions. If it was
 a bug in javascript, I wish for more such bugs really...
  
   04.07.2014, 20:33, Oliver Hunt oli...@apple.com:
On Jul 3, 2014, at 3:52 PM, Dmitry Soshnikov 
 dmitry.soshni...@gmail.com wrote:
 Hi,
  
 Will it makes sense to standardize a trailing comma for function
 arguments, and call parameters?

 2. Function statements usually don't have such long lists of arguments
 that such a thing would truly become useful. That is rare even in C, where
 you may have as many as 6 or 7 arguments for one function.


 Yes, it's true, however, my use-case was based on 80-cols rule we use in
 our code-base. And with type annotations (especially type annotations with
 generics, when you have types like `Map<string, IParamDefinition> $params`,
 etc) it can quickly become more than 80 cols, and our style-guide is to use
 each parameter on its own line, with a trailing comma. That convention
 preserves git blame logs, since adding a new parameter doesn't require
 appending a comma to the previous line.


Another example can be found by looking at pretty much any Angular-based
codebase, due to the dependency injection.

+1 from me for this. Anything that makes changesets easier to contain makes
me happier. I also think it'd be a nice for syntax consistency to support
this.
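Trailing commas in parameter and argument lists were in fact later standardized (ES2017). A sketch of the diff-friendly shape being argued for — the names below are made up for illustration:

```javascript
// Adding a parameter or argument below touches only one line in a diff.
function configure(
  host,
  port,
  timeout,
) {
  return { host, port, timeout };
}

const config = configure(
  "localhost",
  8080,
  5000,
);
```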

Cheers,
Jussi




 Dmitry





Re: WeakMap not the weak needed for zombie views

2014-07-07 Thread Jussi Kalliokoski
To first address the particular case of using weak maps for custom event
listeners via iteration:

I think the only relatively sane approach to iterating a WeakMap would be
to force GC whenever the WeakMap is being iterated. This would make sure
that you couldn't get references to items that are about to be
garbage-collected (and thus don't also introduce non-deterministic errors
and memory leaks for event listeners firing on disposed views). However,
this would make iterating a WeakMap potentially unbearably slow and thus
not worth using for this case. The performance hit may be tuned down by
traversing the reference tree only from the items contained in the WeakMap,
but I'm not sure if that's feasible and it would probably also make the
performance worse if the WeakMap is large enough and has a lot of resources
that are alive. Another drawback is that this would potentially lead to
abuse where for example all views would be stored in a WeakMap and then the
WeakMap would be iterated through just to force GC on the views.

On the discussion thread linked, it's also discussed that weakrefs would be
used for DOM event listeners, but I'm not exactly sure if that's a very
workable solution either. You'll basically get a weak reference locally,
but the DOM event listener will still hold a strong reference to the
function. You could of course add a weak addEventListener variant, but soon
you'd notice that you also need a weak setTimeout, setInterval,
requestAnimationFrame, Object.observe and maybe even weak promises. :/

All in all, I'm doubtful that weak references can solve the use cases
presented very well. They would basically encourage people to start
building frameworks that use weakrefs instead of lifecycle hooks only to
notice that there's some part of the platform where they need manual
reference clearing anyway. The solution, I think, is to just use frameworks
and libraries like angular and react that provide these lifecycle hooks and
take care that these hooks are triggered for you, instead of having to
manually call a destroy method.

Cheers,
Jussi


On Mon, Jul 7, 2014 at 4:49 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 7/6/14, 4:11 PM, Filip Pizlo wrote:

 My reading of the linked Mozilla discussions seems to be that some GC
 implementors think it's hard to get the feature right


 I'm not sure how you can possibly read https://groups.google.com/
 forum/#!msg/mozilla.dev.tech.js-engine.internals/V__5zqll3zc/hLJiNqd8Xq8J
 that way.  That post isn't even from a GC implementor and says nothing
 about implementation issues!

  I think that post presents the strongest argument I know against the "use
  GC to reclaim your non-memory resources" argument, and the summary is that
 while that approach looks promising at first glance in practice it leads to
 resources not being reclaimed when they should be because the GC is not
 aiming for whatever sort of resource management those particular resources
 want.

 -Boris




Re: WeakMap not the weak needed for zombie views

2014-07-07 Thread Jussi Kalliokoski
On Mon, Jul 7, 2014 at 12:44 PM, Till Schneidereit 
t...@tillschneidereit.net wrote:

 Ah, no, you didn't - I misunderstood your argument and did indeed think it
 was about caching. I'm still hesitant about this particular argument
 because it seems like your framework would still have issues with delayed
 cleanup if it relied on GC to do that. I know, however, that in practice
 it's Hard to ensure that all references in a complex system are properly
 managed (especially in scenarios involving third-party code as you
 describe), so I also don't think this can be outright dismissed.


True. However, I think that the non-determinism will not help the situation
in a complex system, as it can introduce more leaks. For example, the custom
event handler scenario can trigger handlers that would otherwise be dead,
and those handlers might cause other things to become active again, so
reasoning about this requires an even deeper level of understanding of the
system than manual cleanup does. I find this similar to the null pointer
exceptions caused when somebody else cleans up your stuff but forgets to
tell you, so the way I see it, it's just replacing one class of problems
with another.

Still, I also acknowledge that weak references have their place in making
reasoning about systems easier. For example, WeakMap already solves a lot of
the problems caused by not knowing the lifecycle of a (possibly 3rd-party)
closure. For example, if the closure holds some state that's associated
with an object provided as an input to the closure, it can use the object
as the key, and then GC can just do its job, as the closure holds no strong
references to its inputs or outputs. I'm just not very convinced that
adding features that make GC observable solves any problems big enough to
justify the problems caused.
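
The pattern described above can be sketched like this (a minimal
illustration of keying per-object state on the input object itself; the
names are made up):

```javascript
// Per-object state for a closure, keyed on the input object: the WeakMap
// holds no strong reference to its keys, so once `obj` becomes
// unreachable, its cached state is collectable along with it.
const stateByInput = new WeakMap();

function process(input) {
  if (!stateByInput.has(input)) {
    stateByInput.set(input, { callCount: 0 });
  }
  const state = stateByInput.get(input);
  state.callCount += 1;
  return state.callCount;
}

const obj = {};
process(obj); // 1
process(obj); // 2
```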

Cheers,
Jussi


Re: ModuleImport

2014-07-04 Thread Jussi Kalliokoski
On Thu, Jul 3, 2014 at 9:05 PM, Brendan Eich bren...@mozilla.org wrote:

 Jussi Kalliokoski wrote:

 So sometimes someone can need it, so we must have good support? Is that
 how we operate these days?


 Cool down a minute :-|.


Heh, the internet is a funny place when it comes to interpreting emotion;
I've actually been very calm the whole time. ;D


 JS is a mature language on a big rich-and-messy evolving platform-set
 (browser JS, Node.js, other embeddings). We don't preach "only majority use
 cases" or "there should be only one way to do it" -- more TIMTOWTDI or
 TimToady Bicarbonate:

 http://en.wikipedia.org/wiki/There%27s_more_than_one_way_to_do_it

 JS systems start small and grow. Modules often merge, split, merge again.
 Cycles happen in the large. ES6 modules address them, they were always a
 design goal among several goals.


I'm well aware; to me it looks like cycles are the defining feature of the
module system, on which the other features have been built. The reason this
is a problem is that the requirement of cyclic dependencies not only
complicates the API surface, reasoning about it, and implementations; it
also excludes a lot of features.

For example, consider optional dependencies (which, btw, unlike cyclic
dependencies are an extremely common corner case, especially on the web,
and can't really be refactored away). The problem is not that you can't
load things dynamically, because you can, but that importing a module is
async while initializing and declaring is not: it's static to support the
mutable bindings magic (required for transitive cyclic dependencies) and
compile-time errors.

Now, let's have a hypothetical change to the module system. Let's say that
we allow only exporting one thing, and that one thing can be any value.
When you import it, it's like assigning a variable, except that resolving
the value you are assigning to is done async at compile time. Like:

// somewhere.js

export {
  something: function something () {}
};

// doSomethingElse.js
import { something } from "somewhere";
export function doSomethingElse () {};

Benefit #1: No module meta object crap, the only new concept needed to
understand this is compile-time prefetching.
Benefit #2: No special destructuring syntax (since you're doing normal
destructuring on a normal value).

Cool, but we broke cyclic dependencies without fixing optional dependencies:

// optional dependency here
var fasterAdd = System.import("fasterAdd");
var basicAdd = function (a, b) {
  return a + b;
};

// because there's no async initialize, we have to impose an async
// interface for an otherwise sync operation
export function add (a, b) {
  return fasterAdd
    .catch( () => basicAdd )
    .then( (addMethod) => addMethod(a, b) );
};

However, with this design, we can allow exporting a promise of what we want
to export, thus deferring the import process until that promise is resolved
or rejected:

var basicAdd = function (a, b) {
  return a + b;
};

export System.import("fasterAdd")
  .catch( () => basicAdd );

And there you have it, voilà.

Benefit #3: See #1, if you don't want to, you don't even have to comprehend
compile-time prefetching anymore. It's just optimization sugar.
Benefit #4: No need to impose async interfaces for inherently sync
operations just because the initialize phase is not async while loading is.
Benefit #5: You can do stuff like async feature detects in the initialize
phase, something that is completely broken in existing module systems (you
*can* do this with RequireJS through some effort, but I'm not sure it's
officially supported or part of AMD).
Benefit #6: High compatibility with existing module systems, including edge
cases, providing a solid foundation for transpiling legacy modules to the
new system. See some sketches I made yesterday when playing around with the
idea: https://gist.github.com/jussi-kalliokoski/6e0bf476760d254e5465
(includes an example of how you could implement localforage if the platform
dependencies were provided as modules).
Benefit #7: Addresses the issue of libraries depending on more than just JS:

import { loadImages, importStylesheets } from "fancy-loader";

var myModule = { ... };

export Promise.all([
  loadImages(["foo.jpg", "bar.png"]),
  importStylesheets(["style.css"])
]).then( () => myModule );

There's probably more that didn't come to my mind. So, this would support
pretty much all the features of the existing solutions and more. Just not
cyclic dependencies. At all.

Now, we could of course try to add this as an afterthought to the current
design by letting individual exports be promises, but in order to preserve
transitive cyclic dependencies in that case, you'd have to wait for that
promise to resolve when the importing module accesses the imported binding,
not only causing execution to yield to the event loop unexpectedly (which
is what we're trying to avoid with async functions) but also making it
transitive only most of the time, ending up with cyclic dependencies
Re: ModuleImport

2014-07-04 Thread Jussi Kalliokoski
On Fri, Jul 4, 2014 at 10:19 AM, Jussi Kalliokoski 
jussi.kallioko...@gmail.com wrote:


 On Thu, Jul 3, 2014 at 9:05 PM, Brendan Eich bren...@mozilla.org wrote:

 Cool down a minute :-|.


I now realize that my tone on this thread hasn't been very considerate, and
apologize if I offended anyone, or even if I didn't.

I don't want anyone to think that I don't respect the work that's been done
to get ES6 modules where they are today. Considering the use cases and
requirements they've been designed for, the design is actually great. My
goal was only to challenge those use cases and requirements and their
priority over other ones, but I failed at conveying that properly. Sorry.

Cheers,
Jussi


Re: @@initialize, or describing subclassing with mixins

2014-07-04 Thread Jussi Kalliokoski
On Tue, Jul 1, 2014 at 4:06 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 6/30/14, 3:38 PM, Jussi Kalliokoski wrote:

 On Mon, Jun 30, 2014 at 10:16 PM, Boris Zbarsky bzbar...@mit.edu
 Would it help the discussion if UA implementors described how they
 solve these problems now?

 Yess, please! +


 https://ask.mozilla.org/question/850/how-do-webidl-methodsgetterssetters-tell-whether-objects-are-of-the-right-types/?answer=851#post-id-851
 for Gecko.

 I'd be curious to hear what other UAs do.


Thanks for the link! Me too. :)


  As I suggested in one approach, natural instances (in lack of a
 better word) of Nodes can have all the in-memory layout optimizations
 they need.


 Some of these are not just optimizations; some of these are needed for
 correctness.  For example, we use fixed parts of the object layout to cache
 values for some Web IDL properties that need to keep returning the same
 thing over and over again.


I might be missing something, so is there something in the idea I proposed
that would prevent doing this?

Cheers,
Jussi


Re: Re: ModuleImport

2014-07-03 Thread Jussi Kalliokoski
On Thu, Jul 3, 2014 at 1:29 AM, Brian Di Palma off...@gmail.com wrote:

  The arguments for and against supporting cyclic dependencies seem to be
 academic. I'm yet to see any evidence of their importance in practice, nor
 proof that they are fundamental ... or not.

 Transitive cyclic dependencies. I'd say that's the case that was in
 the minds of the authors of the module system.
 In large codebases those can happen and a module system that does not
 handle them gracefully would be poor.

 Support for them is needed, and what CommonJS has is not good enough.
 They are acknowledged in the modules documentation for node
 http://nodejs.org/api/all.html#all_cycles
 This does not mean they are recommended, the same holds true for ES6
 modules.

 It is an acceptance of the reality of complex and large codebases that
 sometimes cyclic dependencies can occur.


So sometimes someone can need it, so we must have good support? Is that how
we operate these days?


 It boils down to this.

 You can import a dependency in three ways

 import MyClass from 'MyClass';
 import {MyClass} from 'MyClass';
 module myClass from 'MyClass';


And (in the same order):

System.import("MyClass").then(function (MyClassModule) {
  // I don't actually know how someone would even access the default
  // exports from the module object, unless in the case of default exports
  // there is no module object, just the default exports as the module.
  var MyClass = MyClassModule;
});

System.import("MyClass").then(function (MyClassModule) {
  var { MyClass } = MyClassModule;
});

System.import("MyClass").then(function (MyClassModule) {
  var myClass = MyClassModule;
});

var MyClass = System.get("MyClass");
var { MyClass } = System.get("MyClass");
var myClass = System.get("MyClass");



 That's one too many ways for the simplest module system that fulfills
 all requirements.

 import MyClass from 'MyClass';
 import {MyClass} from 'MyClass';
 import * as myClass from 'MyClass';

 Is not the fix.

 The confusion stemmed from the first production not the last.


I agree. Thanks! I'm actually no longer even indifferent towards default
exports; I now think they should go.


 Perfection is achieved, not when there is nothing more to add, but
 when there is nothing left to take away.

 B.



Re: Re: ModuleImport

2014-07-03 Thread Jussi Kalliokoski
On Thu, Jul 3, 2014 at 11:41 AM, Brian Di Palma off...@gmail.com wrote:

  So sometimes someone can need it, so we must have good support? Is that
 how we operate these days?

 Imagine a large codebase which already has transitive cyclic dependencies.
 If the module system has poor support for them it might still work
 with them until one day a developer reordered the import statements.
 How would you feel if such a simple operation caused you issues?


Happy; finally having a good motivator to refactor and get rid of the
cyclic dependencies, at least in that compartment.


 Or upgrading to the latest version of a popular utility toolchain like
 lo-dash could introduce an issue purely because the upgrade created a
 transitive cyclic dependency.
 And the fix for that would be to reorder your import statements and
 add comments in your module telling people not to change the order.
 Again, how would you feel?


Angry at lo-dash for breaking backwards compatibility; then I'd revert the
upgrade and file a bug against it. I'd probably also consider whether
lo-dash, in this imaginary case, were something I want to depend on, given
that for a utility library it needs cyclic dependencies. I'd also be happy
that the module system made this obvious.

But I get your point since I'm well aware that my thoughts are likely not
to be a very good representation of how most people would feel.

However, I still don't think it's something worth making other sacrifices
over.


 Default import and exports are purely sugar over

 ```
 import {default as MyClass} from 'MyClass';
 ```

 It saves you a few character typing out when importing from legacy
 module system.

 ```
 import MyClass from 'MyClass';
 ```

 Those two are the same thing.


Ughh, I see. Not cool.


 From birth the brand new module system is going to have this
 superfluous appendage to support


Let's not have it. At least not in ES6. Tools like Traceur can support it
for an easier migration path since they already have diverged from ES.next
anyway with all the annotations (for which, off topic, I haven't seen even
a proposal here yet) and stuff.


 module systems that 10 years from now
 people will struggle to remember.


I certainly hope that will be the case, but it's 2014 and people are still
implementing banking with Cobol, security protocols with C++ and website
cryptography with Java applets.


Re: Re: ModuleImport

2014-07-03 Thread Jussi Kalliokoski
On Thu, Jul 3, 2014 at 5:42 PM, John Barton johnjbar...@google.com wrote:

 On Thu, Jul 3, 2014 at 2:31 AM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:

  Tools like Traceur can support it for an easier migration path since
 they already have diverged from ES.next anyway with all the annotations
 (for which, off topic, I haven't seen even a proposal here yet) and stuff.


 Jussi, I would appreciate a bug report on the Traceur github project
 pointing to information that makes you think this statement is correct. We
 consider divergence from ES.next to be a bug and do not support any feature
 outside of the proposal from TC39.


Annotations are marked as experimental, but I filed a bug anyway to either
get a proposal or update the wiki to inform that there's no official spec
or proposal for it: https://github.com/google/traceur-compiler/issues/1156
;)


  Our project does provide great technology that has been used to develop
 migration tools, annotation experiments, and stuff. All of that comes in
 other projects or under opt-in flags on traceur.


Exactly, and that's how it could be done in traceur.

- Jussi


 jjb



Re: ModuleImport

2014-07-02 Thread Jussi Kalliokoski
On Tue, Jul 1, 2014 at 10:28 PM, Kevin Smith zenpars...@gmail.com wrote:


  As such, we are balancing the marginal user experience gains of
 export-overwriting against the better support for circular dependencies
 of real modules.


 Another way of looking at it:

 Being in control of the language, you can always invent new sugar to
 optimize user experience (within limits) when real usage data proves its
 worth.  But the core model can't be changed.

 As such, it seems like it would be better to err on the side of making the
 core model simple and solid, leaving minor UX optimizations to future
 syntax.


But it's neither simple nor solid. It's overly complicated to support
features that shouldn't be there.

Sorry in advance for the tone of this message; it is quite negative, but
the intent is constructive. To me, modules are the most anticipated feature
in ES6. I've been closely following the discussion and the proposal's
evolution and I've been extremely thrilled: finally we could have a chance
of alleviating the situation of having a ton of non-intercompatible module
loaders. I haven't contributed much to the discussion since I liked the
overall direction. However, now that I've actually been using the modules
for a while with a couple of different transpilers, it's become obvious
that the design is inherently broken (not the fault of the transpilers;
they are actually probably better in some aspects than the real thing),
and in order for ES modules to help the situation instead of making it
worse, they need to be better than the existing solutions.

The core unique selling points of ES6 modules as opposed to the other
module loading systems:

* Language built-in: the strongest point. This makes sure that most tooling
should support the feature out of the box and that module authors are more
inclined to provide their modules in this format. But the design can be
whatever and this point will still hold true, so no points to the design
here.
* (Reliably) statically analyzable syntax. This is my favourite feature and
really awesome. Allows engine to fetch next modules while the current one
is still being parsed, and tooling to better understand what the code does.
However, this would hold true even if all we standardized was syntactic
sugar on top of requirejs.
* Cyclic dependencies: First of all, this is not a feature you want to
necessarily encourage. It looks good in academia, but in my career I've yet
to see real world code that would benefit more from cyclic dependencies
than from refactoring. Not to mention that cyclic dependencies have been
possible in global scope modules since LiveScript, and are possible in
CommonJS as well if you do a little DI:
https://gist.github.com/jussi-kalliokoski/50cc79951a59945c17a2 (I had such
a hard time coming up with an example that could use cyclic dependencies
that I had to dig the example from the modules examples wiki). And honestly
I don't think it's a feature that deserves to be easier to do than my
example, and especially not worth making other design compromises for.
* Mutable bindings: This will make a good chapter in a future edition of
JavaScript: The Bad Parts, along with being able to redefine undefined. You
can have mutable bindings in other module systems as well: just reassign
something in your exports, and then your consumer uses the exports property
directly. A lot of WTFs avoided, and even doing it like that will ring
alarm bells in code review.
* Compile-time errors: This is not a feature, it's a bug. Try finding a
website that doesn't somewhere in its code check for whether you have a
feature / module available, i.e. optional dependencies. Some examples:
Angular has jQuery as an optional dependency; spritesheet generator modules
for node have multiple engines as optional dependencies, because some of
them may be native modules that don't compile on all platforms. Also,
things get worse if platform features are implemented as modules. Say
things like localStorage, IndexedDB and friends were provided as modules;
as a result, things like localforage would either not exist or would be
infinitely more complex. Just look at the github search for the keywords
`try` and `require`
(https://github.com/search?l=javascript&q=try+require&ref=cmdform&type=Code)
to see how widely used the pattern of wrapping a module load in a try-catch
is.
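
For reference, that try/require pattern looks roughly like this in
CommonJS (a sketch; "fast-add" is a hypothetical package name):

```javascript
// CommonJS optional dependency: fall back to a local implementation when
// the (hypothetical) "fast-add" module is absent or fails to load.
let add;
try {
  add = require("fast-add"); // throws if not installed or fails to compile
} catch (e) {
  add = function (a, b) { return a + b; };
}

// Export the resolved implementation when running under CommonJS:
if (typeof module !== "undefined") {
  module.exports = { add };
}
```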

Now let's look at some things that tip the odds into existing solutions
favor:

* Massive amount of existing modules.
* Existing large-scale user-bases.
* Node has stated that the core will always be CommonJS, meaning that on
node, in order to use ES6 modules, you'll have to be using two different
module systems which doesn't sound like a thing that people would do unless
there's proven benefits.
* Completely dynamic. Now, I know there are people that think that this
isn't good, but it is. It gives you a lot of power when debugging
things or playing around with new things (something I haven't seen
discussed

Re: ModuleImport

2014-07-02 Thread Jussi Kalliokoski
On Wed, Jul 2, 2014 at 7:09 PM, John Barton johnjbar...@google.com wrote:


 * (Reliably) statically analyzable syntax. This is my favourite feature
 and really awesome. Allows engine to fetch next modules while the current
 one is still being parsed,


 This isn't true -- all module designs in play require parsing (including
 of course CJS and AMD).


Huh? I wasn't saying that they don't. I mean that with the static syntax
the parser can initiate the request immediately when it hits an import
statement, which is a good thing and not possible with what is out there.
You can of course assume that a require call with a static string does
what you'd expect, but then you might end up loading something that was
never actually required, because someone had their own require function
there instead that does something else.
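
The hazard can be illustrated like this (a contrived sketch; the `demo`
function and the module name are made up):

```javascript
// A "require" call with a static string *looks* statically analyzable,
// but nothing guarantees the identifier refers to the module loader:
function demo(require) {
  return require("lodash");
}

// A caller can pass in any function, so prefetching "lodash" on sight of
// the call above would load something that was never actually required:
const result = demo(function (name) {
  return "not a module: " + name;
});
// result === "not a module: lodash"
```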


 and tooling to better understand what the code does. However, this would
 hold true even if all we standardized was syntactic sugar on top of
 requirejs.


 I don't believe that anyone expects such an outcome.


Heh of course not, that would be horrible; I was referring to the fact that
this is a low-hanging fruit to pick.


  * Cyclic dependencies: First of all, this is not a feature you want to
 necessarily encourage. It looks good in academia, but in my career I've yet
 to see real world code that would benefit more from cyclic dependencies
 more than refactoring. Not to mention that having cyclic dependencies have
 been possible in global scope modules since LiveScript, and is possible in
 CommonJS as well if you do a little DI:
 https://gist.github.com/jussi-kalliokoski/50cc79951a59945c17a2 (I had
 such a hard time coming up with an example that could use cyclic
 dependencies that I had to dig the example from modules examples wiki). And
 honestly I don't think it's a feature that deserves to be easier to do than
 my example, and especially not worth making other design compromises for.


  The arguments for and against supporting cyclic dependencies seem to be
 academic. I'm yet to see any evidence of their importance in practice, nor
 proof that they are fundamental ... or not.


True, and that being the case, I don't see the reason for putting them on a
pedestal. If they happen to be a nice side effect, that's fine, but I'm
mostly referring to arguments against different proposals using "doesn't
support cyclic dependencies".


 * Compile-time errors: This not a feature, it's a bug. Try finding a
 website that doesn't somewhere in its code check for whether you have a
 feature / module available, i.e. optional dependencies. Some examples:
 Angular has jQuery as an optional dependency; spritesheet generator modules
 for node that have multiple engines as optional dependencies because some
 of them may be native modules that don't compile on all platforms. Also
 things get worse if platform features are implemented as modules. Let's say
 things like localStorage, IndexedDB and friends were provided as modules
 and as a result, things like localforage would either not exist or would be
 infinitely more complex. Just look at the github search for the keywords
 `try` and `require`
 (https://github.com/search?l=javascript&q=try+require&ref=cmdform&type=Code)
 to see how widely used the pattern of wrapping a module load in a
 try-catch is.


 Optional dependency is completely supported by Loader.import().
  Furthermore its promise based API avoids try/catch goop.


Try/catch is far less goop than promises; furthermore, your non-optional
dependencies don't come in as promises, and neither can you define your
module asynchronously and wait to see whether the optional dependency is
available before exposing your interface. If you could define your module
like this it would be less of a problem, but still ugly and inferior (in
this specific case) to for example CJS:

import someRequiredDependency from "somewhere";

Loader.import("someOptionalDependency")
  .catch(function noop () {})
  .then(function (someOptionalDependency) {
    export function doSomething (foo) {
      if ( someOptionalDependency ) {
        return someOptionalDependency(foo + 5);
      } else {
        return someRequiredDependency(foo + 2);
      }
    };
  });





 Now let's look at some things that tip the odds into existing solutions
 favor:

 * Massive amount of existing modules.
 * Existing large-scale user-bases.
 * Node has stated that the core will always be CommonJS, meaning that on
 node, in order to use ES6 modules, you'll have to be using two different
 module systems which doesn't sound like a thing that people would do unless
 there's proven benefits.


 These points are not relevant since nothing in the current design prevents
 these success stories from continuing.


The two first points are relevant if they decrease the chances of ES6
modules becoming the most used module system (which is obviously a goal
because otherwise we'll just be making things worse by contributing to
fragmentation). The last one is relevant because

Re: ModuleImport

2014-07-02 Thread Jussi Kalliokoski
On Wed, Jul 2, 2014 at 3:38 PM, Kevin Smith zenpars...@gmail.com wrote:


  But it's neither simple nor solid. It's overly complicated to support
 features that shouldn't be there.


 I have to disagree here.  If we drop default imports, then we can describe
 the module system like this:

 Variables can be exported by name.  Variables can be imported by name.


FWIW, I don't have a problem with dropping / deferring default exports,
although my personal ideal is that like functions, modules should do one
thing and thus export one thing, but having no default exports doesn't
prevent exporting just one thing.



 It doesn't get any more simple than that.  What I mean by solid is that it
 has good coverage of the edge cases, meaning primarily cyclic dependencies.


The complexity is in having multiple different ways of doing one thing, and
in introducing new kinds of bindings to the language that didn't exist
before, to support those edge cases.




 Sorry in advance for the tone of this message, it is quite negative.


 I didn't perceive this as negative.  I think it's quite constructive to
 uncover all of the arguments.


Happy to hear!



 * Massive amount of existing modules.
 * Existing large-scale user-bases.


 We've already taken a look at the technical side of interoperability with
 old-style modules, and there's no problem there.  What remains, I suppose,
 is a sociological argument.  More on that later.


 * Node has stated that the core will always be CommonJS, meaning that on
 node, in order to use ES6 modules, you'll have to be using two different
 module systems which doesn't sound like a thing that people would do unless
 there's proven benefits.


 If users want Node core to be exposed as ES6 modules, then the Node
 developers will provide it.  It's not some ideological battle - it's about
 whatever is good for the platform.  Regarding two module systems at the
 same time: more later.

  * Completely dynamic. Now, I know there are people that think that this
 isn't good, but it is. It gives you a lot of power when debugging
 things or playing around with new things (something I haven't seen
 discussed re:modules on this list). One of the greatest things in JS is
 that instead of reading the usually poor documentation, let alone the code,
 of something you want to use you can just load it up in node or the
 developer tools and play around with it. With node, you require() something
 in the repl and you see immediately what it exports. (...edit...) This is
 simply not possible with ES6 modules without a massive boilerplate
 spaghetti with async stuff.


 You're right, but I don't think we need objects-as-modules to address
 this.


But depending on how you load the module (i.e. syntax or Loader API), it
might end up being an object anyway, so it's just confusing that it
sometimes is and sometimes isn't.
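
As a sketch of that asymmetry (the registry and `System.import` below are
a minimal mock of the then-proposed loader API, not a real implementation):

```javascript
// With the dynamic API you always interact with a module *object*,
// whereas the declarative form (`import { parse } from "parser";`)
// gives you plain bindings instead.
const registry = {
  parser: { parse: function (s) { return s.trim(); } },
};

const System = {
  import: function (name) { return Promise.resolve(registry[name]); },
};

System.import("parser").then(function (parserModule) {
  parserModule.parse("  hi  "); // module-as-object access
});
```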


 We want a blocking load API for these situations:

 1.  Any REPL  (strong)
 2.  Server-only programs which don't care about async loading and don't
 want the complications (weaker)

 In es6now, I provide a `loadModule` function for loading ES6 modules
 synchronously in the REPL.  I think Node would want to provide a
 synchronous loading method as part of the so-called module meta object.


The term "module meta object" and simple design don't go hand in hand.


 That API needs eyes, BTW.


Thanks for the reference, I'll take a look at it after having a good
night's sleep first. :)




 Given all this, how are we supposed to convince people to use this stuff?
 These concerns are not something that can be fixed later either, they're
 fundamental to the current design.


 I don't see any technical problem here.  So let's look at the sociological
 argument:

 ES6 modules are different from Node/AMD modules, and seem to be at odds
 philosophically.  Since Node/AMD modules already have established user
 bases, and important members of the Node community are critical, ES6
 modules won't be successful at penetrating the market.

 Counter-argument:

 Take a look at this package:  https://github.com/zenparsing/zen-sh . It's
 an experimental ES6 package which allows you to open up a shell and execute
 commands using tagged template strings.  I use it at work to automate git
 tasks.  It's awesome (but very rough).  It's completely installable using
 NPM, today.  I encourage anyone to try it out (you'll need to install
 es6now https://github.com/zenparsing/es6now first, though).

 It exports a single thing, but that thing is given a name.  It is set up
 to work with both `require` and `import`.

 Now, the sociological argument says that because it's written as an ES6
 module the community will reject this package.  Does that sound plausible
 to you?


No, of course not, but why would anyone introduce more complexity to their
project to do what you did? Experimental curiosity is probably your case,
but the only other reason I can think of is sadism: deliberately
fragmenting the platform. In that example, 

@@initialize, or describing subclassing with mixins

2014-06-30 Thread Jussi Kalliokoski
This is probably an absurd idea and I have no idea if it can be actually
made to work for host objects, but wanted to throw it in the air to see if
it has any viability and could be polished.

As we all know, mixins are a strongly ingrained pattern in JS out in the
wild. One notable example is the EventEmitter in node and the numerous
browserland alternatives. The main benefit of mixins is pretty much that
they allow multiple inheritance.

The common mixin design pattern is something like:

function A () {
  this.x = 1;
}

function B () {
  this.y = 2;
  A.apply(this, arguments);
}

So you have the constructor function that initializes the state of the
instance, and doesn't care whether the state is applied to an instance of
the class or even a plain object.
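
Instantiating the snippet above shows the mixed-in state (repeated here so
the example is self-contained):

```javascript
function A () {
  this.x = 1;
}

function B () {
  this.y = 2;
  A.apply(this, arguments); // mix A's state into this instance
}

var b = new B(); // b.x === 1 (from A), b.y === 2 (from B)

// The same initializer works on a plain object, too:
var plain = {};
A.call(plain); // plain.x === 1
```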

Could subclassing be described in terms of mixins? After all, the problems
we're having here, and the reason we're trying to make complex solutions
like a deferred creation step, is to hide the uninitialized state.

But say, in addition to @@create, we had @@initialize, which would just add
the hidden state to any given object bound to `this`. This can be done in
terms of, for example, assigning a host object or other private state into
one symbol (or multiple symbols holding different private variables), or in
an engine-specific way; it doesn't matter, because it's an implementation
detail that is not visible to userland. However, the key thing is that it
could be applied to any given object, not just instances of the host
object. That sidesteps the whole problem of uninitialized state, because
you only have objects that have no state related to the host object, or
objects that have the initialized state of the host object.

So let's say the default value of @@initialize was as follows:

super(...arguments);

That is, just propagate through the @@initialize methods in the inheritance
chain.

Then the default value of @@create could be described as follows:

var instance = Object.create(ThisFunction.prototype);
ThisFunction[@@initialize].apply(instance, arguments);
return instance;
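
A rough sketch of how such an @@initialize could behave, modeled with
ordinary symbols (the names `initialize`, `hostState`, and `FakeNode` are
hypothetical, not part of any proposal):

```javascript
// @@initialize modeled as a plain symbol-keyed function that attaches
// hidden "host" state to *any* object bound as `this`.
const initialize = Symbol("initialize");
const hostState = Symbol("hostState");

function FakeNode (name) {
  FakeNode[initialize].call(this, name);
}

FakeNode[initialize] = function (name) {
  // Works on any object, not just FakeNode instances:
  this[hostState] = { name: name, children: [] };
};

FakeNode.prototype.appendChild = function (child) {
  this[hostState].children.push(child);
  return child;
};

// Mixing the hidden state into a plain object:
const plain = {};
FakeNode[initialize].call(plain, "detached");
FakeNode.prototype.appendChild.call(plain, "x");
```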

As an example, here's how you could self-host Map in these terms:
https://gist.github.com/jussi-kalliokoski/5ef02ef90c6cbb8c1a70 . In the
example, the uninitialized state is never revealed.

Simplicity is not the only gain from this approach, since it also opens the
door to multiple inheritance, e.g. let's say you wanted a Map whose
contents you can append to a DOM node:

class DomBag {
  [@@initialize] () {
    DocumentFragment[@@initialize].apply(this);
    Map[@@initialize].apply(this, arguments);

    for ( let node of this.values() ) {
      DocumentFragment.prototype.appendChild.call(this, node);
    }
  }

  get (key) { return Map.prototype.get.call(this, key); }
  has (key) { return Map.prototype.has.call(this, key); }

  set (key, value) {
    if ( this.has(key) ) {
      this.removeChild(this.get(key));
    }

    Map.prototype.set.call(this, key, value);
    DocumentFragment.prototype.appendChild.call(this, value);
  }

  delete (key) {
    if ( this.has(key) ) {
      this.removeChild(this.get(key));
    }

    Map.prototype.delete.call(this, key);
  }
}

var bag = new DomBag([ ["foo", document.createElement("foo")], ["bar",
document.createElement("bar")] ]);
document.body.appendChild(bag);

So the core idea of the proposal is to make host objects completely
unobservable. A DOMElement instance for example is no longer a host object;
it's a normal object with __proto__ assigned to DOMElement.prototype,
however it contains a private reference to a host object that is not
observable to userland in any way.

There are obvious problems that need to be thought of, mostly because of
DOM: for example, if you initialize two DOM node things on the same object
and then append that node somewhere. However, if we think of it in terms of
symbols, we can have a symbol that represents the host object that gets
applied to the tree, and the @@initialize of these nodes assigns a host
object to that symbol; the assignment in the latter @@initialize then, of
course, overrides the one in the former.

WDYT?

Cheers,
Jussi
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: @@initialize, or describing subclassing with mixins

2014-06-30 Thread Jussi Kalliokoski
On Mon, Jun 30, 2014 at 4:41 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 6/30/14, 5:37 AM, Jussi Kalliokoski wrote:

 However, the key thing is that it could be applied to any given object,
 not just
 instances of the host object.


 The problem with this is that it requires either allocating the hidden
 state outside the object itself somewhere or requiring all objects to have
 space for this hidden space or both (e.g. allocating some amount of space
 for hidden state in all objects but spilling out into out-of-line slots
 when you want more hidden state than that).

 So this fails one of the primary design criteria for @@create: being able
 to explain the existing builtins (both ES and DOM), which allocate hidden
 state inline for efficiency's sake.

 I realize you consider this an irrelevant implementation detail, but it
 _is_ important.


No, I don't consider potential to optimize an irrelevant implementation
detail, especially when it comes to DOM. However, my knowledge of
engine-level DOM optimization is relatively poor, so I'm glad to have any
flaws in the idea pointed out.


  As an example, here's how you could self-host Map in these terms:
 https://gist.github.com/jussi-kalliokoski/5ef02ef90c6cbb8c1a70 . In the
 example, the uninitialized state is never revealed.


 Right, at the cost of requiring the symbol thing, which costs both
 performance and memory.


The host environment needs not use actual symbols, but I see your point.


   DocumentFragment.prototype.appendChild.call(this, value);


 This is an interesting example.  How do you expect the appendChild code to
 perform the check that its this value is a Node (not necessarily
 DocumentFragment, note, appendChild needs to work on any node) in this
 case?  Your time budget for this is 20 machine instructions as an absolute
 limit, 10 preferred and your space budget is as small as possible.


There are various approaches to that; all cost something, but the current
approach is not free either. One heavy-handed optimization for the poorly
performing common case is to have Object's struct contain a pointer to a
Node's host object; if that's null, the object doesn't represent a Node.
Depending on what you count in, that's around 4 instructions per check (I'd
be surprised if the current implementations do better than that). Memory
footprint is 32 bits on ARM and x86, and 64 bits in 64-bit environments,
per every Object, so pretty damn high. Performance often lies in tradeoffs,
however, so an implementation might spend some extra cycles on having its
own memory map of DOM nodes instead and enforce 32-bit pointers, or, if
they're sadistic towards developers of massive table sites, 24-bit ones.
This can also be implemented as a binary flag, where the layout is expanded
only if the flag is active, taking the memory footprint addition for
non-Node objects down to 1 bit.

Another, more general approach that's less memory-greedy for DOM-light
applications is to store that pointer (or the whole state) in the layout of
objects that are naturally instances of Node, and then possibly use the
symbol space (or something less expensive) for the state if the object is
not. That doesn't change the current situation's best-case performance
(i.e. only instances of Node are attempted to be appended to the DOM, where
you have to do the instance checks in the current situation as well), adds
one check to the case of erroring out when appending a non-Node, and leaves
the performance cost on the doorstep of the thing that wasn't even possible
before.

Aside, out of curiosity, which is more problematic in DOM currently:
creation or appending of nodes? My guess is appending, but when it comes to
performance I'd take data over instinct any day.


 (This is not even worrying about the fact that in practice we want
 different in-memory layouts for different DOM objects,


These different in-memory layouts can be applied to the state behind the
pointer.


 that Object.prototype.toString needs to return different values for
 different sorts of DOM objects


Depending on how the internal layout problem is solved, that can be looked
up from the internal state just as now, or from the prototype chain (the
latter being a slight compatibility hazard). After all, no code is using
@@initialize today, so we can't break the behavior of plain-object-to-DOM-node
mixins.


 , or that some DOM objects need different internal hooks (akin to proxies)
 from normal objects.)


The internals can again live in the internal state. However, this internal
state is now explainable in terms of the spec language. For all the
implementation cares, the spec may describe the internal state as stored in
private symbols, and the implementation can still store the state inline
anyway, because no test can prove that it doesn't internally use private
symbols for it.

Cheers,
Jussi


 -Boris
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https

Re: IsConstructor

2014-06-13 Thread Jussi Kalliokoski
On Fri, Jun 13, 2014 at 8:21 PM, Allen Wirfs-Brock al...@wirfs-brock.com
wrote:


 On Jun 13, 2014, at 8:05 AM, C. Scott Ananian wrote:

  On Fri, Jun 13, 2014 at 6:33 AM, Tom Van Cutsem tomvc...@gmail.com
 wrote:
  Jason proposes:
 new C(...args) = C[Symbol.new](...args)
 
  Allen proposes:
new C(...args) =  C.apply(C[Symbol.create](), args)
 
  Consider also the way the spec could read.  For example, for the
  'Error' object, instead of having 19.5.1.1 Error ( message ) and
  19.5.1.2 new Error ( ...argumentsList ), in Jason's formulation
  section 19.5.1.2 would just be Error[ @@new ], aka an ordinary
  method definition.  If Error is implemented as an ECMAScript function
  object, its inherited implementation of @@new will perform the above
  steps.
 

 The existence or not of @@new doesn't make a difference in that regard.
  The only reason we current need the new Foo(...) specifications is
 because [[Construct]] exists and built-ins are not necessarily ECMAScript
 functions so we have to say what their [[Construct]] behavior is.

 If [[Construct]] was eliminated that reason would go away and there would
 be no need for the new Foo(...) built-in specifications.


Yes, please. The whole concept of [[Construct]] is just very confusing in
my opinion, and makes especially little sense from the userland
perspective. For a userland function, it's practically impossible to
statically analyze whether that function is a constructor or not, consider
these for example:
https://gist.github.com/jussi-kalliokoski/7f86ff181a01c671d047 . Which of
them are constructors and which are not? And how can you tell?
IsConstructor would have to say `true` for every JS-defined function out
there to give even close to usable results, and say `false` only for
host-defined functions. If we were to then use IsConstructor anywhere, it
would deepen the gap between host objects and native objects even further,
which I don't think anyone wants. In JS itself (as it currently is),
depending on how you think about it, there either aren't constructors at
all or every function is a constructor.
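To make the ambiguity concrete, here's a small sketch — both functions below work with and without `new`, so nothing syntactic separates "constructor" from "not a constructor":

```javascript
// A "factory" that still works with `new`, because a constructor
// returning an object overrides the freshly created instance:
function makePoint (x, y) {
  return { x: x, y: y };
}

// A "constructor" that also works as a mixin-style plain call:
function Point (x, y) {
  this.x = x;
  this.y = y;
}

var a = makePoint(1, 2);
var b = new makePoint(1, 2); // same result as the plain call
var c = new Point(1, 2);
var d = {};
Point.call(d, 1, 2);         // plain call initializes any object
```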


 The only difference between inlining ordinary [[Construct]] into the 'new'
 operator semantics and defining the 'new' operator as invoking @@new is the
 amount of freedom an ES programmer would have in defining how a @@create
 method relates to its companion  constructor function. Without @@new there
 is a fixed protocol for how @@create and the constructor function are
 invoked relative to each other.  With @@new a different protocol (including
 ones that completely ignores @@create and the constructor function) could
 be implemented in ES code.

 It isn't clear if there are really any use cases that could be supported
 by @@new that couldn't also be support by carefully crafting a @@create and
 companion constructor function.

 Another concern I have with @@new is that it exposes two extension
 points, @@new and @@create, on every constructor.  I'm afraid that it
 wouldn't be very clear to most ES programmers when you should override
 one or the other.

 Finally, let's say that for ES6 we eliminate [[Construct]] without adding
 @@new. If after some experience we find that @@new is really needed we can
 easily add it in a backwards compatible manner.


To me this sounds like a very reasonable idea, +100, but of course I'm not
too well aware of how widely it's used.

One question about @@create though... What would this do:

function Foo () {}

Foo.prototype[Symbol.create] = null;

// ???
// Maybe error out, like currently host objects without [[Construct]]:
// TypeError: Foo is not a constructor.
new Foo();

- Jussi



 Allen
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Assignment to method invocation result

2014-05-16 Thread Jussi Kalliokoski
On Thu, May 15, 2014 at 9:17 PM, Rick Waldron waldron.r...@gmail.comwrote:

 This particular syntax would also require static (a lookahead) and
 semantic (based on the lookahead results) disambiguation to account for
 DecimalLiteral:

   var n = .2;


True. However, it's a good thing valid identifiers can't start with a
number, otherwise even lookahead couldn't save it. :P




 If the . was on the other side?

   AccessorAssignmentOperator :
 .= IdentifierName


I deliberately avoided that because I've seen it proposed for
Object.extend() syntactic sugar a couple of times:

foo .= {
bar: 1
};

( proposed as doing roughly the same as `_.extend(foo, { bar: 1 });` )

But I have no personal preference either way.


  var string = "  a  ";
  string .= trim(); // would throw if no `trim` method existed for `string`

  string; // "a";


 Where .= means assign the result of Property Accessors Runtime
 Semantics Evaluation with lval and rval in appropriate positions (TBH, I'm
 sure I missed something in there)


 Rick


- Jussi
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Assignment to method invocation result

2014-05-16 Thread Jussi Kalliokoski
On 16 May 2014 15:08, Rick Waldron waldron.r...@gmail.com wrote:




 On Fri, May 16, 2014 at 1:55 AM, Tab Atkins Jr. jackalm...@gmail.com
wrote:

 On Thu, May 15, 2014 at 8:47 PM, Rick Waldron waldron.r...@gmail.com
wrote:
  I imagined .= would do both, but I don't think my suggestion should be
taken
  seriously. In fact, your example illustrates a major flaw (that exists
in
  either proposal/suggestion), that I don't immediately know how I would
  answer:
 
  var o = { foo: "bar" };
  o .= foo;

  Is `o` now a string with the value "bar"? I think that would cause
 more problems than it's worth.

 Yes, that's exactly what it would do.  This sort of pattern is even
 reasonably common when doing tree-walking, for example: you see a lot
 of node = node.left; or whatnot.


 In your example, is it safe to assume that `node.left` is a node? I'm
familiar with this precedent on a daily basis ;) It was the "object becomes
a string" behaviour that I was objecting to.

I'm not a fan of objects becoming strings either, but that problem exists
with methods as well:

var foo = ["bar", "baz"];
foo .= join();

However, I don't think it's a problem introduced by this proposal, nor
addressible by it, but rather by something like guards:

var foo : Array = ["bar", "baz"];
foo .= join(); // Error

- Jussi


 Rick


 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Assignment to method invocation result

2014-05-15 Thread Jussi Kalliokoski
Throughout my history with JS (and pretty much every programming language
I've used) I've always found one thing really awkward: reassigning a
variable to the result of its method.

Say you have code like this:

var foo = function (options) {
  var string = something();

  if ( options.stripWhiteSpace ) {
string = string.trim();
  }

  // do something else here...
};

The same thing applies to pretty much all methods that return a modified
version of the same type of the variable, e.g. replace, filter, map,
concat, etc., you name it.
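For concreteness, the same reassignment dance with a few of those methods — every step repeats the variable name on both sides of the `=`:

```javascript
var words = " a, b ,c ".trim();    // "a, b ,c"
words = words.replace(/\s+/g, ""); // "a,b,c"

var parts = words.split(",");      // ["a", "b", "c"]
parts = parts.filter(function (part) { return part !== "b"; });
parts = parts.map(function (part) { return part.toUpperCase(); });
// parts is now ["A", "C"]
```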

As a comparison let's say we had Number#add() and no operators for the add
method:

var bar = function (firstNumber, secondNumber) {
  var thirdNumber = 5;
  if ( firstNumber > 0 ) {
secondNumber = secondNumber.add(thirdNumber);
  }
  firstNumber = firstNumber.add(secondNumber);
};

whereas with the operators we have the convenience of combining the
operator with the assignment to avoid typing the variable name twice:

var bar = function (firstNumber, secondNumber) {
  var thirdNumber = 5;
  if ( firstNumber > 0 ) {
secondNumber += thirdNumber;
  }
  firstNumber += secondNumber;
};

I don't really know what would be a good solution for this problem, hence I
wanted to share this here if we can figure out a nicer way to do these
kinds of things. The best I can think of is some syntax like this:

var foo = function (options) {
  var string = something();

  if ( options.stripWhiteSpace ) {
string = .trim();
  }

  // do something else here...
};

so basically the operator would be a combination of an assignment followed
by the property-access dot, then the method name and invocation. This could
also allow plain property access, so you could for example say `foo =
.first` or something. The reason I don't like this syntax is that it might
conflict with some other ideas thrown around to replace `with`, e.g.:

using (foo) {
  .remove(x);
  var x = .size;
}

- Jussi
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Rename Number.prototype.clz to Math.clz

2014-01-16 Thread Jussi Kalliokoski
To me this sounds like a good idea; I was actually under the impression
that it would be under Math until I saw the first ES6 draft featuring clz.

Having it under Math not only seems more consistent to me, but also lets
you do nice things like `numbers.map(Math.clz)`.
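For what it's worth, this is where the operation eventually landed: ES6 shipped it as Math.clz32 on the Math object, which composes with array methods exactly as hoped:

```javascript
// Math.clz32 counts the leading zero bits in the 32-bit integer
// representation of its argument:
var counts = [1, 2, 0x80000000].map(Math.clz32);
// counts is [31, 30, 0]
```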

Cheers,
Jussi

On Wed, Jan 15, 2014 at 11:08 PM, Brendan Eich bren...@mozilla.com wrote:

 This is a judgment call, I'm with Jason, I think we should revisit. I'm
 putting it on the TC39 meeting agenda.

 /be

  Allen Wirfs-Brock mailto:al...@wirfs-brock.com
 January 15, 2014 11:26 AM


 So we discussed all that when we made that decision. I understand that
 you disagree but is there any new data that should cause us to reopen an
 issue that was already discussed and decided at a TC39 meeting?

 Allen

 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

 Jason Orendorff mailto:jason.orendo...@gmail.com
 January 15, 2014 11:18 AM

 ES6 adds a clz function, but it's a method of Number.prototype.clz
 rather than Math.clz.

 The rationale for this decision is here (search for clz in the page):
 http://esdiscuss.org/notes/2013-07-25

 Can we reverse this, for users' sake? The pattern in ES1-5 is quite
 strong: math functions go on the Math object.

 The rationale (What if we add a uint64 type?) doesn't seem compelling
 enough to justify the weirdness of the result: we'll have a single
 mathematical operation available only as a Number method, and all
 others available only as Math functions.

 -j

 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

  ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: `String.prototype.contains(regex)`

2013-12-18 Thread Jussi Kalliokoski
On Dec 18, 2013 3:31 PM, Mathias Bynens math...@qiwi.be wrote:

 Both `String.prototype.startsWith` and `String.prototype.endsWith` throw
a `TypeError` if the first argument is a RegExp:

  Throwing an exception if the first argument is a RegExp is specified in
order to allow future editions to define extensions that allow such
argument values.

 However, this is not the case for `String.prototype.contains`, even
though it’s a very similar method. As per the latest ES6 draft,
`String.prototype.contains(regex)` behaves like
`String.prototype.contains(String(regex))`. This seems inconsistent. What’s
the reason for this inconsistency?

Not sure why the inconsistency, but AFAIK with for example WebIDL, if you
say something accepts a string, it accepts pretty much anything that can be
coerced to a string, so the case of `contains` may come from there.
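The coercion behaviour is observable with the methods as they eventually shipped (in the final ES6 spec, `includes` — the renamed `contains` — was brought in line with the others, and all three throw for RegExp arguments):

```javascript
// Anything that isn't a RegExp is coerced with ToString:
var coerced = "null and void".startsWith({
  toString: function () { return "null"; }
});

// RegExp arguments throw, reserving the behaviour for future extensions:
var threw = false;
try {
  "abc".startsWith(/a/);
} catch (e) {
  threw = e instanceof TypeError;
}
```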

 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: 'function *' is not mandatory

2013-09-02 Thread Jussi Kalliokoski
On Mon, Sep 2, 2013 at 8:47 AM, Brendan Eich bren...@mozilla.com wrote:

 Jussi Kalliokoski jussi.kallioko...@gmail.com
 September 1, 2013 10:03 PM



 No, 'function' is not a reserved word in C/++, who said it was? I'm saying
 `function *myGenerator` looks a lot like `type *identifier`.


 We're going in circles: I'm saying (I said) take off the C/C++ hat.

 JS's future syntax is *not* under a
 do-not-mimic-C/C++-ignoring-keywords-and-semantics constraint.


Of course not. That doesn't mean we should ignore what other common
languages do with the same syntax. And at best, with my C/++ skills, it's
more like a kippah than a real hat. ^^


  Like I asked, why does it have to be the star?


 Yes, it has to be star. This is not the hill you want to die on,
 metaphorically speaking. TC39 reached consensus based on a
 championed proposal. Re-opening this minor design decision needs
 strong justification. Trying to avoid looking like C or C++ at a
 glance is not strong justification.


 I'm aware of the decision, the rationale behind it is what I'm looking
 for. Or was star picked just because?


 No, think about yield* for delegation. We want consonance, reuse of
 generator sigil.


Googling "generator sigil" yields online magic symbol generators... Not
what I was looking for, I think. :) Is the star after yield any less
arbitrary?


 Your comments on the grammar show continued informality and lack of
 familiarity with parsing theory in general, and the ECMA-262 grammar
 formalisms in particular.


They probably do since I am not familiar with parsing theory in general,
but I probably represent a large portion of the community with that
disability (which I hope to repair at some point). I wouldn't ask the
questions if I knew the answers, now would I. To me this seems just like
arbitrary limitations where it's somehow impossible to add an if statement
in the parser.

We don't want to split 'generator' out from Identifier, and special-case
 its syntax in PrimaryExpressions (which must be covered by a cover-grammar
 that also covers destructuring). We don't have a convenient formalism for
 such special-casing.


All right, so it would be a much larger effort to specify it?


 What's more, special-casing 'generator' is more than a spec and
 implementation front-end complexity tax. It makes a longer and more complex
 path dependency for any future syntax extension nearby (e.g., Mark's
 function! for async function syntax, proposed for ES7 -- controversial, and
 perhaps ^ is better than !, but in any case this is easy to do given
 function*, not so easy to do given generator|function syntax). Special
 cases are a smell.


I agree with special cases being a smell, but hacks are a smell too (hacks
lead to the need for special cases), and `function *` is definitely a hack
around the limitations we have (arbitrary or not). So, two smelly things;
the difference is that only one of them conveys the intended meaning at
first glance, but only one of them has been agreed upon.


 Finally, no way will we lumber everyone writing generators with a
 double-keyword 'generator function'. That's too much on several dimensions.
 TC39's consensus for function* is worth keeping, over against your
 inability to take off the C/C++ hat. Of this I am certain, even if the * is
 somewhat arbitrary (but see above about yield*).


I also realized a bit after sending the message that it wouldn't even avoid
the [no LineTerminator here] problem, sorry.


 So at this point, to be frank and with the best intentions, I think you're
 being a sore loser ;-). It's no fun to lose, I've had to learn to cope over
 the years big-time (JS in its rushed glory; ES4; etc.). This '*' is very
 small potatoes, as the saying goes.


Don't worry, I know quite well how to lose, in fact I consider losing as
one of the things I'm best at. But when I'm proven wrong, I want to
understand why I was wrong. Where you are now seeing a sore loss, I see a
triumph in learning new things. ;)

However I see this is a sunk cost and beating a dead horse, so I rest my
case (that's not to say I wouldn't appreciate my questions getting
answered). Sorry for making you repeat yourself.

Cheers,
Jussi


 /be

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: 'function *' is not mandatory

2013-09-02 Thread Jussi Kalliokoski
On Mon, Sep 2, 2013 at 10:49 AM, Brendan Eich bren...@mozilla.com wrote:

 Jussi Kalliokoski wrote:

 Is the star after yield any less arbitrary?


 Not much -- it conjures Kleene's star, worth something (not much? enough!).


Fair enough, I suppose Kleene's star with its "zero or more" semantics
comes close enough, but in that case, shouldn't the star be a suffix to the
value rather than a prefix (like we already have with RegExp)? ;)


  Your comments on the grammar show continued informality and lack
 of familiarity with parsing theory in general, and the ECMA-262
 grammar formalisms in particular.


 They probably do since I am not familiar with parsing theory in general,
 but I probably represent a large portion of the community with that
 disability (which I hope to repair at some point). I wouldn't ask the
 questions if I knew the answers, now would I. To me this seems just like
 arbitrary limitations where it's somehow impossible to add an if statement
 in the parser.


 We don't want to mess around with ambiguity. The smell of a suffix
 character for function is strictly less than the smell of a sub-grammar for
 'generator' distinct from other identifiers, with newline sensitivity.


All right, although in my head it's better for the parser to have to bear
the bad smell than the people writing and reading the code. But it seems
the general course with ES currently is to keep the smell at the end
programmers' side rather than the parser's, and to let compile-to-JS
languages fix the resulting smell. :)



 We don't want to split 'generator' out from Identifier, and
 special-case its syntax in PrimaryExpressions (which must be
 covered by a cover-grammar that also covers destructuring). We
 don't have a convenient formalism for such special-casing.


 All right, so it would be a much larger effort to specify it?


 The work involves splitting Identifier :: IdentifierName but not
 ReservedWord into 'generator' | IdentifierNotGenerator and adding
 IdentifierNotGenerator :: IdentifierName but not ('generator' or
 ReservedWord), then splitting Identifier uses and coping with the cover
 grammar complexity. I haven't done the work to make sure it's sound. Much
 bigger fish to fry, rotten smell already.


Fair enough. If the result value < effort value, there's not much to do.


 /be

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: 'function *' is not mandatory

2013-09-01 Thread Jussi Kalliokoski
Sorry if this has been brought up before, but why `function *` (which looks
like a pointer to a function more than a generator) instead of something
clearer, e.g. `generator myGenerator () {}`? I see the obvious ASI hazard,
but this can be mitigated by not allowing unnamed generators, e.g. `{
myGenerator: generator _ () {} } `, which would not at least be worse than
the current syntax (star required, dummy identifier required, does it make
a difference?), but would be more descriptive. If you see something like
this for the first time, instead of going wtf, pointer?!, you understand
that it is a generator, and if have no prior experience of generators in
other languages can google the concept.

If a more descriptive keyword sounds like a no-no (rationale appreciated),
please let's at least consider using a different operator than the star.

Cheers,
Jussi


On Sun, Sep 1, 2013 at 8:46 AM, François REMY francois.remy@outlook.com
 wrote:

   I've come to learn the TC39 committee members
   usually have good ideas even if they seem bad initially.
   I hope this is the case again this time...
 
  That is nice to hear, and quite a track record to live up to.
  On behalf of all TC39 if I may, thanks.

 Well, I don't think I deserve such thanks just for stating my trust in
 this group, but I can get how it must feel good to hear in the sea of
 complaints that you're probably used to receiving ;-) It's the same story
 for any group, for what it's worth. People easily notice what's wrong, and
 consider all the goodness as granted. That's how humans are made, and how
 we progress and avoid regression.


  However, I cannot honestly leave you to expect this to
  happen again in this case. I think we've stated the case
  for function* as clearly as we're going to.
 
  It is a tradeoff.

 My gut tells me we're running out of such tradeoffs in JS at speed of
 light recently. There must be another way. And if such way exists, we shall
 find it.
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: 'function *' is not mandatory

2013-09-01 Thread Jussi Kalliokoski
On Sun, Sep 1, 2013 at 7:56 PM, Brendan Eich bren...@mozilla.com wrote:

 Jussi Kalliokoski jussi.kallioko...@gmail.com
 September 1, 2013 5:38 AM

 Sorry if this has been brought up before,


 It has, even in this thread.


My apologies! Ran a quick scan with my eyes and a find for `generator (`
and `generator(` to no results so decided to bring this up after
contemplating it for quite a while now since the topic is relevant.


  but why `function *` (which looks like a pointer to a function more than
 a generator


 This is JS, please take off your C/C++ hat :-P.


Sure, but let's not ignore that this syntax already has a special meaning
in the language family JS syntax is heavily based on. Like I asked, why
does it have to be the star? Why not tilde? Or plus? Why take something
that already has a completely different meaning in other languages?

 ) instead of something clearer, e.g. `generator myGenerator () {}`? I see
 the obvious ASI hazard, but this can be mitigated by not allowing unnamed
 generators, e.g. `{ myGenerator: generator _ () {} } `,


This doesn't work in general, it is backward-incompatible. If someone bound
 or assigned to generator in-scope, your proposed change could break
 compatibility -- or else require parsing to depend on name binding.

 foo = generator
 bar()
 {}

 Remember, if there wasn't an error, ASI doesn't apply. Trying to patch
 this bad theory with a [no LineTerminator here] restriction to the right of
 'generator' does not work in the grammar without reserving 'generator' --
 we can't put that restriction to the right of every Identifier on the right
 of every expression production.


Can you elaborate on this, please, I'm confused? Why can't we restrict the
syntax? Unrestricted syntax is why we are having this discussion in the
first place. What's the negative effect of reserving 'generator'? In my
opinion the parser saying "ohmigod" and backtracking a few cycles when
hitting `generator` is a lot better than humans having to read what we are
going for now. After all, what's the point of programming languages aside
from readability? How come it's not OK to disallow the syntax in your
example from being a valid generator, regardless of whether `generator` is
defined or not? What am I missing?

I'm sorry if I'm asking stupid questions, but the only stupid question is
the one left unasked.

Cheers,
Jussi


 Please stamp this on all inner eyelids so I don't have to repeat it ad
 nauseum. Thanks!

 /be


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: 'function *' is not mandatory

2013-09-01 Thread Jussi Kalliokoski
On Mon, Sep 2, 2013 at 12:09 AM, Brendan Eich bren...@mozilla.com wrote:

 but why `function *` (which looks like a pointer to a function
 more than a generator


 This is JS, please take off your C/C++ hat :-P.


 Sure, but let's not ignore that this syntax already has a special meaning
 in the language family JS syntax is heavily based on.


 What syntax? 'function' is not a reserved word in C or C++. It is in AWK,
 which is arguably in the C family. Sorry, but JS syntax is not bound to
 evolve only in ways that are disjoint from keyword-free reinterpretation as
 C or C++.


No, 'function' is not a reserved word in C/++, who said it was? I'm saying
`function *myGenerator` looks a lot like `type *identifier`.


  Like I asked, why does it have to be the star?


 Yes, it has to be star. This is not the hill you want to die on,
 metaphorically speaking. TC39 reached consensus based on a championed
 proposal. Re-opening this minor design decision needs strong justification.
 Trying to avoid looking like C or C++ at a glance is not strong
 justification.


I'm aware of the decision, the rationale behind it is what I'm looking for.
Or was star picked just because? I suppose it looking confusing to me
doesn't qualify as a strong justification to revert a decision like this.

I'm obviously not going to convince you that the star is a bad idea?


  Why not tilde? Or plus? Why take something that already has a completely
 different meaning in other languages?


 Rust uses tilde for unique references / linear types. Someone has to pay.


Exclamation mark? Percentage symbol? There's a dozen other punctuation
options, maybe one of them would make it look less like something
completely different in one of the most popular programming language
families.



 ) instead of something clearer, e.g. `generator myGenerator ()
 {}`? I see the obvious ASI hazard, but this can be mitigated by
 not allowing unnamed generators, e.g. `{ myGenerator: generator _
 () {} } `,


 This doesn't work in general, it is backward-incompatible. If
 someone bound or assigned to generator in-scope, your proposed
 change could break compatibility -- or else require parsing to
 depend on name binding.

 foo = generator
 bar()
 {}

 Remember, if there wasn't an error, ASI doesn't apply. Trying to
 patch this bad theory with a [no LineTerminator here]
 restriction to the right of 'generator' does not work in the
 grammar without reserving 'generator' -- we can't put that
 restriction to the right of every Identifier on the right of every
 expression production.


 Can you elaborate on this, please? I'm confused. Why can't we restrict
 the syntax?


 You have to propose exactly *how* you want to restrict the syntax.

 As I just wrote, we cannot restrict all productions with Identifier in
 their right-hand sides to forbid line terminators after. Any restriction
 would be word-sensitive and require 'generator' to be reserved. Then the
 problem becomes backward incompatibility of the kind I showed. To avoid
 breaking such code (and other cases not yet thought of) requires a
 grammatical restriction of some sort. What sort?


So `generator [no LineTerminator here] BindingIdentifier [no LineTerminator
here] ( FormalParameters ) { FunctionBody }` couldn't be done? The parser
can't be word-sensitive without flagging that word as a keyword? Sounds
like an arbitrary limitation that should be fixed.
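The hazard Brendan describes can be seen in code that is already valid today; a minimal runnable sketch (the names `generator` and `bar` are placeholders, not real APIs):

```javascript
// This is valid ES5 right now, which is exactly why a bare `generator`
// keyword-free form can't be introduced without reserving the word.
var generator = function (x) { return x; };
var bar = function () { return 42; };

var foo = generator
(bar())

// No parse error occurred, so ASI never applied: the two lines parse as
// the single call expression generator(bar()).
console.log(foo); // 42
```

Because the second line can legally continue the expression, the parser cannot treat `generator` as starting a declaration without breaking existing programs.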

Even `generator function BindingIdentifieropt ( FormalParameters ) {
FunctionBody }` would be better at describing what it does than what we
have.
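For reference, the syntax that ultimately shipped in ES2015 is the starred form; a minimal sketch of how it behaves:

```javascript
// ES2015 generator declaration: the star marks the function as a generator,
// and each yield suspends execution until the next .next() call.
function* counter(limit) {
  for (var i = 0; i < limit; i++) {
    yield i;
  }
}

var it = counter(3);
console.log(it.next().value); // 0
console.log(it.next().value); // 1
console.log(it.next().value); // 2
console.log(it.next().done);  // true
```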


  Unrestricted syntax is why we are having this discussion in the first
 place. What's the negative effect of reserving 'generator'? In my opinion
 the parser saying ohmigod


 Please stop informal rambling and specify exactly what you mean.

 /be



Object literal syntax for dynamic keys

2013-07-19 Thread Jussi Kalliokoski
Hi everyone,

This is almost purely a syntactic sugar proposal, and if there's already a
proposal like this being/been discussed, please point me that way, I
couldn't find anything. Anyway, the use case is as follows:

You have keys that come, for example from a config file or are defined as
constants, and you want to create objects with those keys. What you
currently have to do is to use a mix of object notation syntax and
assignment:

myObject = {
  someKey: 'someValue'
};
myObject[MY_KEY_ID] = 'my value';

This is usually ok, but it's a bit confusing to read especially if the
object is long and you try to find the definition inside the brackets but
can't. Also, as far as I know, `someKey` and `MY_KEY_ID` end up being
defined differently; I doubt that the latter matters much, but what I have
in mind would fix both things:

myObject = {
  someKey: 'someValue',
  [MY_KEY_ID]: 'my value'
};

So basically you could use brackets to signify that the key is the value of
an expression inside the brackets.

Cheers,
Jussi


Re: Array.prototype.find

2013-06-07 Thread Jussi Kalliokoski
Oops, my bad,  sorry about that.
On Jun 7, 2013 3:37 AM, Axel Rauschmayer a...@rauschma.de wrote:

 Note: the proposed new parameter is `returnValue`, not `thisArg` (which
 has already been decided on).

 On Jun 6, 2013, at 22:20 , Jussi Kalliokoski jussi.kallioko...@gmail.com
 wrote:

 What would be the use case for this that isn't covered with
 Function#bind() or arrow functions?

 Cheers,
 Jussi

 On Wed, May 29, 2013 at 5:50 AM, Axel Rauschmayer a...@rauschma.dewrote:

 It might make sense to add a third argument to that method, so that it
 works roughly like this:

 Array.prototype.find = function (predicate, returnValue = undefined,
 thisArg = undefined) {
 var arr = Object(this);
 if (typeof predicate !== 'function') {
 throw new TypeError();
 }
 for (var i = 0; i < arr.length; i++) {
 if (i in arr) {  // skip holes
 var elem = arr[i];
 if (predicate.call(thisArg, elem, i, arr)) {
 return elem;
 }
 }
 }
 return returnValue;
 }

 --
 Dr. Axel Rauschmayer
 a...@rauschma.de

 home: rauschma.de
 twitter: twitter.com/rauschma
 blog: 2ality.com




Re: Array.prototype.find

2013-06-06 Thread Jussi Kalliokoski
What would be the use case for this that isn't covered with Function#bind()
or arrow functions?

Cheers,
Jussi


On Wed, May 29, 2013 at 5:50 AM, Axel Rauschmayer a...@rauschma.de wrote:

 It might make sense to add a third argument to that method, so that it
 works roughly like this:

 Array.prototype.find = function (predicate, returnValue = undefined,
 thisArg = undefined) {
 var arr = Object(this);
 if (typeof predicate !== 'function') {
 throw new TypeError();
 }
 for (var i = 0; i < arr.length; i++) {
 if (i in arr) {  // skip holes
 var elem = arr[i];
 if (predicate.call(thisArg, elem, i, arr)) {
 return elem;
 }
 }
 }
 return returnValue;
 }
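The effect of the proposed `returnValue` argument can be approximated today with a small helper; a sketch (the `findOr` name is made up for illustration, not a real API):

```javascript
// findOr: like Array.prototype.find, but returns `fallback` instead of
// undefined when no element satisfies the predicate.
function findOr(arr, predicate, fallback) {
  for (var i = 0; i < arr.length; i++) {
    if (i in arr && predicate(arr[i], i, arr)) { // skips holes like find does
      return arr[i];
    }
  }
  return fallback;
}

var isEven = function (x) { return x % 2 === 0; };
console.log(findOr([1, 3, 5], isEven, -1)); // -1 (no match, fallback used)
console.log(findOr([1, 4, 5], isEven, -1)); // 4
```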

 --
 Dr. Axel Rauschmayer
 a...@rauschma.de

 home: rauschma.de
 twitter: twitter.com/rauschma
 blog: 2ality.com


 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss




Re: Observability of NaN distinctions — is this a concern?

2013-03-27 Thread Jussi Kalliokoski
On Tue, Mar 26, 2013 at 10:29 AM, Oliver Hunt oli...@apple.com wrote:


 On Mar 26, 2013, at 9:12 PM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:

 That's just because ES has had no notion of bits for floating points
 before. Other than that, ES NaN works like IEEE NaN, e.g.

 0/0 === NaN // false
 isNaN(0/0) // true


 That's true in any language - comparing to NaN is almost always defined
 explicitly as producing false.  You're not looking at bit patterns, here so
 conflating NaN compares with bit values is kind of pointless.











 We need to stop raising this causes performance problems type issues
 without a concrete example of that problem.  I remember having to work very
 hard to stop WebGL from being a gaping security hole in the first place and
 it's disappointing to see these same issues being re-raised in a different
 forum to try and get them bypassed here.


 Before saying security hole, please elaborate. Also, when it comes to
 standards, I think change should be justified with data, rather than the
 other way around.


 Done.


 You'll have to do better than that. ;)


 Ok, I'll try to go over this again, because for whatever reason it doesn't
 appear to stick:

 If you have a double-typed array, and access a member:
 typedArray[0]

 Then in ES it is a double that can be one of these values: +Infinitity,
 -Infinity, NaN, or a discrete value representable in IEEE double spec.
  There are no signaling NaNs, nor is there any exposure of what the
 underlying bit pattern of the NaN is.

 So the runtime loads this double, and then stores it somewhere, anywhere,
 it doesn't matter where, eg.
 var tmp = typedArray[0];

 Now you store it:
 typedArray[whatever] = tmp;

 The specification must allow a bitwise comparison of typedArray[whatever]
 to typedArray[0] to return false, as it is not possible for any NaN-boxing
 engine to maintain the bit equality that you would otherwise desire, as
 that would be trivially exploitable.  When I say security and correctness I
 mean it in the can't be remotely pwned sense.

 Given that we can't guarantee that the bit pattern will remain unchanged
 the spec should mandate normalizing to the non-signalling NaN.

 --Oliver


It's not trivially exploitable, at least not in SM or V8. I modified the
example Mark made [1] and ran it through js (SpiderMonkey) and node (V8) to
observe some of the differences of how they handle NaN. Neither could be
pwned using the specified method. In V8, the difference is observable only
if you assign the funny NaN directly to the array (i.e. it doesn't go
through a variable or stuff like that). In SM, the difference is more
observable, i.e. the bit pattern gets transferred even if you assign it to
a variable in between, but not observable enough to make pwning possible. Of
course, feel free to fork the gist and show me how it can be exploited. :)

Regardless, as per Dmitry's observations, I don't think the performance hit
can be dismissed, and I doubt it can be optimized away to a level that
could be dismissed.

I think standardizing whatever V8 is doing with NaN right now seems like
the best option.

Cheers,
Jussi

[1] https://gist.github.com/jussi-kalliokoski/5252226
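The kind of observation under discussion can be sketched like this; which exact byte pattern survives a write is engine- and platform-dependent, which is the whole point of the thread:

```javascript
// View the same buffer both as a double and as raw bytes.
var f64 = new Float64Array(1);
var u8 = new Uint8Array(f64.buffer);

f64[0] = 0 / 0; // a NaN produced by arithmetic

// At the language level there is only one NaN value...
console.log(Number.isNaN(f64[0])); // true
console.log(f64[0] === f64[0]);    // false (NaN never equals itself)

// ...but the byte view exposes one specific IEEE 754 bit pattern, and
// whether a non-canonical pattern round-trips through a write is exactly
// what engines differ on.
console.log(Array.prototype.slice.call(u8));
```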





 Cheers,
 Jussi






 Cheers,
 Jussi


 --Oliver

  Allen
 
 
 
 
 
  /be
 
  On Mar 25, 2013, at 4:33 PM, Kenneth Russell k...@google.com wrote:
 
  On Mon, Mar 25, 2013 at 4:23 PM, Brendan Eich bren...@mozilla.com
 wrote:
  Allen Wirfs-Brock wrote:
 
  On Mar 25, 2013, at 4:05 PM, Brendan Eich wrote:
 
  Allen Wirfs-Brock wrote:
 
   BTW, isn't canonicalization of endian-ness for both integers and floats
   a bigger interop issue than NaN canonicalization?  I know this was
   discussed in the past, but it doesn't seem to be covered in the latest
   Khronos spec.  Was there ever a resolution as to whether or not TypedArray
   [[Set]] operations need to use a canonical endian-ness?
 
  Search for byte order at
  https://www.khronos.org/registry/typedarray/specs/latest/.
 
 
   I had already searched for endian with similar results.  It says that the
   default for DataView gets/sets that do not specify a byte order is
   big-endian. It doesn't say anything (that I can find) about such accesses on
   TypedArray gets/sets.
 
 
  Oh, odd -- I recall that it used to say little-endian. Typed
 arrays are LE
  to match dominant architectures, while DataViews are BE to match
 packed
  serialization use-cases.
 
  Ken, did something get edited out?
 
  No. The typed array views (everything except DataView) have used the
  host machine's endianness from day one by design -- although the
 typed
  array spec does not state this explicitly. If desired, text can be
  added to the specification to this effect. Any change in this
 behavior
  will destroy the performance of APIs like WebGL and Web Audio on
  big-endian architectures.
 
  Correctly written code works identically on big-endian and
  little-endian architectures. See
  http://www.html5rocks.com/en

Re: Observability of NaN distinctions — is this a concern?

2013-03-26 Thread Jussi Kalliokoski
On Tue, Mar 26, 2013 at 4:16 AM, Oliver Hunt oli...@apple.com wrote:


 On Mar 26, 2013, at 2:35 PM, Allen Wirfs-Brock al...@wirfs-brock.com
 wrote:

 
  On Mar 25, 2013, at 6:00 PM, Brendan Eich wrote:
 
   Right, thanks for the reminder. It all comes back now, including the
  how to write correct endian-independent typed array code bit.
 
   Ok, so looping back to my earlier observation.  It sounds like
  endian-ness can be observed by writing into a Float64Array element and
  then reading back from a Uint8Array that is backed by the same buffer.  If
  there is agreement that this doesn't represent a significant
  interoperability hazard, can we also agree that not doing NaN
  canonicalization on writes to FloatXArray is an even less significant
  hazard and need not be mandated?
 

 The reason I have pushed for NaN canonicalization is because it means that
 it allows (and in essence requires) that typedArray[n] = typedArray[n] can
 modify the bit value of typedArray[n].

 An implementation _may_ be able to optimize the checks away in some cases,
 but most engines must perform checks on read unless they can prove that
 they were the original source of the value being read.

 Forcing canonicalization simply means that you are guaranteed that a
 certain behavior will occur, and so won't be bitten by some tests changing
 behavior that you may have seen during testing.  I know Ken hates these
 sorts of things, but seriously in the absence of a real benchmark, that
 shows catastrophic performance degradation due to this, simply saying this
 is extra work that will burn cpu cycles without evidence is a waste of
 time.


I also disagree with you.


 Also there is absolutely no case in which abstract performance concerns
 should ever outweigh absolute security and correctness bugs.


Could you elaborate on the security part? I doubt NaN distinctions can
really be of any significant use for fingerprinting, etc.

So far I've yet to come across any unexpected bugs caused by this, maybe
you have examples? NaN is usually a non-desired value so if you write a NaN
you probably had a bug in the first place.

And about correctness, by definition NaN is a category, not a value; by
definition a NaN value is not the same as another NaN value. If you want to
canonicalize NaN, my suggestion is IEEE, not ES-discuss. ;)


 We need to stop raising this causes performance problems type issues
 without a concrete example of that problem.  I remember having to work very
 hard to stop WebGL from being a gaping security hole in the first place and
 it's disappointing to see these same issues being re-raised in a different
 forum to try and get them bypassed here.


Before saying security hole, please elaborate. Also, when it comes to
standards, I think change should be justified with data, rather than the
other way around.

Cheers,
Jussi


 --Oliver

  Allen
 
 
 
 
 
  /be
 
  On Mar 25, 2013, at 4:33 PM, Kenneth Russell k...@google.com wrote:
 
  On Mon, Mar 25, 2013 at 4:23 PM, Brendan Eich bren...@mozilla.com
 wrote:
  Allen Wirfs-Brock wrote:
 
  On Mar 25, 2013, at 4:05 PM, Brendan Eich wrote:
 
  Allen Wirfs-Brock wrote:
 
   BTW, isn't canonicalization of endian-ness for both integers and floats
   a bigger interop issue than NaN canonicalization?  I know this was
   discussed in the past, but it doesn't seem to be covered in the latest
   Khronos spec.  Was there ever a resolution as to whether or not TypedArray
   [[Set]] operations need to use a canonical endian-ness?
 
  Search for byte order at
  https://www.khronos.org/registry/typedarray/specs/latest/.
 
 
   I had already searched for endian with similar results.  It says that the
   default for DataView gets/sets that do not specify a byte order is
   big-endian. It doesn't say anything (that I can find) about such accesses on
   TypedArray gets/sets.
 
 
  Oh, odd -- I recall that it used to say little-endian. Typed arrays
 are LE
  to match dominant architectures, while DataViews are BE to match
 packed
  serialization use-cases.
 
  Ken, did something get edited out?
 
  No. The typed array views (everything except DataView) have used the
  host machine's endianness from day one by design -- although the typed
  array spec does not state this explicitly. If desired, text can be
  added to the specification to this effect. Any change in this behavior
  will destroy the performance of APIs like WebGL and Web Audio on
  big-endian architectures.
 
  Correctly written code works identically on big-endian and
  little-endian architectures. See
  http://www.html5rocks.com/en/tutorials/webgl/typed_arrays/ for a
  detailed description of the usage of the APIs.
 
  DataView, which is designed for input/output, operates on data with a
  specified endianness.
 
  -Ken
  ___
  es-discuss mailing list
  es-discuss@mozilla.org
  https://mail.mozilla.org/listinfo/es-discuss

Re: Observability of NaN distinctions — is this a concern?

2013-03-26 Thread Jussi Kalliokoski
On Tue, Mar 26, 2013 at 9:54 AM, Mark S. Miller erig...@google.com wrote:




 On Tue, Mar 26, 2013 at 6:40 AM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:


 On Tue, Mar 26, 2013 at 4:16 AM, Oliver Hunt oli...@apple.com wrote:


 On Mar 26, 2013, at 2:35 PM, Allen Wirfs-Brock al...@wirfs-brock.com
 wrote:

 
  On Mar 25, 2013, at 6:00 PM, Brendan Eich wrote:
 
   Right, thanks for the reminder. It all comes back now, including the
  how to write correct endian-independent typed array code bit.
  
   Ok, so looping back to my earlier observation.  It sounds like
  endian-ness can be observed by writing into a Float64Array element and
  then reading back from a Uint8Array that is backed by the same buffer.  If
  there is agreement that this doesn't represent a significant
  interoperability hazard, can we also agree that not doing NaN
  canonicalization on writes to FloatXArray is an even less significant
  hazard and need not be mandated?
 

 The reason I have pushed for NaN canonicalization is because it means
 that it allows (and in essence requires) that typedArray[n] = typedArray[n]
 can modify the bit value of typedArray[n].

 An implementation _may_ be able to optimize the checks away in some
 cases, but most engines must perform checks on read unless they can prove
 that they were the original source of the value being read.

 Forcing canonicalization simply means that you are guaranteed that a
 certain behavior will occur, and so won't be bitten by some tests changing
 behavior that you may have seen during testing.  I know Ken hates these
 sorts of things, but seriously in the absence of a real benchmark, that
 shows catastrophic performance degradation due to this, simply saying this
 is extra work that will burn cpu cycles without evidence is a waste of
 time.


 I also disagree with you.


 Also there is absolutely no case in which abstract performance concerns
 should ever outweigh absolute security and correctness bugs.


 Could you elaborate on the security part? I doubt NaN distinctions can
 really be of any significant use for fingerprinting, etc.



 In SES, Alice says:

 var bob = confinedEval(bobSrc);

 var carol = confinedEval(carolSrc);

 // At this point, bob and carol should be unable to communicate with
 // each other, and are in fact completely isolated from each other
 // except that Alice holds a reference to both.
 // See http://www.youtube.com/watch?v=w9hHHvhZ_HY start
 // at about 44 minutes in.

 var shouldBeImmutable = Object.freeze(Object.create(null, {foo:
 {value: NaN}}));

 bob(shouldBeImmutable);

 carol(shouldBeImmutable);

 // Alice, by sharing this object with bob and carol, should still be
 able
 // to assume that they are isolated from each other

 Bob says:

 var FunnyNaN = // expression creating NaN with non-canonical internal
 rep
 // on this platform, perhaps created by doing funny typed array tricks

  if (wantToCommunicate1bitToCarol) {
    Object.defineProperty(shouldBeImmutable, 'foo', {value: FunnyNaN});
  }

  // The [[DefineProperty]] algorithm is allowed to overwrite
  // shouldBeImmutable.foo with FunnyNaN, since it passes the SameValue check.

  Carol says:

  if (isNaNFunny(shouldBeImmutable.foo)) {
  // where isNaNFunny uses typed array tricks to detect whether its argument
  // has a non-canonical rep on this platform
  }


The NaN distinction is only observable in the byte array, not if you
extract the value, because at that point it becomes an ES NaN value, so
that example is invalid.





 So far I've yet to come across any unexpected bugs caused by this, maybe
 you have examples? NaN is usually a non-desired value so if you write a NaN
 you probably had a bug in the first place.

 And about correctness, by definition NaN is a category, not a value; by
 definition a NaN value is not the same as another NaN value. If you want to
 canonicalize NaN, my suggestion is IEEE, not ES-discuss. ;)


 You're confusing IEEE NaN with ES NaN. In ES, NaN is a value, not a bit
 pattern. In IEEE, NaN is a family of bit patterns. Typed arrays make us face
 the issue of what IEEE NaN bit pattern an ES NaN value converts to.


That's just because ES has had no notion of bits for floating points
before. Other than that, ES NaN works like IEEE NaN, e.g.

0/0 === NaN // false
isNaN(0/0) // true








 We need to stop raising this causes performance problems type issues
 without a concrete example of that problem.  I remember having to work very
 hard to stop WebGL from being a gaping security hole in the first place and
 it's disappointing to see these same issues being re-raised in a different
 forum to try and get them bypassed here.


 Before saying security hole, please elaborate. Also, when it comes to
 standards, I think change should be justified with data, rather than the
 other way around.


 Done.


You'll have to do better than that. ;)

Cheers,
Jussi






 Cheers,
 Jussi

Re: What is the status of Weak References?

2013-03-02 Thread Jussi Kalliokoski
On Sat, Mar 2, 2013 at 6:11 AM, Kevin Gadd kevin.g...@gmail.com wrote:

 I don't understand how the requestAnimationFrame approach (to
 registering periodic callbacks) applies to scenarios where you want
 Weak References (for lifetime management) or to observe an object (for
 notifications in response to actions by other arbitrary code that has
 a reference to an object). These seem to be significantly different
 problems with different constraints.

 If anything, requestAnimationFrame is an example of an API that poorly
 expresses developer intent. It is rare for someone to actually only
 ever want to render a single animation frame; furthermore most
 animation scenarios in fact require rendering a series of frames on
 consistent timing. Furthermore, the need to manually trigger further
 frame callbacks is error-prone - you are essentially offloading the
 cost of lifetime management onto the application developer, by making
 them manually manage the lifetime of their callback on an ongoing
 basis by having to remember to say 'please keep my animation alive' at
 the right time every frame no matter what, which probably means a try
 block and auditing their rAF callback to ensure that all exit paths
 call rAF again. I suspect that if you were to look at most
 applications that use rAF, you would find very few of them
 intentionally stop running animation frames in any scenario other than
 the termination of the application. For this and other reasons, I
 would suggest that it is a horrible idea to use rAF as an example of
 how to design an API or solve developer problems - especially problems
 as important as those addressed by weak references.


One positive aspect about the rAF approach is that if an error occurs in
the callback, the animation will stop instead of potentially leading the
application into an inconsistent state and flooding the console, making
debugging more painful. That said, I hardly ever use rAF directly, but
instead usually a wrapper library that handles animation continuity, fps
calculation etc. Perhaps rAF was meant to be consumed as a low-level API by
animation libraries.
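The wrapper pattern described above can be sketched with an injectable scheduler; the `animationLoop` helper is hypothetical (not any particular library's API), and in a browser the scheduler argument would be `requestAnimationFrame`:

```javascript
// animationLoop re-registers the callback each frame and returns a stop
// function, so the caller never has to call the scheduler by hand -- and an
// exception in the callback stops the loop instead of flooding the console.
function animationLoop(callback, schedule) {
  var running = true;
  function frame(time) {
    if (!running) return;
    callback(time);   // if this throws, frame is never re-scheduled
    schedule(frame);
  }
  schedule(frame);
  return function stop() { running = false; };
}

// Synchronous fake scheduler for demonstration purposes.
var queue = [];
var frames = 0;
var stop = animationLoop(function () { frames++; },
                         function (cb) { queue.push(cb); });
while (queue.length && frames < 5) {
  queue.shift()(frames * 16);
}
stop();
while (queue.length) queue.shift()(0); // a stopped loop ignores further ticks
console.log(frames); // 5
```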

Cheers,
Jussi



 -kg

 On Sat, Mar 2, 2013 at 2:58 AM, David Bruant bruan...@gmail.com wrote:
  Le 02/03/2013 01:58, Rafael Weinstein a écrit :
 
  On Sat, Feb 2, 2013 at 11:02 AM, Brendan Eich bren...@mozilla.com
 wrote:
 
  David Bruant wrote:
 
  Interestingly, revocable proxies require their creator to think to the
  lifecycle of the object to the point where they know when the object
  shouldn't be used anymore by whoever they shared the proxy with. I
 feel
  this
  is the exact same reflections that is needed to understand when an
  object
  isn't needed anymore within a trust boundary... seriously questioning
  the
  need for weak references.
 
 
  Sorry, but this is naive. Real systems such as COM, XPCOM, Java, and C#
  support weak references for good reasons. One cannot do data binding
  transparently without either making a leak or requiring manual dispose
  (or
  polling hacks), precisely because the lifecycle of the model and view
  data
  are not known to one another, and should not be coupled.
 
  See http://wiki.ecmascript.org/doku.php?id=strawman:weak_refs intro,
 on
  the
  observer and publish-subscribe patterns.
 
  This is exactly right.
 
 
  I'm preparing an implementation report on Object.observe for the next
  meeting, and in it I'll include findings from producing a general
  purpose observation library which uses Object.observe as a primitive
  and exposes the kind of semantics that databinding patterns are likely
  to need.
 
  Without WeakRefs, observation will require a dispose() step in order
  to allow garbage collection of observed objects, which is (obviously)
  very far from ideal.
 
  There is another approach taken by the requestAnimationFrame API that
  consists in one-time event listeners (Node.js also has that concept too
  [1]), requiring to re-subscribe if one wants to listen more than once.
  I wonder why this approach has been taken for requestAnimationFrame
 which is
  fired relatively often (60 times a second). I'll ask on public-webapps.
  I won't say it's absolutely better than WeakRefs and it may not apply to
 the
  data binding case (?), but it's an interesting pattern to keep in mind.
 
  I'm looking forward to reading your findings in the meeting notes.
 
  David
 
  [1] http://nodejs.org/api/events.html#events_emitter_once_event_listener
 
  ___
  es-discuss mailing list
  es-discuss@mozilla.org
  https://mail.mozilla.org/listinfo/es-discuss
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss



Re: Can JS Algorithm object finally solve html5 codecs gridlock and move us forward?

2013-01-19 Thread Jussi Kalliokoski
Hi Ladislav

On Fri, Jan 18, 2013 at 11:51 PM, neuralll neura...@gmail.com wrote:
 Hi guys,
 I am playing with the Web Audio API and the Mozilla Audio API, but I got
 frustrated by so much HTML5 potential being wasted due to the endless codec
 support deadlock in browsers, holding progress back for so long.
 This has even resulted in people writing codecs purely in JS.
 Unfortunately, performance is obviously slow and limited to browsers that
 allow raw data from JS, and slow mainly because of Huffman and IMDCT.
 But looking at block diagrams of pretty much any widespread codec, like
 Theora, Ogg, MPEG, AAC, or MP4, it struck me:

 they pretty much share the same building blocks, and those blocks themselves
 are not that much patent-encumbered.

 So why not just introduce a new Algorithm object in JS
 and register huffman, idct, imdct, and filterbank in it? For browsers it's an
 easy task: they just expose the four or so functions that are already heavily
 HW-optimized inside them.

I've been trying to address the performance bottlenecks for signal
processing on the web platform a lot lately. Imho, you're on the right
track, approaching this from the angle of adding basic building blocks
rather than complete solutions.

One significant addition for ES6 is Number#clz(), which is currently a
rather significant bottleneck in our codecs.

Aside from that, at Audio WG [1] we're working on a DSP API [2] that
in best case scenarios will give you very close to (or sometimes even
better than) native performance for vector processing. Currently the
DSP API includes the DSP interface (add, sub, mul, div, ramp, etc),
the FFT interface and a Filter interface that lets you specify
arbitrary coefficients, optimizing the algorithms under the hood based
on whether you have a biquad filter or long convolution or whatever.

It might be worth adding DCT/MDCT to the DSP API, but actually I
wonder if at least DCT can just be made a special case of FFT without
being very wasteful. This needs some thinking. Anyway, would be great
to get some implementer interest for the DSP API. Current
implementations include a JS polyfill, my node C++ module
implementation ('dsp' on npm) and a partial SpiderMonkey prototype
implementation by Jens Nockert.
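The DSP interface operations mentioned (add, sub, mul, div, …) are element-wise vector primitives; their shape in plain JS is roughly the following, and this hot loop is exactly what a native, SIMD-backed implementation could run many times faster. The `mul(dst, a, b)` signature here is only illustrative, not the proposal's exact API:

```javascript
// Element-wise multiply of two Float32Arrays into a destination array.
function mul(dst, a, b) {
  for (var i = 0; i < dst.length; i++) {
    dst[i] = a[i] * b[i];
  }
  return dst;
}

var signal = new Float32Array([1, 2, 3, 4]);
var gain = new Float32Array([0.5, 0.5, 0.5, 0.5]);
var out = mul(new Float32Array(4), signal, gain);
console.log(Array.prototype.slice.call(out)); // [0.5, 1, 1.5, 2]
```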

Huffman isn't directly related to signals so that probably doesn't
belong in the DSP API. I'm not convinced it really makes sense to have
a native implementation for it anyway... However, wasn't there some
Archive API in the works or something that might have something
related?

David mentioned parallelization, but at least for audio that
doesn't make much sense, at least unless you're trying to decode all
of a few hours long track into memory as fast as possible, which isn't
a likely scenario. That said, we're moving aurora.js to do the
decoding off the main thread to be less vulnerable to audio interrupts
and blocking the main thread as little as possible, but so far it has
increased our overall CPU usage (as anticipated), because with the
current Audio APIs, we have to transfer the decoded audio to the main
thread for playback. Transferables help significantly, but that
problem has to be solved at the Audio API level.

Cheers,
Jussi

[1] http://www.w3.org/2011/audio/
[2] http://people.opera.com/mage/dspapi/


Re: Creating a filled array with a given length?

2012-12-13 Thread Jussi Kalliokoski
On Wed, Dec 12, 2012 at 6:08 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Dec 12, 2012 at 1:59 AM, Axel Rauschmayer a...@rauschma.dewrote:

 I would still love to have something like that in ES6 (loosely similar to
 String.prototype.repeat). Once you have that, you can e.g. use
 Array.prototype.map to do more things.

 Two possibilities:
 - Array.repeat(undefined, 3) -> [ undefined, undefined, undefined ]
 - [ undefined ].repeat(3) -> [ undefined, undefined, undefined ]

 The same array could be created like this, but that seems too much work
 for a relatively common operation.

 'x'.repeat(3).split('').map(() => undefined)


 Array Comprehensions!

 This is probably wrong, so treat it more like an idea and less like a
 matter of fact:

 [ undefined for x of new Array(3) ].map( v = ... );

 [ undefined for x of [0,0,0] ].map( v = ... );


This should work, too (at least it works in Firefox, if you use a normal
function):

var powersOfTwo = [ ...Array(8) ].map( (v, i) => 1 << i )
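The same table can also be built without spreading an array of holes; `Array.from` (standardized later, in ES2015) takes a mapping function that visits every index of an array-like:

```javascript
// Array.from's second argument maps each index, holes included.
var powersOfTwo = Array.from({ length: 8 }, function (v, i) { return 1 << i; });
console.log(powersOfTwo); // [1, 2, 4, 8, 16, 32, 64, 128]
```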

Cheers,
Jussi


 Rick


 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss




Re: (Map|Set|WeakMap)#set() returns `this` ?

2012-12-06 Thread Jussi Kalliokoski
On Wed, Dec 5, 2012 at 10:43 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Wed, Dec 5, 2012 at 3:26 PM, Domenic Denicola 
 dome...@domenicdenicola.com wrote:

 Readability or library preference aside, I still think it's bizarre that

 map.set(key, val)

 is analogous to

 (dict[key] = val, dict)

 and not to

 dict[key] = val

 When I'm using a fluent library like jQuery or a configuration DSL like
 those in the npm packages surveyed, I can see the attraction of chaining.
 But when I am using a basic primitive of the language, I expect uniformity
 across primitives.


 This argument won't hold when the language doesn't make any such
 uniformity promises, eg.

 array.push(val); // new length
 array[ array.length - 1 ] = val; // val


That's just a bad analogy, because that's not what push does, since it has
a variadic argument (admittedly, I don't think returning the length is
useful anyway, but if it returned the value, should it return the first,
last, or all of them?). And if it comes down to precedents in the language,
even Array#forEach() returns undefined, contrary to popular libraries out
there. Let's keep some consistency here.

I agree with you, fear-driven design is bad. But don't you agree that if
there's chaining, it's better done at language level rather than having all
APIs be polluted by `this` returns? After all, the APIs can't guarantee a
`this` return, since they might have something actually meaningful to
return, otherwise we might as well just replace `undefined` with `this` as
the default return value.

We could introduce mutable primitives so that meaningful return values
could be stored in arguments, kinda like in C, but instead of error values,
we'd be returning `this`, heheheh. :)

I'm curious, do you have any code examples of maps/sets that could be made
clearer by chaining?

Cheers,
Jussi




 Rick





Re: Module Comments

2012-12-06 Thread Jussi Kalliokoski
On Thu, Dec 6, 2012 at 8:25 AM, David Herman dher...@mozilla.com wrote:

 On Dec 5, 2012, at 7:16 PM, Kevin Smith khs4...@gmail.com wrote:

  1) export ExportSpecifierSet (, ExportSpecifierSet)* ;
 
  This rule seems too permissive.  It allows strange combinations like:
 
  export x, { y: a.b.c, z }, { j, k }, * from "bloop";
 
  I think it would be better to flatten it out a little bit:
 
  ExportDeclaration: export { ExportSpecifier (,
 ExportSpecifier)* } ;
  ExportDeclaration: export Identifier (, Identifier)* ;
  ExportDeclaration: export * from StringLiteral ;

 Reasonable point, I'll think about that.

 
  2) Do we need `export *;`?
 
  I don't see the point of exporting every declaration in the current
 module scope.  If the programmer wants to export a bunch of stuff, the
 other forms make that sufficiently easy.  Exporting everything encourages
 bad (de-modular) design.

 Again, reasonable point.

  3) I'm just OK with as.  Note that it inverts the position of the
 string and the binding:
 
  import { x } from "goo";
  import "ga" as ga;
 
  Which makes it harder to read when you have a bunch of imports mixed
 together.  It's harder for the eyes to scan back and forth.

 Yeah, that's the known downside to this syntax. We've been around this
 block so many times, and it's just one of those things that will end up
 with bikeshed colored not in paint but in blood. I'm open to alternative
 suggestions but not to endless discussion (it goes nowhere).

 The alternative of

 import ga = "ga"

 has the problem of looking like it's saying that ga is a string.

 The alternatives of

 import ga = module("ga")

 or

 import ga = (module "ga")

 have the problem of making it look like the RHS is an expression.

 Feel free to suggest alternatives, but forgive me if I'm not willing to
 respond to every opinion on this one. :}


Replace the current `from` form with `of` and use `from` for the single
export:

import go from "ga"; // prev: import "ga" as go
import {a: b, b: a} of "ta"; // prev: import {a: b, b: a} from "ta"

It would even make the common case shorter. :P

Or keep the current `from` form and:

import go off "ga"; // prev: import "ga" as go

Or maybe `in`, but it sounds really weird in English as import has an
implicit in.

Cheers,
Jussi


  4) Why was this form eliminated?
 
  import x from "goo";  // Note lack of curlies!
 
  In an object-oriented application, we're often going to be importing a
 single thing (the class) from external modules.  So we may see long lists
 like this:
 
  import { ClassA } from "ClassA.js";
  import { ClassB } from "ClassB.js";
  ...
  import { ClassZ } from "ClassZ.js";
 
  Removing the curlies for this simple case would seem like a win.

 Another fair point. I think it might've just been a refactoring oversight.

  5) Dynamic exports via `export = ?` could make interop with existing
 module systems easier.  But how does that work?

 Basic semantics:

 - The module has one single anonymous export.
 - Clients can only get access to the export with the as form; there's no
 way to access it via a named export.
 - The value of the export is undefined until the RHS is evaluated.

  6) Adding .js as the default resolution strategy:
 
  I don't think this is tenable.  First, there's the practical issue of
 what happens when someone actually wants to load a resource without a .js
 extension?

 They change the resolution hook.

  Second, there's the philosophical problem of essentially setting up an
 alternate URL scheme.  That's fine for libraries like AMD loaders or YUI.
  But EcmaScript is a cornerstone of the internet.  External resource
 resolution can't conflict with HTML/CSS/etc.

 "I'll wait till it's fleshed out more" didn't last long, eh? ;-) Anyway, I
 don't really understand this argument but do you think there's much value
 to a philosophical debate on es-discuss?

 Dave



Re: (Map|Set|WeakMap)#set() returns `this` ?

2012-12-06 Thread Jussi Kalliokoski
On Thu, Dec 6, 2012 at 8:25 PM, Jussi Kalliokoski 
jussi.kallioko...@gmail.com wrote:

 On Thu, Dec 6, 2012 at 7:32 PM, Rick Waldron waldron.r...@gmail.com wrote:

 Array.prototype.map and Array.prototype.filter return newly created
 arrays and as such, are chainable (and will have the same benefits as I
 described above)

 // map and return a fresh iterable of values
 array.map( v => ... ).values()

 // map and return a fresh iterable of entries (index/value pairs)
 array.filter( v => ... ).entries()


 Of course, but that's pears and apples, .set() doesn't create a new
 instance. And btw, that .values() is redundant.


Wait, sorry about that, wrote before I investigated.





 I agree with you, fear-driven design is bad. But don't you agree that if
 there's chaining, it's better done at language level rather than having all
 APIs be polluted by `this` returns?


 Who said all APIs would return `this`? We specified a clear criteria.


 You're dodging my question: isn't it better for the chaining to be
 supported by the language semantics rather than be injected to APIs in
 order to have support?


 After all, the APIs can't guarantee a `this` return,


 Yes they can, they return what the specification defines them to return.


 What I mean is that not all functions in an API can return `this`
 anyway (like getters), so it's inconsistent. After all, it's not a very
 useful API if you can just set but not get.

 since they might have something actually meaningful to return, otherwise
 we might as well just replace `undefined` with `this` as the default return
 value.


 In the cases I presented, I believe that returning `this` IS the
 meaningful return.


 No, it's a generic return value if it's applied to everything that's not a
 getter.



 We could introduce mutable primitives so that meaningful return values
 could be stored in arguments, kinda like in C, but instead of error values,
 we'd be returning `this`, heheheh. :)

 I'm curious, do you have any code examples of maps/sets that could be
 made clearer by chaining?


 This is incredibly frustrating and indicates to me that you're not
 actually reading this thread, but still find it acceptable to contribute to
 the discussion.

 https://gist.github.com/4219024


 I'm sorry you feel that way, but calm down. I've read the gist all right
 and just read the latest version, and imho it's quite a biased example,
 you're making it seem harder than it actually is. For example, the last
 paragraph:

 ( map.set(key, value), map ).keys();

 // simpler:
 map.set(key, value);
 map.keys();

 ( set.add(value), set ).values();

 // simpler:
 set.add(value);
 set.values();

 ( set.add(value), set ).forEach( val => ... );

 // simpler:
 set.add(value);
 set.forEach( val => ... );

 Why would you need to stuff everything in one line? This way it's more
 version control friendly as well, since those two lines of code have
 actually nothing to do with each other, aside from sharing dealing with the
 same object. Why do you want to get all of those things from .set()/.add(),
 methods which have nothing to do with what you're getting at?

 Cheers,
 Jussi



Re: (Map|Set|WeakMap)#set() returns `this` ?

2012-12-06 Thread Jussi Kalliokoski
On Thu, Dec 6, 2012 at 8:44 PM, Rick Waldron waldron.r...@gmail.com wrote:

 values() returns an iterable of the values in the array. Array, Map and
 Set will receive all three: keys(), values(), entries(). Feel free to start
 a new thread if you want to argue about iterator protocol.


Yes, I apologized for that mistake already, I remembered incorrectly. I
have no desire to argue, just like I'm sure you don't.

I'm absolutely not dodging the question, I answered this in a previous
 message, much earlier. Cascade/monocle/mustache is not a replacement here.


That wasn't the question I asked. Cascade/monocle/mustache aren't even
ready yet, and are hence in no way an indication that chaining cannot be
made a language-side construct. I believe it can and will, and at that
point, returning this becomes completely meaningless. But I don't see how
you can fix this on the language syntax side:

var obj = {
  foo: "bar",
  baz: "taz"
}
set.add(obj)
return set

instead of simply:

return set.add({
  foo: "bar",
  baz: "taz"
})


 What I mean is that not all functions in an API can return `this`
 anyway (like getters), so it's inconsistent. After all, it's not a very
 useful API if you can just set but not get.


 That's exactly my point. The set/add API return this, allowing
 post-mutation operations to be called: such as get or any of the examples
 I've given throughout this thread.


What? I'm really sorry, but I can't understand how what I said leads to
your point. But I bet we're both wasting our time with this part, so it's
probably best to just leave it.


 No one said anything about applying return this to everything that's not
 a getter. That was exactly what the criteria we have consensus on defines.
 It's in the meeting notes for Nov. 29.


Sorry about that, the meeting notes (in the part "Cascading this returns")
just say:

Supporting agreement
(Discussion to determine a criteria for making this API specification
distinction)
Consensus... with the criteria that these methods are not simply a set of
uncoordinated side effects that happen to have a receiver in common, but a
set of coordinated side effects on a specific receiver and providing access
to the target object post-mutation.

With no reference to the logic behind the conclusion ("these methods are
not simply a set of uncoordinated side effects that happen to have a
receiver in common"). I fail to see how .set()/.add() are a special case.
Am I missing something?

Please read everything I've written so far, it's not fair to make me
 constantly repeat myself in this thread.


I agree, and I'm sorry, but I have, at least everything on this thread,
those referred to and those that have seemed related. I'm doing my best,
but I'm afraid I can't keep up with every thread in my inbox, and I don't
think it's a good reason for me not to contribute at all.

Of course I could've shown it as you have here, but I made examples where
 the intention was to match the preceding examples illustrated in the gist.


Fair enough, but I fail to see the convenience in your examples.

 Why would you need to stuff everything in one line?


 As evidenced several times throughout this thread, the pattern is widely
 implemented in the most commonly used library APIs, so I guess the answer
 is "The kids love it."


document.write() is widely implemented too, doesn't make it good or worth
repeating.


 This way it's more version control friendly as well, since those two lines
 of code have actually nothing to do with each other, aside from sharing
 dealing with the same object. Why do you want to get all of those things
 from .set()/.add(), methods which have nothing to do with what you're
 getting at?


 You could just as easily have them on separate lines, but in cases where
 it might be desirable to immediately operate on the result of the mutation,
 chaining the next method call has the net appearance of a single task (if
 that's how a programmer so chooses to express their program).


So it's taste, rather than convenience?

Cheers,
Jussi


Re: (Map|Set|WeakMap)#set() returns `this` ?

2012-12-05 Thread Jussi Kalliokoski
My 2 cents against the windmills...

I personally think returning `this` in absence of any meaningful value (and
chaining in general) is a bad pattern. Chaining leads to worse readability
(there's nothing subjective about this, if you have to scan the code to
another page to figure out which object the code is interacting with, it's
bad readability) and returning this is more future-hostile than returning
nothing. If we in the future come up with something actually useful the
function could return, there's no turning back if it already returns
`this`. For set(), meaningful values include the value that was set, a
boolean whether the value was added or replaced, etc., whereas `this` is
meaningless since you have it already. Returning `this` in absence of
meaningful values is also a mental overhead: "Does this return a meaningful
value, or just this, so can I chain it?"

I, like Andrea, have dreamed that Array#push() returned the value I passed
to it (although the reason that it doesn't is probably that you can pass
multiple values to push), I have a lot of cases where I've hoped that I
could do this:

return objects.push({
  foo: "bar",
  baz: "tar"
})

instead of:

var newObject = {
  foo: "bar",
  baz: "tar"
}
objects.push(newObject)
return newObject

For Array#push() to have returned `this`, I can't think of a single line of
code it would have made simpler.
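A tiny helper (hypothetical, not a proposed API) captures the value-returning behaviour wished for above:

```javascript
// A hypothetical pushed() helper: push a single value and hand it back,
// enabling the `return pushed(...)` style described in the message.
function pushed(arr, value) {
  arr.push(value);
  return value;
}

var objects = [];
var newObject = pushed(objects, { foo: "bar", baz: "tar" });
```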

Another thing is that set() returning the value that was set is also
consistent with language semantics in that `(a = 1)` has the value of 1.
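Spelled out, the consistency argument:

```javascript
// An assignment expression evaluates to the assigned value,
// which is the parallel drawn for a value-returning set().
var a;
var r = (a = 1);
console.log(r); // 1
```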

I'd like to see a better way to do chaining than functions returning
`this`, like the monocle mustache.

Cheers,
Jussi

On Wed, Dec 5, 2012 at 3:30 AM, Andrea Giammarchi 
andrea.giammar...@gmail.com wrote:

 as discussed before, the problem with setDefault() approach you are
 suggesting is that the dict(), at least in JS, will be created in any case.

 var newDict = a.setDefault(k1, dict());

 above operation will invoke dict() regardless of whether k1 was present, and
 this is not good for anyone: RAM, CPU, GC, etc

 the initial pattern is and wants to be like that so that not a single
 pointless operation is performed when/if the key is already there.

 var o = obj.has(key) ? obj.get(key) : obj.set(key, dict()); // <== see
 dict, never called if key

 quick and dirty

 var o = obj.get(key) || obj.set(key, dict());

 With current pattern, and my only concern is that after this decision
 every other `set()` like pattern will return this even where not optimal, I
 have to do

 var o = obj.has(key) ? obj.get(key) : obj.set(key, dict()).get(key);

 meh ... but I can survive :D











 On Tue, Dec 4, 2012 at 4:49 PM, Tab Atkins Jr. jackalm...@gmail.comwrote:

 On Mon, Dec 3, 2012 at 2:21 PM, Andrea Giammarchi
 andrea.giammar...@gmail.com wrote:
  IMHO, a set(key, value) should return the value as it is when you
 address a
  value
 
  var o = m.get(k) || m.set(k, v); // o === v
 
  // equivalent of
 
  var o = m[k] || (m[k] = v); // o === v

 If this pattern is considered sufficiently useful (I think it is), we
 should handle it directly, as Python does.  Python dicts have a
 setDefault(key, value) method which implements this pattern exactly -
 if the key is in the dict, it returns its associated value (acts like
 a plain get()); if it's not, it sets the key to the passed value and
 then returns it.  Using this pattern is not only clearer, but avoids
 repetition (of m and k in your example), and actually chains - I
 use setDefault all the time when working with nested dicts.

 (For example, if I have a sparse 2d structure implemented with nested
 dicts, I can safely get/set a terminal value with code like
 a.setDefault(k1, dict()).set(k2, v).  If that branch hadn't been
 touched before, this creates the nested dict for me.  If it has, I
 create a throwaway empty dict, which is cheap.  If JS ever grows
 macros, you can avoid the junk dict as well.)
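The setDefault pattern described above can be sketched for ES Maps with a helper (hypothetical, not an actual Map method):

```javascript
// Python-style setDefault: return the existing value for key,
// or set the provided default and return that instead.
function setDefault(map, key, defaultValue) {
  if (map.has(key)) return map.get(key);
  map.set(key, defaultValue);
  return defaultValue;
}

var m = new Map();
setDefault(m, "a", []).push(1); // creates the array
setDefault(m, "a", []).push(2); // reuses it (the throwaway [] is discarded)
console.log(m.get("a")); // [1, 2]
```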

 I prefer the plain methods to work as they are currently specified,
 where they return this.

 ~TJ





Re: Binary data: Structs and ArrayBuffers

2012-12-05 Thread Jussi Kalliokoski
On Tue, Dec 4, 2012 at 3:34 AM, David Herman dher...@mozilla.com wrote:

 Typed Arrays will become a special case of binary data objects. In other
 words, the binary data API generalizes typed arrays completely.


Sounds like a plan!


 Yes, but with some exceptions. We will be extending the API to allow
 fields of type object (basically, pointers to JS objects) or the type
 any (basically, any JS value whatsoever), but those types will not be
 allowed to expose their ArrayBuffer.


I see, makes sense.


 Same story: if you have the type object or the type any you can't get
 at the underlying ArrayBuffer.


As well as this.

As Ken says, the WhatWG string encoding/decoding spec is the way to do
 that. The important thing to recognize is that string formats are just
 that -- encoding/decoding formats -- rather than types. There isn't and
 won't be a string type, or a UTF8 type, or anything like that in the API.


Sounds good!

 Arrays inside structs. Are there plans for this? They're not absolutely
 necessary, but often quite handy anyway, for example:
 
  StructType({
unique: uint32[4]
  })

 That's definitely supported.


Woohoo!


  Also, what's the plan with Typed Arrays anyway? Are we going to adopt
 them as a part of JS or leave it as an extension?

 Fully adopted.


Excellent.


  Binary stuff is hard in a language like JavaScript, but I think that
 ultimately we'll get something workable, and would love to see more
 discussion around this!

 I think binary data will be very nice in ES6, and I'm excited about where
 it's headed. We've started implementation in SpiderMonkey and hopefully
 before too long people will be able to start experimenting with it in
 nightly builds of Firefox.


I for one am eager to get experimenting with this! I think this will be
quite useful especially for our JS codecs project.


 Thanks for your questions!


Thanks for your answers!

Cheers,
Jussi


Re: (Map|Set|WeakMap)#set() returns `this` ?

2012-12-05 Thread Jussi Kalliokoski
On Wed, Dec 5, 2012 at 6:29 PM, Andrea Giammarchi 
andrea.giammar...@gmail.com wrote:

 For what it's worth, I don't believe "everyone does it like that" has
 ever been a valid argument. Maybe everyone simply copied a pattern from
 jQuery without even thinking if it was needed or it was the best.

 Br


Agreed. You don't make a band that makes crappy music because most other
bands do too. You don't jump in the well if I do. We don't have to make bad
API decisions because the cows have paved a path to the butcher's. It's not
ignoring the norm, we acknowledge it and make our choices with that
knowledge, but we don't necessarily have to follow the norm.

Cheers,
Jussi


 On Wednesday, December 5, 2012, Rick Waldron wrote:




 On Wed, Dec 5, 2012 at 10:06 AM, Nathan Wall nathan.w...@live.com wrote:

  Date: Tue, 4 Dec 2012 11:03:57 -0800
  From: bren...@mozilla.org

  Subject: Re: (Map|Set|WeakMap)#set() returns `this` ?
 
  Allen Wirfs-Brock wrote:
   It's less clear which is the best choice for JS.
 
  Cascading wants its own special form, e.g., Dave's
  mustache-repurposed proposal at
 
  https://blog.mozilla.org/dherman/2011/12/01/now-thats-a-nice-stache/
 
  so one can write cascades without having to be sure the methods
 involved
  follow an unchecked |this|-returning convention.


 I really like this possibility. Is there any way of the monocle-mustache
 making it into ... say, ES7?

 If so, it would seem wrong to ever return `this`.  Sounds like you get
 the best of both worlds to me!


 Yes, monocle-mustache is very cool, especially Dave's proposed version
 here, but this:

 obj.{
prop = val
 };

 ...has received negative feedback, because developers want colon, not
 equal, but colon is to define as equal is to assign.

 eg. What does this do?

 elem.{
    innerHTML: "<p>paragraph</p>"
 };

 Most developers would naturally assume that this sets elem.innerHTML
 to "<p>paragraph</p>", but it actually results in a [[DefineOwnProperty]]
 of innerHTML with {[[Value]]: "<p>paragraph</p>", [[Writable]]: true,
 [[Enumerable]]: true, [[Configurable]]: true}, which would blow away the
 accessor descriptor that was previously defined for elem.innerHTML (ie. the
 one that would convert "<p>paragraph</p>" to a node and insert it into the
 DOM). So the obvious choice is to use "=" instead of ":" because it
 correctly connotes the assignment behaviour, except that developers
 complained about that when we evangelized the possibility.
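The define-vs-assign difference can be sketched with a plain object standing in for the DOM element (the accessor below is an illustrative stand-in, not the real innerHTML):

```javascript
// Defining a data property replaces an accessor; assignment runs its setter.
var elem = {
  _html: "",
  set innerHTML(v) { this._html = v; }, // stand-in for the DOM accessor
  get innerHTML() { return this._html; }
};

elem.innerHTML = "<p>a</p>"; // assignment: the setter runs

Object.defineProperty(elem, "innerHTML", {
  value: "<p>b</p>", writable: true, enumerable: true, configurable: true
}); // define: the accessor pair is blown away

elem.innerHTML = "<p>c</p>"; // now a plain data write; no setter involved
console.log(elem._html); // "<p>a</p>" -- untouched since the define
```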

 Monocle-mustache is simply not a replacement for return this because
 chaining mutation methods is not the sole use case. Please review the use
 cases I provided earlier in the thread.

 There is simply too much real world evidence (widely adopted libraries
 (web) and modules (node)) in support of return-this-from-mutation-method to
 ignore, or now go back on, a base criteria for including the pattern in
 newly designed built-in object APIs.

 Rick





Array#sort() implementations not interoperable

2012-12-03 Thread Jussi Kalliokoski
Hello everyone,

Reposting, I think my previous attempt got stuck in a filter or something,
because I somehow managed to have the code there in several copies.

I was thinking about sorting algorithms yesterday and I realized that ES
implementations may have different sorting algorithms in use, and decided
to try it out. Now, if you sort strings or numbers, it doesn't matter, but
you may be sorting objects by a key and this is where things get nasty
(think non-deterministic vs deterministic). Here's an example:

function shuffle (arr, depth) {
  var tmp = []

  var pi = String(Math.PI).substr(2)
  var i = 0
  var p = 0

  while (arr.length) {
i = (i + +pi[p]) % arr.length
p = (p + 1) % pi.length

tmp.push(arr[i])
arr.splice(i, 1)
  }

  if (!depth) return tmp

  return shuffle(tmp, depth - 1)
}

var unique = 'abcdefghijklmnopqrstu'
var sorter = 'deggeaeasemiuololizor'

var arr = Array.apply(null, Array(unique.length)).map(function (a, i) {
  return {
unique: unique[i],
sorter: sorter.charCodeAt(i)
  }
})

var original = shuffle(arr, 3)
var sorted = original.slice().sort(function (a, b) {
  return a.sorter - b.sorter
})

console.log(original.map(function (item) { return item.unique }))
console.log(sorted.map(function (item) { return item.unique }))

In Firefox, you get:

["s", "m", "q", "l", "e", "b", "k", "i", "f", "g", "o", "j", "d", "t", "n",
"c", "a", "p", "h", "r", "u"]
["f", "h", "a", "e", "b", "g", "j", "d", "c", "l", "r", "q", "o", "k", "t",
"n", "p", "u", "i", "m", "s"]

In Chrome, you get:

["s", "m", "q", "l", "e", "b", "k", "i", "f", "g", "o", "j", "d", "t", "n",
"c", "a", "p", "h", "r", "u"]
["f", "h", "a", "g", "e", "b", "j", "d", "c", "l", "r", "o", "q", "k", "n",
"t", "p", "u", "i", "m", "s"]

Real world consequences of this may include:

 * A blog where posts are sorted by date (YYYY/MM/DD). Different browsers
will show the posts in different order if Array#sort is used to accomplish
this. Not a very severe consequence.
 * A spreadsheet application. If it has some order-dependent algorithm to
calculate values, different browsers can give different results for the
same research data.

Now I'm not sure what could be done to this, if anything even should be,
just thought I'd bring it up.
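A common userland workaround when stability matters is to decorate each element with its original index and break comparator ties on it (a sketch, not a spec change):

```javascript
// Make any engine's sort behave stably: equal elements keep input order
// because ties fall back to the original index.
function stableSort(arr, cmp) {
  return arr
    .map(function (value, index) { return { value: value, index: index }; })
    .sort(function (a, b) { return cmp(a.value, b.value) || (a.index - b.index); })
    .map(function (entry) { return entry.value; });
}

var input = [
  { sorter: 1, unique: "b" },
  { sorter: 0, unique: "c" },
  { sorter: 1, unique: "a" }
];
var out = stableSort(input, function (a, b) { return a.sorter - b.sorter; });
console.log(out.map(function (x) { return x.unique; })); // ["c", "b", "a"]
```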

Cheers,
Jussi


Re: Array#sort() implementations not interoperable

2012-12-03 Thread Jussi Kalliokoski
On Mon, Dec 3, 2012 at 8:46 PM, Brendan Eich bren...@mozilla.org wrote:

 You have three messages total on this topic at

 https://mail.mozilla.org/pipermail/es-discuss/2012-December/


Oh, sorry about the noise, I should've checked the archives!


 Have you read the language dating from ES3 on Array sort in the spec? In
 particular Array#sort is not guaranteed to be stable. Perhaps it should be


Yes, I have, that's why I actually thought about trying this out. I'm
not sure if we should change it, but it'd be interesting to have the
conversation, to see if there are any real world use cases that may benefit
from doing so. Otherwise, I don't see a reason to change it. Don't fix it
if it ain't broke.

Cheers,
Jussi


Binary data: Structs and ArrayBuffers

2012-12-03 Thread Jussi Kalliokoski
I was just reading through the binary data proposal [1] and I have a few
comments / questions:

First of all, how will this integrate with the Typed Arrays? Will a struct
have an intrinsic ArrayBuffer? What about an ArrayType instance? If they
do, how will it react to typeless properties? Are typeless properties
allowed in the first place (IIRC there was a talk I watched where David
Herman or someone else said that this might be)? Will you be able to
extract structs out of an ArrayBuffer?

What about strings? Looking through the binary data related proposals,
there seems to be no good way of extracting strings from binary data.
Should we have, for example StringType(DOMString encoding, uint length,
boolean isPadded=false) for Structs? Or, should DataView have a method for
extracting a string from it? What about storing one?

Pointers. Now it's useful if a struct can contain an array of arbitrary
size, but we don't have pointers. We can't let the struct be of arbitrary
size either. What are the thoughts on this?

Arrays inside structs. Are there plans for this? They're not absolutely
necessary, but often quite handy anyway, for example:

StructType({
  unique: uint32[4]
})

Is way simpler to manage than:

StructType({
  unique0: uint32,
  unique1: uint32,
  unique2: uint32,
  unique3: uint32
})
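Until such struct types exist, a rough way to emulate a fixed uint32[4] field with today's typed arrays (a sketch, not the proposal's API):

```javascript
// One buffer, one view per field: the view behaves like `unique: uint32[4]`.
var buffer = new ArrayBuffer(16);           // 4 fields * 4 bytes each
var unique = new Uint32Array(buffer, 0, 4); // fixed-length uint32 view
unique[2] = 42;
console.log(unique.length); // 4
```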

Also, what's the plan with Typed Arrays anyway? Are we going to adopt them
as a part of JS or leave it as an extension?

Binary stuff is hard in a language like JavaScript, but I think that
ultimately we'll get something workable, and would love to see more
discussion around this!

Cheers,
Jussi

[1] http://wiki.ecmascript.org/doku.php?id=harmony:binary_data


Array#sort implementations are not interoperable

2012-12-01 Thread Jussi Kalliokoski
Hello everyone,

I was thinking about sorting algorithms yesterday and I realized that ES
implementations may have different sorting algorithms in use, and decided
to try it out. Now, if you sort strings or numbers, it doesn't matter, but
you may be sorting objects by a key and this is where things get nasty
(think non-deterministic vs deterministic). Here's an example:

function shuffle (arr, depth) {
  var tmp = []

  var pi = String(Math.PI).substr(2)
  var i = 0
  var p = 0

  while (arr.length) {
i = (i + +pi[p]) % arr.length
p = (p + 1) % pi.length
tmp.push(arr[i])
arr.splice(i, 1)
  }

  if (!depth) return tmp

  return shuffle(tmp, depth - 1)
}

var unique = 'abcdefghijklmnopqrstu'
var sorter = 'deggeaeasemiuololizor'

var arr = Array.apply(null, Array(unique.length)).map(function (a, i) {
  return {
unique: unique[i],
sorter: sorter.charCodeAt(i)
  }
})

var original = shuffle(arr, 3)
var sorted = original.slice().sort(function (a, b) {
  return a.sorter - b.sorter
})

console.log(original.map(function (item) { return item.unique }))
console.log(sorted.map(function (item) { return item.unique }))

In Firefox, you get:

["s", "m", "q", "l", "e", "b", "k", "i", "f", "g", "o", "j", "d", "t", "n",
"c", "a", "p", "h", "r", "u"]
["f", "h", "a", "e", "b", "g", "j", "d", "c", "l", "r", "q", "o", "k", "t",
"n", "p", "u", "i", "m", "s"]

In Chrome, you get:

["s", "m", "q", "l", "e", "b", "k", "i", "f", "g", "o", "j", "d", "t", "n",
"c", "a", "p", "h", "r", "u"]
["f", "h", "a", "g", "e", "b", "j", "d", "c", "l", "r", "o", "q", "k", "n",
"t", "p", "u", "i", "m", "s"]

Real world consequences of this may include:

 * A blog where posts are sorted by date (YYYY/MM/DD). Different browsers
will show the posts in different order if Array#sort is used to accomplish
this. Not a very severe consequence.
 * A spreadsheet application. If it has some order-dependent algorithm to
calculate values, different browsers can give different results for the
same research data.

Now I'm not sure what could be done to this, if anything even should be,
just thought I'd bring it up.

Cheers,
Jussi


Re: Array#sort implementations are not interoperable

2012-12-01 Thread Jussi Kalliokoski
Oops, sorry everyone, looks like I had a serious copy-paste mishap with
gmail. :/

On Sat, Dec 1, 2012 at 3:10 PM, Jussi Kalliokoski 
jussi.kallioko...@gmail.com wrote:

 Hello everyone,

 I was thinking about sorting algorithms yesterday and I realized that ES
 implementations may have different sorting algorithms in use, and decided
 to try it out. Now, if you sort strings or numbers, it doesn't matter, but
 you may be sorting objects by a key and this is where things get nasty
 (think non-deterministic vs deterministic). Here's an example:

 function shuffle (arr, depth) {
   /* it's a silly way to shuffle, but at least it's deterministic. */
   var tmp = []

   var pi = String(Math.PI).substr(2)
   var i = 0
   var p = 0

   while (arr.length) {
 i = (i + +pi[p]) % arr.length
 p = (p + 1) % pi.length
 tmp.push(arr[i])
 arr.splice(i, 1)
   }

   if (!depth) return tmp

   return shuffle(tmp, depth - 1)
 }

 var unique = 'abcdefghijklmnopqrstu'
 var sorter = 'deggeaeasemiuololizor'

 var arr = Array.apply(null, Array(unique.length)).map(function (a, i) {
   return {
 unique: unique[i],
 sorter: sorter.charCodeAt(i)
   }
 })

 var original = shuffle(arr, 3)
 var sorted = original.slice().sort(function (a, b) {
   return a.sorter - b.sorter
 })

 console.log(original.map(function (item) { return item.unique }))
 console.log(sorted.map(function (item) { return item.unique }))

 In Firefox, you get:

 [s, m, q, l, e, b, k, i, f, g, o, j, d, t,
 n, c, a, p, h, r, u]
 [f, h, a, e, b, g, j, d, c, l, r, q, o, k,
 t, n, p, u, i, m, s]

 In Chrome, you get:

 [s, m, q, l, e, b, k, i, f, g, o, j, d, t,
 n, c, a, p, h, r, u]
 [f, h, a, g, e, b, j, d, c, l, r, o, q, k,
 n, t, p, u, i, m, s]

 Real world consequences of this may include:

  * A blog where posts are sorted by date (YYYY/MM/DD). Different
 browsers will show the posts in different order if Array#sort is used to
 accomplish this. Not a very severe consequence.
  * A spreadsheet application. If it has some order-dependent algorithm to
 calculate values, different browsers can give different results for the
 same research data.

 Now I'm not sure what could be done to this, if anything even should be,
 just thought I'd bring it up.

 Cheers,
 Jussi


Re: How to count the number of symbols in a string?

2012-11-30 Thread Jussi Kalliokoski
On Fri, Nov 30, 2012 at 10:39 PM, Yusuke Suzuki utatane@gmail.com wrote:

 I remember that String object iterator produces the sequence of Unicode
 characters.
 http://wiki.ecmascript.org/doku.php?id=harmony:iterators#string_iterators

 So I think we can get code points by using array comprehension,
 var points = [ch for ch of string];

 Is it right?


So verbose, ugh.

var points = [...string]

Partly kidding here. :D

Cheers,
Jussi
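For reference, the spread form works because the string iterator yields whole code points, so the symbol count falls out of the array length even for astral symbols:

```javascript
// The string iterator yields code points, not UTF-16 code units, so
// spreading a string counts an astral symbol as a single element.
var poo = '\u{1F4A9}' // U+1F4A9 PILE OF POO

console.log(poo.length)      // 2 -- UTF-16 code units (a surrogate pair)
console.log([...poo].length) // 1 -- code points
```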


 On Sat, Dec 1, 2012 at 5:33 AM, Mathias Bynens math...@qiwi.be wrote:

 ECMAScript 6 introduces some useful new features that make working with
 astral Unicode symbols easier.

 One thing that is still missing though (AFAIK) is an easy way to count
 the number of symbols / code points in a given string. As you know, we
 can’t rely on `String.prototype.length` here, as a string containing
 nothing but an astral symbol has a length of `2` instead of `1`:

  var poo = '\u{1F4A9}'; // U+1F4A9 PILE OF POO
  poo.length
 2

 Of course it’s possible to write some code yourself to loop over all the
 code units in the string, handle surrogate pairs, and increment a counter
 manually for each full code point, but that’s a pain.

 It would be useful to have a new property on `String.prototype` that
 would return the number of Unicode symbols in the string. Something like
 `realLength` (of course, it needs a better name, but you get the idea):

  poo.realLength
 1

 Another possible solution is to add something like
 `String.prototype.codePoints` which would be an array of the numerical code
 point values in the string. That way, getting the length is only a matter
 of accessing the `length` property of the array:

  poo.codePoints
 [ 0x1F4A9 ]
  poo.codePoints.length
 1

 Or perhaps this would be better suited as a method?

  poo.getCodePoints()
 [ 0x1F4A9 ]
  poo.getCodePoints().length
 1

 Has anything like this been considered/discussed here yet?
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss




 --
 Regards,
 Yusuke Suzuki


 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Arrow functions and return values

2012-11-29 Thread Jussi Kalliokoski
It's a bit unclear to me how arrow functions react to semicolons, for
example:

var a = (c) => {
  var b = 2;
  b * c;
}

a(4);

To me, it seems like this should return undefined. After all, the last
statement in the function is empty. To actually return b * c, you should
drop the semicolon:

var a = (c) => {
  var b = 2;
  b * c
}

a(4);

This would be consistent with, for example, Rust and would help avoid
annoying accidental returns (see [1] for discussion about this wrt
CoffeeScript).

Cheers,
Jussi

[1] https://github.com/jashkenas/coffee-script/issues/2477
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Arrow functions and return values

2012-11-29 Thread Jussi Kalliokoski
On Thu, Nov 29, 2012 at 7:42 PM, Rick Waldron waldron.r...@gmail.comwrote:




 On Thu, Nov 29, 2012 at 9:41 AM, Brendan Eich bren...@mozilla.org wrote:

 Kevin Smith wrote:


 It's a bit unclear to me how arrow functions react to semicolons,
 for example:

  var a = (c) => {
   var b = 2;
   b * c;
 }

 a(4);


 Hmmm...  I was under the impression that arrow functions with normal
 function bodies do *not* implicitly return anything.  Maybe I need to
 adjust my spec goggles, but I don't see that in the latest draft.


 Oh (and d'oh!) you are quite right. There's no implicit return or other
 TCP aspect save lexical-|this|, at all.


Oh, I hadn't realized this! In that case, great!

Cheers,
Jussi



 Sorry for the echo :(




 /be

  ___
  es-discuss mailing list
  es-discuss@mozilla.org
  https://mail.mozilla.org/listinfo/es-discuss



 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Pure functions in EcmaScript

2012-11-28 Thread Jussi Kalliokoski
On Wed, Nov 28, 2012 at 3:47 PM, Marius Gundersen gunder...@gmail.comwrote:




 On Wed, Nov 28, 2012 at 1:21 PM, David Bruant bruan...@gmail.com wrote:

  Hi Marius,

 I won't say the idea is bad, but what would be the benefit of this new
 type of function?

 From experience on this list, if a new idea cannot prove to make a major
 difference with what currently exists, it is not considered to be added to
 the ES6 spec.
 The major difference can be in performance, security, language
 extensibility, programming idioms/conveniences, etc.
 Do you have reasons to think pure functions as you propose them make that
 big of an improvement as opposed to JS as it is?


 With many new functional programming possibilities (like map, reduce,
 filter, lambdas) there are many scenarios where the implementer should use
 pure (or as I renamed it in another reply, side-effect-free) functions.
 Library implementers are likely interested in making functions that can
 take lambdas as parameters, and these lambdas (in many cases) should not
 alter any outside state, they should only return a value. A function with
 this restriction could run faster and take up less memory since it only
 needs the values passed to it as arguments in its own scope.

 Mostly I feel this would introduce better coding practises with a focus on
 functional programming rather than object oriented programming. Using
 functions with limited scope would reduce the number of variables written
 to the global scope, and would reduce the amount of state in an
 application. Seeing as FP is a bit of a trend today (due, in part, to the
 popularity of JavaScript), it seems to me like a good idea to implementing
 features which help promote good FP patterns in a language that allows for
 both FP and OOP.


With pure function, are you after

a) The equivalent of `inline` in the C family?
b) Something less state-independent, a function that could be, for example,
parallelized/forked safely?

For a), I'm not sure to which extent implementations already do this. For
b), seems like something related to the RiverTrail project.

Cheers,
Jussi
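One rough userland approximation of the "no access to outer scope" restriction, not part of any proposal: functions built with the Function constructor close over only the global scope, so an accidental read of a local variable fails loudly instead of silently leaking state in. (The `isolated` helper name here is hypothetical.)

```javascript
// Hypothetical sketch: Function-constructed bodies see only the global
// scope, so local variables of the surrounding code are invisible.
function isolated (params, body) {
  return Function(params, body)
}

var sum = isolated('a, b', 'return a + b')
console.log(sum(2, 3)) // 5

function makeScaled () {
  var factor = 10 // local, invisible to the isolated body
  return isolated('a', 'return a * factor')
}
var scaled = makeScaled()
// scaled(2) throws ReferenceError: factor is not defined
```

This only enforces scope isolation, not full purity (the body can still mutate its arguments or globals), but it illustrates the kind of restriction the proposal is after.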


 David

 On 28/11/2012 12:50, Marius Gundersen wrote:

 Has there been any work done on pure functions in EcmaScript? The way I
 imagine it, there would be a way to indicate that a function should be pure
 (by using a symbol or a new keyword, although I understand new keywords
 aren't terribly popular). The pure function is not allowed to access any
 variable outside its own scope. Any access to a variable outside the scope
 of the function would result in a Reference Error, with an indication that
 the reference attempt was made from a pure function. This also applies to
 any function called from within the pure function. The entire stack of a
 pure function must be pure. This also means the pure function cannot access
 the [this] object. Only the parameters  passed to the function can be used
 in the calculation.

 The syntax could be something like this (the @ indicates that it is pure):

 function sum@(a, b){
   return a+b;
 }

 var sum = function@(a, b){
   return a+b;
 }

 Marius Gundersen


 ___
  es-discuss mailing list
  es-discuss@mozilla.org
  https://mail.mozilla.org/listinfo/es-discuss




 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Support for basic linear algebra on Array's

2012-11-19 Thread Jussi Kalliokoski
I think it's a good time for me to point out the DSP API [1] again, which
is designed for fast vector math. There's already a partial node.js
implementation [2] by me and Jens Nockert is working on a SpiderMonkey
implementation, which is more complete atm.

Cheers,
Jussi

[1] http://people.opera.com/mage/dspapi/
[2] https://github.com/jussi-kalliokoski/node-dsp
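For context on why the proposal matters: `+` on arrays stringifies and concatenates rather than adding elementwise, so today vector arithmetic needs helpers; a minimal sketch of the two operations proposed above:

```javascript
// [1, 2] + [3, 4] evaluates to the string '1,23,4', so elementwise
// addition and scalar multiplication need helper functions.
function vadd (a, b) {
  return a.map(function (x, i) { return x + b[i] })
}

function vscale (a, k) {
  return a.map(function (x) { return x * k })
}

console.log(vadd([1, 2, 3], [4, 5, 6])) // [ 5, 7, 9 ]
console.log(vscale([1, 2, 3], 2))       // [ 2, 4, 6 ]
```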

On Mon, Nov 19, 2012 at 11:15 AM, Alex Russell a...@dojotoolkit.org wrote:

 I like this as a rough solution. Assuming we get Array sub-typing done in
 a similar timeframe, it might all fold into a nice tidy package.

 On Nov 19, 2012, at 3:08 AM, Brendan Eich bren...@mozilla.com wrote:

  Oliver Hunt wrote:
  On Nov 18, 2012, at 6:17 PM, Matt Calhoun calhoun...@gmail.com wrote:
 
  I believe that having a concise notation for linear algebra is an
 important feature of a programming language that can dramatically improve
 code readability, and furthermore that linear algebra is a powerful tool
 that has many applications in java script.  I would like to make the
 following two suggestions:
 
  1. Extend the + operator to work on Array's of numbers (or strings).
 
  2. Allow for scalar multiplication of Array's which contain only
 numbers.
 
  Although there are many problems with a simple linear algebraic
 solution which arise in contexts where java script is a natural language of
 choice (for example WebGL), there are no simple ways to express these
 solutions in java script because the language itself creates a barrier to
 even basic operations such as subtracting two vectors, in the sense that
 the amount of writing it takes to express these operations obscures their
 mathematical content.  For more complicated linear algebraic work, the
 problem is quite severe.
 
  Changing the behaviour of any of the basic operators on
 builtin/pre-existing types is essentially a non-starter.  + already has
 sufficiently, errr, sensible behaviour to be widely used on arrays.
  Other operators don't really have any such sensible use cases, but
 changing their semantics on one kind of object vs. other kinds would be
 highly worrying.
 
  (2) would also be a non-starter I feel: JS implicitly converts to
 number in all other cases, and if we made operator behaviour dependent on
 content type (in a nominally untyped language) I suspect we would very
 rapidly end up in a world of semantic pain, and another WAT presentation.
 
  I think we could actually reduce the WAT effect in JS with *opt-in*
 operators for value objects. This is on the Harmony agenda:
 
  http://wiki.ecmascript.org/doku.php?id=strawman:value_objects
  http://wiki.ecmascript.org/doku.php?id=strawman:value_proxies
 
  I've implemented int64 and uint64 for SpiderMonkey, see
 https://bugzilla.mozilla.org/show_bug.cgi?id=749786, where in the patch
 there's a comment discussing how the operators for these new value-object
 types work:
 
  /*
  * Value objects specified by
  *
  *  http://wiki.ecmascript.org/doku.php?id=strawman:value_objects
  *
  * define a subset of frozen objects distinguished by the
 JSCLASS_VALUE_OBJECT
  * flag and given privileges by this prototype implementation.
  *
  * Value objects include int64 and uint64 instances and support the
 expected
  * arithmetic operators: | ^ & << >> >>> == < <= + - * / %, boolean test,
 ~,
  * unary - and unary +.
  *
  * != and ! are not overloadable to preserve identities including
  *
  *  X ? A : B  <=>  !X ? B : A
  *  !(X && Y)  <=>  !X || !Y
  *  X != Y  <=>  !(X == Y)
  *
  * Similarly, > and >= are derived from < and <= as follows:
  *
  *  A > B  <=>  B < A
  *  A >= B  <=>  B <= A
  *
  * We provide <= as well as < rather than derive A <= B from !(B < A) in
 order
  * to allow the <= overloading to match == semantics.
  *
  * The strict equality operators, === and !==, cannot be overloaded, but
 they
  * work on frozen-by-definition value objects via a structural recursive
 strict
  * equality test, rather than by testing same-reference. Same-reference
 remains
  * a fast-path optimization.
  *
  * Ecma TC39 has tended toward proposing double dispatch to implement
 binary
  * operators. However, double dispatch has notable drawbacks:
  *
  *  - Left-first asymmetry.
  *  - Exhaustive type enumeration in operator method bodies.
  *  - Consequent loss of compositionality (complex and rational cannot be
  *composed to make ratplex without modifying source code or wrapping
  *instances in proxies).
  *
  * So we eschew double dispatch for binary operator overloading in favor
 of a
  * cacheable variation on multimethod dispatch that was first proposed in
 2009
  * by Christian Plesner Hansen:
  *
  *  https://mail.mozilla.org/pipermail/es-discuss/2009-June/009603.html
  *
  * Translating from that mail message:
  *
  * When executing the '+' operator in 'A + B' where A and B refer to value
  * objects, do the following:
  *
  *  1. Get the value of property LOP_PLUS in A, call the result P
  *  2. If P is not a list, throw a TypeError: no '+' operator
  *  3. Get

Re: Modules, Concatenation, and Better Solutions

2012-10-16 Thread Jussi Kalliokoski
Just to be sure... Does a get printed only the first time the module A is
imported somewhere, or every time?

On Tue, Oct 16, 2012 at 3:57 PM, Patrick Mueller pmue...@yahoo.com wrote:

 On Mon, Oct 15, 2012 at 9:45 AM, Kevin Smith khs4...@gmail.com wrote:

 OK, so:

 module A { console.log(a); export var x; }
 console.log($);
 import x from A;

 Does this print:
 $
 a
 or
 a
 $


 The first - $, then a.

 At least, that's how most module systems I've played with seem to work -
 CommonJS-ish ones I've written and used, and mostly AMD; there was an issue
 with the AMD almond loader that it would execute a module factory when
 the module was define()'d, not when it was first require()'d.  Can't
 remember the final stand on that one.  I think most of the AMD loaders will
 ensure that factories are not run until needed.

 This has worked out quite well, as it means ordering of modules doesn't
 matter - Browserify is a good example of this.  It means you don't HAVE to
 arrange your modules in any particular order, just make sure they are all
 defined before you do your first require().  MUCH, MUCH easier to build
 concatenators if you don't care what the order is.

 --
 Patrick Mueller
 pmue...@gmail.com

 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: why (null = 0) is true?

2012-09-25 Thread Jussi Kalliokoski
On Tue, Sep 25, 2012 at 1:22 PM, David Bruant bruan...@gmail.com wrote:

 On 25/09/2012 12:13, Frank Quan wrote:
  Hi, Brendan, thank you for reply.
 
 
  I mean in common understanding, a <= b always has the same result
  as  a < b || a == b .
 Common understanding assumes a and b are numbers. I personally don't
 know if there is a common understanding of what 'true < azerty' could
 mean.


Indeed. For the fun of it, I think that in the context of JS that means
`Number(true) < azerty.charCodeAt(0)`.
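The asymmetry the original question points at comes from the two algorithms being unrelated: the relational operators coerce null with ToNumber, while == has its own table in which null only pairs with undefined:

```javascript
// Relational comparison coerces null to a number; == does not.
console.log(Number(null)) // 0, which is why null <= 0 and null >= 0 hold
console.log(null <= 0)    // true
console.log(null < 0)     // false
console.log(null == 0)    // false -- null only == undefined
```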


  But I noticed that in ES5/ES3, there are several cases breaking this
 rule.
 
  See the following:
 
  null == 0 // false
  null < 0 // false
 
  null <= 0 // true
 
  I was wondering if this is by design.
 
  And, is it possible to have some change in future versions of ES?
 Regrettably, no. As a complement to Brendan's response, I recommand you
 to read the following paragraph

 https://github.com/DavidBruant/ECMAScript-regrets#web-technologies-are-ugly-and-there-is-no-way-back
 Changing this in a future version of ECMAScript would break the web
 (break websites that rely on this broken behavior)

 David
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Function#fork

2012-09-24 Thread Jussi Kalliokoski
Hello everyone,

I've been thinking a lot about parallel processing in the context of
JavaScript, and this is really a hard problem. I'm very curious to hear
what everyone's opinions are about it's problems and so forth, but I don't
think an open question like that will give very interesting results, so I
have an example problem for discussion (while it seems like a bad idea to
me, and unlikely to ever get to the language, what I want to know is
everyone's reasoning behind their opinions whether it's for or against).

What if we introduce Function#fork(), which would call the function in
another thread that shares state with the current one (how much state it
shares is an open question I'd like to hear ideas about, but one
possibility is that only the function arguments are shared) using a similar
signature to Function#call except that the first argument would be a
callback, which would have error as its first argument (if the forked
function throws with the given arguments, it can be controlled) and the
return value of the forked function as the second argument.

 * What are the technical limitations of this?
 * What are the bad/good implications of this on the language users?
 * Better ideas?
 * etc.

I have a detailed example of showing Function#fork in action [1] (I was
supposed to make a simplified test, but got a bit carried away and made it
do parallel fragment shading), it uses a simple fill-in for the
Function#fork using setTimeout instead of an actual thread.

Cheers,
Jussi

[1] https://gist.github.com/3775697
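For reference, the setTimeout fill-in described above might look roughly like this; it only simulates the proposed shape (callback first, error-first convention), nothing actually runs in parallel:

```javascript
// Sketch of the proposed Function#fork, per the description above.
// setTimeout stands in for a real thread; the forked function is
// deferred, its result or thrown error delivered via the callback.
Function.prototype.fork = function (callback) {
  var fn = this
  var args = Array.prototype.slice.call(arguments, 1)
  setTimeout(function () {
    try {
      callback(null, fn.apply(null, args))
    } catch (err) {
      callback(err)
    }
  }, 0)
}

function sum (a, b) { return a + b }

sum.fork(function (err, result) {
  console.log(err, result) // null 3
}, 1, 2)
```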
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Function#fork

2012-09-24 Thread Jussi Kalliokoski
Hi Rick!

Thanks for the links, very interesting! I was already aware of River Trail
and other concurrency proposals for JavaScript, my purpose for this thread
was anyway to get good clarification on what approaches are impossible and
why and what approaches are possible and what are their virtues /
downsides. So thanks again, those two papers are more than I hoped for! But
I hope that there will be more discussion about this.

Cheers,
Jussi

On Mon, Sep 24, 2012 at 4:55 PM, Hudson, Rick rick.hud...@intel.com wrote:

  Besides web workers there are two straw man proposals that address
 adding parallelism and concurrency to JavaScript.

 http://wiki.ecmascript.org/doku.php?id=strawman:data_parallelism and
 http://wiki.ecmascript.org/doku.php?id=strawman:concurrency.

 The Parallel JavaScript (River Trail) proposal has a prototype
 implementation available at https://github.com/rivertrail/rivertrail/wiki.
 You should be able to implement your example’s functionality using this API.

 The latest HotPar
 https://www.usenix.org/conference/hotpar12/tech-schedule/workshop-program
 had two interesting papers:

 Parallel Programming for the Web
 https://www.usenix.org/conference/hotpar12/parallel-programming-web

 and

 Parallel Closures: A New Twist on an Old Idea
 https://www.usenix.org/conference/hotpar12/parallel-closures-new-twist-old-idea

 These projects each address some important part of the general problem of
 adding parallelism and concurrency to JavaScript.

 Feedback is always appreciated.

 -Rick

 From: es-discuss-boun...@mozilla.org [mailto:es-discuss-boun...@mozilla.org] On Behalf Of Jussi Kalliokoski
 Sent: Monday, September 24, 2012 8:44 AM
 To: es-discuss
 Subject: Function#fork


 Hello everyone,

 I've been thinking a lot about parallel processing in the context of
 JavaScript, and this is really a hard problem. I'm very curious to hear
 what everyone's opinions are about its problems and so forth, but I don't
 think an open question like that will give very interesting results, so I
 have an example problem for discussion (while it seems like a bad idea to
 me, and unlikely to ever get to the language, what I want to know is
 everyone's reasoning behind their opinions whether it's for or against).

 What if we introduce Function#fork(), which would call the function in
 another thread that shares state with the current one (how much state it
 shares is an open question I'd like to hear ideas about, but one
 possibility is that only the function arguments are shared) using a similar
 signature to Function#call except that the first argument would be a
 callback, which would have error as its first argument (if the forked
 function throws with the given arguments, it can be controlled) and the
 return value of the forked function as the second argument.

  * What are the technical limitations of this?
  * What are the bad/good implications of this on the language users?
  * Better ideas?
  * etc.

 I have a detailed example of showing Function#fork in action [1] (I was
 supposed to make a simplified test, but got a bit carried away and made it
 do parallel fragment shading), it uses a simple fill-in for the
 Function#fork using setTimeout instead of an actual thread.

 Cheers,
 Jussi

 [1] https://gist.github.com/3775697

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Function#fork

2012-09-24 Thread Jussi Kalliokoski
Hi David,

Very nice insight, thanks. I agree with you on the shared state,
transferable ownership is a much more tempting option. Especially since
even immutable shared state is hard to achieve in JS, given we have
getters, setters and proxies, etc.

Cheers,
Jussi

On Mon, Sep 24, 2012 at 5:12 PM, David Bruant bruan...@gmail.com wrote:

 On 24/09/2012 14:43, Jussi Kalliokoski wrote:
  Hello everyone,
 
  I've been thinking a lot about parallel processing in the context of
  JavaScript, and this is really a hard problem. I'm very curious to
  hear what everyone's opinions are about its problems and so forth,
  but I don't think an open question like that will give very
  interesting results, so I have an example problem for discussion
  (while it seems like a bad idea to me, and unlikely to ever get to the
  language, what I want to know is everyone's reasoning behind their
  opinions whether it's for or against).
 The concurrency strawman [1] defines a concurrency as well as
 parallelism model. So far, it's been expressed as the favorite model for
 general purpose parallelism.
 Different use cases are efficiently solved by different forms of
 parallelism, for instance, there is another strawman on data parallelism
 [2] for the case of applying the same computation to a large amount of
 data.

  What if we introduce Function#fork(), which would call the function in
  another thread that shares state with the current one.
 Shared state (no matter how much) always has the same story. 2
 computation units want to access the shared state concurrently, but for
 the sake of the shared state integrity, they can't access the state
 simultaenously. So we need to define a form of mutex (for MUTual
 EXclusion) for a computation unit to express the intention to use the
 state that should be used by one computation unit at once. With mutexes
 as we know them, used at scale, you end up with deadlocks which are
 nasty bugs to find out and debug.
 This is all a consequence of the idea of shared state.

 Of all this story, 2 parts can be attacked to fix the problem. Either,
 define something better than what we know of mutexes (I have no idea of
 what it would look like, but that's an interesting idea) or get rid of
 shared state.
 The current concurrency strawman is doing the latter.

 One annoying thing of naive no-shared-state systems as we know them is
 that everything has to be copied from a computation unit to another.
 That's not exactly true though. It's always possible to implement a
 copy-on-write mechanism.
 Another idea is to define ownership over data. HTML5 defines
 transferable objects [3] which can be passed back and forth form
 worker to worker but can always be used in one worker at a time. Rust
 has a concept of unique pointer which is the same idea.
 Another idea would be to have data structures which live in 2 or more
 computation units, showing just an interface to each and whose integrity
 would be taken care of under the hood by the VM and not client code.
 This is what local storage does for instance.

 I will fight very hard against the idea of shared state, because there
 are very few benefits compared with what it costs in large-scale programs.

 David

 [1] http://wiki.ecmascript.org/doku.php?id=strawman:concurrency
 [2] http://wiki.ecmascript.org/doku.php?id=strawman:data_parallelism
 [3]
 http://updates.html5rocks.com/2011/12/Transferable-Objects-Lightning-Fast

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: State of discussion - module / Import syntax

2012-09-24 Thread Jussi Kalliokoski
I find this interesting as well, because I've been thinking of creating Yet
Another(TM) module loader, which would be a standalone polyfill for Harmony
modules.

While we're at it, a few questions I've been wondering:
 * Is it possible to have modules that don't export *anything*? I suppose
this would allow existing scripts like jQuery to work out of the box, if
they tie their exports to the window object, then just do `import
jquery.js` or `import * from jquery.js` if necessary.
 * If there's a cross-compilation hook on the loader, does the dependency
resolving happen before or after the compilation? Former is more efficient,
but places constraints on the compile-to-JS languages.
 * Is there a way to do async cross-compilation with the hooks? e.g.
offload parsing and everything to a worker to keep the main thread
responsive?
 * Is it possible to import things to local scope? For example, is this a
syntax error, and if not, what happens: `function x () { import y from x }`

Cheers,
Jussi

On Mon, Sep 24, 2012 at 5:00 PM, Aron Homberg i...@aron-homberg.de wrote:

 Hi all,

 I found that the recent draft / harmony PDF doesn't include a
 specification of the import syntax and
 just wanna ask if the following wiki pages in (harmony namespace) reflect
 the current state of discussion
 and if there are big changes to expect in the future regarding this:


  http://wiki.ecmascript.org/doku.php?id=harmony:modules_examples&s=import

 If it's relatively stable I would start prototyping the import syntax in
 my Traceur clone.

 Thanks and regards,
 Aron







Re: Function#fork

2012-09-24 Thread Jussi Kalliokoski
On Mon, Sep 24, 2012 at 9:19 PM, Mark S. Miller erig...@google.com wrote:



 On Mon, Sep 24, 2012 at 9:17 AM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:

 As I suspected. Glad to hear my assumptions were correct. :) I think this
 is a good thing actually, we'll have a good excuse not to have shared
 state in the language (fp yay).

 For the record, I've updated my initial JS fragment shading experiment
 using Workers (on which my previous example was based) to take use of the
 Transferrables [1] [2]. If you compare the results on Chrome and Firefox,
 the benefit of Transferrables is quite impressive.

 There seems to be a small downside to Transferrables though, as I
 couldn't figure out a way to send parts of an ArrayBuffer using them.

 Cheers,
 Jussi

 [1] http://labs.avd.io/parallel-shading/test.html


 Made laptop too hot for lap ;).


Heheh, luckily there are better technologies available for this exact use
case. :D




 [2] https://gist.github.com/2689799


 On Mon, Sep 24, 2012 at 5:59 PM, Alex Russell slightly...@google.com wrote:

 Let me put bounds on this, then:

 Approaches that enable shared mutable state are non-starters. A send
 based-approach might work (e.g., Worker Tranferrables) as might automatic
 parallelization (e.g., RiverTrail) -- but threads and thread-like semantics
 aren't gonna happen. Turn-based execution with an event loop is how JS
 works and anything that changes that apparent semantic won't fly.

 Regards

 On Mon, Sep 24, 2012 at 3:09 PM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:

 Hi Rick!

 Thanks for the links, very interesting! I was already aware of River
 Trail and other concurrency proposals for JavaScript, my purpose for this
 thread was anyway to get good clarification on what approaches are
 impossible and why and what approaches are possible and what are their
 virtues / downsides. So thanks again, those two papers are more than I
 hoped for! But I hope that there will be more discussion about this.

 Cheers,
 Jussi


 On Mon, Sep 24, 2012 at 4:55 PM, Hudson, Rick rick.hud...@intel.com wrote:

  Besides web workers there are two straw man proposals that address
  adding parallelism and concurrency to JavaScript:

  http://wiki.ecmascript.org/doku.php?id=strawman:data_parallelism and
  http://wiki.ecmascript.org/doku.php?id=strawman:concurrency.

  The Parallel JavaScript (River Trail) proposal has a prototype
  implementation available at
  https://github.com/rivertrail/rivertrail/wiki. You should be able to
  implement your example's functionality using this API.

  The latest HotPar workshop
  (https://www.usenix.org/conference/hotpar12/tech-schedule/workshop-program)
  had two interesting papers:

  Parallel Programming for the Web
  https://www.usenix.org/conference/hotpar12/parallel-programming-web

  and

  Parallel Closures: A New Twist on an Old Idea
  https://www.usenix.org/conference/hotpar12/parallel-closures-new-twist-old-idea

  These projects each address some important part of the general problem
  of adding parallelism and concurrency to JavaScript.

  Feedback is always appreciated.

  - Rick

  From: es-discuss-boun...@mozilla.org
  [mailto:es-discuss-boun...@mozilla.org] On Behalf Of Jussi Kalliokoski
  Sent: Monday, September 24, 2012 8:44 AM
  To: es-discuss
  Subject: Function#fork

 Hello everyone,

 I've been thinking a lot about parallel processing in the context of
 JavaScript, and this is really a hard problem. I'm very curious to hear
 what everyone's opinions are about it's problems and so forth, but I don't
 think an open question like that will give very interesting results, so I
 have an example problem for discussion (while it seems like a bad idea to
 me, and unlikely to ever get to the language, what I want to know is
 everyone's reasoning behind their opinions whether it's for or against).

 What if we introduce Function#fork(), which would call the function in
 another thread that shares state with the current one (how much state it
 shares is an open question I'd like to hear ideas about, but one
 possibility is that only the function arguments are shared) using a 
 similar
 signature to Function#call except that the first argument would be a
 callback, which would have error as its first argument (if the forked
 function throws with the given arguments, it can be controlled) and the
 return value of the forked function as the second argument.

  * What are the technical limitations of this?
  * What are the bad/good implications of this on the language users?
  * Better ideas?
  * etc.
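Since the questions above keep circling the signature, a short sketch can pin it down. Everything here is hypothetical: `fork` is the method under discussion, and a real implementation would run the function on another thread rather than inline as this synchronous stand-in does.

```javascript
// Hypothetical Function.prototype.fork: first argument is a Node-style
// callback (error, result); remaining arguments are passed to the function.
// A real fork would execute `this` on another thread; here it runs inline.
Function.prototype.fork = function (callback) {
  var args = Array.prototype.slice.call(arguments, 1)
  try {
    callback(null, this.apply(null, args))
  } catch (e) {
    callback(e)
  }
}

function add(a, b) { return a + b }

var result
add.fork(function (err, value) { result = err || value }, 2, 3)
// result is 5
```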

 I have a detailed example of showing Function#fork in action [1] (I
 was supposed to make a simplified test, but got a bit carried away and 
 made
 it do parallel fragment shading), it uses a simple fill

Re: Fwd: delay keyword

2012-09-05 Thread Jussi Kalliokoski
That explains a lot, I read the spec for that quite a few times to make
sure that I didn't misunderstand the case and it seemed to me that it isn't
really a spec violation. But it surely isn't desired behavior.

Cheers,
Jussi

On Wed, Sep 5, 2012 at 7:48 PM, Ian Hickson i...@hixie.ch wrote:

 On Thu, 5 Jul 2012, Boris Zbarsky wrote:
  On 7/5/12 1:50 PM, Brendan Eich wrote:
   Seems like a bug in Firefox, a violation of HTML5 even. The slow
   script dialog should not allow an event loop to nest. Cc'ing Boris for
   his opinion (this may be a known bug on file, my memory dims with
   age).
 
  [...] Say the user decides to close the tab or window when they get the
  slow script prompt (something that I think is desirable to allow the
  user to do, personally). Should this close the tab/window without firing
  unload events (a spec violation)

 That's not a spec violation, it's just equivalent to turning off
 scripts briefly and closing the browsing context.


  or should it fire them while other script from the page is on the stack
  and at some random point in its execution (hey, another spec violation)?

 The spec allows user agents to abort scripts (with or without catchable
 exceptions) upon a timeout or upon user request, so it wouldn't be a spec
 violation either way.

 http://www.whatwg.org/specs/web-apps/current-work/#killing-scripts

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: About Array.of()

2012-08-28 Thread Jussi Kalliokoski
On Tue, Aug 28, 2012 at 9:59 AM, Maciej Stachowiak m...@apple.com wrote:


 On Aug 26, 2012, at 4:30 PM, Brendan Eich bren...@mozilla.com wrote:

  Rick Waldron wrote:
  But Array.of is not. Maybe Array.new is a good name.
  Array.of is unambiguous with the current ES specification
 
  Array.new is ok too, though -- no problem with a reserved identifier as
 a property name. It's darn nice for Rubyists.

 Another possibility is Array.create, following the pattern of
 Object.create. "of" seems like a funky name for a constructor, to my taste.


True, but I'd rather see Array.create as a fix for the one argument Array
constructor, i.e. creating an array of the specified length, without
holes. For example:

Array.create = function (length) {
  if (length === 1) return [undefined]

  return Array.apply(null, Array(length))
}
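As a quick sanity check, the helper is repeated here so the snippet is self-contained (`Array.create` is the name proposed in this mail, not a standard API):

```javascript
// Proposed Array.create: a dense array of `length` undefineds,
// unlike Array(length), which produces only holes.
Array.create = function (length) {
  if (length === 1) return [undefined]

  return Array.apply(null, Array(length))
}

var dense = Array.create(3)
var visited = []
dense.forEach(function (x, i) { visited.push(i) })
// Array(3) has holes, so forEach would visit nothing;
// Array.create(3) is dense, so visited is [0, 1, 2]
```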

There's been some discussion of that earlier, actually [1].

My 2 cents goes to Array.fromElements, conveys very well what it does.

However, I'm still not quite sure what the use case is for this. For code
generation, if you know how many elements there are and what they are
enough to put them in the Array.of(...,...,...) call, why not just use
[...,...,...]? Unless it's supposed to be used for converting array-likes
to arrays, where I really don't think this is the best function signature.
For the dart example, why not just use [] and you avoid the gotcha?
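For reference, the semantics being debated are tiny; `Array.of` can be sketched in a couple of lines (this matches the behaviour as it was later standardized, modulo subclassing details):

```javascript
// Array.of: returns its arguments as an array, sidestepping the
// single-numeric-argument quirk of the Array constructor.
if (!Array.of) {
  Array.of = function () {
    return Array.prototype.slice.call(arguments)
  }
}

var a = Array.of(7)     // [7], whereas Array(7) is 7 holes
var b = Array.of(1, 2)  // [1, 2]
```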

Cheers,
Jussi

[1] https://mail.mozilla.org/pipermail/es-discuss/2012-July/023974.html


Re: About Array.of()

2012-08-28 Thread Jussi Kalliokoski
On Tue, Aug 28, 2012 at 12:40 PM, Axel Rauschmayer a...@rauschma.de wrote:

 However, I'm still not quite sure what the use case is for this. For code
 generation, if you know how many elements there are and what they are
 enough to put them in the Array.of(...,...,...) call, why not just use
 [...,...,...]? Unless it's supposed to be used for converting array-likes
 to arrays, where I really don't think this is the best function signature.
 For the dart example, why not just use [] and you avoid the gotcha?


 map and map-like scenarios are another use case:

 [1,2,3].map(Array.of)  // [[1], [2], [3]]

 But, as Domenic mentions, it does indeed compete with:

  [1,2,3].map((...x) => [...x])


Yeah, and in that case (making every element of an array an array),
actually:

[1,2,3].map(x => [x])

Which is even shorter.

I really have a hard time seeing any value in having this feature. All the
problems it's supposed to solve (at least the ones presented here) already
have better solutions. :D

Cheers,
Jussi


Re: Hash style comments

2012-08-08 Thread Jussi Kalliokoski
And even if it wasn't, it wouldn't make much sense to use the only
punctuation symbol we have left for comments where we already have two
syntaxes. :)

Cheers,
Jussi

On Wed, Aug 8, 2012 at 7:38 PM, Rick Waldron waldron.r...@gmail.com wrote:



 On Wed, Aug 8, 2012 at 11:58 AM, Trans transf...@gmail.com wrote:

 Hi. First time posting to the list, so please forgive if I am not
 following proper approach.

 I'd like to make one proposal for future of EMCAScript. I would like
 to see support for `#` comment notation.


 The # is already on hold for several potential syntax additions:

 - sealed object initializers
 http://wiki.ecmascript.org/doku.php?id=strawman:obj_initialiser_methods
 - Tuples http://wiki.ecmascript.org/doku.php?id=strawman:tuples
 - Records http://wiki.ecmascript.org/doku.php?id=strawman:records

 Rick





 The reasons for this are more interesting than one might think.

 First, of course, is the simple fact that `#` is a very common
 notation among programming languages. It is used by Shell scripts,
 Ruby, Python, even Coffeescript, and many others.

 Secondly, `#` is preferable to `//` in that it is only one character
 instead of two, and albeit subjective (IMHO) it just seems a little
 bit more aesthetic.

 But another reason, that few will at first consider, is the
 relationship between JSON and YAML. Their respective development teams
 made an effort to ensure JSON was a perfect subset of YAML. Now there
 is consideration of JSON5 (https://github.com/aseemk/json5). JSON5
 adds support for comments, however it is Javascript style comments,
 where as YAML supports `#` style comments. This causes the
 superset-subset relationship to break. To help remedy this going
 forward, it would be very helpful if EMCAScript also supported `#`
 comments. The YAML spec could in turn add support for `//` style
 comments.

 To be clear, I am not suggesting that `//` be deprecated. That would
 simply break far too much old code for no good reason! I am just
 seeking for `#` to be supported too.

 Thanks for consideration,
 trans


Re: for-of statement of sparse array

2012-07-06 Thread Jussi Kalliokoski
The only case where I've had a problem with forEach, map and friends
skipping holes is when I want a quick (to type) way to create a populated
array, say I wanted to do something like

var powersOf2 = Array(16).map((item, index) => Math.pow(2, index))

But that leads me to suggest Array.create() that would be another FP
goodie, simplifies things as `item` becomes irrelevant:

var powersOf2 = Array.create(16, (index) => Math.pow(2, index))

What do you think?

This being instead of the current:

var powersOf2 = []

for (var index=0; index<16; index++) {
  powersOf2.push(Math.pow(2, index))
}

I think it would align well with all these other array helpers.
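The proposed helper is easy to sketch; as it happens, `Array.from` (standardized later) covers this exact use case with its mapping argument, shown here as an alternative rather than as part of the proposal:

```javascript
// Proposed Array.create(n, fn): a dense array of fn(0) ... fn(n - 1).
// The name is the one suggested in this mail, not a standard API.
function create(n, fn) {
  var out = new Array(n)
  for (var i = 0; i < n; i++) out[i] = fn(i)
  return out
}

var powersOf2 = create(4, function (i) { return Math.pow(2, i) })
// powersOf2 is [1, 2, 4, 8]

// The later-standardized equivalent:
// Array.from({ length: 4 }, (_, i) => Math.pow(2, i))
```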

Cheers,
Jussi

On Fri, Jul 6, 2012 at 8:22 AM, Rick Waldron waldron.r...@gmail.com wrote:



  On Thu, Jul 5, 2012 at 6:30 PM, Rick Waldron waldron.r...@gmail.com wrote:


 On Thursday, July 5, 2012 at 9:18 PM, Brendan Eich wrote:

 Brendan Eich wrote:

 This upholes the Array forEach (and all other extras) hole-skipping.
 The deck is stacked against for(;;) iteration in my view.


 LOL, This upholds, of course.

 I had hoped this was a clever pun :)


 Currently, devs expect for-loop and while (assuming common patterns in
 play here) to be the expected way that sparse array holes are exposed --
  so from the "give me what I most likely expect" perspective, I agree with
  the "consistency wins" argument: for-of should act like for-in


 To clarify, the behaviours I'm comparing are as follows:

 var i, a = [1, 2, , 4];
 for ( i in a ) {
   console.log( i, a[i] );
 }

 0 1
 1 2
 3 4

 var i, a = [1, 2, , 4];
  for ( i = 0; i < a.length; i++ ) {
   console.log( i, a[i] );
 }

  0 1
 1 2
 2 undefined
 3 4


 var i = 0, a = [1, 2, , 4];
  while ( i < a.length ) {
   console.log( i, a[i] ); i++;
 }

 0 1
 1 2
 2 undefined
 3 4


 Where the latter 2 require an explicit check (not present) against holes.
 So I would assume that for-of would behave like...

 var i, a = [1, 2, , 4];
 for ( i of a ) {
   console.log( i );
 }

 1
 2
 4
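(For the record, the semantics that eventually shipped differ from the expectation above: `for-of` iterates indices 0 to length - 1 and yields `undefined` for holes, while `forEach` still skips them. This is easy to verify:)

```javascript
// Holes in sparse arrays: forEach skips them,
// for-of visits them as undefined (per final ES6 semantics).
var a = [1, 2, , 4]

var viaForEach = []
a.forEach(function (v) { viaForEach.push(v) })
// viaForEach is [1, 2, 4]

var viaForOf = []
for (var v of a) viaForOf.push(v)
// viaForOf is [1, 2, undefined, 4]
```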





 Rick





Re: for-of statement of sparse array

2012-07-06 Thread Jussi Kalliokoski
On Fri, Jul 6, 2012 at 10:28 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Thu, Jul 5, 2012 at 11:49 PM, Jussi Kalliokoski
 jussi.kallioko...@gmail.com wrote:
  The only case where I've had a problem with forEach, map and friends
  skipping holes is when I want a quick (to type) way to create a populated
  array, say I wanted to do something like
 
  var powersOf2 = Array(16).map((item, index) => Math.pow(2, index))
 
  But that leads me to suggest Array.create() that would be another FP
 goodie,
  simplifies things as `item` becomes irrelevant:
 
  var powersOf2 = Array.create(16, (index) => Math.pow(2, index))
 
  What do you think?
 
  This being instead of the current:
 
  var powersOf2 = []
 
  for (var index=0; index<16; index++) {
powersOf2.push(Math.pow(2, index))
  }
 
  I think it would align well with all these other array helpers.

 Once we have for-of and generators, making a range() generator that
 accomplishes the same thing is trivial, and more powerful.  (Arbitrary
 start/end/step.)

 ~TJ


Yes, maybe it isn't a worthy addition as is.


Re: for-of statement of sparse array

2012-07-06 Thread Jussi Kalliokoski
Hahah, sweet, but it doesn't make it any faster to type at all! Quite
the opposite even, it requires more thinking. ;)

But maybe I'll just add yet another function to my boilerplate:

function createArray (l, cb) {
  return Array.apply(null, new Array(l))
    .map(Function.call.bind(Number))
    .map(cb)
}
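Spelled out, the boilerplate maps each position through `Number` (yielding the index) and then through the callback; repeated here self-contained so it can be run directly:

```javascript
// createArray(l, cb): dense array of cb(0), cb(1), ..., cb(l - 1).
// Function.call.bind(Number) turns each (hole, index) pair into Number(index).
function createArray (l, cb) {
  return Array.apply(null, new Array(l))
    .map(Function.call.bind(Number))
    .map(cb)
}

var squares = createArray(4, function (i) { return i * i })
// squares is [0, 1, 4, 9]
```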

Cheers,
Jussi

On Fri, Jul 6, 2012 at 10:59 AM, Brandon Benvie bran...@brandonbenvie.com wrote:

 Hey look, it's my favorite example.

  var powersOf2 = Array.apply(null, new Array(32))
    .map(Function.prototype.call.bind(Number))
    .map(Math.pow.bind(null, 2))

   var powersOf2 = []
 
   for (var index=0; index<16; index++) {
powersOf2.push(Math.pow(2, index))
  }




Re: es-discuss Digest, Vol 65, Issue 29

2012-07-06 Thread Jussi Kalliokoski
I think you understood it right... But while the hard to implement argument
is now invalid at least for SpiderMonkey, it's a really bad idea to add
this to the language.

First of all, you read the blogpost, right? I explained pretty much how it
makes *everything* unpredictable and volatile even. Having something like
this would be a showstopper for having any serious logic done in
JavaScript. You can't do anything that makes a critical state unstable
without taking serious precautions and even then you might end up losing
the whole state. This is even worse than putting/enforcing locks and
mutexes in JS.

Second, it doesn't add any real value. If you take any real example where
something like a wait keyword would make the code easier to read, let's say
node.js only had an async readFile function:

var content

fs.readFile('blah.txt', function (e, data) {
  if (e) throw e

  content = data
}, 'utf8')

wait

// Do something with content

Here the wait will do *nothing*, because the callback isn't immediate,
except in very rare edge cases maybe, and the content will be undefined
when you start trying to process it.

So, while I still think it might be a prolific choice for browser vendors
to do this in the main thread for blocking operations, as it will make
everything go smoother regardless of a few bad eggs, and hardly anything
breaks, I don't think it should be provided elsewhere and especially not as
a language feature. And this is just my $0.02 (re: prolific choice), I
think browser vendors will disagree with my view there, but what they'll
most certainly agree with me on is that it's problematic to allow any kind
of blocking operations in the main thread (especially timing-critical), and
having alert(), confirm() and syncXHR in the first place was not a very
good choice in hindsight. That's why I didn't believe it was a bug in the
first place, seemed too designed.

Cheers,
Jussi

On Fri, Jul 6, 2012 at 12:03 PM, Patrik Stutz patrik.st...@gmail.com wrote:

 Just to be sure I understood that thing right.

 When this is the stack of the example by Jussi Kalliokoski (different
 colors marked different queue entries in the original message)

- console.log(1);
- console.log(2);
- wait;
- console.log(5);
- console.log(3);
- console.log(4);

 The wait keyword made it execute this way:

- console.log(1);
- console.log(2);
   - console.log(3);
   - console.log(4);
- console.log(5);

 Instead of this way:

- console.log(1);
- console.log(2);
- console.log(3);
- console.log(4);
- console.log(5);

 ?

 Ok, that's not how I suggested it to work. But to be honest, I think this
 approach is even better!
 You wouldn't have to implement stack pausing/resuming at all! In fact, the
 wait or delay keyword would just create a new queue, move all entries
 from the main queue to that sub-queue and then run the queue blocking to
 its end. It all would just look like a normal function call and wouldn't be
 that hard to implement at all.
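The nested-queue reading can be modelled in a few lines. This is a toy illustration of the ordering described above, not how any engine actually implements it:

```javascript
// Toy model: `wait` drains the currently queued tasks before the
// running "script" continues, producing the 1, 2, 3, 4, 5 ordering.
var queue = []
var log = []

function enqueue(fn) { queue.push(fn) }

function wait() {
  // Run queued entries to completion, blocking the caller.
  while (queue.length) queue.shift()()
}

enqueue(function () { log.push(3) })
enqueue(function () { log.push(4) })

log.push(1)
log.push(2)
wait()
log.push(5)
// log is [1, 2, 3, 4, 5]
```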

 You say that's a bug and this will indeed be right, but what about adding
 such a functionality to the spec?
 I think the argument about too hard to implement would be gone this way,
 right?





 2012/7/6 es-discuss-requ...@mozilla.org

 Today's Topics:

1. Re: for-of statement of sparse array (Jason Orendorff)
2. Re: Static Module Resolution (Aymeric Vitte)
3. Re: for-of statement of sparse array (Brendan Eich)
4. Re: for-of statement of sparse array (Brendan Eich)
5. Re: Static Module Resolution (Brendan Eich)
6. Re: for-of statement of sparse array (Brendan Eich)
7. Re: for-of statement of sparse array (Rick Waldron)
8. Re: Fwd: delay keyword (Brendan Eich)


 -- Forwarded message --
 From: Jason Orendorff jason.orendo...@gmail.com
 To: Allen Wirfs-Brock al...@wirfs-brock.com
 Cc: Brendan Eich bren...@mozilla.org, ES-Discuss 
 es-discuss@mozilla.org
 Date: Thu, 5 Jul 2012 18:10:29 -0500
 Subject: Re: for-of statement of sparse array
 On Thu, Jul 5, 2012 at 2:09 PM, Allen Wirfs-Brock al...@wirfs-brock.com
 wrote:
  Map may win at some point, who knows? It's not winning if one wants an
 array, numeric indexing, .length, the usual prototype methods.
 
  We could consider also have dense interators available:
 
   for (let v of array.denseValues) console.log(v);

 This makes sense to me, but most arrays are meant to be dense in
 practice. So perhaps it makes more sense to add methods specifically
 for sparse arrays, making

Re: delay keyword

2012-07-05 Thread Jussi Kalliokoski
On Thu, Jul 5, 2012 at 1:04 PM, David Bruant bruan...@gmail.com wrote:

  Le 05/07/2012 11:08, Patrik Stutz a écrit :

 I've read the articles you linked to and still think that 'delay' would be
 a great idea! I think this post
 (http://calculist.org/blog/2011/12/14/why-coroutines-wont-work-on-the-web/)
 is wrong in many things. In my opinion, coroutines would be even simpler
 to implement (depending on the current design of the engine).

 I am not a JavaScript engine implementor (Dave Herman is, by the way),
 but from what I know, coroutines would require to store stack frames to
 enable running them later. If it's not something done already, it may be
 complicated to do. JavaScript engines are engineered for performance of
 today's JavaScript and today, there is no coroutine (this probably stands
 for generators, but some engines already have generators)


Just FYI, SpiderMonkey already has stack pause/resume implemented, I think
it has been there at least since Fx8. At least to some extent. What's
missing is a way for JS to leverage this (without hacks). Ugh, I really
need to write that blog post, this is a very complex subject.


  Also, using generators is complicated as hell and in the end still does
 nothing useful since yield is not asynchronus at all. All the libraries
 that use generators do a lot to make you feel like it were asynchronous
 behind the scenes, but it isnt.

  Do you really want to add a feature to JavaScript, for that you need a
 complicated library to use it? And even with such a library, it's still
 much more complicated to use than a simple delay keyword.

  While generators  libraries for it would overcomplicate JavaScript,
 delay would be dead simple to use. It would fit much better into the
 language since the rest of the language is also designed very simple.

 Your post contains a lot of opinions, feelings and things that you think
 and adding new features is not really about what a particular person thinks
 in my opinion.
 For any features to be added to the language, nothing is really about
 opinion. It all start with use cases.
 Could you show something that 'delay' enables that is not possible
 currently?
 For instance, private names cannot be implemented today and they answer to
 a need of JavaScript developers who want better encapsulation idioms.

 If there is no such thing, is it a major improvement by comparison to what
 is possible today?
 For instance, WeakMaps, Maps and Sets can actually be implemented today,
 but with bad performances or relying on mechanisms with unreliable
 performance, so having them implemented in the language offers some
 guarantees of performance.


 I am not asking for yes/no answers, but actual code snippets taken out of
 real projects showing how 'delay' would make the code easier to read or
 understand, to reason about.


 One point made by Dave in his article is very compelling:
 Once you add coroutines, you never know when someone might call yield.
 Any function you call has the right to pause and resume you whenever they
 want.
 = This makes very difficult to build programs using libraries.

 Generators are a lot like coroutines, with one important difference: they
 only suspend their own function activation.
 = And suddenly, your code is not affected by another function being a
 generator. When you call a function, it either never return (infinite
 loop), returns or throws an exception (which is just a different form of
 return for that matter).



  all you'd have change in the example is use something else than
 setTimeout, and make the delay a function call instead..

  Cool! But what are the alternatives to setTimeout on the browser side
 wich dont have any delay?

 setTimeout(f ,0) could work, but has a small delay for historical reasons.
 Microsoft is coming up with setImmediate (I haven't heard anyone
 following)
 https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/setImmediate/Overview.html
 There is also a trick with calling postMessage to send yourself a message:
 http://dbaron.org/log/20100309-faster-timeouts

 David


Yes, the postMessage trick is the one I was referring to, but setImmediate
would probably work as well. In node.js, there's process.nextTick(cb), but
I don't know of a way to pause/resume a stack in node.

Cheers,
Jussi


Re: Fwd: delay keyword

2012-07-05 Thread Jussi Kalliokoski
Ok, here's the relevant blog post I mentioned writing:
blog.avd.io/posts/js-green-threads . You can discuss / give feedback here
or on HackerNews: http://news.ycombinator.com/item?id=4203749 .

Cheers,
Jussi

On Thu, Jul 5, 2012 at 7:38 PM, Brendan Eich bren...@mozilla.org wrote:

 Patrik Stutz wrote:

 Ok, maybe this can indeed work. But instead of the import x from y I
 really would make it like Isaac already suggested it:

 var a = import a.js;


  Read https://mail.mozilla.org/pipermail/es-discuss/2012-June/023760.html
  and note "control-insensitive".

 /be



Re: A few more questions about the current module proposal

2012-07-05 Thread Jussi Kalliokoski
On Thu, Jul 5, 2012 at 8:06 PM, Russell Leggett
russell.legg...@gmail.com wrote:

 Sorry I haven't gotten a chance to get into this thread sooner, let me
 catch up a bit:

 On Wed, Jul 4, 2012 at 2:56 PM, Jussi Kalliokoski 
 jussi.kallioko...@gmail.com wrote:

 On Wed, Jul 4, 2012 at 9:13 PM, Sam Tobin-Hochstadt sa...@ccs.neu.edu
  wrote:

 On Wed, Jul 4, 2012 at 12:29 PM, Jussi Kalliokoski
 jussi.kallioko...@gmail.com wrote:
 
  1) How does the static resolution and static scoping behave when out
 of the
  normal context. As an example if `import` is in an `eval()` call, what
 would
  happen:
 
  var code = loadFromURL('http://example.org/foo.js') // content:
 `import foo
  from bar`
  eval(code)
  console.log(foo) // ???

 First, what does `loadFromURL` do?  That looks like sync IO to me.


 Indeed it is, to simplify things. Let's pretend it's a function that gets
 the text contents of a URL.


  Would this example block until the module is resolved and loaded? Would it
  throw? What happens, exactly? As my $0.02 goes, I think it's a bad idea to
  ban import in eval.

 Second, it depends on whether bar is a previously-loaded module.
 For example, if bar is a library provided by the host environment,
 such as the browser, then everything will be fine, and the code will
 import `foo` successfully.  If bar is a remote resource, this will
 throw -- we're not going to add synchronous IO to `eval` (or to
 anything else).


 So basically, eval()'ing something acquired via XHR would no longer give
  the same result as it does if the same script is in a script tag? Suffice it
  to say I disagree strongly with this choice, but I'm sure there is a strong
  rationale behind it.


 So I guess my take on it is that any import statement should be illegal
 inside of eval. Looking at the proposal, that doesn't sound like it,
 though. Let's take the loadFromUrl out of the equation.

 import foo from baz
 var code = 'import foo from bar';
 eval(code);
 console.log(foo);

 There is a reason why import got special syntax, and it wasn't just so
 that it would be easier to type. Putting it inside eval eliminates any
 ability for static analysis to happen upfront during the parse before
 actually executing. The import dependency cannot be seen, and in this case
 there is a collision on foo which should have been detected at
 compilation time. I can think of a dozen other reasons why imports should
 not be allowed in eval, but that's just one which seems like a pretty clear
 problem.


The implication of banning import in eval is that modules written for an
existing eval-based module loader can't adopt the new module system, quite
possibly forcing that decision onto projects using those modules as well. How
much this would actually slow down adoption, I can't tell. Maybe it's
insignificant.

Another thing it means is that eval() would no longer do what it says on
the box, i.e. evaluate an expression of JS, as the code inside eval() would
be a whole different JS.
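(As it turned out, engines settled on rejecting the declaration form outright: an `import` statement is simply not part of the eval grammar, which is observable today:)

```javascript
// An import *declaration* is not in the eval grammar, so this throws
// a SyntaxError immediately rather than attempting any module loading.
var error
try {
  eval("import foo from 'bar';")
} catch (e) {
  error = e
}
// error is a SyntaxError
```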



 On Thu, Jul 5, 2012 at 8:56 AM, Kevin Smith khs4...@gmail.com wrote:


 One question, though:  branching on the file extension, as above, will
 not generally work.  The source code might be served through a URL that
 does not have a file extension.  On the web though, we'll generally have
 access to a Content-Type header.  In the current design, there's doesn't
 appear to be a way to get that information.


 This makes a lot of sense to me. Great idea.


+1

Cheers,
Jussi


Re: Fwd: delay keyword

2012-07-05 Thread Jussi Kalliokoski
On Thu, Jul 5, 2012 at 8:50 PM, Brendan Eich bren...@mozilla.org wrote:

 Jussi Kalliokoski wrote:

  Ok, here's the relevant blog post I mentioned writing:
  http://blog.avd.io/posts/js-green-threads . You can discuss / give
  feedback here or on HackerNews:
  http://news.ycombinator.com/item?id=4203749 .


 Seems like a bug in Firefox, a violation of HTML5 even. The slow script
 dialog should not allow an event loop to nest. Cc'ing Boris for his opinion
 (this may be a known bug on file, my memory dims with age).


I don't think it's a bug; it seems meaningful, since all the blocking
calls have this behaviour. Looks like SM has stack pausing / continuing in
place as well; why, if not for this? Seems reasonable to me. But if it's a
bug, I'm extremely sorry I haven't filed it, I didn't think it was a bug
when I discovered it around Fx8.



 Chrome doesn't run onmessage when I continue (Wait) the Kill-or-Wait sad
 tab dialog. Safari and Opera (12) seem not to put up a slow script dialog,
 but perhaps I was impatient (I waited a minute or so).


Yes, this is Firefox-specific. I've tested all other browsers at my
disposal and they don't have this behavior. Seems like a good idea adding
it though, could make the browsers a lot more responsive regardless of a
few bad websites, and I don't think those websites would break horribly
because of this.

Cheers,
Jussi