Re: Cloning WeakSet/WeakMap

2018-02-09 Thread David Bruant
2018-02-09 10:05 GMT-05:00 Michał Wadas :

> English isn't my native language, so I probably made a mistake.
>
oh ok, sorry for my misinterpretation


> I was asked to add WeakSet.prototype.union(iterable), creating a new WeakSet
> instance including data from both the iterable and the original WeakSet.
>
ok, I don't have an opinion on this idea

David


>
>
>
> On 9 Feb 2018 4:01 pm, "David Bruant"  wrote:
>
>> Hi,
>>
>> My understanding is that cloning a WeakSet into a Set would remove all
>> its properties related to security and garbage collection.
>>
>> The properties related to security and garbage collection of WeakSet are
>> based on the fact that its elements are not enumerable by someone who would
>> only be holding a reference to the WeakSet. If you want to "clone" a
>> WeakSet into a Set, it means you expect the set of elements to be
>> deterministically enumerable.
>>
>> WeakSets and Sets, despite their close names and APIs, are used in
>> different circumstances.
>>
>> David
>>
>>
>> 2018-02-09 9:53 GMT-05:00 Michał Wadas :
>>
>>> Hi.
>>>
>>> I was asked to include a way to clone a WeakSet in the Set builtins
>>> proposal. Is there any consensus on the security of such an operation?
>>>
>>> Michał Wadas
>>>
>>> ___
>>> es-discuss mailing list
>>> es-discuss@mozilla.org
>>> https://mail.mozilla.org/listinfo/es-discuss
>>>
>>>
>>


Re: Cloning WeakSet/WeakMap

2018-02-09 Thread David Bruant
Hi,

My understanding is that cloning a WeakSet into a Set would remove all its
properties related to security and garbage collection.

The properties related to security and garbage collection of WeakSet are
based on the fact that its elements are not enumerable by someone who would
only be holding a reference to the WeakSet. If you want to "clone" a
WeakSet into a Set, it means you expect the set of elements to be
deterministically enumerable.

WeakSets and Sets, despite their close names and APIs, are used in
different circumstances.

David


2018-02-09 9:53 GMT-05:00 Michał Wadas :

> Hi.
>
> I was asked to include a way to clone a WeakSet in the Set builtins
> proposal. Is there any consensus on the security of such an operation?
>
> Michał Wadas
>


Re: Array Comprehensions

2017-02-07 Thread David Bruant

On 06/02/2017 at 17:59, Ryan Birmingham wrote:

Hello all,

I frequently find myself desiring a short array or generator 
comprehension syntax. I'm aware that there are functional ways around 
use of comprehension syntax, but I personally (at least) love the 
syntax in the ES reference 
(https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Array_comprehensions).


The best previous discussion on this that I found was six years old 
(https://esdiscuss.org/topic/array-comprehensions-shorter-syntax) and 
answers some of my questions, raising others. That said, I wanted to ask:


  * Why is the Comprehension Syntax in the reference yet not more
standard? It feels almost like a tease.


Proposals to change the standard are listed here:
https://github.com/tc39/proposals
The process for a feature to become standard is described here:
https://tc39.github.io/process-document/


  * How do you usually approach or avoid this issue?
  * Do you think we should look at improving and standardizing the
comprehension syntax?

Some might argue it is yet another instance of "superficial sugar 
obsession" [1] :-p I don't know where I stand personally.


In any case, if you want to start, write down a proposal (it can be 20 
lines in a gist [2]) including programs that are hard to express in 
JavaScript and whose readability would be significantly improved by 
the new syntax.
Perhaps submit it to the mailing-list and try to find a "TC39 champion" 
(a criterion to enter stage 1).
At the very least, the proposal will be listed in the stage 0 proposals 
list [3].


David

[1] https://twitter.com/mikeal/status/828674319651786754
[2] http://gist.github.com/
[3] https://github.com/tc39/proposals/blob/master/stage-0-proposals.md


Re: Has there been any discussions around standardizing socket or file io usage?

2016-06-17 Thread David Bruant

Hi Kris,

On 17/06/2016 06:44, Kris Siegel wrote:
I didn't see this in the archives but I was curious if any 
consideration has been given for standardizing on features more 
commonly found in most other language's standard library.


For example reading and writing to sockets in JavaScript requires 
platform specific libraries and works very differently between them. 
The same goes for file io (which would obviously need restrictions 
when run in, say, a web browser).


Building these in would make JavaScript more universal and easier to 
learn (you learn one way to access a resource instead of 2 or 3 very 
different ways).


I would be happy to work on a proposal for such changes if they were 
desired by the community. Thoughts?
I understand your motivation, but I believe standardisation isn't the 
right avenue for solving the problem you describe.


Specifically, even if there were a standard, why would Node or browser 
makers implement it, given that they already have an API for the job and 
lots of code is already written on top of these APIs?


Writing a standard is not a guarantee for implementation. Implementing 
something is lots of work for browser vendors and Node.js (and they're 
not in shortage of things to do), so they usually need some confidence 
that the new thing adds enough value to be worth the cost.
One way to convey such confidence is to start the work, implement it 
as a library on top of current APIs, and show that it is adopted by lots 
of people. Adoption is usually an excellent proxy for value. That's 
how we got document.querySelectorAll (via jQuery) and Promise (via the 
gazillion promise libraries and the Promise/A+ spec), for instance.


In this case, from experience watching proposals come and go on 
standards mailing-lists, I doubt this will be of interest to enough 
people to be worth it. But that's just my own opinion and I would love 
to be proven wrong.


One more thing to regret, maybe: https://www.youtube.com/watch?v=7eNFQqMSxtU

David


Re: PRNG - currently available solutions aren't addressing many use cases

2015-12-01 Thread David Bruant

On 01/12/2015 20:20, Michał Wadas wrote:


As we all know, JavaScript as language lacks builtin randomness 
related utilities.
All we have is Math.random() and environment provided RNG - 
window.crypto in browser and crypto module in NodeJS.

Sadly, these APIs have serious disadvantages for many applications:

Math.random
- implementation dependant
- not seedable
- unknown entropy
- unknown cycle
(...)

I'm surprised by the level of control you describe (knowing the cycle, 
seeding, etc.). If you have all of this, then your PRNG is just a 
deterministic function. Why generate numbers which "look" random if 
you want to control how they're generated?



window.crypto
- not widely known

This is most certainly not a good reason to introduce a new API.

As we can see, all these are either unreliable or designed mainly for 
cryptography.


That's why we need an easy-to-use, seedable random generator.

Can you provide use cases that the current options you listed make 
impossible or particularly hard?




Why shouldn't it be provided by library?

- the average developer can't and doesn't want to find and verify the 
quality of a library - "cryptography is hard" and math is hard too


A library or a browser implementation would both need to be "validated" 
by a test suite verifying some statistical properties. My point is that 
it's the same amount of work to validate the "quality" of the 
implementation.



- library size limits its usability on the Web


How big would the library be?
How unreasonable would it be compared to other libraries for other 
use cases?
I'm not an expert on the topic, but from the little I know, it's hard 
to imagine a PRNG function weighing more than 10 KB.
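
For scale, here is a sketch of a seedable generator built on mulberry32, a tiny PRNG in wide circulation (not cryptographically secure); it suggests a userland library can indeed stay far under 10 KB:

```js
// mulberry32: a ~10-line seedable PRNG (illustrative, not crypto-safe).
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // in [0, 1)
  };
}

const rngA = mulberry32(42);
const rngB = mulberry32(42);
// Same seed, same sequence: the reproducibility Math.random() lacks.
console.log(rngA() === rngB()); // true
```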


- no standard interface for PRNG - library can't be replaced as 
drop-in replacement


We've seen in the past that good libraries become de-facto standards (at 
the library level, not the platform level) and candidates for being 
shimmed when the library is useful and there is motivation for a drop-in 
replacement (jQuery > Zepto, underscore > lodash). This can happen.
We've also seen ES Promises respect the Promise/A+ spec, or come close 
enough if they don't (I'm not very familiar with the details).


David


Re: An update on Object.observe

2015-11-03 Thread David Bruant

Hi,

On 03/11/2015 12:26, Alexander Jones wrote:
In my opinion, the fundamental record type we build our JS on should 
be getting dumber, not smarter. It feels inappropriate to be piling 
more difficult-to-reason-about mechanisms on top before reeling in 
exotic host objects.
JS objects were never only the record you're talking about. They were 
also used for OOP (used as dynamic this values if a property was a 
function and called after a dot).
And DOM objects also exposed things that had no equivalents in ES 
objects (aside from the easy "host objects" escape), so the language 
needed to catch up (as it did in ES5) despite becoming more 
difficult to reason about.


Immutable data structures might be what you're looking for though
https://github.com/sebmarkbage/ecmascript-immutable-data-structures

With Proxy out of the bag, I'm not so hopeful for the humble Object 
anymore.
This is a surprising statement. By exposing the low-level object API as 
a userland API (proxy traps + the Reflect API), proxies make the 
low-level object API subject to the same backward-compat constraints as 
every other API.
If nothing else, the very existence of proxies puts an end to the 
evolution of the object model.


David


Re: ECMAScript 2015 is now an Ecma Standard

2015-06-17 Thread David Bruant

Lots of the changes were long awaited. ES2015 is an important milestone.
More important even is the momentum and recent changes to the way people 
can contribute to the standard.


Thank you to everyone involved in making all of this happen!

David

On 17/06/2015 17:46, Allen Wirfs-Brock wrote:
Ecma international has announced that its General Assembly has 
approved ECMA-262-6 /The ECMAScript 2015 Language Specification/ as an 
Ecma standard http://www.ecma-international.org/news/index.html


The official document is now available from Ecma in HTML at
http://www.ecma-international.org/ecma-262/6.0

and as a PDF at
http://www.ecma-international.org/ecma-262/6.0/ECMA-262.pdf

I recommend that people immediately start using the Ecma HTML 
version in discussions where they need to link to sections 
of the specification.


Allen





State of the Loader API

2015-02-23 Thread David Bruant

Hi,

I was trying to find the module Loader in the latest draft, but found 
out that it's been removed from it [1][2].
YK: The loader pipeline will be done in a "living spec" (a la HTML5) 
so that Node and the browser can collaborate on shared needs.

I haven't been able to find this new document yet.

The module loader wiki page [3] (is the wiki relevant for anything 
other than historical reasons at this point?) points to the ES6 spec.


On the topic, I have found these:
https://gist.github.com/dherman/7568080
https://github.com/jorendorff/js-loaders
https://github.com/tc39/tc39-notes/blob/master/es6/2015-01/interfacing-with-loader-spec.pdf

What are the reference documents on module loader?

Thanks,

David

[1] 
https://github.com/rwaldron/tc39-notes/blob/b1af70ec299e996a9f1e2e34746269fbbb835d7e/es6/2014-09/sept-25.md#conclusionresolution-1
[2] 
https://github.com/rwaldron/tc39-notes/blob/844dfbcb87d66f3f8f1222ccb6f4a41e2ed4afd0/es6/2014-11/nov-18.md#41-es6-draft-status-update

[3] http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders


Re: How would we copy... Anything?

2015-02-23 Thread David Bruant

Hi,

On 23/02/2015 10:10, Michał Wadas wrote:

Cloning objects is a long-requested feature.
"clone object javascript" yields 1,480,000 results on Google.

I'd like to share this as an answer
http://facebook.github.io/immutable-js/#the-case-for-immutability
"If an object is immutable, it can be "copied" simply by making another 
reference to it instead of copying the entire object. Because a 
reference is much smaller than the object itself, this results in memory 
savings and a potential boost in execution speed for programs which rely 
on copies (such as an undo-stack)."


```js
var map1 = Immutable.Map({a:1, b:2, c:3});
var clone = map1;
```

Despite people *saying* all over the Internet they want cloning, maybe 
they want immutability?



My proposal is to create a new well-known symbol - Symbol.clone - and a
corresponding method on Object - Object.clone.

Default behavior for an object is to throw on a clone attempt.
Object.prototype[Symbol.clone] = () => { throw new TypeError(); }
Users are encouraged to define their own Symbol.clone logic.

Primitives are cloned easily.
Number.prototype[Symbol.clone] = String.prototype[Symbol.clone] =
Boolean.prototype[Symbol.clone] = function() { return this.valueOf(); }

Primitives are immutable, no need to clone them.
If you're referring to "primitive objects", it might be better to forget 
about this weird corner of the language than to polish it.


Back to something you wrote above:

Users are encouraged to define their own Symbol.clone logic.
Perhaps this cloning protocol can be purely implemented in userland as a 
library and doesn't need support from the language. That's one of the 
reasons symbols have been introduced after all.
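
A minimal sketch of such a userland library (the `cloneSymbol` registry and `clone` helper are illustrative names, not a proposed API):

```js
const cloneSymbol = Symbol("clone");

// Generic entry point: primitives pass through, objects must opt in.
function clone(value) {
  if (value === null || typeof value !== "object") {
    return value; // primitives are immutable, return them as-is
  }
  const method = value[cloneSymbol];
  if (typeof method !== "function") {
    throw new TypeError("value does not implement the clone protocol");
  }
  return method.call(value);
}

class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  [cloneSymbol]() { return new Point(this.x, this.y); }
}

const p = new Point(1, 2);
const q = clone(p);
console.log(q !== p && q.x === 1 && q.y === 2); // true
```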


David


Object.freeze(Object.prototype) VS reality

2015-02-19 Thread David Bruant

Hi,

Half a million times the following meta-exchange happened on es-discuss:
- if an attacker modifies Object.prototype, then you're doomed in all 
sorts of ways

- Don't let anyone modify it. Just do Object.freeze(Object.prototype)!

I've done it on client-side projects with reasonable success. I've just 
tried on a Node project and lots of dependencies started throwing 
errors. (I imagine the difference is that in Node, it's easy to create 
projects with a big tree of dependencies, which I haven't done much 
on the client side.)


I tracked down a few of these errors and they all seem to relate to the 
override mistake [1].
* In jsdom [2], trying to add a "constructor" property to an object 
fails because Object.prototype.constructor is configurable: false, 
writable: false
* in tough-cookie [3] (which is a dependency of the popular 'request' 
module), trying to set Cookie.prototype.toString fails because 
Object.prototype.toString is configurable: false, writable: false


Arguably, they could use Object.defineProperty, but they won't because 
it's less natural and it'd be absurd to try to fix npm. The 
Cookie.prototype.toString case is interesting. Of all the methods being 
added, only toString causes a problem. Using Object.defineProperty for 
this one would be an awkward inconsistency.
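
The failure mode can be reproduced in a few lines (a hypothetical `Cookie` stand-in, not the actual tough-cookie code):

```js
Object.freeze(Object.prototype);

function Cookie() {}

// The override mistake: Object.prototype.toString is now non-writable,
// so *assigning* a shadowing toString on Cookie.prototype throws in
// strict mode (and silently fails in sloppy mode).
let threw = false;
try {
  (function () {
    "use strict";
    Cookie.prototype.toString = function () { return "Cookie"; };
  })();
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true

// Object.defineProperty takes the [[DefineOwnProperty]] path and works.
Object.defineProperty(Cookie.prototype, "toString", {
  value: function () { return "Cookie"; },
  writable: true,
  configurable: true
});
console.log(String(new Cookie())); // "Cookie"
```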



So, we're in a state where no module needs to modify Object.prototype, 
but I cannot freeze it, because the override mistake makes any script 
that tries to set a toString property on an object throw.
Because of the override mistake, I either have to leave Object.prototype 
mutable (despite no module needing it to be mutable) or freeze it up 
front and not use popular modules like jsdom or request.


It's obviously possible to replace all built-in props by accessors [4], 
of course, but this is a bit ridiculous.
Can the override mistake be fixed? I imagine no web-compat issues would 
occur, since this change is about throwing fewer errors.


David

[1] http://wiki.ecmascript.org/doku.php?id=strawman:fixing_override_mistake
[2] 
https://github.com/tmpvar/jsdom/blob/6c5fe5be8cd01e0b4e91fa96d025341aff1db291/lib/jsdom/utils.js#L65-L95
[3] 
https://github.com/goinstant/tough-cookie/blob/c66bebadd634f4ff5d8a06519f9e0e4744986ab8/lib/cookie.js#L694
[4] 
https://github.com/rwaldron/tc39-notes/blob/c61f48cea5f2339a1ec65ca89827c8cff170779b/es6/2012-07/july-25.md#fix-override-mistake-aka-the-can-put-check



Re: Sharing a JavaScript implementation across realms

2015-01-13 Thread David Bruant

On 13/01/2015 13:21, Anne van Kesteren wrote:

A big challenge with self-hosting is memory consumption. A JavaScript
implementation is tied to a realm and therefore each realm will have
its own implementation. Contrast this with a C++ implementation of the
same feature that can be shared across many realms. The C++
implementation is much more efficient.
Why would a JS implementation *have to* be tied to a realm? I understand 
if this is how things are done today, but does it need to be?
Asked differently, what is so different about JS (vs C++) as an 
implementation language?
It seems like the sharing that is possible in C++ should be possible 
in JS.

What is (or can be) shared in C++ that cannot in JS?


PS: Alternative explanation available here:
https://annevankesteren.nl/2015/01/javascript-web-platform

From your post :
More concretely, this means that an implementation of 
|Array.prototype.map| in JavaScript will end up existing in each 
realm, whereas an identical implementation of that feature in C++ will 
only exist once.
Why? You could have a single privileged-JS implementation, and each 
content-JS context (~realm) would only have access to a proxy to 
Array.prototype.map (transparently forwarding calls, which I imagine can 
be optimized/inlined by engines into a direct call in the optimistic 
case). It would cost a proxy per content-JS context, but that's already 
much less than a full Array.prototype.map implementation.
In a hand-wavy fashion, I'd say the proxy handler can be shared across 
all content-JS contexts. There is per-content storage to be created 
(lazily) in case Array.prototype.map is mutated (property added, etc.), 
but the normal case is fine (no mutation on built-ins means no cost).
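
A hand-wavy userland sketch of the idea (all names illustrative; a real engine would do this below the language level):

```js
// One shared "privileged" implementation of map-like behavior...
const sharedMapImpl = function (callback, thisArg) {
  const result = [];
  for (let i = 0; i < this.length; i++) {
    result.push(callback.call(thisArg, this[i], i, this));
  }
  return result;
};

// ...and one cheap forwarding proxy per content context, instead of
// one full copy of the implementation per realm.
function makeRealmView(shared) {
  return new Proxy(shared, {}); // empty handler: forwards everything
}

const realmAMap = makeRealmView(sharedMapImpl);
const realmBMap = makeRealmView(sharedMapImpl);

console.log(realmAMap.call([1, 2, 3], x => x * 2)); // [2, 4, 6]
```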


One drawback is Object.freeze(Array.prototype.map). For this to 
work with proxies as they are, either the privileged-JS 
Array.prototype.map needs to be frozen (unacceptable, of course), or 
each proxy needs its own target (which is as bad as one 
Array.prototype.map implementation per content-JS context).
The solution might be to allow proxies in privileged-JS contexts that 
are more powerful than the standard ones (for instance, they could 
pretend the object is frozen even when the underlying target isn't).


This is a bit annoying as a suggestion, because it means JS isn't really 
implemented in normal JS any longer, but it sounds like a reasonable 
trade-off (that's open for debate, of course).
The "problem" with proxies as they are today is that they were 
retrofitted into JS, which severely constrained their design, making use 
cases like the one we're discussing (or even membranes) possible but 
cumbersome.

Privileged-JS taking some liberties with this design sounds reasonable.

(It was pointed out to me that SpiderMonkey has some tricks to share 
the bytecode of a JavaScript implementation of a feature across 
realms, though not across threads (still expensive for workers). And 
SpiderMonkey has the ability to load these JavaScript implementations 
lazily and collect them when no longer used, further reducing memory 
footprint. However, this requires very special code that is currently 
not available for features outside of SpiderMonkey. Whether that is 
feasible might be up for investigation at some point.) 
For contexts running in parallel to be able to share (read-only) data in 
JS, we would need immutable data structures in JS, I believe.

https://mail.mozilla.org/pipermail/es-discuss/2014-November/040218.html
https://mail.mozilla.org/pipermail/es-discuss/2014-November/040219.html

David


Re: Array.forEach() et al with additional parameters

2014-12-22 Thread David Bruant

On 20/12/2014 13:47, Gary Guo wrote:

bindParameter function is not very hard to implement:
```
Function.prototype.bindParameter = function(idx, val){
var func = this;
return function(){
var arg = Array.prototype.slice.call(arguments);
arg[idx] = val;
return func.apply(this, arg); // return the result, not just call it
}
}
```

It's even easier if you use bind ;-)

Function.prototype.bindParameter = function(...args){
return this.bind(undefined, ...args)
}

David


Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread David Bruant

On 04/12/2014 09:55, Andreas Rossberg wrote:

On 4 December 2014 at 00:54, David Bruant  wrote:

The way I see it, data structures are a tool to efficiently query data. They
don't *have* to be arbitrarily mutable anytime for this purpose.
It's a point of view problem, but in my opinion, mutability is the problem,
not sharing the same object. Being able to create and share structured data
should not have to mean it can be modified by anyone anytime. Hence
Object.freeze, hence the recent popularity of React.js.

I agree, but that is all irrelevant regarding the question of weak
maps, because you cannot freeze their content.
The heart of the problem is mutability, and .clear is a mutability 
capability, so it's relevant. WeakMaps are effectively frozen for some 
bindings if you don't have the keys.



So my question stands: What would be a plausible scenario where
handing a weak map to an untrusted third party is not utterly crazy to
start with?
Sometimes you call functions you didn't write and pass arguments 
to them. WeakMaps are new, but APIs will have functions with WeakMaps as 
arguments. I don't see what's crazy. It'd be nice if I didn't have to 
review all the npm packages I use to make sure they don't use .clear 
when I pass a weakmap.
If you don't want to pass the WeakMap directly, you have to create a new 
object "just in case" (cloning or wrapping), which carries its own 
obvious efficiency cost. Security then comes at the cost of performance, 
while both could be achieved if the same safe-by-default weakmap could 
be shared.
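
The wrapping opt-in mentioned above can be sketched as follows (illustrative names; it forwards everything except clear):

```js
// Expose get/set/has/delete but withhold the wholesale-clear capability.
function clearlessView(weakMap) {
  return {
    get: (k) => weakMap.get(k),
    set: (k, v) => { weakMap.set(k, v); },
    has: (k) => weakMap.has(k),
    delete: (k) => weakMap.delete(k)
    // no clear: holders of the view cannot empty the map wholesale
  };
}

const secrets = new WeakMap();
const key = {};
secrets.set(key, "s3cret");

const view = clearlessView(secrets);
console.log(view.get(key)); // "s3cret"
console.log(typeof view.clear); // "undefined"
```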



In particular, when can giving them the ability to clear
be harmful, while the ability to add random entries, or attempt to
remove entries at guess, is not?

I don't have an answer to this case, now.
That said, I'm uncomfortable with the idea of seeing a decision being 
made that affects the language of the web until its end based on the 
inability of a few people to find a scenario that is deemed plausible by 
a few other people within a limited timeframe. It's almost calling for 
an "I told you so" one day.

I would return the question: can you demonstrate there is no such scenario?

We know ambient authority is a bad thing; examples are endless in JS.
The ability to modify global variables has been the source of bugs and 
vulnerabilities.
JSON.parse implementations were modified by browsers because they used 
the (potentially maliciously replaced) global Array as a constructor, 
which led to data leakage.
WeakMap.prototype.clear is ambient authority. Admittedly, its effects 
are less broad and its malicious usage is certainly more subtle.


David


Re: Removal of WeakMap/WeakSet clear

2014-12-03 Thread David Bruant

On 03/12/2014 19:10, Jason Orendorff wrote:

On Wed, Dec 3, 2014 at 9:04 AM, David Bruant  wrote:

A script which builds a weakmap may legitimately later assume the weakmap is
filled. However, passing the weakmap to a mixed-trusted (malicious or buggy)
script may result in the weakmap being cleared (and break the assumption of
the weakmap being filled and trigger all sorts of bugs). Like all dumb
things, at web-scale, it will happen.

OK. I read the whole thing, and I appreciate your writing it.

There's something important that's implicit in this argument that I
still don't have yet. If you were using literally any other data
structure, any other object, passing a direct reference to it around
to untrusted code would not only be dumb, but obviously something the
ES spec should not try to defend against. Right? It would be goofy.
Object.freeze and friends were added to the ES spec for the very purpose 
of being able to pass a direct reference to an object and defend against 
unwanted mutations.

Is Object.freeze goofy?


The language just is not that hardened. Arguably, the point of a data
structure is to be useful for storing data, not to be "secure" against
code that **has a direct reference to it**. No?
The way I see it, data structures are a tool to efficiently query data. 
They don't *have* to be arbitrarily mutable anytime for this purpose.
It's a point of view problem, but in my opinion, mutability is the 
problem, not sharing the same object. Being able to create and share 
structured data should not have to mean it can be modified by anyone 
anytime. Hence Object.freeze, hence the recent popularity of React.js.



So what's missing here is, I imagine you must see WeakMap, unlike all
the other builtin data structures, as a security feature.
I'm not sure what you mean by "security feature". Any API is a security 
feature of sorts.



Specifically, it must be a kind of secure data structure where
inserting or deleting particular keys and values into the WeakMap does
*not* pose a threat, but deleting them all does.

Can you explain that a bit more?
I see the invariant you're talking about, I agree it's elegant, but to
be useful it also has to line up with some plausible security use case
and threat model.
The ability to clear any WeakMap anytime needs to be equally justified 
in my opinion. I'm curious about plausible use cases.


What about making 'clear' an own property of weakmaps and making it only 
capable of clearing the weakmap it's attached to?
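
That idea can be approximated in userland today. A sketch where the clearing capability is a separate function handed out deliberately (the factory name is illustrative, and "clearing" is emulated by swapping the underlying WeakMap):

```js
function makeClearableWeakMap() {
  let wm = new WeakMap();
  const map = {
    get: (k) => wm.get(k),
    set: (k, v) => { wm.set(k, v); },
    has: (k) => wm.has(k),
    delete: (k) => wm.delete(k)
  };
  // The clear capability is its own value: share `map` widely,
  // hand `clear` only to code that should be able to wipe it.
  const clear = () => { wm = new WeakMap(); };
  return { map, clear };
}

const { map, clear } = makeClearableWeakMap();
const k = {};
map.set(k, 1);
clear();
console.log(map.has(k)); // false
```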


David


Re: Removal of WeakMap/WeakSet clear

2014-12-03 Thread David Bruant

On 03/12/2014 16:26, Jason Orendorff wrote:

On Wed, Dec 3, 2014 at 8:35 AM, Andreas Rossberg  wrote:

(Back to the actual topic of this thread, you still owe me a reply
regarding why .clear is bad for security. ;) )

I'd like to hear this too, just for education value.
Unlike Map.prototype.clear, WeakMap.prototype.clear is a capability that 
cannot be implemented in userland.
With WeakMap.prototype.clear, any script can clear any weakmap even if 
it knows none of the weakmap keys.
A script which builds a weakmap may legitimately later assume the 
weakmap is filled. However, passing the weakmap to a mixed-trusted 
(malicious or buggy) script may result in the weakmap being cleared (and 
break the assumption of the weakmap being filled and trigger all sorts 
of bugs). Like all dumb things, at web-scale, it will happen.
WeakMap.prototype.clear is ambient authority whose necessity remains to 
be proven.
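
The hazard is easiest to illustrate with Map, whose clear did ship and behaves the same way: a callee that merely receives the collection can wipe it without knowing a single key.

```js
const entryKey = {};
const registry = new Map([[entryKey, "important"]]);

// A mixed-trusted callee: it never learns any key...
function untrustedCallee(m) {
  m.clear(); // ...yet it can still empty the whole collection
}

untrustedCallee(registry);
console.log(registry.size); // 0
```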


It remains possible to create clearless weakmaps to pass around (by 
wrapping a weakmap, etc.), but it makes security (aka code robustness) 
an opt-in and not the default.


Opt-ins are cool, but are often forgotten, like CSP, like "use strict", 
like cookie HttpOnly, like HTTPS, you know the list :-) It would be cool 
if they were by default and people didn't have to learn about them all.


Security by default is cooler in my opinion.

David


Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-02 Thread David Bruant

On 02/12/2014 14:24, David Bruant wrote:

Hi,

I feel like I've been in an equivalent discussion some time ago

The topic felt familiar :-p
http://lists.w3.org/Archives/Public/public-script-coord/2012OctDec/0322.html

David


Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-02 Thread David Bruant

Hi,

I feel like I've been in an equivalent discussion some time ago, so 
I'm taking the liberty to answer.


On 02/12/2014 13:59, Andreas Rossberg wrote:

On 1 December 2014 at 03:12, Mark S. Miller  wrote:

On Sun, Nov 30, 2014 at 12:21 PM, Boris Zbarsky  wrote:

Per spec ES6, it seems to me like attempting to define a non-configurable
property on a WindowProxy should throw and getting a property descriptor for
a non-configurable property that got defined on the Window (e.g. via "var")
should report it as configurable.

Can you clarify? Do you mean that it should report properties as
configurable, but still reject attempts to actually reconfigure them?
Yes. This is doable with proxies (which the WindowProxy object needs to 
be anyway).

* the defineProperty trap throws when it sees configurable:false
* the getOwnPropertyDescriptor trap always reports configurable:true
* and the target has all properties actually configurable (but it's 
almost irrelevant to the discussion)
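
The trap combination above can be sketched on a stand-in target (only the host can create a real WindowProxy; for brevity this version treats an absent configurable field as false):

```js
const windowTarget = {};
const windowProxyLike = new Proxy(windowTarget, {
  defineProperty(t, key, desc) {
    if (desc.configurable !== true) {
      throw new TypeError("non-configurable properties are not allowed");
    }
    return Reflect.defineProperty(t, key, desc);
  },
  getOwnPropertyDescriptor(t, key) {
    const desc = Reflect.getOwnPropertyDescriptor(t, key);
    if (desc) desc.configurable = true; // always report configurable
    return desc;
  }
});

windowProxyLike.x = 1; // ordinary writes still work
console.log(Object.getOwnPropertyDescriptor(windowProxyLike, "x").configurable); // true

let defineRejected = false;
try {
  Object.defineProperty(windowProxyLike, "y", { value: 2, configurable: false });
} catch (e) {
  defineRejected = e instanceof TypeError;
}
console.log(defineRejected); // true
```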



Also, how would you allow 'var' to even define non-configurable
properties? If you want DefineProperty to throw on any such attempt,
then 'var' semantics would somehow have to bypass the MOP.
Thinking in terms of proxies, the runtime can have access to the target 
and the handler, while userland scripts only have access to the proxy 
(which the HTML Living Standard mandates anyway with the distinction 
between Window and WindowProxy objects: no userland script ever has 
access to the Window object).
The handler can have access to the list of all declared variables to 
know which properties should behave as if non-configurable.


David


Re: Proxies as prototypes

2014-11-23 Thread David Bruant

On 23/11/2014 07:41, Axel Rauschmayer wrote:
I’d expect the following code to log `GET bla`, but it currently 
doesn’t in Firefox. That’s because the Firefox implementation of 
proxies isn’t finished yet, right?
Yes. That would be https://bugzilla.mozilla.org/show_bug.cgi?id=914314 I 
think.


David


Re: Array.isArray(new Proxy([], {})) should be false (Bug 1096753)

2014-11-15 Thread David Bruant

On 13/11/2014 17:29, Boris Zbarsky wrote:

On 11/13/14, 6:44 AM, Andreas Rossberg wrote:

Well, the actual diabolic beast and universal foot gun in this example
is setPrototypeOf. ;)


Note that there is at least some discussion within Mozilla about 
trying to make the prototype of Object.prototype immutable (such that 
Object.getPrototypeOf(Object.prototype) is guaranteed to always return 
the same thing, modulo someone overriding Object.getPrototypeOf), 
along with a few other things along those lines.  See 
.
This would result in objects whose [[Prototype]] cannot be changed but 
whose properties can be.
I believe this is not possible per ES6 semantics unless the object is a 
proxy (whose setPrototypeOf trap throws unconditionally and which 
forwards the rest to the target). Is that a satisfactory explanation? 
Should new primitives be added?



Whether this is web-compatible, we'll see.

I guess my above questions can wait the answer to this part.

David


Re: Array.isArray(new Proxy([], {})) should be false (Bug 1096753)

2014-11-13 Thread David Bruant

The best defense is Object.freeze(Object.prototype);
No application worth considering needs to modify Object.prototype at an 
arbitrary point in time (or someone should bring a use case for 
discussion). An application usually shouldn't modify it at all; even if 
it does, it should do so at startup and freeze it afterwards.
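The defence in action — once Object.prototype is frozen, later attempts to patch it fail (silently in sloppy mode, with a TypeError in strict mode):

```javascript
// Freeze the root prototype at startup, before any untrusted code runs.
Object.freeze(Object.prototype);

try {
  Object.prototype.hijack = function () { return 'gotcha'; };
} catch (e) {
  // strict mode throws instead of failing silently
}
console.log('hijack' in {}); // false -- the assignment was rejected
```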


Le 13/11/2014 12:25, Andrea Giammarchi a écrit :

well, Proxy can be a diabolic beast

```js
Object.setPrototypeOf(
  Object.prototype,
  new Proxy(Object.prototype, evilPlan)
)
```

having no way to understand if an object is a Proxy looks like a 
footgun to me in the long term, for libraries and "code alchemists"
You're giving guns to people and then trying to work out how to defend 
against them. Consider not leaving guns around the room ;-)


David

You indeed wrote that different Array methods need to know if there's 
a Proxy in there ... if devs cannot know the same via code, they are 
again unable to subclass properly or replicate native behaviors hidden 
behind magic internal checks.


If there is a way and I'm missing it, then it's OK

Regards








On Thu, Nov 13, 2014 at 7:15 AM, Tom Van Cutsem wrote:


2014-11-12 23:49 GMT+01:00 Andrea Giammarchi <andrea.giammar...@gmail.com>:

If Array.isArray should fail for non "pure" Arrays, can we
have a Proxy.isProxy that never fails with proxies ?


We ruled out `Proxy.isProxy` very early on in the design. It's
antithetical to the desire of keeping proxies transparent. In
general, we want to discourage type checks like you just wrote.

If you're getting handed an object you don't trust and need very
strong guarantees on its behavior, you'll need to make a copy.
This is true regardless of proxies. In your example, even if the
array is genuine, there may be some pointer alias to the array
that can change the array at a later time.

Regards,
Tom






Re: Array.isArray(new Proxy([], {})) should be false (Bug 1096753)

2014-11-12 Thread David Bruant

Le 12/11/2014 17:23, Tom Van Cutsem a écrit :
I agree with your sentiment. I have previously advocated that 
Array.isArray should be transparent for proxies. My harmony-reflect 
shim explicitly differs from the spec on this point because people 
using the shim spontaneously reported this as the expected behaviour 
and thought it was a bug that Array.isArray didn't work transparently 
on proxies.

For reference https://github.com/tvcutsem/harmony-reflect/issues/13

As far as I can remember, the argument against making Array.isArray 
transparent is that it's ad hoc and doesn't generalize to other types 
/ type tests. My opinion is that array testing is fundamental to core 
JS and is worth the exception.

Agreed. Author usability should trump language purity.
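For reference, this transparent behaviour is what the final ES2015 spec adopted: Array.isArray follows a proxy — even a nested one — to its target.

```javascript
// Array.isArray pierces proxies, per the spec's IsArray operation.
const arrayProxy = new Proxy([], {});
console.log(Array.isArray(arrayProxy));                // true
console.log(Array.isArray(new Proxy(arrayProxy, {}))); // true
console.log(Array.isArray(new Proxy({}, {})));         // false
```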

David



Regards,
Tom

2014-11-12 17:04 GMT+01:00 Axel Rauschmayer:


The subject is a SpiderMonkey bug.

Is that really desirable? Doesn’t it invalidate the Proxy’s role
as an interceptor?

-- 
Dr. Axel Rauschmayer
a...@rauschma.de
rauschma.de











Re: Immutable collection values

2014-11-09 Thread David Bruant

Le 09/11/2014 15:07, Jussi Kalliokoski a écrit :
I figured I'd throw an idea out there, now that immutable data is 
starting to gain mainstream attention with JS and cowpaths are being 
paved. I've recently been playing around with the idea of introducing 
immutable collections as value types (as opposed to, say, instances of 
something).


So at the core there would be three new value types added:

* ImmutableMap.
* ImmutableArray.
* ImmutableSet.

Why would both Array and Set be needed?


We could also introduce nice syntactic sugar, such as:

var objectKey = {};

var map = {:
  [objectKey]: "foo",
  "bar": "baz",
}; // ImmutableMap [ [objectKey, "foo"], ["bar", "baz"] ]

var array = [:
  1,
  1,
  2,
  3,
]; // ImmutableArray [ 1, 1, 2, 3 ]

var set = <:
  1,
  2,
  3,
>; // ImmutableSet [ 1, 2, 3 ]

The syntax suggestions are up to debate of course, but I think the key 
takeaway from this proposal should be that the immutable collection 
types would be values and have an empty prototype chain.

I find ":" too discreet for readability purposes. What about # ?
That's what was proposed for records and tuples (which are pretty much 
the same things as ImmutableMap and ImmutableArray respectively):

http://wiki.ecmascript.org/doku.php?id=strawman:records
http://wiki.ecmascript.org/doku.php?id=strawman:tuples
#SyntaxBikeshed

I think this would make a worthwhile addition to the language, 
especially considering functional compile-to-JS languages. With the 
syntactic sugar, it would probably even render a lot of their features 
irrelevant, because core JS could provide a viable platform for 
functional programming (of course, one might still be happier using 
abstraction layers that provide immutable APIs over the underlying 
platforms, such as the DOM, but then that's not a problem in JS's 
domain anymore).
It would also open the possibility of a new class of postMessage sharing 
(across iframes or WebWorkers) that allows parallel reading of a complex 
data structure without copying.
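Until something like this exists, the closest userland approximation is deep-freezing plain data. It gives the immutability, but none of the value-type semantics (comparison by content, cheap structural sharing) that make the proposal interesting:

```javascript
// Deep-freeze a plain data structure; the isFrozen check also stops
// recursion on cycles. A sketch for plain data -- getters and exotic
// objects are not handled.
function deepFreeze(value) {
  if (value !== null && typeof value === 'object' && !Object.isFrozen(value)) {
    Object.freeze(value);
    for (const key of Reflect.ownKeys(value)) {
      deepFreeze(value[key]);
    }
  }
  return value;
}

const frozen = deepFreeze({ list: [1, 1, 2, 3], nested: { ok: true } });
// frozen.list.push(4) and frozen.nested.ok = false now both fail
```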


A use case that would benefit a lot from this would be computation of a 
force-layout algorithm with real-time rendering of the graph.


David


Re: Event loops in navigated-away-from windows

2014-09-30 Thread David Bruant

Le 29/09/2014 23:08, Anne van Kesteren a écrit :

On Mon, Sep 29, 2014 at 8:18 PM, Ian Hickson  wrote:

I certainly wouldn't object to the ES spec's event loop algorithms being
turned inside out (search for "RunCode" on the esdiscuss thread above for
an e-mail where I propose this) but that would be purely an editorial
change, it wouldn't change the implementations.

The proposed setup from Allen will start failing the moment ECMAScript
wants something more complicated with its loop.

How likely is this?

David




Re: Proxy objects and collection

2014-09-02 Thread David Bruant

Le 02/09/2014 20:07, Daurnimator a écrit :
So, I'd like to see some sort of "trap" that is fired when a Proxy is 
collected.
To prevent over specifying how Javascript garbage collectors should 
operate,
I propose that the trap *may* only be called at some *undefined* point 
after the object is not strongly referenced.
As Brendan said, what you want has been discussed as Weak References on 
the list, not really proxies.
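For what it's worth, this is essentially what the language eventually got (ES2021) as FinalizationRegistry rather than a proxy trap — with exactly the hedged semantics asked for here: the cleanup callback may run at some unspecified point after the target becomes unreachable, or never.

```javascript
// FinalizationRegistry: a callback that *may* fire some time after the
// registered object has been garbage-collected. No timing is guaranteed.
const registry = new FinalizationRegistry(heldValue => {
  console.log('collected:', heldValue);
});

let obj = { payload: new Array(10000).fill(0) };
registry.register(obj, 'my object');
obj = null; // the callback may now fire eventually -- or never
```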


The question of not wanting to over-specify upfront has come up in other 
places in the past. Sometimes, even when the spec leaves freedom to 
implementors, implementors happen to make some common choices; people 
then rely on the shared browser behavior of that spec-undefined 
functionality, and the feature eventually has to be standardized de 
facto, as commonly implemented.


My point here being that not specifying up front does not guarantee that 
the details won't ever have to be specified.

The enumeration order of object keys comes to mind.
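Own-key ordering was exactly such a case: long a de facto browser behaviour, it is now specified — integer-like keys first in ascending order, then string keys in insertion order.

```javascript
// Integer-like keys sort ascending ahead of string keys in insertion order.
const o = { b: 1, 2: 'two', a: 1, 1: 'one' };
console.log(Object.keys(o)); // [ '1', '2', 'b', 'a' ]
```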

I'm not saying that it's what will or even may happen in this case; it's 
just a reminder that leaving things undefined can backfire and produce 
the opposite of what was intended.


David


Re: Proposal: Promise.prototype.Finally

2014-08-18 Thread David Bruant

Yes. Needed it recently.
Ended up doing ".then(f).catch(f)" which can be survived but feels stupid.
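`.then(f).catch(f)` also swallows the rejection and replaces the settled value with f's result. A sketch of what a real finally needs to do (the eventual `Promise.prototype.finally`, standardized in ES2018, additionally waits on a promise returned by the callback — omitted here):

```javascript
// Run `onFinally` on either outcome, but pass the original outcome through.
function promiseFinally(promise, onFinally) {
  return promise.then(
    value => { onFinally(); return value; },  // preserve the fulfillment value
    reason => { onFinally(); throw reason; }  // re-throw the rejection reason
  );
}

// usage: promiseFinally(fetchSomething(), stopSpinner)  (hypothetical names)
```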

David

Le 18/08/2014 21:20, Domenic Denicola a écrit :

Here is the current design for Promise.prototype.finally. I agree it is a 
useful feature.


https://github.com/domenic/promises-unwrapping/issues/18




Re: Importing modules inside HTML imports

2014-08-17 Thread David Bruant

Le 17/08/2014 20:52, John Barton a écrit :


On Sun, Aug 17, 2014 at 11:14 AM, Rick Waldron wrote:



On Sunday, August 17, 2014, John Barton <johnjbar...@google.com> wrote:


On Sun, Aug 17, 2014 at 10:08 AM, Brendan Eich wrote:

John Barton wrote:

On Sat, Aug 16, 2014 at 10:22 AM, Brendan Eich
<bren...@mozilla.org> wrote:

Yes -- inline scripts, like document.write, the
drive-in, disco,
and Fortran, will never die.


More things I don't suggest investing effort in.


Seriously, inline scripts were and are important, both for
avoiding extra requests (even with HTTP++ these cost) and,
more important, for easiest and smoothest
beginner/first-script on ramp.

I have no idea why anyone would seriously contend
otherwise. Latency still matters; tools didn't replace
hand-authoring. These are not subjective matters.


I agree, but the forces behind CSP control the servers.
 You'll have to convince them.


Forgive me, but I don't follow this—could you elaborate? It would
be appreciated.


The argument goes like this: we all want secure Web pages, we can't 
secure Web pages that allow inline scripts

How so? I can write secure web pages that allow inline scripts.
As far as I'm concerned, unsafe-inline is part of what I consider my 
default CSP policy.
Maybe we need to reconsider our server-side practices, which mostly 
consist of concatenating strings, though. I'm personally exploring 
generating a DOM on the server side (with .textContent, etc.).


Assuming control of the server-side, can you give an example of an 
application where the page has inline scripts and cannot be secure?



therefore we have to ban inline scripts.

If the argument is wrong, ignore my advice, CSP will die. I personally 
think that would be great.
CSP isn't only about inline scripts. It's mostly about whitelisting the 
domains a page can load data from and send data to. That's extremely useful.
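For instance (hostnames hypothetical, header wrapped for readability), a policy like the following lets inline scripts run ('unsafe-inline') while still pinning where scripts may come from and where the page may send data:

```http
Content-Security-Policy: default-src 'self';
  script-src 'self' 'unsafe-inline' https://cdn.example.com;
  connect-src 'self' https://api.example.com
```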


David


Re: Why using the size property in set

2014-07-31 Thread David Bruant

Le 31/07/2014 09:25, Maxime Warnier a écrit :

Hi everybody,

I was reading the doc for the new Set built-in and something surprised me:

Why does Set use a size property instead of a length property?
IIRC, and in my own words, "length" refers to something that can be 
measured contiguously (like a distance or a number of allocated bytes) 
while "size" doesn't have this contiguous aspect to it.
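(Minor nit: size is an accessor property, not a method.) The difference in connotation is observable:

```javascript
// A Set has no indexing and collapses duplicates, so "how long is it?"
// is the wrong question; "how many members?" -- size -- fits better.
const letters = new Set(['a', 'a', 'b']);
console.log(letters.size);        // 2 -- duplicates collapse
console.log([...letters].length); // 2 -- "length" reappears once there's an ordered, indexable view
```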


David


Re: Reflect.hasOwn() ?

2014-07-27 Thread David Bruant

Le 27/07/2014 13:35, Peter van der Zee a écrit :

On Sat, Jul 26, 2014 at 5:14 PM, Mark S. Miller  wrote:

Hi Peter, what is the security issue you are concerned about?

Unless `Reflect` is completely sealed out of the box, you can never
know whether properties on it are the actual built-ins. That's all.

You can deeply freeze it yourself before any other script accesses it.

Even without doing so, let's say Reflect is not sealed.
If you change it yourself (in code you wrote or imported), you know what 
to expect (or you didn't audit the code you import, but then you also 
know you can only expect the worst).
If you don't change Reflect yourself, then it's third-party code which 
does. But then, why did you give that third-party code the capability of 
modifying the built-ins?
You could set up a proxy on your own domain, fetch third-party scripts 
through it and serve them from your own domain, confined (with Caja or 
something else).


My point being that there are ways to prevent untrusted scripts from 
modifying Reflect (assuming you stay away from script@src, which doesn't 
allow any form of confinement of the imported script).
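A sketch of the freeze-it-first approach (a production "harden", as in SES/Caja, is considerably more careful about prototypes and accessor properties):

```javascript
// Recursively freeze Reflect and every object reachable through its
// data properties, before any untrusted code gets to run.
function harden(obj, seen = new Set()) {
  if (Object(obj) !== obj || seen.has(obj)) return obj; // primitives, cycles
  seen.add(obj);
  Object.freeze(obj);
  for (const key of Reflect.ownKeys(obj)) {
    const desc = Object.getOwnPropertyDescriptor(obj, key);
    if (desc && 'value' in desc) harden(desc.value, seen);
  }
  return obj;
}

harden(Reflect);
// Reflect.has = function fake() {} now fails: the built-ins stay intact
```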



For ES6, I'm not clear yet on how the module loader will work with 
regard to cross-origin scripts. I believe part of the web platform 
security model relies on a page not being able to read the content of 
third-party scripts it imports via script@src (IIRC because some 
websites serve private data based on cookies in such scripts, so being 
able to read the content of such scripts would lead to terrible data 
leakage).

Does the module loader preserve such a guarantee?

David


Re: [PROPOSAL] "use" keyword

2014-07-25 Thread David Bruant

Hi,

Le 25/07/2014 20:52, Michaël Rouges a écrit :

Hi all,

Is there any plan for a functionality like the "use" keyword in PHP?

Why something like that? Because, in JS, there is no way to inject 
variables into a function without touching its this object.

Can you give an example of what you call "injecting some variables"?
More generally, can you give a concrete example of what you're trying to 
achieve?


Thanks,

David


TC39 vs "the community"

2014-06-20 Thread David Bruant

Hi,

I'm not quite sure what this is all about, so forking in hope of 
clarifications.
I'm sorry to send a message that will probably read as noise to a lot 
of people, but I'm also tired of some of these pointless and 
unconstructive, if not destructive, fights among people (here, on 
Twitter or elsewhere).
I hope to start a conversation that ends the alleged unharmonious 
relationship between TC39 and JS developers.


Domenic, your email suggests a fairly strong dichotomy between "TC39" 
and "the community". As far as I'm concerned, to begin with, I don't see 
any single thing called "the community" in JavaScript. I share Axel's 
point of view: I see lots of communities with different backgrounds and 
interests, especially among JS devs.
I personally don't feel associated with "the community" you describe. I 
encourage you to either speak only for yourself or provide a more 
specific description of whose point of view you're referring to 
(preferably without a definite article).


Le 19/06/2014 21:13, Domenic Denicola a écrit :

Unfortunately, that's not the world we live in, and instead TC39 is designing a 
module system based on their own priorities. (Static checking of multi-export 
names, mutable bindings, etc.)
If I knew nothing about how ES standardization works, I'd be thinking 
"who the fuck are these TC39 people who decide features based on their 
own agenda against the interest/experience of the developers? Who do 
they think they are anyway?"


Can you elaborate on these particular accusations?
Why would TC39 have priorities that don't align with the needs of 
developers, especially on modules, which are clearly one of the most 
awaited features as far as developers are concerned?


I'm not quite sure I understand the dichotomy and the alleged TC39 
priorities that would be that far off from what JS devs otherwise need, 
so please get it off your chest so we can all move on.



They've (wisely) decided to add affordances for the community's use cases, such as layering default 
exports on top of the multi-export model. As well as Dave's proposal in this thread to de-grossify 
usage of modules like "fs". By doing so, they increase their chances of the module system 
being "good enough" for the community, so that the path of least resistance will be to 
adopt it, despite it not being designed for them primarily. It's still an open question whether 
this will be enough to win over the community from their existing tools, but with Dave's suggestion 
I think it has a better-than-even chance.

The transitional era will be a particularly vulnerable time for TC39's module design, however: as 
long as people are using transpilers, there's an opportunity for a particularly well-crafted, 
documented, and supported transpiler to give alternate semantics grounded in the community's 
preferred model, and win over enough of an audience to bleed the life out of TC39's modules. We 
already see signs of community interest in such "ES6+" transpilers, as Angular 
illustrates. Even a transpiler that maintains a subset of ES6 syntax would work: if it supported 
only `export default x`, and then gave `import { x } from "y"` destructuring semantics 
instead of named-binding-import semantics, that would do the trick. Interesting times.
Whatever TC39 settles on and eventually makes part of the standard will 
inevitably have tooling associated with it. Maybe not by "the community" 
(whoever that is), but I'm fairly certain TypeScript will adopt it, for 
instance. I'm fairly sure IDEs will all eventually have syntactic or 
"intelligent" support for the official standard modules (which is less 
clear for whatever-transpiler modules).
Some people who aren't part of "the community" will write code with ES6 
modules. Whatever they end up being, I'll probably be on that end, pretty 
much for the same reason I choose not to write CoffeeScript (because, 
AFAIC, my own taste in code is worth less than others' ability to 
understand the code I write).


Whatever they end up looking like and however they behave, ES6 modules 
will happen, with "the community" or without it.


David


Re: 5 June 2014 TC39 Meeting Notes

2014-06-14 Thread David Bruant

Le 12/06/2014 16:43, Domenic Denicola a écrit :
Also, David: <module>s are not named; you cannot import them. Check out 
https://github.com/dherman/web-modules/blob/master/module-tag/explainer.md

Thanks, that's the context I was missing.

I'm uncomfortable with the "async" part of the proposal as currently 
(under?)specified. Sharing my thought process.


Async loading prevents the render-blocking problem, but creates another 
problem; async loading isn't an end in and of itself. As far as I'm 
concerned, I never use script@async for app initialization code (which 
is the target of the 

Re: 5 June 2014 TC39 Meeting Notes

2014-06-12 Thread David Bruant

Le 11/06/2014 18:21, Ben Newman a écrit :

## 7.1  status update (from DH)

DH: Would really rather have import { foo } from "bar"; ..., which is like