On Jun 23, 2009, at 10:03 AM, kevin curtis wrote:

> The requirements of security and speed don't always coincide! The subset/dialect idea is interesting.

It's two-edged.

Adding standard subsets leads to a case-analysis explosion: a recipe for bugs and reduced interoperability.

How subsets might evolve in future editions is another axis for explosion of cases. Are subsets partially ordered in the same way across future editions? Or should there be a total order? Can strict mode get stricter over time?

   ES5 ---------------> ES6 ...
    ^                    ^
    |                    |
   ES5 strict <---?---> ES6 strict ...

The arrowhead points to the superset. At some point we might have ESn remove ancient cruft so the horizontal arrows might stop pointing rightward.

These are good questions and there may be compelling and simple answers, but only for very few standard subsets. If you can't draw the lattice, it's too complicated.

Ideally, IMHO, the only standard subset will be strict mode.


> let x:int = 0 // ES6 type annotation to indicate this will be turned into a c int at some point
> ... etc

We do not want int "for performance" if it auto-widens to double. Adobe has experience here, and int as an annotation on local variables (e.g. loop controls) is often a de-optimizer, yet users over-use it "for speed".

If, as you propose ("C int", capital C meaning the C language, I take it), we enable 32-bit machine int under a pragma, we'll have wraparound bugs on the web. (People will copy and paste the pragma to excess.)
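
To make the hazard concrete with nothing beyond today's JS -- ToInt32 ("| 0") stands in for a hypothetical machine-int pragma here, so this is just a sketch:

    // Today's JS numbers are IEEE doubles, so "int-looking" arithmetic
    // widens silently past 32 bits:
    var n = 2147483647;   // 2^31 - 1
    n + 1;                // 2147483648 -- no surprise for the author

    // Force 32-bit machine-int semantics (simulated with ToInt32 via "| 0")
    // and the same sum wraps -- the bug a copy-pasted pragma would spread:
    (n + 1) | 0;          // -2147483648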

Because of the web-as-it-is, implementors have had to optimize JS as it is used.

But this has demonstrated, to me at least, that the important language optimizations can be done well under the hood, without hinting. IMHO this is a good use of human capital, compared to the alternative of unleashing pragmas and machine types on the web developer masses, where the pragmas and types add complexity and often bite back.

The issue with "self-hosting" or "systems programming" goes beyond a machine int type, however. One would want packed structs that can be stack allocated and embedded in arrays (no references to heap-allocated objects). One would surely want flat vectors, not prototype-delegating hashmap-happy Arrays.
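
A rough illustration, using nothing but today's semantics, of why an Array is not a flat vector:

    // An ES Array is a property map that delegates to Array.prototype,
    // not a densely packed vector of unboxed elements:
    var a = [];
    a[2] = 3;                  // a is now "holey": indexes 0 and 1 are absent
    a.length;                  // 3, though only one element is stored
    a.note = "expando";        // arbitrary named properties are allowed
    Array.prototype.extra = 1;
    a.extra;                   // 1 -- found by prototype delegation, not storage

A packed-struct/flat-vector dialect would rule all of that out, which is what lets elements be stored unboxed, contiguously, and potentially on the stack.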

You may be right that this sauce for the goose would be wanted by the web-dev gander soon enough, but not all at once and prematurely standardized.

Such a variant of JS could be made memory safe, but it is overkill at this stage for the Web. The best way to proceed is for Mozilla, e.g., to prototype such a language. We're thinking seriously about it right now.

It's not an Ecma TC39 agenda item until years from now, when such prototypes have been deployed and used at scale, but in domain-specific silos.

If we do develop such a systems-programming dialect of JS, we'll do it in the open and in open source, so everyone on this list who is interested can watch and participate. But it would be a distraction to overuse this list for discussions about such a dialect.


> Also, performant C++ code seems to use templates rather than traditional OO with virtual methods. Nitro/v8/tm seem to be doing a form of dynamic templating

Goes back to the Self work in the 90s, of course. No type annotations.


> when at runtime they try to figure out the types that are being passed as parameters to functions and generate machine code. (Or in hot loops in tm's case).

TM infers static types on trace; this is a difference from the method-based speculative approaches. That is, we inline aggressively, so type annotations on parameters to otherwise generic methods could frustrate it.
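
A sketch of what that means in practice -- plain JS, with the engine behavior only in the comments, and only as I'd expect it, not as a guarantee:

    // A deliberately generic function: no annotations, works for any "+".
    function add(a, b) { return a + b; }

    var sum = 0;
    for (var i = 0; i < 100000; i++)
      sum = add(sum, i);     // on this hot trace, add is inlined and
                             // specialized to number + number

    var s = "";
    for (var j = 0; j < 100000; j++)
      s = add(s, ".");       // a different trace specializes the same
                             // source to string + string

Pinning add's parameters to one machine type with annotations would serve only one of those callers and could get in the way of tracing the other.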


> Maybe the engines could be given a helping hand via the ES subset in perf modules - and type annotations.

This is a malinvestment for the masses, with too much blowback potential.


> But too many subsets could add confusion.

I agree -- sorry, I should have read ahead, but you seemed not to see this earlier.


> Type annotations are in ES6 (I think).

Not yet, and not as anything that resembles static machine types.


> Could there be a subset that meets both security and performance needs?

Why are you mixing the two still? As you note they conflict sometimes.

SES is an experiment, not ready for standardization. I don't think a "PES" can be supported by Ecma TC39 at this point, even if that were the place. But again, it's not: implementors need to experiment, in the open but ideally in more than one lab, with more than one approach.


> Or even perf as a subset of secure, e.g. "use secure, perf". Would a performance subset need to be unsafe/unmanaged?

On the web, definitely safe/managed (memory-safe at the least!).

In systems programming domains where C++ is used, possibly not. It depends on the domain.

For Mozilla's systems-ish domains, we would want memory safety, control flow integrity, and other properties to be enforced. But we would be willing to take advantage of static/dynamic analysis duality, and spend more time on static analysis to achieve memory safety and other properties, with the benefit of lower runtime overhead.

/be
