Re: System.import()?
On Fri, Aug 21, 2015 at 8:12 AM, Domenic Denicola wrote:
> No, custom elements cannot prototype the module tag, because custom
> elements will always be parsed as unknown elements. So e.g.

Ah, thanks for pointing that out. I was thinking more of the script src style of tag. The need for module bodies directly inside HTML tags, vs loaded as separate files, is of lower importance and less leveraged impact than the other module/loader APIs.

James
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
Re: System.import()?
On Tue, Aug 18, 2015 at 11:36 AM, Dave Herman wrote:
> https://github.com/whatwg/loader/blob/master/roadmap.md

From a loader/tool perspective, it seems like working out more of the Stage 1 items before many of the Stage 0 items will lead to higher leverage payoffs: the dynamic loading and module meta help existing transpiling efforts and better inform userland module concatenation efforts.

In the spirit of the extensible web, defining these lower level APIs and more of the loader would make it possible to use custom elements to help prototype a tag. The custom element mechanism can be used in a similar way to how JS transpilers are used for the existing module syntax.

If the Stage 0 normalization is about normalizing IDs to URLs as the internal normalized storage IDs, I suggest reaching out to talk with more people in the AMD loader community about the reasons behind it, how it sits with AMD loader plugin IDs, seeing IDs more like scoped, nested property identifiers, and module concatenation.

James
Re: Are ES6 modules in browsers going to get loaded level-by-level?
On Fri, Apr 24, 2015 at 11:39 AM, Brendan Eich wrote:
> Not "bundling" in full; your previous post talked about HTTP2 but mixed
> dependency handling and bundling. You seemed not to advert to the problem
> of one big bundle being updated just to update one small member-of-bundle.
> One can be skeptical of HTTP2 but the promise is there to beat bundling.
>
> So in a future where ES6 or above is baseline for web developers, and
> HTTP2 is old hat, there won't be the full bundling and module body
> desugaring you seem to be insisting we must have in perpetuity. (Yes, there
> will be dependency graphs expressed concisely -- that's not bundling.)
> Right?

There are some nice things with HTTP2 and being able to update a smaller set of files vs needing to change a bundle. I am mostly concerned about startup performance, primarily on mobile devices, and in the offline cases where HTTP2 is not part of the equation, at least not after the first request.

The Firefox OS Gaia apps are currently zip files installed on the device. The same local disk profile exists for service worker-backed apps that work offline. In the Firefox OS case, loading a bundle of modules performs better than not bundling, because multiple reads from local disk were slower than one read of a bundled JS file. I expect this to be true in the future regardless of an ES6 baseline or the existence of HTTP2.

A bundle of modules that have already been traced, usually ordered with least-dependent modules first and most-dependent last, arrives in one linearized fetch. In the unbundled case, the dependency tree needs to be discovered and then fetched as the modules are parsed. It is hard to see the second one winning enough to discard wanting to bundle modules.

Even if the bundle alternative is some sort of zip format that requires the whole thing to be available in memory, there is still the read, parse, and back-and-forth traffic to the memory area, converting that to file responses. With service workers in play, that just adds to the delay.

James
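The discovery cost described in that message can be sketched with a toy model (the dependency graph and function name here are made up for illustration): in the unbundled case, each level of the dependency tree costs at least one fetch round trip, because a module's dependencies are only discovered after it is fetched and parsed, while a bundle traced at build time is a single fetch.

```javascript
// Toy dependency graph: main -> a, b; a -> c; b -> c.
const deps = {
  main: ['a', 'b'],
  a: ['c'],
  b: ['c'],
  c: [],
};

// Count fetch round trips for level-by-level discovery: each iteration
// of the loop is one "round" where all currently-known modules are
// fetched, revealing the next level of dependencies.
function fetchRounds(entry) {
  let rounds = 0;
  let frontier = [entry];
  const seen = new Set(frontier);
  while (frontier.length > 0) {
    rounds += 1; // one network round trip per discovered level
    const next = [];
    for (const id of frontier) {
      for (const d of deps[id]) {
        if (!seen.has(d)) {
          seen.add(d);
          next.push(d);
        }
      }
    }
    frontier = next;
  }
  return rounds;
}

console.log(fetchRounds('main')); // 3 rounds unbundled, vs 1 fetch for a bundle
```

The bundle wins not by avoiding the trace, but by doing it once at build time instead of serially at runtime.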
Re: Are ES6 modules in browsers going to get loaded level-by-level?
On Fri, Apr 24, 2015 at 8:42 AM, Allen Wirfs-Brock wrote:
> I think you're barking up the wrong tree. ECMAScript has never said
> anything about the external representation of scripts (called "Programs"
> prior to ES 2015) and the ES 2015 spec. doesn't impose any requirements
> upon the external representation of Modules. One Script or Module per
> external container or multiple Scripts and Modules per external container -
> it makes no difference to the ES 2015 semantics. Such encoding issues are
> entirely up to the host platform or ES implementation to define. But the
> platform/implementation has no choice in regard to the semantics of a
> Module (including mutability of slots or anything else in the ES 2015
> specification). No matter how a Module is externally stored, it must conform
> to the ES 2015 module semantics to be a valid ES 2015 implementation.

Understood: the ES2015 spec makes it a point to not get into this. I was hoping that the module champions involved with the ES2015 spec would be on this list to respond to how to use modules in practice. So perhaps I was incorrect to ask for "officially blessed", but more "a bundling form that module champions know will meet the ES2015 semantics of a Module". The difficulty is precisely that ES2015 sets strong semantics on a Module that seem difficult to translate to a script form that could allow bundling.

I expect module meta to play a fairly important role in that translation, so having that defined, in some ES spec or elsewhere, along with how it might work in bundling, would also be really helpful to complete the module picture.

> So, if you want physical bundling, you need to convince the platform
> designers (eg, web, node, etc) to support that. Personally, I think a zip
> file makes a fine "bundle" and is something I would support if I was
> building a command-line level ES engine.

See my first post to this thread for why, when we had this in practice in Firefox OS with a zip file holding the contents, it was decided to use script bundling to increase performance. With the extensible web and more userland JS needed to bootstrap things like view selection and custom elements, getting the JS up and running as soon as possible is even more important. The arguments so far against script bundling have been "there are better things that can be made for performance", but I do not see that in practice, particularly for the offline web on mobile devices.

Besides that, I see modules as units of reusable code, like functions, which allow bundling and nesting. I can understand that is not the goal of ES2015, so hopefully the use case feedback will be useful to help flesh out a full module system that can use the ES module semantics.

James
Re: Are ES6 modules in browsers going to get loaded level-by-level?
On Thu, Apr 23, 2015 at 4:48 PM, Brendan Eich wrote:
> Your lament poses a question that answers itself: in time, ES6 will be
> the base level, not ES3 or ES5. Then, the loader can be nativized.
> Complaining about this now seems churlish. :-|

So let's stay on this specific point: bundling will still be done even with ES modules and a loader that would natively understand ES modules in unbundled form. Hopefully the rest of my previous message gave enough data as to why.

If not natively supported in ES, it would be great to get a pointer to the officially blessed transform of an ES module body to something that can be bundled. Something that preserves the behaviors of the mutable slots, and allows using the module meta.

James
Re: Are ES6 modules in browsers going to get loaded level-by-level?
On Thu, Apr 23, 2015 at 7:47 AM, Domenic Denicola wrote:
> Indeed, there is no built-in facility for bundling since as explained in
> this thread that will actually slow down your performance, and there’s no
> desire to include an antipattern in the language.

Some counterpoint:

Privileged/certified Firefox OS apps are delivered as zip files right now. No HTTP involved. Asking for multiple files from these local packages was still slower than fetching one file with scripts bundled, due to slower IO on devices, so the certified apps in Firefox OS right now still do bundling for speed concerns. No network in play, just file IO. With service workers, it is hard to see that being faster either, since the worker needs to be consulted for every request, so in that Firefox OS app case I would still want bundling.

With HTTP2, something still needs to do the same work as bundling, where it traces the dependencies and builds a graph so that all the modules in that graph can be sent back on the HTTP2 connection. So the main complexity of bundling, a "build" step that traces dependencies and makes a graph, is still there. Might as well bundle the modules so that even when serving from browser cache it will be faster; see the device IO concerns above.

Plus, bundling modules together can be about more than just speed: a library may want to use modules in separate files and then bundle them into one file for easier encapsulation/distribution. I am sure the hope is that package managers may help with the distribution case, but this highlights another use related to bundling: encapsulation. Just like nested functions are allowed in the language, nested module definitions make sense long term. Both functions and modules are about reusing units of code. Ideally both could be nested.

I believe that is a bigger design hurdle to overcome, and maybe that also made it harder for the module champions to consider any sort of bundling, but bundling really is a thing, and it is unfortunate it is not natively supported for ES modules.

The fun part about leaving this to transpilers is trying to emulate the mutable slots for import identifiers. I think it may work by replacing the identifiers with `loader.get('id').exportName`, or whatever the module meta/loader APIs might be, so having those APIs is even more important for a usable module system. There is probably more nuance to the transformation than that, though, like making sure to add "use strict" to the function wrapper.

It is kind of sad that to use ES modules means to not really use them at runtime: to transpile back to an ES5 level of code, and to ship a bootstrap loader script that allows slotting the ES5-level code into the ES loader. Given the extra script and transpiling concerns, it does not seem like an improvement over existing ES5-based module systems.

James
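A minimal sketch of the transpiler transform that message describes: emulating an imported binding by rereading the export from a loader registry at every use. The `loader` object here is a hypothetical stand-in for whatever loader/registry API ends up specified, not an actual ES API; module IDs and export names are made up.

```javascript
// Hypothetical loader registry standing in for the real loader API.
const loader = {
  _modules: new Map(),
  set(id, exports) { this._modules.set(id, exports); },
  get(id) { return this._modules.get(id); },
};

// Original module (ES syntax, shown only as a comment):
//   // main.js
//   import { count } from 'counter';
//   export function report() { return count; }
//
// Transpiled form: every use of `count` becomes a fresh property read
// through the registry, which emulates the mutable-slot behavior of an
// import binding (the importer always sees the exporter's latest value).
loader.set('counter', { count: 0 });
loader.set('main', {
  report() { return loader.get('counter').count; },
});

// The exporter updates its binding after the importer was defined:
loader.get('counter').count = 2;
console.log(loader.get('main').report()); // 2: the updated value is seen
```

A real transform has more nuance (strict mode, live re-exports, temporal dead zones), but the property-read indirection is the core trick.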
Re: Any news about the `` element?
On Fri, Dec 19, 2014 at 8:55 PM, caridy wrote:
> Yeah, the idea is to get engines (V8 maybe) to implement what is spec’d in
> ES6 today, and get a very basic implementation of
Re: Any news about the `` element?
On Thu, Dec 18, 2014 at 6:13 PM, caridy wrote:
> What does this mean?
>
> * no loader (if you need on-demand loading, you can insert script tags with
> type=module, similar to what we do today for scripts)
> * no hooks or settings (if you need more advanced features, you will have to
> deal with those manually)
>
> Open questions:
>
> * how to fallback? ideally, we will need a way to detect modules support,
> equivalent to in semantic.
> * we need to reserve some resolution rules to support mappings and hooks in
> the future (e.g.: `import foo from "foo/mod.js"` will not work because `foo`
> will require loader configs or hooks to be defined, while `import foo from
> "./foo/mod.js"` and `import foo from "//cdn.com/foo/mod.js"` will work just
> fine).

Also:

* How does dynamic loading work in a web worker? In general, how does dynamic loading work when there is no DOM?
* Module IDs should not be paths with .js values (see package/main types of layout). Maybe that is what you meant by your second question.

Perhaps the module tag is a DOM implementation detail that backs the standardized API for dynamic loading, but it seems odd to focus on that backing detail first. I am sure there is more nuance, though; perhaps you were trying to give quick feedback, and if so, apologies for reading too much into it.

James
Re: Modules: suggestions from the field
I added a doc about module inlining/nesting, and why it should be supported in a module system. Mentions SPDY/HTTP2, packaged formats, Node’s Browserify, and transpiling:

https://github.com/jrburke/module/blob/master/docs/inlining.md

James

On Mon, Jun 16, 2014 at 1:21 PM, James Burke wrote:
> I have suggested alterations to the modules effort, and an in-progress
> prototype[1].
>
> It is based on field usage of existing JS module systems, and my work
> supporting AMD module solutions for the past few years.
>
> There is a document describing what it attempts to fix[2]. The table
> of contents from that document:
>
> —
>
> This project reuses a lot of thinking that has gone into the
> ECMAScript 6 modules effort so far, but suggests these changes:
>
> * Parse for module instead of import/export
> * Each module body gets its own unique module object
> * Use function wrapping for module scope
>
> They are motivated by the following reasons:
>
> * import syntax disparity with System.import
> * Solves the moduleMeta problem
> * Solves nested modules and allows inlining
> * Easy for base libraries to opt in to ES modules
>
> It has these tradeoffs:
>
> * Cycle support
> * Export name checking
>
> —
>
> I am willing to talk to TC-39 members in realtime channels (video/in
> person) that may need more background or might want to discuss
> further, but I am less likely to discuss it in email threads.
>
> I will likely continue that prototype effort even if the more recently
> visible issues for modules are solved differently, as the current
> state of the baseline ES system will still require bootstrap loader
> scripts. For the bootstrap scripts, I will need some of the concepts
> in the prototype for the AMD consumers I have traditionally supported.
>
> There is a “story time” document[3] for a narrative around how the
> prototype relates to smaller ideas around code referencing and reuse.
>
> [1] https://github.com/jrburke/module
> [2] https://github.com/jrburke/module/blob/master/docs/module-from-es.md
> [3] https://github.com/jrburke/module/blob/master/docs/story-time.md
>
> James
Re: ModuleImport
On Thu, Jun 19, 2014 at 12:13 PM, Domenic Denicola wrote:
> From: es-discuss on behalf of James Burke
>
>> 1) Only allow export default or named exports, not both.
>
> As a modification of the current design, this hurts use cases like
>
> ```js
> import glob, { sync as syncGlob } from "glob";
> import _, { zip } from "underscore";
> ```

It is just as likely the module author will specify sync and zip as properties of their respective default export, particularly since those are JS modules that come from existing module systems. So a destructuring assignment would be needed, following the default import. It works out though, and is not that much more typing. That, or those pieces would be available as 'underscore/zip' or 'glob/sync' imports.

The argument for allowing both a default and named exports seems ill-defined based on the data points known so far, and avoiding it reduces the number of import forms and aligns better with System.import use.

James
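The two-step pattern that reply describes (default import, then destructure) can be sketched like this. A plain object stands in for the module's default export here, since this sketch does not actually import underscore; the `zip` implementation is illustrative only.

```javascript
// Stand-in for: import _ from "underscore";
// (a default export that carries its utilities as properties)
const _ = {
  zip: (a, b) => a.map((x, i) => [x, b[i]]),
};

// The destructuring assignment that replaces the combined form
// `import _, { zip } from "underscore"`:
const { zip } = _;

console.log(zip([1, 2], ['a', 'b'])); // [[1, 'a'], [2, 'b']]
```

One extra line per module, but it keeps a single import form and matches what a promise-returning System.import('underscore') would hand back.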
Re: ModuleImport
On Thu, Jun 19, 2014 at 1:15 AM, David Herman wrote:
> ## Proposal
>
> OK, so we're talking about a better syntax for importing a module and binding
> its named exports to a variable (as distinct from importing a module and
> binding its default export to a variable). Here's my proposal:
>
> ```js
> import * as fs from "fs"; // importing the named exports as an object
> import Dict from "dict"; // importing a default export, same as ever
> ```

Two other possibilities:

1) Only allow export default or named exports, not both.

The reason default export is used in module systems today is because there is just one thing that wants to be exported, and it does not matter what its name is because it is indicated by the module ID. Sometimes it is also easier to just use an object literal syntax for the export than expanding that out into individual export statements. Allowing both default and named exports from the same module is providing this syntax/API extension.

If there are ancillary capabilities available, a submodule in a package is more likely the way it will be used, accessed as a default export via a module ID like 'mainModule/sub', instead of wanting to use a default and named export from the same module. It would look like this:

```js
import fs from 'fs'; // only has named exports, so get object holding all the exports
import { readFile } from 'fs'; // only the readFile export
import Dict from 'dict'; // a default export
```

— or —

2) Only allow `export` of one thing from a module, and `import {}` just means allowing getting the first property on that export. This removes the named export checking, but that benefit was always a bit weak, and even weaker with the favoring of default export.

```js
// d.js, module with a default export; note it does not need a name,
// `export` can only appear once in a file.
export function() {};

// fs.js, module with "multiple" exports
export { readFile: function(){} };

// main.js, using 'd' and 'fs'
import d from 'd';
import { readFile } from 'fs';
```

—

Both of those possibilities also fix the disjointed System.import() use with a default export. No need to know that a `.default` is needed to get the usable part. It will match the `import Name from 'id'` syntax better.

James
Modules: suggestions from the field
I have suggested alterations to the modules effort, and an in-progress prototype[1].

It is based on field usage of existing JS module systems, and my work supporting AMD module solutions for the past few years.

There is a document describing what it attempts to fix[2]. The table of contents from that document:

—

This project reuses a lot of thinking that has gone into the ECMAScript 6 modules effort so far, but suggests these changes:

* Parse for module instead of import/export
* Each module body gets its own unique module object
* Use function wrapping for module scope

They are motivated by the following reasons:

* import syntax disparity with System.import
* Solves the moduleMeta problem
* Solves nested modules and allows inlining
* Easy for base libraries to opt in to ES modules

It has these tradeoffs:

* Cycle support
* Export name checking

—

I am willing to talk to TC-39 members in realtime channels (video/in person) that may need more background or might want to discuss further, but I am less likely to discuss it in email threads.

I will likely continue that prototype effort even if the more recently visible issues for modules are solved differently, as the current state of the baseline ES system will still require bootstrap loader scripts. For the bootstrap scripts, I will need some of the concepts in the prototype for the AMD consumers I have traditionally supported.

There is a “story time” document[3] for a narrative around how the prototype relates to smaller ideas around code referencing and reuse.

[1] https://github.com/jrburke/module
[2] https://github.com/jrburke/module/blob/master/docs/module-from-es.md
[3] https://github.com/jrburke/module/blob/master/docs/story-time.md

James
Re: multiple modules with the same name
On Mon, Jan 27, 2014 at 11:53 PM, Marius Gundersen wrote:
> AFAIK ES-6 modules cannot be bundled (yet). But if/when they can be bundled
> this is an argument for silently ignoring duplicates

Loader.prototype.define seems to allow bundling, by passing strings for the module bodies:

https://people.mozilla.org/~jorendorff/js-loaders/Loader.html
Re: multiple modules with the same name
On Mon, Jan 27, 2014 at 3:14 PM, John Barton wrote:
> On Mon, Jan 27, 2014 at 2:50 PM, Sam Tobin-Hochstadt wrote:
>> Imagine that some browser has an ok-but-not-complete implementation of
>> the X library, but you want to use jQuery 17, which requires a better
>> version. You need to be able to replace X with a polyfilled update to
>> X, and then load jQuery on top of that.
>>
>> Note that this involves indirect access in the same library (jQuery)
>> to two versions of X (the polyfill and the browser version), which is
>> why I don't think Marius's worry is fixable without throwing out the
>> baby with the bathwater.
>
> Guy Bedford, based on experiences within the requirejs and commonjs worlds,
> has a much better solution for these scenarios. (It's also similar to how
> npm works).
>
> Your jQuery should depend upon the name X, but your Loader should map the
> name X when loaded by jQuery to the new version in Loader.normalize(). The
> table of name mappings can be configured at run time.
>
> For example, if some other code depends on X@1.6 and jQuery needs X@1.7,
> they each load exactly the version they need because the normalized module
> names embed the version number.

In the AMD world, map config has been sufficient for these needs[1].

As a point of reference, requirejs only lets the first module registration win; any subsequent registrations for the same module ID are ignored. "Ignore" was chosen over "error" because with multiple module bundles, the same module ID/definition could show up in two different bundles (think multi-page apps that have a "common" bundle plus page-specific bundles). I do not believe that case should trigger an error. It is just inefficient, and tooling for bundles could offer to find these inefficiencies, vs the browser stopping the app from working by throwing an error. It is true that the errors introduced by "ignore" could be harder to detect, given that things may mostly work. The general guide in this case for requirejs was to be flexible, in the same spirit as HTML parsing.

Redefinition seems to allow breaking the expectation that the module value for a given normalized ID is always the same. When the developer wants to explicitly orchestrate different module values for specific module ID sets, something like AMD's map config is a good solution, as it is a more declarative statement of intent vs code in a module body deciding to redefine. Also, code in module bodies does not have enough information to properly redefine for a set of module IDs the way map config can. Map config has been really useful for supporting two different versions of a module in an app, and for providing mocks to certain modules for unit testing.

Given what has been said so far for use cases, I would prefer either "ignore" or "error" over redefinition, with a preference for "ignore" over "error" based on the above.

[1] https://github.com/amdjs/amdjs-api/wiki/Common-Config#wiki-map

James
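The map-config idea referenced above can be sketched as a small ID-resolution function. This is a simplified model of the AMD map config behavior (the real requirejs implementation also does module-ID prefix matching); the module IDs and versioned names here are illustrative.

```javascript
// Map config: when module `referrer` asks for `id`, the map may
// redirect the request to a different registered module ID.
// '*' holds the default mapping; a referrer-specific entry overrides it.
const mapConfig = {
  '*': { foo: 'foo1.2' },               // everyone gets foo 1.2 ...
  'some/oldmodule': { foo: 'foo1.0' },  // ... except this legacy module
};

function mapId(referrer, id) {
  const specific = mapConfig[referrer];
  if (specific && specific[id]) {
    return specific[id]; // referrer-specific mapping wins
  }
  const star = mapConfig['*'];
  if (star && star[id]) {
    return star[id]; // fall back to the default mapping
  }
  return id; // no mapping: use the ID as-is
}

console.log(mapId('some/newmodule', 'foo')); // 'foo1.2'
console.log(mapId('some/oldmodule', 'foo')); // 'foo1.0'
```

This is the declarative "statement of intent" the message argues for: the resolution policy lives in config, not in module bodies, so two versions of a module can coexist without either one knowing about the other.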
Re: Mutable slots/indirect value API
On Sun, Nov 3, 2013 at 12:34 PM, David Herman wrote:
> IOW expose the first-class "reference type" of ECMA-262 via a standard
> library? Just say no! :)

I was thinking that if they were used anyway by the module system, formalizing them might help, the "provide the primitives" sort of API design. I am sure that kind of consideration is probably quite a bit of work though (and sounds a little scary too, from your response), so sorry for the distraction.

> BTW, this whole module-function rigamarole only exists for AMD compatibility,
> so it's only important for it to demonstrate interoperability. For normal ES6
> use cases there's just no need to use it.

I was not asking about it in relation to AMD compatibility. AMD's cycle support is not that strong, so this capability would not be used specifically for AMD conversions, as that concept is not possible now with AMD.

This came out of wanting some other way to inline multiple ES-type modules in a file, without needing to rely on the JS-strings-in-JS that were mentioned in the previously mentioned thread. The thought of being able to use a function instead was appealing, but I wanted to be sure a similar cycle support to ES modules could work in that format. Not an exact match for true import/export syntax, but maybe good enough for single, default-export type modules.

I will try to wait until the later November design artifacts are done before asking more questions.

James
Mutable slots/indirect value API
With the import/export mutable slots concept, does it make sense to allow an API that expresses that type of concept? I think it would allow creating a module-function thunk that would allow robust cycles, as mentioned here:

https://mail.mozilla.org/pipermail/es-discuss/2013-November/034576.html

So, something that returns a value, but when it is looked up by the engine, it really is just an indirection to some other value that can be set later. Call it IndirectValue for purposes of illustration, but the name is not that important for this message:

```js
var value = new IndirectValue();

// something is a kind of "ref to a ref" engine type under the covers
var something = value.ref();

typeof something === 'undefined' // true at this point

// Some time later, the code that created the
// IndirectValue sets the final value for it.
value.set([100, 200]);

// Then later,
Array.isArray(something) // true now
```

IndirectValue.prototype.set() could only be called once, and the engine under the covers could optimize the indirections after set() is called so that the indirection would no longer be needed.

James
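For comparison, here is how close userland JavaScript can get to that hypothetical API today. True identifier-level indirection (where a plain variable like `something` changes underneath you) needs engine support, which is exactly the message's point; the closest userland approximation routes every read through a property getter. `createIndirect` and its shape are assumptions for this sketch, not a real API.

```javascript
// Userland approximation of a write-once indirect value: consumers read
// `slot.ref` each time instead of holding a plain variable, because a
// plain variable would capture the value at read time.
function createIndirect() {
  let value;
  let isSet = false;
  return {
    set(v) {
      if (isSet) throw new Error('IndirectValue already set');
      isSet = true;
      value = v;
    },
    get ref() { return value; }, // every read forwards to the current value
  };
}

const slot = createIndirect();
console.log(typeof slot.ref); // 'undefined' before set()

slot.set([100, 200]);
console.log(Array.isArray(slot.ref)); // true after set()
```

The gap between `slot.ref` and a bare `something` binding is the part only import/export (or a new engine-level reference type) can close.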
Re: Modules loader define method
On Fri, Nov 1, 2013 at 2:19 PM, Jeff Morrison wrote:
> (I'm operating on the assumption that the Module constructor is still part
> of the spec):
>
> ```
> System.define({
>   A: {
>     deps: ['B', 'C'],
>     factory: function(B, C) {
>       var stuff = B.doSomething();
>       return new Module({stuff: stuff});
>     }
>   },
>   B: {
>     deps: ['D', 'E'],
>     factory: function(D, E) {
>       return new Module({
>         doSomething: function() { ... }
>       });
>     }
>   }
> });
> ```

Do you know if the factory arguments are regular variable references, or are they actually an import-like mutable slot that at a later time may hold the Module value? I did not think it was possible that they could be mutable slots; those were reserved for import/export statements. But maybe the factory args *are* mutable slot entities, instead of just variable references? If so, that fixes my disconnect. Still trying to understand the nuances of the mutable slots.

If they are just regular variables, I do not believe they work for a cycle; at least they do not for AMD-type systems, as the factory argument would be undefined for at least one part of the cycle chain:

```
System.define({
  A: {
    deps: ['B'],
    factory: function(B) {
      return new Module({
        prefix: 'thing',
        action: function() { return B.doSomething(); }
      });
    }
  },
  B: {
    deps: ['A'],
    factory: function(A) {
      return new Module({
        doSomething: function() { return A.prefix; }
      });
    }
  }
});
```

James
Re: Modules loader define method
On Fri, Nov 1, 2013 at 1:04 PM, Jeff Morrison wrote:
> No, that's why I said the function generates an instance of a Module object
> imperatively (we're already in imperative definition land with this API
> anyway).
> No need for `import` or `export`

My understanding is that there is no way to express a mutable slot like the ones that import/export creates using existing syntax, or some property on an object. I very well could be incorrect. Looking at this:

http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders#module_objects

and:

https://github.com/jorendorff/js-loaders

I cannot see how that might work, but the info seems sparse, or at least I could have misunderstood it. Perhaps you know how a mutable slot could be expressed using existing syntax for creating Module objects? Illustrating how would clear up a big disconnect for me.

James
Re: Modules loader define method
On Thu, Oct 31, 2013 at 8:32 PM, Jeff Morrison wrote:
> Throwing this out there while I stew on the pros/cons of it (so others can
> as well):
> I wonder how terrible it would be to have this API define module bodies in
> terms of a passed function that, say, accepted and/or returned a module
> object?

This would mean allowing `import` and `export` inside a function, which starts to break down the semantic meaning of what a module "is", and how to refer to one. Any function would be allowed to import or export. What does that mean? Does a function name now qualify as a module ID? Why do import statements use string IDs?

If the function wrapping was restricted to only System.* calls to express dependencies, then it loses out on the cycle benefits of import and export: it would not be possible to adequately convert a module that used export/import to a plain function form.

For me, at least as an end user, it seems more straightforward to just allow `module 'id' {}`. It also avoids the ugliness of having strings of JS inside JS files. I appreciate it may have notable semantic impacts though.

James
Re: Standard modules - concept or concrete?
On Thu, Jun 20, 2013 at 9:08 AM, Sam Tobin-Hochstadt wrote:
> On Thu, Jun 20, 2013 at 10:26 AM, Kevin Smith wrote:
>> I wonder, though, if this might create issues for polyfilling:
>>
>> // polyfillz.js
>> if (this.Promise === void 0)
>>   this.Promise = function() { ... }
>>
>> // main.js
>> import "polyfillz.js";
>> new Promise();
>>
>> This would refuse to compile, right? We'd have to introduce all of our
>> polyfills in a separate (previous) compilation/execution cycle.
>
> Yes, like so:
>
> Note that this is already the way people suggest using polyfills; see
> [1] for an example.

I have found that once I have module loading, I want the dependencies to be specified by the modules that use them, either via the declarative dependency syntax or via module loader APIs, and at the very least avoid script tags, since the optimization tools can work solely by tracing module/JS loading APIs. In this case, only the "model" set of modules would care about setting up indexeddb access, not the top level of the app.

For example, this AMD module:

https://github.com/jrburke/carmino/blob/master/www/lib/IDB.js

asks for "indexedDB!", which is an AMD loader plugin:

https://github.com/jrburke/carmino/blob/master/www/lib/indexedDB.js

which feature detects and uses a module loader API to load a shim if it is needed. So the "IDB" module will not execute until that optional shim work is done.

I believe this will also work via the ES Module Loader API, but I am calling it out just in case I missed something. I want to be sure there are options that do not require using
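The feature-detect-then-shim pattern in the linked plugin can be sketched as follows. This is a simplified model, not the linked code: the AMD loader-plugin `load(name, req, onload)` contract is real, but the plugin body, the shim module ID, and the toy harness standing in for the loader are all assumptions of this sketch.

```javascript
// Sketch of an AMD-style loader plugin: feature-detect the capability,
// and only load a shim module when the native API is missing. The
// requesting module does not execute until onload() has been called.
const indexedDBPlugin = {
  load(name, req, onload) {
    if (typeof globalThis.indexedDB !== 'undefined') {
      onload(globalThis.indexedDB); // native support: resolve immediately
    } else {
      // No native support: load a (hypothetical) shim module first,
      // then resolve the plugin request with the shim's export.
      req(['indexeddb-shim'], (shim) => onload(shim));
    }
  },
};

// Toy harness standing in for the loader: req() synchronously "loads"
// a fake shim, and onload() captures the resolved value.
let resolved;
indexedDBPlugin.load(
  'indexedDB!',
  (deps, cb) => cb({ shim: true }),
  (value) => { resolved = value; }
);
console.log(resolved !== undefined); // true: either native indexedDB or the shim
```

The point carried over from the message: the dependency on the shim lives inside the module graph, so a build tool tracing loader calls sees it, with no separate script tag required.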
Re: Minor questions on new module BNF
On Tue, Jun 4, 2013 at 8:02 AM, Domenic Denicola wrote: > From: Yehuda Katz [wyc...@gmail.com] > >> In general, expectations about side-effects that happen during module >> loading are really edge-cases. I would go as far as to say that modules that >> produce side effects during initial execution are "doing it wrong", and are >> likely to produce sadness. > >> In this case, the `import` statement is just asking the module loader to >> download "someModule", but allowing the app to move on with life and not >> bother executing it. This would allow an app to depend on a bunch of >> top-level modules that got executed only once the user entered a particular >> area, saving on initial boot time. > > I don't think this is correct. It is strongly counter to current practice, at > the very least, and I offered some examples up-thread which I thought were > pretty compelling in showing how such side-effecting code is fairly widely > used today. > > This isn't a terribly important thing, to be sure. But IMO it will be very > surprising if > > ```js > import x from "x"; > ``` > > executes the module "x", producing side effects, but > > ```js > import "x"; > ``` > > does not. It's surprising precisely because it's in that second case that > side effects are desired, whereas I'd agree that for modules whose purpose is > to export things producing side effects is "doing it wrong." Agreed, and this is at least what is expected in AMD code today. Not all scripts export something, but are still part of a dependency relationship (may be event listeners/emitters). The `import "x"` expresses that relationship. I do like the idea of module bodies not being executed (or even parsed?) if it is not part of an explicit System.load or import chain. For code that wanted to delay the execution of some modules though, I expect that trick to be worked out via a delayed System.load() call than something to do with an import "x" combined with a System.get(). 
This is how it works in AMD today: define()'d modules are not executed until they are part of a require chain. Some folks use this to deliver define()'d modules in a bundle, but only trigger their execution on some later runtime event, at which point they do a require(["x"]) call (which is like System.load) that then executes the define()'d "x" module.

So: yes to delayed execution (and even delayed parse), but not via import "x" + System.get(); just via System.load(), with all import forms doing the same thing for module body execution.

James

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
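The deferred-execution flow described above can be sketched with a tiny registry. `define` and `requireSync` here are a hypothetical mini-loader for illustration, not the real RequireJS API surface:

```javascript
// Minimal sketch of AMD's deferred-execution model: define() only registers
// a factory; the module body does not run until a require chain pulls it in.
const registry = new Map();   // id -> { deps, factory }
const cache = new Map();      // id -> evaluated exports
let executed = [];            // records execution order, for illustration

function define(id, deps, factory) {
  registry.set(id, { deps, factory });   // no execution here
}

function requireSync(id) {
  if (cache.has(id)) return cache.get(id);
  const { deps, factory } = registry.get(id);
  const args = deps.map(requireSync);    // execute dependencies first
  executed.push(id);
  const exports = factory(...args);
  cache.set(id, exports);
  return exports;
}

// Bundled delivery: both modules are define()'d up front...
define('x', [], () => ({ name: 'x' }));
define('app', ['x'], (x) => ({ uses: x.name }));

// ...but nothing has executed yet.
console.log(executed.length); // 0

// Later, some runtime event triggers a require(["app"])-style call:
const app = requireSync('app');
console.log(executed);        // ['x', 'app']
```

Delivering a bundle this way gets one fetch but still pays execution cost only when a module is actually required.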
Re: Module naming and declarations
On Mon, May 20, 2013 at 12:07 PM, Kevin Smith wrote:

> On the other hand, I think it is possible with URLs to create a system which
> truly does work out-of-the-box.
>
> Let's imagine a world where publicly available modules are located at sites
> specifically designed for naming and serving JS modules. Call it a web
> registry. Intra-package dependencies are bundled together using lexical
> modules - the package is the smallest unit that can be referenced in such a
> registry. The registry operates using SPDY, with a fallback on HTTPS, so
> for modern browsers multiplexing is not a critical issue. In such a world,

There are lots of problems with this kind of URL-based ID with a web registry, which I will not enumerate because they basically boil down to the problems with using URLs: URLs, particularly when version information gets involved, are too restrictive. The IDs need to have some fuzziness to make library code *sharing* easier. I have given some real world examples previously. That fuzziness needs to be resolved, but it should be done once, at dependency install time, not for every run of the module code. "Dependency install time" can just mean "create a file at this location"; it does not mandate tools.

At this point, I would like to see "only URLs as default IDs" tabled unless someone actually builds a system that uses them and that system gets some level of adoption. If it were a great idea and it solved problems better than other solutions, I would expect it to get good adoption. However, all the data points so far, ones from other languages and ones from systems implemented in JS, indicate the URL choice is not desirable.

Note that this problem domain is different from something that needs new language capabilities, like the design around mutable slots for `import`. This is just basic code referencing and code layout. It does not require any new magic from the language; it is something that could be built in code now.
Side note: existing HTML script tag use of URLs is not a demonstration of the success of URLs for a module system, since those URLs are decoupled from the references to code in JavaScript, and the developer has to manually encode the dependency graph without much help from the actual code.

Another side note: if someone wanted to use a web registry for library dependencies, they could just set the baseURL to the web registry location and have one config call to set the location for app-local code. That would end up with less string and config overhead than "only URLs as IDs". There has even been a prototype done to this extent: http://jspm.io/ -- it is backed by the module ID approach used for AMD modules/requirejs.

But again, these are all side notes. The weight of implementations, and real world use cases, indicates "only URLs as default IDs" is not the way to go.

James
Re: Module naming and declarations
On Wed, May 8, 2013 at 11:35 AM, Domenic Denicola wrote:

> From: sam...@gmail.com [sam...@gmail.com] on behalf of Sam Tobin-Hochstadt
> [sa...@ccs.neu.edu]
>
>> There's a default place to fetch files from, because there has to be _some_
>> default.
>
> Why?
>
> This is the core of my problem with AMD, at least as I have used it in the
> real world with RequireJS. You have no idea what `require("string")`
> means---is `"string"` a package or a URL relative to the base URL? It can be
> either in RequireJS, and it sounds like that would be the idea here.
> Super-confusing!

What part is confusing? Logical IDs are found at baseURL + ID + '.js', and if the module is not there, then look at the require.config call to find where it came from.

Not having a default would mean *always* needing to set up configuration or a specialized module loader bootstrap script to start a project, and it would still require the developer to introspect a config or understand the loader bootstrap script to find things. Why always force a config step and/or a specialized module loader bootstrap? There are simple cases that can get by fine without any configuration or loader bootstrap.

James
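The lookup rule described above (baseURL + ID + '.js', with config overrides) can be sketched as a small function. This is an illustrative approximation of the rule, not the actual RequireJS implementation, and the config property names and CDN URL are assumptions:

```javascript
// Sketch: resolve a logical module ID to a URL, RequireJS-config style.
function resolve(id, config) {
  // 1. Explicit config wins (require.config({ paths: { ... } }) style).
  if (config.paths && config.paths[id]) {
    return config.paths[id] + '.js';
  }
  // 2. Otherwise the default: baseURL + ID + '.js'.
  return config.baseUrl + '/' + id + '.js';
}

const config = {
  baseUrl: 'scripts',
  paths: { jquery: 'https://cdn.example.com/jquery-1.9' } // hypothetical CDN
};

console.log(resolve('app/main', config)); // "scripts/app/main.js"
console.log(resolve('jquery', config));   // "https://cdn.example.com/jquery-1.9.js"
```

The simple cases need only the default rule; the config call is introduced only when an ID must point somewhere else.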
Re: Module naming and declarations
On Wed, May 8, 2013 at 10:44 AM, Sam Tobin-Hochstadt wrote:

> How is this in disagreement with what Jason said? His point is that
> if you're in the module "a/b/c", "./controllers" refers to
> "a/b/controllers", and "backbone" refers to "backbone". Once you have
> a module name, there's a default resolution semantics to produce a URL
> for the fetch hook, which you describe accurately.

For a developer coming from Node, this may be slightly new, and I think when Jason mentions "package", it may not all fit together with how they understand Node to work. Here is a shot at trying to bridge that gap:

Node resolves relative IDs relative to the path of the reference module, not relative to the reference module's ID. This is a subtle distinction, but one where Node and AMD systems differ. AMD resolves relative IDs relative to the reference module's ID, then translates that to a path, similar to what Sam describes above.

I believe Node's behavior mainly falls out from Node using the path to store the module export instead of the ID; it makes it easier to support the nested node_modules case and the package.json "main" introspection. However, that approach is not a good one for the browser, where concatenation should preserve logical IDs rather than use paths as IDs. This allows disparate concatenated bundles and CDN-loaded resources to coordinate.

For ES modules and Node's directory+package.json main property resolution, I expect it would work something like this: Node would supply a Module Loader instance with normalize and resolve hooks such that 'backbone' is normalized to the module ID 'backbone/backbone' after reading the backbone/package.json file's main property, which points to 'backbone.js'. The custom resolver maps 'backbone/backbone' to the node_modules/backbone/backbone.js file.
For the nested node_modules case, Node could decide either to make new Module Loader instances seeded with the parent instance's data, or just to expand the IDs to be unique, including the nested "node_modules" directory in the normalized logical ID name.

If 'backbone', expanded to 'backbone/backbone' after directory/package.json scanning, asked for an import via a relative ID, './other', that could still be resolved to 'backbone/other', which would be found inside the "package" folder. So I think it works out.

James
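The ID-relative (AMD-style) resolution described above can be sketched as a small normalize step. `normalize` is an illustrative helper, not loader-spec API:

```javascript
// Sketch: resolve a relative ID against the *ID* of the referencing module
// (AMD style), not against its file path (Node style).
function normalize(relId, refId) {
  if (!relId.startsWith('.')) return relId; // top-level ID, unchanged
  // Drop the reference module's own segment, then walk the relative parts.
  const parts = refId.split('/').slice(0, -1).concat(relId.split('/'));
  const out = [];
  for (const part of parts) {
    if (part === '.') continue;
    else if (part === '..') out.pop();
    else out.push(part);
  }
  return out.join('/');
}

// From module "a/b/c", per Sam's example:
console.log(normalize('./controllers', 'a/b/c')); // "a/b/controllers"
console.log(normalize('backbone', 'a/b/c'));      // "backbone"

// 'backbone' expanded to 'backbone/backbone' via package.json "main";
// a relative './other' inside it still lands in the same logical package:
console.log(normalize('./other', 'backbone/backbone')); // "backbone/other"
```

Because this works purely on logical IDs, it behaves the same whether the module came from a file, a CDN, or a concatenated bundle.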
Re: Module naming and declarations
On Tue, May 7, 2013 at 5:21 PM, Domenic Denicola wrote:

> I'm not sure the Node.js scheme is the best idea for the web, but I *would*
> like to emphasize that the AMD scheme is not a good one and causes exactly
> the confusion we're discussing here between URLs and module IDs.

I believe this mixes the argument about allowing URLs in IDs with how logical IDs are resolved in Node vs AMD. Not all AMD loaders allow URLs as dependency string IDs, but requirejs does. That choice is separate from how logical IDs are resolved. If normal logical ID notation is used with an AMD loader (like 'some/thing', as in Node/CommonJS), then it is similar to Node/CJS. It just has a different resolution semantic than Node, which is probably an adjustment for a Node developer. But the resolution is different for good reason: multiple IO scans to find a matching package are a no-go over a network.

I think the main stumbling block in this thread is just "should module IDs allow both URLs and logical IDs". While I find the full URL a nice convenience (explained more below), I think it would be fine to just limit the IDs to logical IDs if this is too difficult to agree on. That choice is still compatible with AMD loaders, and some AMD loaders already make the choice to only support logical IDs.

---

Perhaps the confusing choice in requirejs was treating IDs ending in '.js' as URLs, not logical IDs. I did this originally to make it easy for a user to just copy-paste their existing script src URLs and dump them into a require() call. However, in practice this was not really useful, and I believe it is the main source of confusion with URL support in requirejs. I would probably not do that if I were to do it over again.

However, in that do-over case, I would still consider allowing full URLs, using the other URL detection logic in requirejs: if an ID starts with a / or has a protocol, like http:, before the first /, it is a URL and not a logical ID.
That has been useful for one-off dependencies, like third-party analytics code that lives on another domain (so referenced either with an explicit protocol, http://some.domain, or protocol-less, //some.domain.com), and which is only "imported" once, in the main app module, just to get some code on the page. Needing to set up a logical ID mapping for it just seemed overkill in those cases.

James
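The detection rule described above ("starts with a / or has a protocol before the first /") can be sketched as a predicate. This is an approximation of the heuristic, not RequireJS's actual source:

```javascript
// Sketch: classify a dependency string as a URL or a logical module ID.
function isUrl(id) {
  return id.startsWith('/') ||             // "/a.js" or protocol-less "//some.domain.com/a.js"
    /^[a-z][a-z0-9+.-]*:/i.test(id);       // explicit protocol, e.g. "http:", "https:"
}

console.log(isUrl('http://some.domain/analytics.js')); // true
console.log(isUrl('//some.domain.com/analytics.js'));  // true
console.log(isUrl('some/thing'));                      // false, a logical ID
```

Anything classified as a logical ID would then go through the normal baseURL/config resolution instead of being fetched verbatim.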
Re: Module naming and declarations
On Wed, May 1, 2013 at 8:28 AM, Tab Atkins Jr. wrote:

> Central naming authorities are only necessary if you need complete
> machine-verifiable consistency without collisions. As long as humans
> are in the loop, they tend to do a pretty good job of avoiding
> collisions, and managing them when they do happen.

I would go further: because humans are involved, requiring a central naming authority, like a URL, for module IDs is the worse choice. There are subcultures that ascribe slightly different meanings to identifiers, but still want to use code that mostly fits an identifier but comes from another subculture. The current approach to module IDs in ES modules allows for that fuzzy meaning very well, with resolution against any global locations occurring at dependency install/configure time, when the subculture and context are known. Runtime resolution of URL IDs would require more config, and generate more friction and more typing.

Examples from the AMD module world:

1) Some projects want to use jQuery with some plugins already wired up to it. They can set up 'jquery' to be a module that imports the real jQuery and all the plugins they want, and then return that modified jQuery as the value for 'jquery'. Any third-party code that asks for 'jquery' still gets a valid value for that dependency. With ES modules in their current form, they could do this without needing any Module Loader configuration, and all the modules use a short 'jquery' module ID.

2) A project developer wants to use jQuery from the project's CDN. A third-party module may need jQuery as a dependency, but the author of that third-party module specified a version range that does not match the current project. However, the project developer knows it will work out fine. The human that specified the version range in that third-party module did not have enough context to adequately express the version range or the URL location.
The best the library author can express is "I know it probably works with this version range of jQuery". If all the modules just use 'jquery' for the ID, the project developer just needs one top-level app config to point 'jquery' to the project's CDN, and it all works out. A URL ID approach, particularly when version ranges are in play, would mean much more configuration being needed for the runtime code. All the IDs would require more typing, particularly if version ranges are to be expressed in the URLs.

Summary: It is best if the suggestions on where to fetch a dependency from a globally unique location, and what version range is applicable, are kept in separate metadata, like package.json, bower.json, component.json. But note that these are just suggestions, not requirements, and the suggestions may vary based on the project context; for example, browser-based vs. server-based, or mobile vs. desktop. Only the end consumer has enough knowledge to do the final resolution.

It would be awkward to try to encode all of the version and context choices in the module IDs used for import statements as some kind of URL. Even if it were attempted, it could not be complete on its own; the end context is only known by the consumer. So it would lead to large config blocks that need to be loaded *by the runtime* to resolve the URL IDs to the actual location. With short names like 'jquery', there is a chance to just use convention-based layout, which means no runtime config at all, and if there is config, a much smaller size than what would be needed for URL-based IDs. Any resolution against global locations happens once, when the dependency is brought into the project, and is not needed for every runtime execution. Plus, less typing is needed for the module IDs in the actual JS source.

In addition, the shorter names have been used successfully in real world systems, examples mentioned already on this thread, so they have been proven to scale.
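The wiring in example (1) above can be sketched with plain functions standing in for modules so it runs standalone; realJQuery, pluginA, and projectJQuery are illustrative names, not real library code:

```javascript
// Sketch of example (1): a project-local 'jquery' module that wires plugins
// onto the real library and exports the result. Plain functions stand in
// for module loading here; all names are illustrative.

// The "real" jQuery module:
function realJQuery() {
  return { fn: {}, version: '1.9.0' };
}

// A plugin module that augments jQuery's plugin namespace:
function pluginA(jq) {
  jq.fn.pluginA = () => 'a';
}

// The project's own 'jquery' module: pull in the real library plus the
// plugins it wants, then export the modified object. Third-party code
// asking for 'jquery' gets this value and never knows the difference.
function projectJQuery() {
  const jq = realJQuery();
  pluginA(jq);
  return jq;
}

const $ = projectJQuery();
console.log(typeof $.fn.pluginA); // "function"
```

The key point is that the short 'jquery' ID stays stable for every consumer; only the project decides what value stands behind it.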
James
Re: Modules feedback from March 2013 meeting
On Tue, Mar 26, 2013 at 3:23 AM, Andreas Rossberg wrote:

> On 25 March 2013 18:31, James Burke wrote:
>> ### Single Anonymous Export ###
> Also, optimising the entire syntax for one special use case while
> uglifying all regular ones will be a hard sell.

I believe this is one of the points of disconnect, at least with people in the Node and AMD communities: single export is the regular form, and multiple exports are seen as the special case. But my main point with this section was: I was hoping that by turning the syntax around (export for single anonymous, some other export with a label for multiple exports) maybe that opened up some syntax options. Syntax is hard, though, and I do not envy TC39's job of sorting it all out. Sorry if this was just noise.

> As I have explained earlier on this list, destructuring import and
> destructuring let are not the same. The former introduces aliases, not
> new stateful bindings. This is relevant if you want to be able to
> export mutable entities. So no, we cannot drop import.

Right, thank you for the reminder of the previous thread. I can see mutable entities helping cycle cases. I am curious to know what else they help, but cycle cases are important, so that alone is nice.

I was hoping that with single export, since a mutable entity was a special, new thing, there was more freedom to write the rules around it. However, it seems different enough that it cannot fit in with `let` or `var`. I wonder if this implies that later assignment to the import name is not allowed:

import { decrypt } from 'crypto';
//This would be an error?
decrypt = function () {};

If so, that really drives home that it is a new, special kind of thing. In any case, thanks for your response. With that information, this is the kind of summary I would give to Node and AMD users:

* import exists because it creates something new in the language, a reference to a mutable slot. This is really important for cycle resolution. let and var cannot handle this type of mutable slot.
* multiple exports exist because they allow for better static checking and, due to how import/export works with mutable slots, allow cycles with those exports. While your community may not prefer a multiple-export style, there are others that do. Also, in some cases there are "roll up" modules that aggregate an interface to multiple module exports, and multiple exports allow that to work even with cycles.

* single anonymous export will be supported, so you can code all your modules in that style and it all works out, and you even get better cycle support when non-function exports are involved. (I have seen cycles in Node rely on function hoisting and strategically placed require/module.exports assignments to work; non-function exports are harder to support with that pattern.)

* Node's imperative require is not deterministic enough for a general loading solution, particularly for the web and network fetching. The ES spec solves this by using string literals for dependencies that are language-enforced to be top level, with System.load() for any computed dependency. The mutable slots provided by import give robust cycle support.

* there are enough hooks in the Module Loader spec to allow Node to internally maintain its synchronous require, so it does not have to force all modules to upgrade to a new syntax, and a good level of interop with ES6 modules is possible.

* AMD's dependency resolution has the right amount of determinism, but suffers from weak cycle support. It is also less clear semantically, since require('StringLiteral') can be used in control structures like if/else, but operates more like System.get('StringLiteral'): it just returns the cached module value, it does not trigger conditional code loading. All require('StringLiteral') calls are effectively "hoisted" to the top level for module loading purposes, which can be surprising to the end user, particularly when coming from Node.
* since the ES6 Module Loader can load scripts with the same browser security rules as script tags (load cross-domain without CORS, avoid problems with eval, like CSP restrictions), the need in AMD for a function wrapper in single-module-per-file source form goes away, and you recover a level of indent.

* there are enough hooks in the Module Loader spec that a hybrid AMD/ES6 loader can be made, so there is no need to force-upgrade all your AMD modules; it can be done over time. Since AMD's execution model aligns pretty well with the ES6 model, it will be easy to write conversion scripts.

>> ### Nested modules ###
> I agree with your goal, and that is why I still maintain my point of
> view that modules should be denoted by regular lexically scoped
> identifiers, like any good language citizen. Then we'd get the right
> rules for free, in a clean, declarative manner, and wouldn't need to
> re
Re: Modules feedback from March 2013 meeting
Hi Sam,

I really was not expecting a reply, as it was a lot of feedback; I just wanted to get some things in the "to be considered at some point/use case" queue. Some clarifications follow, but I do not think it is worth continuing discussion here given the breadth of the feedback and the stage of the spec development:

On Mon, Mar 25, 2013 at 12:56 PM, Sam Tobin-Hochstadt wrote:

>> With that capability, it may be possible to go without `import` at
>> all, at least at this stage of ES (macros later may require it). The
>> one case where I think `import` may help are cycles, but if the cycle
>> parts are placed in separate modules with a single export, it may
>> still work out. Using the assumption of single anonymous export and
>> the "odd even" example from the doku wiki:
>
> Therefore, these changes aren't really simplifying things.

Does it complicate things more, though? While import checking may not change much, hopefully the simplifications are:

* removal of an import syntax keyword (just have `from`)
* possibly reducing the scope of export syntax
* possible improvement in general destructuring, even when a module is not the target.

Maybe this makes some things much more complicated. If so, it would be good to document why/include it in the use cases at some point. I believe this is at the heart of some of the "this seems complicated" feedback, but hopefully expressed in a more precise, targeted way as to what needs to be explained to someone who might think that. It does not need to be explained now; I am just calling out a candidate for the documentation/use case queue.

>> ### Loader pipeline hooks ###
> The system doesn't build AMD-style plugins into the core of the module
> proposal, however. They're neither fully-general (you could configure
> based on something other than a prefix) nor used everywhere in
> existing JS, and we don't want to prematurely standardize on one
> system.
AMD-style plugins would purposely not be fully general in this system; that is the job of the loader pipeline hooks. It does not have to be AMD-style directly, but something where I could specify a module ID that could handle a type of resource ID; that module gets loaded (with its dependencies), and it gets automatically wired into the pipeline if it exposes a property whose name matches a pipeline hook name.

This is also why I suggest more declarative config, like shim, vs an imperative link hook. It is still useful to have the loader hooks as they are, but just as "ondemand" is being considered, there are others that have been proven useful without prone-to-error imperative overrides of hooks (don't forget to check for a previous one and call it (before, after?) you run your code). Since the loader pipeline stuff is still under development though, this feedback may be much too early.

>> ### Nested modules ###
> You could express this as:
>
> module "publicThing/j" {}
> module "publicThing/k" {}
>
> module "publicThing" {
>   export …. //something visible outside publicThing
> }

I am sorry, I mixed how the code would be on disk with how it may be organized later conceptually after loading. How the example would look on disk:

//In publicThing.js
module "j" {}
module "k" {}
export …. //something visible outside this module

Then this module is imported by some other module via the "publicThing" name. So publicThing.js does not know its final ID, and "j" and "k" are not meant to be exposed as public modules; they are just for publicThing's internal use. The ID lookup tables I visualized more like maps with prototypes, with the prototype being a parent module space map. Unfortunately we probably do not share a common vocabulary here, so I will stop trying to suggest a solution and just point out the use case.
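The plugin shape referenced earlier in this message (a module whose export handles a resource ID) can be sketched roughly as follows. The object shape, the fakeFetch helper, and the synchronous wiring are all illustrative assumptions, not any spec or AMD-mandated API:

```javascript
// Sketch: a text-style loader plugin. In real AMD the plugin is itself a
// define()'d module exporting a load() function; here everything is inlined
// and the fetch is faked so the example runs standalone.
const textPlugin = {
  // Real AMD signature is load(resourceId, parentRequire, onLoad, config);
  // trimmed here to the parts the sketch needs.
  load(resourceId, fetchText, onLoad) {
    fetchText(resourceId, (contents) => onLoad(contents));
  }
};

// A fake fetch standing in for XHR/fs access:
function fakeFetch(id, cb) { cb('<h1>contents of ' + id + '</h1>'); }

// The loader sees "text!some.html", splits on the separator, loads the
// plugin module, then hands "some.html" to it:
const [pluginId, resourceId] = 'text!some.html'.split('!');
let result;
textPlugin.load(resourceId, fakeFetch, (value) => { result = value; });
console.log(pluginId); // "text"
console.log(result);   // "<h1>contents of some.html</h1>"
```

The important property is that the plugin is discovered from the dependency string itself, so it and its own dependencies can be loaded on demand rather than being registered up front.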
>> ### Legacy opt-in use case ###
>> //Some base library that needs to be in ES5 syntax:
>> var dep1, dep2;
>> if (typeof System !== 'undefined' && System.get) {
>>   //ES6 module loader. The loader will fetch and process
>>   //these dependencies before executing this file
>>   dep1 = System.get('dep1');
>>   dep2 = System.get('dep2');
>> } else {
>>   //browser globals case, assume the scripts have already loaded
>>   dep1 = global.dep1;
>>   dep2 = global.dep2;
>> }
>
> In this setting, you could just run the code exactly as you wrote it,
> without changing the default loader at all, and it would work provided
> that the dependencies were, in fact, already loaded, just the way it's
> assumed in the browser globals case. I imagine that lots of libraries
> will work exactly like this, the same way jQuery plugins expect jQuery
> to already be loaded today.

We have found in the AMD world that once developers have a module loader API, they want to avoid loading scripts in some manually constructed order specified outside of JS, in HTML. If the user needs a third-party script loader to do this on top of ES modules, that seems redundant.

> Adding a build step that performs this analysis expli
Modules feedback from March 2013 meeting
I expect the module champions to be busy, so I am not expecting a response. This is just some feedback to consider or discard at their discretion. I'll wait for the next public update on modules to see where things end up. In general, it sounds promising. I'm going off the meeting notes from here (thanks Rick and all who make these possible!):

https://github.com/rwldrn/tc39-notes/blob/master/es6/2013-03/mar-12.md#42-modules

### Single Anonymous Export ###

The latest update was more about semantics, but some thoughts on how single anonymous export might work. Just use `export` for the single anonymous export:

module "m" {
  export function calculate() {}
}

where `calculate` is just the local name for use internally by the module; `calculate` is not visible to outside modules, they just import that single anonymous export.

For exporting a named property:

module "n" {
  export calculate: function () {}
}

This would still result in a local `calculate`, a let-equivalent local name, but then also allows other modules to import `calculate` from this module.

Single anonymous export of something that is not a function:

module "crypto" {
  export let c = {
    encrypt: function () {},
    decrypt: function () {}
  }
}

Inside this module, `c` is just the local name within the module, not visible to the outside world. Syntax is hard though, so I will not be surprised if this falls down.

### Import ###

If the above holds together: for importing a single anonymous export, using the "m" above:

import calc from "m";

This module gets a handle on the single anonymous export and calls it calc locally. The "n" example:

import { calculate } from "n";

start extremely speculative section:

This next part is very speculative, and the most likely of this feedback to be a waste of your time: "crypto" is a bit more interesting.
It would be neat to allow:

let { encrypt } from "crypto";

which is shorthand for:

import temp from "crypto";
let { encrypt } = temp;

This could all work out because `from` would still be restricted to the top level of a module (not nested in control structures). `from` would be the parse hook for finding dependencies, not `import`.

If refutable matching:

http://wiki.ecmascript.org/doku.php?id=harmony:refutable_matching

applied to destructuring allowed some sort of "throw if property is not there" semantics (making this up, assume a ! prefix for that):

let { !encrypt } from "crypto";

This would give a similar validity check to what `import { namedExport }` would give. It may happen later in the lifecycle of the module (when the code is run, vs linking time), but since `from` is top level, it would seem difficult to observe the difference.

Going one step further: with that capability, it may be possible to go without `import` at all, at least at this stage of ES (macros later may require it). The one case where I think `import` may help is cycles, but if the cycle parts are placed in separate modules with a single export, it may still work out. Using the assumption of single anonymous export and the "odd even" example from the doku wiki:

module 'E' {
  let odd from 'O';
  export function even(n) {
    return n == 0 || odd(n - 1);
  }
}

module 'O' {
  let even from 'E';
  export function odd(n) {
    return n != 0 && even(n - 1);
  }
}

Going even further: then the `export publicName: value` syntax may not be needed either. My gut says getting to this point may not be possible. Maybe it is for the short term/ES6, but macros may require `import` and `export publicName:`. When documenting the final design decisions, it would be good to address where these speculative steps fall down, as I expect there are some folks in the Node community that would also take this train of thought.
--- end extremely speculative section

### Loader pipeline hooks ###

I know more needs to be specified for this, so this feedback may be too early. The examples look like they assign to the hooks:

System.translate = function () {}

Are these additive assignments? If more than one thing wants to translate, is this more like addEventListener?

I recommend allowing something like AMD loader plugins, since they allow participating in the pipeline without needing to be loaded first, before any other modules. It also allows the caller to decide what hooks/transforms should be done, instead of some global handler sneaking in and making the decision. So, if a dependency is "t...@some.html" (AMD loader plugins use !, "text!some.html", pick whatever you all think works as a separator):

* load the text module.
* Grab the export value. If the export value has a property (or an explicit export property) that matches one of the pipeline names, normalize, resolve, fetch, translate, etc…, use those when trying to process "some.html".

The main feedback here is to not expect to load all pipeline hooks up front. This would break dependency encapsulation. It seems fine
Re: Modules spec procedure suggestion (Re: Jan 31 TC39 Meeting Notes)
On Thu, Feb 7, 2013 at 1:00 PM, Claus Reinke wrote:

> There are some existing ES.next modules shims/transpilers that
> could be used as a starting point.

Here is an experiment I did with a JS module syntax that allowed for static compile-time hooks for things like macros, but still allowed JS that needed to exist both in pre-ES.next and ES.next to opt into modules (think base libraries like jQuery, underscore, backbone):

https://github.com/jrburke/modus

It has some tests, and the modus.js file runs in ES5 browsers. Since the modules desugar to runtime APIs after the compile-time pass for macros, it does not get the lower-level variable bindings that the ES module format is targeting. This means it has similar cycle dependency support to AMD: it can be done, but may not be as nice as ES module variable binding, and has some restrictions.

The API and syntax were just provisional, something chosen to wire things together, and it is not a fully fleshed out system. Also, it was not done to advocate macros in particular. Macros were used to demonstrate how a static, "compile time" feature could be supported in a system that desugared to runtime API calls, and it helped that sweetjs was available for that heavier lifting.

It may be useful for others to look at as a way to prototype any official ES spec approach, even if it does not give full fidelity; in particular, the use of the sweetjs reader to do a first-pass transform of the module syntax.

James
Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?
On Thu, Dec 20, 2012 at 11:51 AM, Sam Tobin-Hochstadt wrote:

> - I don't see what a mutable `exports` object would add on top of
> this system, but maybe I'm not understanding what you're saying.

It is one way to allow circular dependencies in CommonJS/Node/AMD systems. The other way is to call require() at runtime to get the cached module value at the time of actual use. Some examples here:

http://requirejs.org/docs/api.html#circular

James
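The "call require() at the time of actual use" workaround mentioned above can be sketched with a tiny module table; `req` stands in for a runtime require() and all names are illustrative:

```javascript
// Sketch: breaking a cycle by deferring the require() to call time.
const modules = {};
function req(id) { return modules[id]; } // stands in for runtime require()

// Module "a" depends on "b", and vice versa, but each one only looks up
// the other inside a function body, at call time; by then both modules
// have finished loading and are in the cache.
modules.a = {
  hello: () => 'a sees ' + req('b').name
};
modules.b = {
  name: 'b',
  callA: () => req('a').hello()
};

console.log(modules.b.callA()); // "a sees b"
```

Had either module tried to dereference the other at its top level, it would have seen a partially initialized value; deferring the lookup sidesteps that, at the cost of discipline from the module author.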
Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?
On Thu, Dec 20, 2012 at 8:22 AM, Kevin Smith wrote:

> This is exactly the use case that my OP addresses. The logic goes like
> this: in order to apply that boilerplate, you have to know whether the
> module is ES5 or ES6. In order to know that, you have to parse it.
> Pre-parsing every single module is not practical for a production system.
> Therefore applying such boilerplate is not practical for a production
> system.

That was not my impression of how backcompat would be done. I was under the impression it would be more like this:

* The module loader API exposes a "runtime" API that is not new syntax, just an API. From some earlier Module Loader API drafts, I thought it was something like System.get() to get a dependency and System.set() to set the value that will be used as the export.

* Base libraries that need to live in current-ES and ES.next worlds (jquery, underscore, backbone, etc…) would *not* use the ES.next module syntax, but would feature-detect the System API and call it to participate in an ES.next module scenario, similar to how a module today detects whether it wants to register for Node, AMD, or browser globals: https://github.com/umdjs/umd/blob/master/returnExportsGlobal.js

* Modules using the ES.next module syntax will most likely be confined to "app logic" at first, because not all browsers will have ES.next capabilities right away, and only apps that can restrict themselves to ES.next browsers will use the module syntax. Everything else will use the runtime API.

Otherwise, forcing existing libraries that need to exist in non-ES.next browsers to provide an "ES.next" copy that forces the use of new JS module syntax is effectively creating a "2JS" system, and if that is going to happen, we might as well make more backwards-incompatible changes for ES.next. Previous discussion on this list seems to indicate a desire to keep with 1JS.
For using ES5 libraries that do not call the ES Module Loader runtime API, a "shim" declarative config could be supported by the ES Module Loader API, similar to the one in use by AMD loaders: http://requirejs.org/docs/api.html#config-shim

This allows the end developer to consume the old code in a modular fashion, with the parsing done by the ES Module Loader, not userland JS. So there is not a case where someone would ship a module loader that does full JS parsing to detect new module syntax, except for experimental purposes, or one used only in dev together with a build step that translates module syntax to the runtime API forms, so that the code could run either in ES.next browsers or in ES5 browsers with an API shim.

> No - the solution for Node WRT ES6 modules, in my mind, is to "pull off the
> bandaid". The solution should not be to make compromises on the module
> design side.

With the runtime System API, node can adapt its module system to use the ES.next Module Loader API hooks for resolve/fetch, and hopefully there will be a way to register a require() function for each module that calls System.get() underneath, and a module.exports that calls System.set(). However, the ES.next Module Loader API does the actual parsing of the file, scanning for ES.next module syntax, so Node itself does not need to deliver an in-JS parser.

Maybe instead of System.set() (I would like to see it in addition to System.set()) there is a System.exports, like the CommonJS `exports`, that would allow avoiding the "exports assignment" pattern for modules that want to do that.

Summary: if all of the above holds true (clarification on the Module Loader API is needed), then I do not believe the original post about parsing of old and new code is a strong case for avoiding export assignment.

James
Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?
On Wed, Dec 19, 2012 at 11:44 AM, Kevin Smith wrote:
> But that cowpath was only created because of the problems inherent in a
> dynamic, variable-copy module system, as I argue here
> (https://gist.github.com/4337062). In CommonJS, modules are about
> variables. In ES6, modules are about bindings. The difference is subtle,
> but makes all the difference.

Those slightly different things are still about naming, and my reply was about naming. Whether it is a "variable" or a "binding", the end result is whether the caller of the code needs to start with a name specified by the module or with a name of the caller's choosing. The same design aesthetics are in play.

This is illustrated by an example from Dave Herman, for a language (sorry, I do not recall which) where developers ended up using "_t", or some convention like that, to indicate a single export value that they did not want to name. As I recall, that language had something more like "bindings" than "variables". It would be ugly to see a "_t" convention in JS (IMO).

In summary, I do not believe there is a technical issue with export assignment and backcompat, which is what started this thread. A different argument (and probably a different thread) against export assignment needs to be made, with more details on the actual harm it causes. If the desire not to have export assignment is a style preference, it will be hard to make that argument given the style in use in existing JS, both in node and AMD. Real world use and adoption should have more weight when making the style choice.

James
Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?
On Tue, Dec 18, 2012 at 12:56 PM, Kevin Smith wrote:
> At first glance, it seems like anonymous exports might provide a way for
> pre-ES6 (read: Node) modules and ES6 modules to coexist. After all:
>
> exports = function A() {};
>
> just looks so much like:
>
> module.exports = function A() {};
>
> But is that the case? Does this oddball syntax actually help?
>
> My conclusion is that it does not, *unless* the loading environment is
> willing to statically analyze every single module it wishes to load.
> Moreover, the desired interop is not even possible without performing static
> analysis.

I feel this is mixing up backcompat dependency matching (which has much larger issues than exports assignment) with a preference to just not have exports assignment. I believe the backcompat and parsing issues are workable. I have done some code experiments, but we need more info on the module loader API, specifically the runtime API, like System.set/get, before getting a solid answer.

Exports assignment is not specifically about backcompat, although it helps there. It is more about preserving the anonymous nature of modules. In ES modules, a module does not name itself if it is a single module in a file. The name is given by the code that refers to it. If a module exports only one thing and chooses a name for it, it is effectively naming itself.

Example from the browser world: jQuery and Zepto provide similar functionality. If jQuery exports its value as "jQuery", then Zepto would also need to export a "jQuery" value if it wanted to be used in places where the jQuery module is used. But if someone just wanted to use Zepto as "Zepto", then Zepto would need to add more export properties. Saying "well, have them both use $" is just as bad. It is simpler to allow each of them to export a function as the module value, which avoids these weird naming issues.
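A toy illustration of this naming point: each library's whole module value is one anonymous function, so the caller chooses the local name and the two libraries stay interchangeable. The modules map and importDefault() are invented for the example, not a real loader API.

```javascript
// Two "modules" whose entire value is one anonymous function.
const modules = {
  jquery: (selector) => ({ lib: "jquery", selector }),
  zepto:  (selector) => ({ lib: "zepto",  selector })
};

// Stand-in for importing a module's single export.
function importDefault(id) { return modules[id]; }

// The consumer, not the module, picks the name. Swapping "zepto" for
// "jquery" requires no renaming dance on the library side.
const $ = importDefault("zepto");
console.log($(".menu").lib); // → "zepto"
```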
Assigning a single export also nudges people toward making small modules that do one thing. It is a design aesthetic that has been established in the JS community, both in node and in AMD modules, in real code used by many people. So allowing export assignment is more about paving an existing cowpath than about a specific technical issue with backcompat.

James
Re: Comments on Meeting Notes
On Tue, Dec 4, 2012 at 8:54 AM, Kevin Smith wrote:
> === Modules ===
>
> Perhaps I'm misreading the notes, but I am concerned about the amount of
> churn that I'm seeing in module discussions. Particularly worrisome to me
> is the suggestion that the default loading behavior should map:
>
> import x from "foo";
>
> to:
>
> System.baseURL + "foo" + ".js"
>
> This is contrary to all url resolution algorithms on the web, and involves
> way too much magic.

[sorry Kevin, sent this to you directly when I meant to send it to the list, so sending again]

This is what AMD loaders use. To me it seems straightforward, not much magic on its own. The old dojo loader did too, except it used dots instead of slashes in IDs, but it was effectively baseUrl + id + '.js'. I believe it has held up well. What other ID-to-URL resolution algorithms are used on the web that have decent adoption, besides just plain URLs? Alternatively, what problems do you see with that algorithm?

It is important to use an ID type vs a plain URL to allow sharing of code between environments that may have different path resolution logic. For example, node may choose to check a few file paths for a given ID. For networked loading, though, it is good to have a reasonable one-IO-lookup rule per ID.

James
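The resolution rule being debated above fits in one line; here is a sketch, purely illustrative rather than spec text. The "one IO lookup per ID" property falls out of there being exactly one candidate URL.

```javascript
// AMD-style ID-to-URL resolution: IDs use "/" separators and carry no
// extension; an ID resolves to baseURL + id + ".js" in a single step.
// A fuller version would pass plain URLs through untouched.
function resolveId(baseURL, id) {
  return baseURL + id + ".js";
}

console.log(resolveId("https://example.com/js/", "views/home"));
// → "https://example.com/js/views/home.js"
```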
Re: Modules, Concatenation, and Better Solutions
On Tue, Oct 16, 2012 at 2:58 PM, David Herman wrote:
> prints "a" then "b" then "main". That's clearly a problem for simple
> concatenation. On the one hand, the point of eager execution was to make the
> execution model simple and consistent with corresponding IIFE code. On the
> other hand, executing external modules by need is good for usually (except in
> some cases with cyclic dependencies) ensuring that the module you're
> importing from is fully initialized by the time you import from it.

In earlier versions of requirejs, I used to eagerly evaluate define() calls as they were encountered, trying to duplicate the IIFE feel. This caused a problem for concatenation: some build scenarios combine all the modules used for a page into one JS script. However, only half the modules may be used for the first "screen render", with the second half of the modules used for a second "screen render" that is triggered by a user action. The secondary set of modules can have global state changes, like CSS/style changes. By eagerly evaluating the modules as they were encountered in the built script, the page would have unwanted style changes applied during the first screen render, when they should have been held until the second set of modules was used for the second render.

By switching to "evaluate module factory functions by need", requirejs gained the following benefits:

* Concatenated code executes closer to the order of the non-concatenated form.
* Work that does not need to be done up front is delayed. If optimizations like delayed function parsing (like v8 does?) extended to modules, even parse time could be avoided.
* Modules can be concatenated in an order that does not strictly match the linearized dependency chain (the benefit Patrick Mueller mentions earlier in the thread).

James
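A toy sketch of "evaluate module factory functions by need" as described in the message above: define() only registers a factory, and evaluation happens on first load. This mirrors the requirejs behavior in spirit only; the names and the two-screen bundle are invented for the example.

```javascript
const registered = new Map();   // id -> { deps, factory }
const evaluated = new Map();    // id -> module value
const executionLog = [];        // order in which factories actually ran

function define(id, deps, factory) {
  registered.set(id, { deps, factory });  // nothing executes yet
}

function load(id) {
  if (evaluated.has(id)) return evaluated.get(id);
  const { deps, factory } = registered.get(id);
  const args = deps.map(load);            // dependencies evaluated on demand
  executionLog.push(id);
  const value = factory(...args);
  evaluated.set(id, value);
  return value;
}

// A concatenated "bundle": both screens are registered up front, but the
// second screen's factory (which could apply style changes) does not run
// until something actually asks for it.
define("styles", [], () => "styles applied");
define("firstScreen", [], () => "first render");
define("secondScreen", ["styles"], (s) => "second render, " + s);

load("firstScreen");
console.log(executionLog); // only "firstScreen" has executed so far
```

Calling load("secondScreen") later would run "styles" and then "secondScreen", matching the deferred style-change behavior described above.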
Re: A few more questions about the current module proposal
On Thu, Jul 5, 2012 at 5:56 AM, Kevin Smith wrote:
>> Will heterogenous transpiling in a web app be supported? Can a JS
>> module depend on a CoffeeScript file, and vice versa?
>
> Right - Sam's example of having a specific CoffeeScript loader isn't going
> to actually work for this reason. Instead, we'd have to figure out which
> "to-JS" compiler to use inside of the translate hook.
>
> let maybeCoffeeLoader = new Loader(System, {
>   translate(src, relURL, baseURL, resolved) {
>     // If file extension is ".coffee", then use the coffee-to-js compiler
>     if (extension(relURL) === ".coffee")
>       src = coffeeToJS(src);
>     return src;
>   }
> });
>
> You could use the resolve hook in concert with the translate hook to create
> AMD-style plugin directives. It looks pretty flexible to me.

Right. I do not believe file extension-based loader branching is the right way to go; see the multiple text template transpiler uses for .html in AMD loader plugins. The module depending on the resource needs to choose the type of transpiler. So, as you mention, a custom resolver may need to be used. This means that there will be non-uniform dependency IDs floating around. That seems to lead to this chain of events:

* Packages that use these special IDs need to communicate that the end developer must use a particular Module Loader implementation.
* The end developer will need to load a script file before doing any harmony module loading when using those dependencies.
* People end up using loaders like requirejs.
* Which leads to the dark side. At least a side I do not want to see.

It is also unclear to me what happens if package A wants a particular ModuleLoader 1 where package B wants ModuleLoader 2, and both loaders like to resolve IDs differently. This is why I favor "specify transpiler in the ID, transpiler is just another module with a specific API".
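A sketch of the "transpiler in the ID" convention: the text before "!" names another module that acts as the transpiler for the resource after it. Only the ID syntax is illustrated here; the loader machinery around it is left out, and the example IDs are invented.

```javascript
// Split a "plugin!resource" ID into its transpiler module and resource parts.
// IDs without "!" are ordinary module IDs.
function parsePluginId(id) {
  const bang = id.indexOf("!");
  if (bang === -1) return { plugin: null, resource: id };
  return { plugin: id.slice(0, bang), resource: id.slice(bang + 1) };
}

const parsed = parsePluginId("text!templates/nav.html");
console.log(parsed.plugin, parsed.resource); // → text templates/nav.html
```

Because the choice of transpiler travels with the dependency ID itself, two modules in the same project can load the same .html file through different transpilers without any top-level configuration.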
If the default module loader understands that something along the lines of "something!resource" means calling the "something" module as a transpiler to resolve and load "resource", then module IDs stay uniform, and we can avoid a tower of babel around module IDs and the need for bootstrap script translators.

James
Re: A few more questions about the current module proposal
On Wed, Jul 4, 2012 at 11:13 AM, Sam Tobin-Hochstadt wrote:
> We've thought a lot about compile-to-JS languages, and a bunch of the
> features of the module loader system are there specifically to support
> these languages. You can build a loader that uses the `translate`
> hook to perform arbitrary translation, such as running the
> CoffeeScript compiler, before actually executing the code. So you'll
> be able to write something like this:
>
> let CL = new CoffeeScriptLoader();
> CL.load("code/something.coffee", function(m) { ... });

Will heterogenous transpiling in a web app be supported? Can a JS module depend on a CoffeeScript file, and vice versa? What about a JS module depending on a CoffeeScript and a text resource? What would that look like?

For instance, it is common in requirejs projects to use coffeescript and text resources via the loader plugin system. While the text plugin is fairly simple, it can be thought of as a transpiler, converting text files to module values that are JS strings. It could also be a "text template" transpiler that converts the text to a JS function, which when given data produces a custom HTML string.

For requirejs/AMD systems, the transpiler handler is part of the module ID. This means that nested dependencies can use a transpiler without the top level application developer needing to map out what loader transpilers are in play and somehow configure transpiler capabilities at the top level before starting main module loading. It also makes it clear which transpiler should be used for a given module dependency.

Each module gets to choose the type of transpiler: for a given .html file, one module may want to use a text template transpiler where another module may just want a raw text-to-string transpiler. Both of those modules can be used in the same project as nested dependencies without the end developer needing to wire them up at the top level.
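A toy "text template" transpiler in the spirit described above: instead of a module value that is a raw string, produce a function that fills in the template. The {{name}} placeholder syntax is invented for the example; this is not the requirejs text plugin itself.

```javascript
// Compile a template source string into a function of (data) -> HTML string.
function compileTemplate(source) {
  // Replace each {{key}} placeholder with the matching property from data.
  return (data) =>
    source.replace(/\{\{(\w+)\}\}/g, (match, key) => String(data[key]));
}

const navTemplate = compileTemplate("<p>Hello, {{user}}!</p>");
console.log(navTemplate({ user: "James" })); // → <p>Hello, James!</p>
```

A raw text-to-string transpiler for the same .html file would simply return the source unchanged, which is exactly why the consuming module, not the file extension, should pick the transpiler.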
James
Module loader use for optimizers (was Re: ES modules: syntax import vs preprocessing cs plugins)
On Tue, Jul 3, 2012 at 4:27 PM, Sam Tobin-Hochstadt wrote:
> On Tue, Jul 3, 2012 at 7:19 PM, Allen Wirfs-Brock wrote:
>> Sam,
>> Isn't it also the case that the full characteristics of the default module
>> loader used by browsers still remain to be specified? This might be
>> somewhat out of scope for TC39 but practically speaking it's something we
>> will need (and want) to be involved with.
>
> Yes, this needs to be fully specified, but Dave and I have thought a
> bunch about this particular issue, and I think the issues here are
> better understood, because they're similar to other ES/HTML
> integration issues. As an example, where the system loader looks for
> JS source specified with a relative path should be related to how the
> browser does this for script tags.

Along those lines, I would like to see how the resolution logic used in browsers could also be used by optimizers that combine modules into scripts for performance reasons. Those optimizers normally run in a non-browser environment, so it needs to be worked out how an optimizer running in those other environments can know the browser rules. Maybe it means the optimizers need to hand-code the rules themselves and handle the parsing of module syntax themselves.

In my ideal world, though, the ES module spec comes with default resolution logic that works best for browsers (but allows overrides), and a Loader could be used in some kind of "trace mode" to get the dependency graph without executing the modules. This would help eliminate cross-browser and cross-tooling bugs.

James
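A sketch of the "trace mode" idea from the message above: walk the dependency graph by fetching and scanning sources only, never executing module bodies. The regex "parser" and the in-memory file map are crude stand-ins; a real tool would use the actual module grammar and the loader's fetch hook.

```javascript
// Collect the set of module IDs reachable from an entry point without
// evaluating any module code.
function traceGraph(entry, fetchSource, seen = new Set()) {
  if (seen.has(entry)) return seen;
  seen.add(entry);
  const source = fetchSource(entry);
  // Crude scan for `import ... from "<id>"` declarations (illustration only).
  const re = /import\s+[^"']*from\s+["']([^"']+)["']/g;
  let m;
  while ((m = re.exec(source)) !== null) {
    traceGraph(m[1], fetchSource, seen);
  }
  return seen;
}

// Example in-memory "filesystem" standing in for network or disk fetches.
const files = {
  main: 'import a from "a"; import b from "b";',
  a: 'import b from "b"; export default 1;',
  b: 'export default 2;'
};

console.log([...traceGraph("main", (id) => files[id])]); // → [ 'main', 'a', 'b' ]
```

An optimizer could then fetch and concatenate the traced set in dependency order, which is the build-tool use case described above.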
Modules: use of non-module code as a dependency
A good chunk of my previous feedback is based on a higher level design goal that is worth getting a common understanding on, since it may invalidate my feedback.

Question: what happens when a module depends on non-module code?

Example: I have an ES module, 'foo', and it depends on jQuery. foo uses the new module keywords, but jQuery does not. jQuery could be doing one of two things:

a) Nothing, just exporting a global.
b) It may be modified to call the ES Loader's runtime API, something like System.set().

Code for module foo:

import jQuery from 'jquery.js'

Possible answers:

1) Unsupported; an error occurs. The developer needs to use a custom loader that could somehow get jQuery loaded before foo is parsed. Or just tell users to stick with existing module schemes until ES module support has saturated the market.

2) Suggest that jQuery provide a jquery.es.js file that uses the new keywords.

3) Proposed: when jquery.js is compiled and no import/module/export keywords are found, the Loader executes jquery.js to see if it exports a value via a runtime API, and uses that value to finish wiring up the local jQuery reference for module foo.

I believe #1 will complicate ES module adoption. #2 feels like "there are now two JavaScript languages, make your choice on what line you are on". I am not sure whether #3 could be supported with the current module design.

James
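A sketch of option #3 from the message above: compile a file, and if no module syntax is found, execute it and check whether it handed a value to the runtime API. System.set here is a hypothetical stand-in from this discussion, and the syntax "parser" is a crude regex; a real loader would use its actual grammar, not eval.

```javascript
// Toy runtime registry standing in for the draft System.set() API.
const runtimeRegistry = new Map();
const System = { set: (id, value) => runtimeRegistry.set(id, value) };

// Stand-in check; a real loader would know this from parsing the source.
function hasModuleSyntax(source) {
  return /\b(import|export|module)\b/.test(source);
}

function loadPossiblyLegacy(id, source) {
  if (!hasModuleSyntax(source)) {
    eval(source);                        // run the ES5 script body
    if (runtimeRegistry.has(id)) {
      return runtimeRegistry.get(id);    // it called System.set()
    }
  }
  return undefined;  // would fall through to normal module instantiation
}

// jquery.js modified per option (b): no module keywords, just a runtime call.
const jquerySource =
  'System.set("jquery", function (sel) { return "selected " + sel; });';

const jQuery = loadPossiblyLegacy("jquery", jquerySource);
console.log(jQuery(".menu")); // → "selected .menu"
```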
Re: ES Modules: suggestions for improvement
On Thu, Jun 28, 2012 at 7:56 AM, Sam Tobin-Hochstadt wrote: > On Thu, Jun 28, 2012 at 10:40 AM, Kevin Smith wrote: >>> >>>