On May 15, 2013, at 10:42 AM, Andreas Rossberg <rossb...@google.com> wrote:

> (1) Change the set-up of .ondemand calls.
> (2) Change the invocation of your bundling tool.
> 
> As soon as you have to go there, you've lost almost all advantages of
> the ID-based declaration form. Its assumed convenience doesn't scale
> to non-trivial scenarios.

No. You've missed the point. Configuration code is not the problem; there's at 
most a 1:1 relationship between modules and their configuration.

There is a 1:many relationship between a module and import declarations that 
refer to it. This is what matters: even if you reconfigure the app to change 
the way the module is shipped, every client module's import declaration should 
remain the same. This may include repackaging, file renaming, switching between 
CDNs, switching between versions, etc.
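
To make the asymmetry concrete, here's a rough sketch. The bundle URL and module 
IDs are made up, and `System.ondemand` is just the loader API as currently 
proposed, so treat the exact shape as illustrative:

    // Configuration: one place, owned by the app, staged before loading.
    // Repackaging, renaming files, or switching CDNs only touches this mapping.
    System.ondemand({
      "https://cdn.example.com/widgets-bundle.js": ["widgets/button", "widgets/menu"]
    });

    // Client modules: many places, none of which change when the packaging does.
    import { Button } from "widgets/button";
    import { Menu } from "widgets/menu";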

> I do realise now, however, that it gets uglier when an import triggers
> _multiple_ ondemand scripts at once, because then their execution
> would have to be sequentialized.

That's exactly what I meant. If you have to sequentialize the execution of the 
transitive *package* dependency graph, then you over-sequentialize the fetching 
of the transitive *module* dependency graph.

> When is the script body
> executed? And what effect do updates to the loader during execution of
> that script have?

The execution semantics goes roughly like this: once all the module 
dependencies are computed and fetched, the linking process begins. If there are 
no module factories (i.e., AMD-style modules that require their dependencies to 
have been executed before they can compute their exports), linking is 
observably atomic. Then the bodies (script bodies and any as-yet unexecuted 
module bodies that have clients importing from them) are executed sequentially. 
If, however, there are module factories, the process is more interleaved: the 
system atomically links all declarative modules transitively required by each 
module factory, then executes those declarative modules, then executes the 
module factory; rinse, repeat.

When linking is atomic, loader updates don't matter. When the interleaving 
happens, loader updates can affect future linkage. It's not a good idea to be 
mucking with the loader in the middle of initializing modules. It's better to 
do any loader configuration at the very beginning of an application, before any 
importing starts.
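
Concretely, that configuration belongs in a small script that runs before any 
module code. A minimal sketch (how hooks get installed is still in flux, so the 
shape of `System.resolve` here is purely illustrative):

    // boot.js -- loaded and executed before any module is imported.
    // Hypothetical resolve hook: map logical module IDs to the URLs we actually ship.
    System.resolve = function (name) {
      return "https://cdn.example.com/modules/" + name + ".js";
    };

Once that script has run, the loader is stable and every subsequent import 
links against the same configuration.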

> - intra-package module references should be internal and fixed,

You keep making this claim as if there's some black-and-white distinction. When 
we pointed out that your intra-package references are not tamper-proof given 
the translate hook, you said "well, but in practice..." So that argument cuts 
both ways. In practice, intra-package references will not be messed with 
externally. They are only hookable at compile-time; once runtime starts they 
are completely hard-bound. And if there are any collisions, there will be a 
compile-time error anyway. This is a tempest in a teapot.

>> - implement the package in multiple files via some extension to ECMAScript 
>> (e.g., include) that requires a tool to assemble it back together in a 
>> single file with only lexical modules.
> 
> Why would that require an extension? Import from URL is just fine for
> that purpose. And when you deploy, you run the tool, see above.

It requires non-standard semantics if you want to allow cyclic module 
dependencies, and probably also if you want to preserve the same execution 
semantics as lexical modules.
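
For example, take the classic mutually recursive pair (the names are made up, 
and the import syntax is just the currently proposed form):

    // even.js
    import { odd } from "odd";
    export function even(n) { return n === 0 || odd(n - 1); }

    // odd.js
    import { even } from "even";
    export function odd(n) { return n !== 0 && even(n - 1); }

Proper modules can link these bindings before either body runs, so the cycle 
just works. A plain textual include can't express that, so an include-style 
extension ends up having to reproduce the pre-execution linking itself -- which 
is exactly the non-standard semantics I mean.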

> Counter question: what other options are available in the current
> design, under your no-tool and no-staging assumption? AFAICS, you can
> do either of the following:
> 
> - Write script files with module declarations...
> 
> - Write module files...

I never said staging is bad; I said the staging in your solution is broken: it 
over-sequentializes the fetching of resources.

The requirement I'm talking about -- which is absolutely critical for the 
practicality and adoption of ES6 modules -- is that the system has to work well 
out of the box without tools. And the current system does. The only thing you 
have to change if you decide to repackage modules into scripts is a *single* 
piece of code staged before application loading that configures the locations 
of the modules. Note that this is how require.js works.

> I don't envision many people would want ever to use the first option...
> And the other option requires tool support.

That's simply not true. Look at require.js:

    http://requirejs.org/docs/api.html#config

With even just a reasonably convenient ondemand, you can easily move some or 
all of your modules into scripts by hand. Even without ondemand, the resolve 
hook is straightforward. (BTW I hate the name `ondemand`; it'll get renamed to 
something more declarative-sounding, like AMD's `paths`.)
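
For comparison, the require.js version of that single piece of configuration is 
just a hand-written `paths` map (see the config URL above), roughly:

    // Written by hand, staged before the app loads; no build tool involved.
    require.config({
      baseUrl: "js",
      paths: {
        // module ID -> location; ".js" is appended automatically
        "jquery": "https://cdn.example.com/jquery-1.9.1",
        "app/util": "lib/util"
      }
    });

    // Client code never changes, regardless of where the files actually live:
    require(["jquery", "app/util"], function ($, util) { /* ... */ });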

Your suggestions would result in a system that is unusable without tools. 
That's not acceptable and it's not going to happen.

Dave
