On 11 Sep 2009, at 1:37 am, John Cowan wrote:

> But no!  That's a result of conflating modules and compilation units.
> There is nothing to prevent you from compiling a file containing no
> module
> declarations, which when loaded is loaded directly into the
> interaction
> environment.  Remember, loading compiled code is like loading its
> source,
> which is like typing in the source.

Hmmm. Would/should a compiler accept such a file? Can you meaningfully
compile some random Scheme (without the scope boundaries of a module
declaration) which, when loaded *after* some random macro definitions,
then works as if you'd typed it in?

What if you loaded it just after a (define-syntax lambda ...)?

Surely in such a case, the compiler can do nothing, and really needs
to just keep the source so it can be included?
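To make the hazard concrete, here's an illustrative session (no particular implementation assumed):

```scheme
;; Typed at the REPL *before* loading a separately-compiled file:
(define-syntax lambda
  (syntax-rules ()
    ((_ formals body ...)
     (error "lambda means something else now"))))

;; Suppose the file's source contained:
;;   (define double (lambda (x) (* x 2)))
;; Loading the *source* now picks up the redefined lambda and
;; signals an error; loading code compiled earlier, with lambda
;; already expanded away, silently doesn't. The two can't both
;; be "like typing in the source".
```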

AIUI, the unit of compilation should be a module or a program - both
of which start in a known environment with explicit imports.
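For example, an R6RS-style library starts from nothing but its import list, so the compiler knows exactly what every identifier, including the syntactic keywords, means (library name here is just illustrative):

```scheme
;; A compilable unit in R6RS library syntax: the environment is
;; fully determined by the import clause, so 'lambda', 'define',
;; etc. can only mean what (rnrs) says they mean.
(library (example counter)
  (export make-counter)
  (import (rnrs))
  (define (make-counter)
    (let ((n 0))
      (lambda ()
        (set! n (+ n 1))
        n))))
```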

I've heard of many module systems that also make the explicit
assumption that imported bindings are immutable. This lets the
implementation know exactly which bindings *are* mutable (there has
to be a set! somewhere in lexical scope), which enables a great deal
of useful analysis.
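As a sketch of the kind of analysis this enables (library name hypothetical; the assignment rule cited is R6RS's):

```scheme
(library (example analysis)
  (export poke)
  (import (rnrs))
  ;; 'hits' is assigned below, so the compiler must treat it as a
  ;; mutable cell and reload it on every reference.
  (define hits 0)
  ;; 'limit' is never the target of a set! in this library, and
  ;; R6RS makes assigning to an imported variable a syntax
  ;; violation, so the compiler may treat it as a known constant
  ;; and fold (* limit 2) to 200 at compile time.
  (define limit 100)
  (define (poke)
    (set! hits (+ hits 1))
    (* limit 2)))
```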

In general, with all dynamic-library systems, there's an issue of what
information is gleaned from the library at compile time, and what at
load/run time. The compiler author would love to load and inline the
entire library; the programmer would perhaps rather the library were
loaded purely at run time, so that libraries can be used to implement
plugin systems, and/or so that distributed applications will just
pick up "the local" implementation of SRFI-NN.

Clearly, a dividing line has to be drawn, and where it is drawn will
depend on the implementation techniques employed; I don't think the
standard should mandate how these things work.

Here are a few proposals:

BASE LEVEL - no reloading of modules is allowed, and no real
distinction is made between compile/load/run times; so a module/
program that uses a module can be considered to take a snapshot of it
at the time the source code is presented to the system and that
'import' declaration is first seen. Changing the library in the
filesystem (or wherever) under the application can then, depending on
the implementation, either have no effect (it was all inlined anyway,
or multiple versions are kept around in the library store) or be an
error.

IN AN IDEAL WORLD - any module can be reloaded, and all modules/
programs that use it will in effect be 'recompiled'; any changes due
to the reloaded module take immediate effect (even if the new version
of the module redefines lambda). This pretty much requires that the
implementation recompile anything that depends on that module, if it
does compilation at all; IMHO it definitely requires that access to
the source code be kept, or at least some minimally-processed form of
it. How you reload a module will depend on the implementation; perhaps
compiling a module, then changing a module it uses, then loading the
original compiled module, should cause an immediate recompilation due
to the changed module source being spotted; perhaps you need to
explicitly load them into a registry; who knows.

EVERYTHING BUT THE SYNTAX - when an import is processed (e.g. at
compile time if there's a compiler, or load time if not), the macros
are read from the module and used to expand the source code of the
module/program that imports them, there and then. The names of
bindings are used to resolve lexical scope, but their values are not;
it's as if the module/program were a giant lambda expecting the
imported identifiers as arguments. "Load time" then consists of
reading the values from the imported module (which may have changed
since compile time, but must still export the same bound names) and
applying that function to feed the values in. This is a tip of the
hat towards separate compilation; in effect, it's how C libraries
work (the .h supplies the macros and bound names, the .a or .so
supplies the values).
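A sketch of that model, assuming a module (example counter) exporting make-counter; lookup-export is a hypothetical load-time primitive, not any real implementation's API:

```scheme
;; Compile time: macros from the imported module are fully
;; expanded away; the residual program becomes a function over
;; the values of its imports.
(define compiled-program
  (lambda (make-counter)          ; one parameter per imported binding
    (let ((c (make-counter)))
      (c)
      (display (c)))))

;; Load time: fetch the *current* value of each export (it may
;; have changed since compile time, as long as the name is still
;; exported) and apply the function to feed the values in.
(compiled-program
 (lookup-export '(example counter) 'make-counter))
```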

I think implementations should have the freedom to do any of the
above; the spec needs to be vague on this - but implementation manuals
must not be :-) IMHO, the semantics of updating modules has far-
reaching consequences for implementation techniques, and mandating
one will, in effect, mandate a certain implementation technique.

Perhaps we need to define several levels of dynamicity for a Scheme:

STATIC - for embedded or other similarly static systems; there's a
compiler that, in effect, inlines the entire module graph, and spits
out a binary that might not even have symbols in it. No REPL. No
reloading.

...intermediate levels...

FULLY DYNAMIC - there's a REPL; you can reload modules, and all
modules that depend on them will take note of the change; you can ask
any closure what its lexical scope is (a module name and symbol for a
top-level binding, or that plus a list of names forming a
hierarchical scope for a more local definition, with #fs allowed
instead of symbols for closures that really have no name, plus a
source-code file+line+column location for the definition, etc.);
there are introspective abilities to dismantle a continuation and
extract the "stack trace" at any point in execution; and so on.

ABS

--
Alaric Snell-Pym
Work: http://www.snell-systems.co.uk/
Play: http://www.snell-pym.org.uk/alaric/
Blog: http://www.snell-pym.org.uk/archives/author/alaric/




_______________________________________________
r6rs-discuss mailing list
[email protected]
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss