On Mon, 2008-11-03 at 12:35 -0800, Thomas Lord wrote:
> I hoped that two things would follow ratification:
>
> 1) R6RS believers would start cranking out the holy grail of "lots of
> useful libraries," -- the much anticipated phenomenon that would
> supposedly make Scheme as practical for many as, say, Python, Perl, and
> Ruby are today.
R6RS had no chance of enabling people to do this. "useful
libraries" at this technical juncture of history mostly require
ways for Scheme code to connect with whatever applications form
the "standard method" for doing things. This is particularly
true of the business environment, in which particular applications
(which are mostly NOT written in scheme) are established as "known
solutions" and "best practices." Companies will not consider moving
away from these applications at this time, so if scheme can't talk
to them, the adoption of scheme by these companies is a lost
cause.
For example, we will see no widespread scheme web applications
until there is a standard way for Scheme (not a particular
implementation of scheme, but *standard* scheme) to interface with
Apache, IIS, Firefox, and Internet Explorer. In order to create
such a way we have to agree on at least a syntax allowing scheme
code to make calls into C and C++ runtimes and handle "callbacks"
from C and C++ runtimes. That's not part of the core language,
but it's a *VERY* important library that a scheme system probably
shouldn't be shipped without.
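For concreteness (this is Python's ctypes, not anything any scheme standard specifies; the analogy is mine), here is what "calls into C runtimes and callbacks from C runtimes" looks like where such a facility does exist:

```python
import ctypes

# Calling into a C runtime (libc's qsort) and handing it a callback written
# in the host language -- the two directions of interoperation named above.
libc = ctypes.CDLL(None)  # on Linux, resolves symbols from the running process

# Declare the C comparator type: int (*compar)(const void *, const void *)
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def compare(a, b):
    # This callback is invoked *by* the C runtime for each comparison.
    return a[0] - b[0]

arr = (ctypes.c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(arr, len(arr), ctypes.sizeof(ctypes.c_int), CMPFUNC(compare))
print(list(arr))  # prints [1, 2, 3, 4, 5]
```

Note that only ordinary values cross the boundary (ints and arrays of ints); no scheme data pointers are handed to the foreign collector, which is the point made further down.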
> 2) R6RS critics would converge on a real "correctional" -- make an
> alternative Scheme.
I ... made an alternative lisp. By myself, without talking to
anyone else. The R6 process made me so frustrated with talking
to other people about lispy language design that I considered
such talk not worth my time. I had started by implementing
scheme, or something very close to it, but after R6 actually
*passed*, there seemed no further point in maintaining any kind
of scheme compatibility, so I allowed things to drift (further)
away from it, and further away from the history of scheme. Now
it is very much its own little lisp and I don't think it's even
very closely related to scheme anymore.
(brief description of my toy lisp shifted to end of mail; not
relevant to this response).
> The net effect would be to create a kind of semi-democratic, non-profit,
> Institute for the Advancement of Scheme. I'm not sure there is any
> lesser hope worth aiming for.
In a purely practical engineering sense, I cannot emphasize enough
how important a standard way to interoperate with the standard
applications of the business world is to corporate (or any other
serious) adoption of scheme. Scheme cannot succeed if one cannot
easily write standard, portable libraries interfacing scheme
code to:
Apache
Firefox
IIS
IE
ODBC-compliant databases
Word
Excel
Exchange
X Windows
Unix Shell (environment variables, command line arguments, etc)
Mplayer
Flash
OpenGL
PostScript and TTF fonts
Mouse and keyboard events
The system Clipboard
and about a hundred other "standard accepted solutions." In
practical terms, this means we need to standardize at least a
scheme syntax for making a call into a foreign runtime and a
convention for handling "callbacks" from that runtime.
We need to *NOT* standardize exactly how an implementation
represents things and whether those things are binary-images
comprehensible to other runtimes. We don't presume to dictate
implementation strategies.
We need to *NOT* standardize any kind of data pointers between
runtimes; competing garbage collectors or garbage collection
that can't count pointers in the other runtime are both Bad
Ideas.
But we do need standard ways to get ordinary values into
arguments and from function returns in other languages'
runtimes.
Bear
post scriptum: skip this unless interested in a non-scheme
lisp which still has massive problems of being slow and a
memory hog.
Now my pet toy lisp has:
* a new calling discipline in which pointers to source-code
argument expressions get packaged up with pointers to
environments to create a formal argument, to be evaluated
zero or more times or picked apart as syntax by the
called function. This makes what scheme considers functions
and syntax be the same kind of entity called in the same
kind of way. They are all "functions" in the new system.
Because functions with lazy semantics, or functions that
"lazify" others, can be easily defined with this mechanism,
"delay" and "force" got pushed out to libraries.
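The idea can be simulated in any language with closures; here is a Python sketch (mine, not the actual implementation) of how a promise -- an argument expression packaged with its environment -- lets laziness live in a library:

```python
# A promise: the unevaluated argument packaged with its environment.
# In Python, a zero-argument closure plays both roles.
class Promise:
    def __init__(self, thunk):
        self.thunk, self.forced, self.value = thunk, False, None
    def force(self):
        # Evaluate at most once; subsequent forces reuse the value.
        if not self.forced:
            self.value, self.forced = self.thunk(), True
        return self.value

def delay(thunk):            # "delay" as a plain library function
    return Promise(thunk)

# A function with lazy semantics: evaluates its argument zero times.
def const_true(p):
    return True

p = delay(lambda: 1 // 0)    # never forced, so no ZeroDivisionError
assert const_true(p) is True

q = delay(lambda: 6 * 7)
assert q.force() == 42
assert q.force() == 42       # memoized; the thunk ran only once
```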
* Lexical environments defined using lambda or let, as in
scheme; dynamic environments defined using "with", which
has the same syntax as let. This is important, because
it has enabled me to take a lot of the things that are
part of the dynamic environment in scheme (such as the
input and output ports, etc) out of the core language and
into libraries, along with functions that implicitly
access them.
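A sketch of that shadowing behavior (my simulation in Python, using a stack of frames for the dynamic environment; none of these names come from the actual system):

```python
import contextlib

# The dynamic environment as a stack of frames; "with" pushes a shadowing
# frame for the duration of a dynamic extent.
_dyn = [{"current-output-port": "stdout"}]

def dyn_lookup(name):
    # The innermost shadowing frame wins.
    for frame in reversed(_dyn):
        if name in frame:
            return frame[name]
    raise NameError(name)

@contextlib.contextmanager
def with_bindings(bindings):
    _dyn.append(dict(bindings))
    try:
        yield
    finally:
        _dyn.pop()   # the bindings vanish when the dynamic extent is left

assert dyn_lookup("current-output-port") == "stdout"
with with_bindings({"current-output-port": "a-string-port"}):
    assert dyn_lookup("current-output-port") == "a-string-port"
assert dyn_lookup("current-output-port") == "stdout"
```

Because the output port is just a dynamic binding, functions that implicitly use it need nothing from the core language.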
* Reified continuations are no longer captured with call/cc;
lambda has been extended and may now bind the function's
"escape" continuation with a name as if it had been an
argument. Calling the escape continuation with a value
causes a function return in which the function returns that
value. Assigning the continuation to a variable with a larger
scope, or returning it or a structure containing it, creates
a reified continuation (and an analysis-time
warning because experience teaches that creating a reified
continuation is usually a bad idea). Of course you can
implement the function "call/cc" with this, but it's no
longer primitive and got pushed out to a library.
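The escape-only (not reified) case can be modeled with exceptions; this Python sketch is mine and can only express one-shot upward escapes, which is exactly why reification is the separate, warned-about case:

```python
# An escape continuation simulated with an exception: calling it unwinds
# to the function's return point, making the function return that value.
class _Escape(Exception):
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def lambda_with_escape(body):
    """Call body(escape); body returns normally or by invoking escape(v)."""
    tag = object()                      # identifies *this* activation
    def escape(value):
        raise _Escape(tag, value)
    try:
        return body(escape)
    except _Escape as e:
        if e.tag is tag:
            return e.value
        raise                           # someone else's escape: keep unwinding

# Early return from a search by calling the escape continuation:
def find_first_even(items):
    def body(escape):
        for x in items:
            if x % 2 == 0:
                escape(x)               # function returns x immediately
        return None
    return lambda_with_escape(body)

assert find_first_even([1, 3, 4, 5]) == 4
assert find_first_even([1, 3, 5]) is None
```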
* I no longer have a "character" datatype distinct from strings
of length 1. This seems to make handling character sets
(including but not limited to Unicode) considerably smoother.
"character" routines from scheme are all "string" routines now
and this eliminates a bunch of redundant functions.
* I've been working on good ways to handle multiple character
sets in the same program gracefully; still trying alternatives
but for most purposes things seem to be smoothest when "default-
character-set" is a variable in the dynamic environment and can
be shadowed using "with". I have libraries that allow working
with several different character sets, including unicode-16
and unicode-21 (only those characters expressible as a
single unicode codepoint), unicode-n (an infinite character
set where each character consists of a unicode base codepoint
plus a nondefective stream of variant selectors and modifier
codepoints), a couple of different ISO codepages, 'keyboard
encodings' (base keyboard character plus zero or more of a
limited set of modifiers such as 'escaped', 'shifted',
'control', 'Left-Alt', etc), and ascii-7.
* variables are also symbols; you can add any number of named
and/or array-indexed "properties" to them. Arrays as such
no longer exist. Records and objects were also visibly redundant
and have been removed. Libraries and managed namespaces are
symbols too; the "variable names" in a given library are
"property names" within the symbol representing that library.
(let ((httplib (import-lib "http")))
  ... )
for example creates a lexical environment in which the variable
httplib is a symbol that contains all the things defined in the
library "http". within this lexical environment, you can make
calls to httplib.encode-url, httplib.format-anchor, and so on.
Each named or array-indexed property is also a symbol that
further properties can be added to. (Thus they are "hierarchical"
or "nested" in representation). Certain characters that are part
of variable names or reserved characters in scheme (specifically
the period and square brackets) have been appropriated as syntax
to simplify symbol/property references.
"apply," or any procedure call, is semantically identical
to packing all arguments into a symbol and making a two-
argument call to "sym-apply" with the procedure name and the
packed symbol. All function returns are semantically identical
to packing all function return values into a symbol and making
a two-argument call to "sym-apply" with the function's
continuation and the packed symbol. In other words, the call
frames and return frames, with their ordered arguments or ordered
returns, have the same representation as symbols with ordered
(indexed, or array) values, and a primitive form "sym-apply"
exists for making function calls using symbols directly.
Therefore the semantics are that of a single (albeit complex)
argument lambda calculus, and multi-argument continuations
have been dropped. If they are ever needed, they are easily
implemented/simulated as "sym-apply" to the function's
continuation and an anonymous symbol. This is exactly how
multi-argument calls and returns work anyway.
"apply" has been moved out to a library. Its implementation
is in terms of sym-apply.
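The equivalence is easy to show concretely; in this Python sketch (my names, my simulation) a symbol's array-indexed properties double as a call frame:

```python
# A "symbol" carries named and array-indexed properties; a call frame is
# just a symbol whose indexed properties are the ordered arguments.
class Symbol:
    def __init__(self):
        self.named = {}
        self.indexed = []

def pack(*args):
    s = Symbol()
    s.indexed = list(args)
    return s

def sym_apply(proc, sym):
    # The primitive: call proc with the symbol's indexed properties
    # as its ordered arguments.
    return proc(*sym.indexed)

# An n-ary call is semantically a two-argument call to sym_apply,
# and "apply" is a library function defined in terms of it:
def apply_(proc, args):
    return sym_apply(proc, pack(*args))

assert sym_apply(max, pack(3, 1, 2)) == 3
assert apply_(max, [3, 1, 2]) == 3
```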
* defining a new fundamental type, or extending the syntax or
semantics of an existing one, means providing a routine to
write it, providing a routine to read it, providing a routine
to evaluate it if it's not self-evaluating, and/or providing
a routine to apply it if it has call semantics. This allows
me to do everything that common lisp does with readtables
and everything that scheme does with symbol-macros, while
leaving the cruft out of the core language and managing type
and "syntax" definitions with scopes the same way one manages
any other definitions. This in turn allows most of the
numeric tower (including the syntaxes of complex numbers
and the syntaxes of non-decimal numbers) to be pushed out
into libraries.
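Here is just the "read" half of that idea, sketched in Python (a registry the core reader dispatches through; the registry design and all names are my own illustration, not the actual system):

```python
from fractions import Fraction

# The core reader knows nothing about particular syntaxes; libraries
# register (predicate, parse-routine) pairs, tried in order.
readers = []

def define_reader(matches, parse):
    readers.append((matches, parse))

def read_token(token):
    for matches, parse in readers:
        if matches(token):
            return parse(token)
    raise SyntaxError("no reader for %r" % token)

# A "library" pushes rational-number syntax like "3/4" out of the core:
define_reader(lambda t: "/" in t, Fraction)
define_reader(lambda t: t.lstrip("-").isdigit(), int)

assert read_token("3/4") == Fraction(3, 4)
assert read_token("-42") == -42
```

Because the registry is an ordinary definition, a type's syntax can be scoped and shadowed like anything else.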
* lambda may also capture and bind a name to "restarts", which
are like continuations in that control jumps to the return point
of the function, but calling them also "undoes" all stack and
heap mutations done since the call that captured the restart
or the previous call to the restart (This is done with heap
and frame version numbering and copy-on-write). Functions
that "return" by having restarts called return the number of
times the restart has been invoked since it was captured.
This is still pretty experimental, and contributes mightily to
the aforementioned memory-hoggery. At this point restarts at
different points in the program interfere with each other:
the ones that were captured first "undefine" all that were
captured subsequently; invoking one that was captured subsequently
resets the number of iterations on all that were captured earlier
to the number they had when the subsequently-captured restart
was captured. Although the theory is sound and the logic
consistent, I'm still trying to decide if this is the most
correct and useful behavior for restarts.
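A much weaker simulation of the basic contract (mine; one-shot only, with a deep copy standing in for the heap/frame versioning and copy-on-write, and none of the interference behavior):

```python
import copy

class _RestartInvoked(Exception):
    pass

def with_restart(body, state):
    # Snapshot the mutable state at capture time.
    snapshot = copy.deepcopy(state)
    count = 0
    def restart():
        nonlocal count
        count += 1
        raise _RestartInvoked()
    try:
        body(restart, state)
        return count              # restart never invoked: returns 0
    except _RestartInvoked:
        state.clear()             # "undo" all mutations since capture
        state.update(copy.deepcopy(snapshot))
        return count              # returns the invocation count (here, 1)

st = {"x": 0}
def mutate_then_restart(restart, state):
    state["x"] = 99
    restart()                     # rolls back and returns from with_restart

assert with_restart(mutate_then_restart, st) == 1
assert st == {"x": 0}             # the mutation was undone
```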
* errors are just values that functions can return. Functions can
be called with an error in place of one or more arguments, and
if so will simply return that error immediately. This simplifies
error handling a lot, in that you needn't bother to check for
errors anyplace except where you're handling them. Under the
hood this is optimized to a try/throw/catch model where most
"error continuation" returns are properly tail-recursive and
eliminated by tail recursion optimization, but it's conceptually
much simpler for the programmer.
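The propagation rule is small enough to state as code; this Python sketch (my analogy, much like an Either/Result type) shows why intermediate call sites never need to check:

```python
# An error is an ordinary value; any call given one returns it unchanged.
class Err:
    def __init__(self, message):
        self.message = message

def lift(f):
    """Wrap f so that an Err among its arguments propagates immediately."""
    def wrapped(*args):
        for a in args:
            if isinstance(a, Err):
                return a          # short-circuit: callers needn't check
        return f(*args)
    return wrapped

div = lift(lambda a, b: Err("divide by zero") if b == 0 else a / b)
add = lift(lambda a, b: a + b)

assert add(1, div(6, 3)) == 3.0
result = add(1, div(6, 0))        # the error flows through add untouched
assert isinstance(result, Err) and result.message == "divide by zero"
```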
* I've been looking at APL math using lazy operators on arrays.
Don't know if I will attempt it yet, but the new calling
discipline certainly allows free intermixing of lazy and
eager functions, so it'd work. The question is whether it
would be worth it.
_______________________________________________
r6rs-discuss mailing list
[email protected]
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss