On 8 Sep 2009, at 1:54 pm, Adrien Pied Piérard wrote:

[snip]
> In this kind of discussions, I am totally lost.

I know what you mean :-)

> Perhaps I am deeply misunderstanding the potential of the standard,
> but I would expect it to be something designed _to_get_things_done.
> And to get things done, I need hashtables. I need sockets (or directly
> clients for given protocols). I need to be able to run concurrently
> two procedures which may do side effects. I need to read and write
> files, to move to the end, rewind to the beginning, and insert text in
> the middle. And I need them available immediately.

Indeed. My own take on the recent discussions is that people are  
arguing somewhat at cross purposes.

Some people say "things should be in the standard because they want to  
be able to use them" (sockets etc). I agree, we need a standard way to  
do sockets, but it needn't be in "core Scheme". It should be  
standardised in an SRFI, or maybe in "large Scheme". It definitely  
needs standardising - even if it's a nasty low-level BSD sockets  
wrapper that we have to write a *PORTABLE* library on top of to make  
nice and high level. At least if we standardise that BSD sockets  
layer, then the library will be usable across all implementations that  
provide sockets. I'd like to see at least SRFIs for sockets and decent  
binary I/O, and a POSIX SRFI for subprocesses and UIDs and chroots and  
all that useful jazz (I do systems programming in Scheme, you see).

Some people say "things should be in the standard because it's  
expensive to implement them in a portable manner", an argument I saw  
in the thread started by an email from John Cowan arguing for  
including case-lambda and parameters and other such things. I don't  
think the boundary of the standard needs to be the boundary of the  
implementation, though; I think that if useful features *can* be  
implemented portably in terms of the standard, they should be left out  
of the core, but standardised in an SRFI or the large-scheme standard  
so that at least they're the same in all implementations that have  
them. Implementations that lack them can use the slow portable  
implementation.
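
To make that concrete, here's a rough sketch of my own (not SRFI-39,
nor any implementation's actual code) of how parameter objects and a
one-binding parameterize could be built from nothing more than
closures, set!, syntax-rules and dynamic-wind, ignoring converter
procedures and threads entirely. It's exactly the kind of
slow-but-portable fallback I mean:

;; Parameter objects as closures over a mutable cell.  Calling (p)
;; returns the current value; calling (p v) installs v and returns
;; the previous value, which is handy for save/restore below.
(define (make-parameter init)
  (let ((value init))
    (lambda args
      (if (null? args)
          value
          (let ((old value))
            (set! value (car args))
            old)))))

;; A single-binding parameterize, for brevity.  dynamic-wind installs
;; the new value on entry and restores the old one on exit, even if
;; the body escapes or re-enters via call/cc.  (In practice you'd use
;; a fresh name to avoid clashing with a native parameterize.)
(define-syntax parameterize
  (syntax-rules ()
    ((_ ((param new-value)) body ...)
     (let ((p param)
           (new new-value)
           (saved #f))
       (dynamic-wind
         (lambda () (set! saved (p new)))
         (lambda () body ...)
         (lambda () (p saved)))))))

;; Example:
(define verbosity (make-parameter 0))
(parameterize ((verbosity 2)) (verbosity))  ; => 2
(verbosity)                                 ; => 0

An implementation is free to replace this with something faster and
thread-aware; the point is only that the API can be pinned down
without being in the core.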

Some people might think things need to be in the standard because  
they're impossible to implement in terms of "core Scheme", such as  
sockets, native threads, and other "platform facilities", but I think  
that's just confusing the boundary of the standard with the boundary  
of the implementation again. There's no reason why an implementation  
can't implement things beyond the Core Scheme standard. It can still  
call itself Core Scheme plus SRFIs/modules/optional features/whatever  
X,Y,Z.

I've seen a good argument that it will confuse matters to have too  
much optionality; that to be called Scheme, an implementation will  
need to contain a rather large set of features (including networking).  
I can see their point - languages like Java, Python and Ruby tend to  
have this property - but I think that excludes some key use cases for  
Scheme implementers; I think "core Scheme" needs to be easy to  
implement. As easy as possible, within reason (and there's the weasel  
word). That way, academics can produce a core Scheme implementation  
in terms of third-order eigenfunctors over the space of partial  
monoids or whatever, and then be able to test it "in practice" by  
slopping on an off-the-shelf set of core libraries (SRFI-1 et al),  
producing a Scheme environment that can run just about any  
computation-only program (i.e. one not inherently requiring platform  
functionality). This also means that  
we get more interesting and innovative implementations that aren't  
we get more interesting and innovative implementations that aren't  
"just toys"; they'll lack sockets and so on, but only until somebody  
hooks up an FFI...

There's been plenty of prior work in establishing "profiles" of  
standards. The Java folks have a "core Java", which is the JVM and the  
Java syntax, then a broad set of libraries. This is then organised  
into profiles: J2ME (mobile edition), J2SE (standard edition), J2EE  
(enterprise edition). The names of those profiles are silly marketing  
buzzwords, but the underlying concept is sound - a "J2ME app" can  
expect to find the core data structure libraries in place, and IIRC a  
few things like HTTP, but can't necessarily get to a filesystem or a  
raw sockets layer, while J2EE adds XML processing and all sorts of  
other stuff.

So we could structure R7RS like this:

1) Core Scheme, the little language, which is just enough to provide  
the core Scheme computational model: lambda, function application,  
call/cc, and dynamic-wind along with the basic Scheme data types (at  
least integers, rest of the tower optional). We'd have read and write  
in terms of some unspecified "standard input/output", but no further  
detail on what those might be. But most importantly, we'd have a  
method for applications and libraries to declare that they require  
specified features to be present, or to announce that (as a library)  
they provide a certain feature; this can be very crude at this level,  
implementable with little more than include semantics (see the sketch  
after this list), while leaving the door open for a complex,  
powerful, backwards-compatible module system further down the road.

2) SRFIs defining the APIs for platform features like custom ports  
(provide procedures for read/write), socket I/O, filesystem I/O, and  
the POSIX process model.

3) SRFIs for useful things we can build in terms of the core (or, more  
efficiently, into the implementation itself), such as SRFI-1, string  
I/O, threads (be they low-level, or the sort of high-level stuff  
we're talking about in the Implicit Parallel Scheme thread),  
parameters, and various syntactic sugars and utilities, including  
more and more module-system features (renaming, etc.).

4) Named profiles, listing which SRFIs (and other things optional in  
the core, such as complex numbers) are needed to meet the profile.
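
To make point 1 (and, in passing, point 4) a little more concrete,
here's a deliberately crude sketch; every name in it
(provide-feature!, require-feature, the feature symbols, the profile
contents) is made up on the spot for illustration, not a proposal. A
real design would hook this into include/module machinery rather than
a runtime list, but it shows how little mechanism is actually needed:

;; A registry of the features this environment claims to supply.
(define *features* '())

;; A library announces what it provides...
(define (provide-feature! name)
  (if (not (memq name *features*))
      (set! *features* (cons name *features*))))

;; ...and an application declares what it needs, up front.  A real
;; implementation would load the named SRFI (or a slow portable
;; fallback) by include, rather than just checking a list.  (error
;; isn't in R5RS, but it's near-universal and in the R6RS.)
(define (require-feature name)
  (if (not (memq name *features*))
      (error "required feature not available:" name)))

;; A named profile (point 4) is then little more than a named list of
;; required features -- the contents here are invented placeholders.
(define full-scheme
  '(full-scheme srfi-1 srfi-sockets srfi-posix string-io parameters))

(define (satisfies-profile? profile)
  (let loop ((needed (cdr profile)))
    (or (null? needed)
        (and (memq (car needed) *features*)
             (loop (cdr needed))))))

;; Example: the host registers what it has, and a program checks.
(provide-feature! 'srfi-1)
(provide-feature! 'parameters)
(require-feature 'srfi-1)
(satisfies-profile? full-scheme)  ; => #f here: nothing provides srfi-sockets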

So we might have a "Full Scheme" (sucky name, I know, let's not stop  
talking about a better one...) profile that mandates all the useful  
syntactic sugar and libraries that at least 10% of people want to be  
consistently available to any new Scheme programmer picking up the  
language (everything considered a normal part of "mainstream  
Scheme"), plus  
all the similarly mainstream platform features (filesystems, TCP  
sockets, processes, stdout/stdin/stderr), plus libraries that can be  
built atop them (an HTTP client, core "This procedure can handle HTTP requests"  
support that can be implemented in terms of HTTP/CGI/FCGI/whatever,  
support for various data formats, and so on).

And then even a large embedded implementation can call itself "Full  
Scheme minus the networking stuff, as I just don't have the hardware  
for that".

I firmly believe that everyone can be happy: we can have a minimal  
standard Scheme, and we can have a broad and complex set of library  
APIs, and a wide range of implementations that meet various different  
levels of needs.

I wrote more about this on my blog: 
http://www.snell-pym.org.uk/archives/2009/09/04/r7rs/

-- 
Alaric Snell-Pym
Work: http://www.snell-systems.co.uk/
Play: http://www.snell-pym.org.uk/alaric/
Blog: http://www.snell-pym.org.uk/archives/author/alaric/



