On Sun, 2009-08-23 at 19:55 -0700, Vincent Manis wrote:
> OK, here's my $0.02 worth (it's Canadian funds, so less than 2 cents
> in the US or the Eurozone :)
>
> 1. The division into large and small is a VERY good idea.
For what it's worth: I agree. I think the only way forward is for the
path to fork.

> I agree with Elf that (or (equal? large-language industrial-language)
>                           (equal? small-language teaching-language))
> => #f, so I guess I'd like to see different terminology, though I'm
> not sure what it might be.

Also agree. The thing that makes Scheme a good teaching language is
not smallness. It's that there is a small set of constructs and
concepts which is also useful, without requiring tons of additional
details to be mastered. Additional features not yet in use in a course
present no difficulty at all in that course, in the same way that the
existence of multivariate differential equations doesn't make it any
harder to teach first-graders to add.

> 2. This probably marks the retirement of the term `Scheme' to mean a
> specific programming language, just as `Lisp' was retired long ago.

Could be. I don't really care much about names. Before picking names,
though, people should do the standard checks at
http://people.ku.edu/~nkinners/LangList/Extras/langlist.htm and
http://en.wikipedia.org/wiki/List_of_programming_languages to avoid
clashing with the name of an extant (or extinct) language.

> 4. I'd be opposed to having the large language document describe a
> set of libraries that in some way are optional.

I wouldn't. The large-language document should describe libraries
required for conformance to the large-language standard, libraries
required for implementing Scheme in particular environments, and
possibly optional libraries as well. Absolutely key, though, is that
each library it describes should be defined for a particular task, and
that each can be loaded (or - and this is very important - *not*
loaded) separately. One of the biggest failings of the R6 document was
its failure to separate libraries in any way.

> 5. I'm neutral on the subject of standardizing FFI.
> On the one hand, the benefits of a portable standard are obvious
> (both for portability across implementations and for practical SWIG
> support); on the other, it could paralyze the Working Group.

I am firmly of the opinion that FFI should NEVER be a "hard"
standardized feature of the language, because in order to do it you
have to go beyond the language itself and make assumptions or mandates
about the internal representations of data and the representations
used by the systems you're interfacing with. An FFI is an
implementation feature, and any requirements relating to it belong in
an implementation standard, not a language standard.

This may be splitting hairs as far as a lot of folks are concerned,
but to me it's a very important distinction. A language exists
independently of any particular machine, architecture, encoding, ABI,
or environment; a language standard should neither refer to nor depend
on any of those things. A language standard should be like math: it
exists without reference to the real world, and you can use it to
prove theorems. An implementation of that language, on the other hand,
has to be hosted in a particular environment and must deal with all of
those things. When you talk about how to deal with them, you are no
longer writing the language standard as such; you are writing an
implementation standard. One of the reasons R6 made me unhappy was
that it lost sight of this distinction between language and
implementation.

That said, it's entirely reasonable to create an implementation
standard that says things about implementation features, with
requirements binding only on those implementations that exist in
environments with particular architectures, encodings, operating
systems, and ABIs. For example, system libraries and environment
variables on a Unix should be reachable in a standard way by
implementations that run on a Unix; Java FFI calls should be doable in
a standard way by implementations that run on the JRE; and so on.
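To make the representation problem concrete, here is a hedged sketch
of an FFI binding. The `foreign-procedure` form is modeled loosely on
what some implementations (e.g. Chez Scheme) provide; it is not
standard Scheme, and nothing about it is portable:

```scheme
;; Illustrative sketch only: `foreign-procedure` is an implementation
;; feature (loosely modeled on Chez Scheme's form), not part of any
;; language standard.  Binding C's strlen forces answers to questions
;; a language standard deliberately leaves open: how a Scheme string
;; is laid out in memory, what its encoding is, and which Scheme
;; number type corresponds to the C type size_t.
(define strlen
  (foreign-procedure "strlen" (string) size_t))

(strlen "hello")  ; 5, on an implementation that defines the form
```

Every one of those answers is environment-specific, which is why such
a form belongs in an implementation standard for, say, C-on-Unix
environments rather than in the language report.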
If the native string encoding for some environment is the 16-bit
subset of Unicode, then the implementation standard for that
environment should bind implementations to representing characters
with Unicode at least 16 bits wide, and should require them to provide
I/O functions on 16-bit Unicode. But that's an implementation
standard; it's not the same thing as a language standard. The point
here is that you have to make a standard that requires things of
implementations running in particular environments, but you have to do
it without requiring every implementation everywhere to duplicate the
salient features of each and every environment. I wish I had thought
this through this clearly when R6 was still going on; it's an
important point, but I was treating it as "given" and was not at all
articulate about it.

> 6. I hope the Working Group considers including some sort of
> CLOS-like facility in the large language. This would allow a clean
> unification of records and conditions, and thus would help the large
> language comply with the first sentence of the Introduction to the
> Scheme reports.

I don't think CLOS is an example of stripping away the restrictions
that have made additional features appear necessary. CLOS, on the
contrary, is a methodology for providing every feature that anyone
ever thought of. There's nothing wrong with that, really, and it's a
stunning piece of engineering. But it's certainly not an example of
the spirit of that first sentence.

Bear

_______________________________________________
r6rs-discuss mailing list
[email protected]
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss
