Dear Simon,

Thanks for your question at my Scheme talk today. I think I got a little off-topic at the end and really failed to explain what benefit Scheme gets from its model of a central report and multiple implementations. So I hope you don’t mind if I have a do-over on my answer :-)

The benefit is that these multiple implementations have different specializations. A while ago, Andy Wingo wrote a good guide to some of the most popular: <https://wingolog.org/archives/2013/01/07/an-opinionated-guide-to-scheme-implementations>, though it predates Chez Scheme being made open source, which was quite a significant event in the implementation landscape.

For a quicker example, there is a new implementation called Hoot, which targets WebAssembly in the browser. As a compiler, it is not (yet?) self-hosting: it runs on an existing implementation, Guile, reuses part of Guile’s runtime, and aims to support much existing Guile code. Its back end, however, is completely new, beginning at a point much earlier in the traditional compilation pipeline than the IR at which the WASM back ends of LLVM and the like start. And because it targets the browser, in terms of traditional compiler optimization goals it has to aim much more strongly at code size than its parent implementation, which can afford to have more code for more speed.

The cost, as I implied in my talk, is that even considered purely as a technical exercise, it can sometimes be hard to find the right things to put in the report that will work for all of these different environments people are implementing the language for. Add in the different philosophical views of what Scheme is and how it should be, and it only gets harder.

I think with Haskell, even when YHC and so on were still around, you had a much clearer view of what kind of language you were trying to make. You were on one side of RPG’s dynamic systems/static languages distinction from the beginning, for example.

Best wishes,
Daphne
