This is related to the conversation on the Synopses, but it's sufficiently different that it probably justifies its own thread.
I want to start by making it clear that I'm not criticising the design of Perl 6, or any of the people working so hard to make it great. I'm just trying to address what I see as an obstacle to its adoption (though it may simply be my personal limitation). I've been following the project from the beginning, and have "Perl 6 and Parrot" (2nd ed.), http://shop.oreilly.com/product/9780596007379.do, which I understand is no longer relevant.

People on this list are undoubtedly familiar with the uncomfortable feeling of being the smartest person in the room. I get it occasionally, but in the Perl community I can usually relax. When giving training classes, however, it wasn't uncommon. Generally, groups would appear quite comfortable when shown the first four arithmetic operators (+, -, *, /), but when the modulus operator was introduced they would become visibly uneasy. That meant I was in for some work, but it's that level of enthusiasm I think we have to consider.

People working on open-source projects naturally want to solve the sort of problems that interest them. Languages developed by compiler writers and language theorists will tend to address questions that ordinary coders will not even understand, let alone want to solve. The development team obviously needs a vocabulary to discuss such topics, so on top of that natural obscurity, they've developed an IRC culture nearly impenetrable to outsiders (at least this one). I've tried to follow it, but I don't want to get in the way and slow down people doing useful work by asking too many dumb "noob" questions. However, some of the topics seem pretty esoteric.

It takes time and effort to understand the features of a language, so unless they are necessary to a problem, in a sense they become bugs. "Solutions" should not generally be more difficult than the problems they are supposed to solve.
The "vertical" view of the specification presented by the Synopses, exploring each area in depth, makes perfect sense as a design document, but it presents an enormous challenge to a human memory trying to load the whole language at once. This is compounded by the Synopses not necessarily reflecting what the language actually does at any given moment, either because a feature hasn't been implemented yet, or because it has been revised and the Synopsis hasn't.

I've made a couple of attempts to get up to speed by writing sample programs, but I keep crashing into obstacles. Translating "Mastering Algorithms in Perl" to Perl 6 stumbled almost instantly, because it uses CPAN modules, so I turned to the Unix utilities for specifications. wc seems like a simple task, but even that promptly ran into the question of $.'s replacement, which doesn't appear to work as advertised. Reading Perl 6 examples on Rosetta Code helps a little, but the site's structure makes it a rather cumbersome process.

Having recently read Herbert Simon's "The Sciences of the Artificial" (and listened to TimToady's "State of the Onion" addresses), I wonder if a "layered" approach is the answer to Perl 6's sheer size. A series of defined subsets of increasing abstraction, from the basics up to the sophistication of grammar redefinition, would let beginners solve simple problems with a simple language, but offer a path to more advanced topics as far as they wish to go. (Just as Perl 5 goes from Llama to Camel and onward to Jaguar.) Even a guru has to take the path to enlightenment one step at a time. Unfortunately, the starting point and boundaries of the path aren't well defined. This leaves the sage wandering a plain of obscurity, in pursuit of ever more tenuous metaphors.

One particular historical analogue that occurs to me is PL/1. You may be familiar with the saga of IBM's attempt to develop One Language To Rule Them All, but in case you aren't, it goes something like this.
After tossing Cobol, Fortran, and Algol into the cauldron, IBM kept stirring vigorously, to the point where scope creep was putting the development further behind the longer it went on. The lab at Hursley finally took on the project and delivered something, by nailing down the subset of features they could promise to implement, with the rest to follow later. (An early example of "Agile" practices, I suppose.)

If a reasonably immutable basic language exists and can be published, we can get on with learning it and preparing exercises and training materials, regardless of the obscure controversies over object interface properties going on at deeper levels. Areas clearly marked "not done yet, do not try this" can be bypassed. It might not be release 1.0, but at least 0.nn (nn > 0.5).

Is that realistic, or have I missed something vital?
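P.S. For the curious, here's roughly the wc sketch I've been attempting. Rather than hunting for a working $. analogue, it sidesteps the question by counting lines explicitly. The wc-counts helper name is just something I made up for illustration, and I'm assuming lines() strips the trailing newline (hence the +1 in the character count); corrections welcome.

```raku
# Hypothetical wc-counts helper -- not from any Synopsis, just an
# illustration that counts explicitly instead of relying on a $. analogue.
sub wc-counts(@lines) {
    my ($l, $w, $c) = 0, 0, 0;
    for @lines -> $line {
        $l++;                       # line count
        $w += $line.words.elems;    # whitespace-separated words
        $c += $line.chars + 1;      # +1 for the newline that lines() strips
    }
    return $l, $w, $c;
}

my ($l, $w, $c) = wc-counts($*IN.lines);
say "$l $w $c";
```

It's crude (it assumes every line ended in a newline, which wc doesn't), but it was enough to get me past the $. question and on to the next obstacle.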