Geoffrey Broadwell wrote:
> The problem with this method is that there are usually *several* ways to
> implement each feature in terms of some number of other features.  The
> creators of the shared prelude are then stuck with the problem of
> deciding which of these to use.  If their choices do not match the way a
> particular implementation is designed, it will then be necessary for the
> implementation to replace large swaths of the Prelude to get decent
> performance.
>
> For example, implementations in pure C, Common Lisp, and PIR will
> probably have VASTLY different concepts of available and optimized
> primitive operations.  A prelude written with any one of them in mind
> may well be pessimal for one of the others.
>
> That's not to say it's not a useful idea for helping to jumpstart new
> implementations -- I just somewhat doubt that a mature implementation
> will be able to use more than a fraction of a "common" prelude.

I have a few answers to this:

0. I agree that, no matter what, an implementation will still want to substitute in its own versions. But so what? Having a reasonably complete high-level reference implementation of the Prelude in Perl 6 won't lose us anything over what we have now, and should on average gain us something.

1. What we *should* be doing with the Prelude, as with STD.pm, is to write it under the assumption that the implementation is also written in Perl 6.

We should write the Prelude in as declarative a manner as possible, saying *what* we want to happen rather than how, as you would when writing in a functional language.
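For instance (a hedged sketch; these routine definitions are illustrative, not spec text), each body states *what* the result is rather than spelling out a loop:

    sub distinct (*@values)     { @values.unique }           # the distinct elements
    sub all-positive (*@values) { so all(@values) > 0 }      # junction: "are all > 0?"
    sub by-length (*@strings)   { @strings.sort(*.chars) }   # sorted by string length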

We should make use of Perl's higher-level tools like the hyper-operators and reduce-operators, and write in a fashion that is developer-focused, just as when writing normal Perl 6.
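A minimal sketch of what leaning on those metaoperators looks like (again illustrative, not spec text):

    sub vector-add (@a, @b) { @a >>+<< @b }   # elementwise, via the hyper-operator
    sub my-sum (*@xs)       { [+] @xs }       # the reduce-operator: "the sum of @xs"
    sub factorial (Int $n)  { [*] 1..$n }     # likewise: "the product of 1..$n"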

Except where it makes sense, we do not want to be defining things in terms of the lowest-level operations possible; that would be premature optimization, which may help some implementations while harming others.

We should instead be defining all the low-level operators in terms of the high-level ones, where possible. On average, it is easier for an implementation to translate a high-level source operation into low-level native operations than to recognize a whole bunch of low-level source operations and amalgamate them into fewer high-level implementation operations.

(Note, for example, that I suggested using a big/unlimited-size Int as the basis of definition rather than a machine int.)
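Here is a hedged sketch of that direction (the routine name is hypothetical): wrapping 32-bit addition *defined in terms of* unlimited-size Int, rather than Int being built on top of machine ints:

    sub int32-add (Int $a, Int $b --> Int) {
        my $sum = $a + $b;                  # exact, arbitrary-precision arithmetic
        ($sum + 2**31) % 2**32 - 2**31;     # then wrap into the signed 32-bit range
    }
    say int32-add(2**31 - 1, 1);            # -2147483648, just as a machine would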

Don't be afraid to be recursive, where that is possible. For example, one could define Hash in terms of Array *and* define Array in terms of Hash, or Int in terms of Rat *and* Rat in terms of Int.
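A hedged sketch of such mutual definitions (the class and method names are hypothetical); an implementation keeps whichever direction matches its native primitives and overrides the other, which is what breaks the cycle:

    class ArrayViaHash {
        has %!slots;                                    # keyed by integer index
        method AT-POS (Int $i)         { %!slots{$i} }
        method ASSIGN-POS (Int $i, $v) { %!slots{$i} = $v }
    }
    class HashViaArray {
        has @!pairs;                                    # a flat list of Pairs
        method AT-KEY (Str() $k) {
            with @!pairs.first(*.key eq $k) { .value } else { Nil }
        }
        method ASSIGN-KEY (Str() $k, $v) {
            @!pairs .= grep(*.key ne $k);               # drop any old entry
            @!pairs.push: $k => $v;
        }
    }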

2. We should be able to write with the benefit of at least short-term hindsight, seeing what implementations we are likely to have, and aim for the middle. For example, write as if high-level concepts are supported by the implementation.

3. Perl 6 does have multi-methods. Maybe we can make use of them to define alternative sample implementations where it makes sense, though we shouldn't go too far with that, given the combinatorial explosion it invites.

Or, if multi-methods don't work quite that way, we could add a kind of trait to them, or something similar, that says to use one version or the other but not both, and each implementation could mark in its override file which particular version to pick.

Even if not all implementations are the same, some are similar to each other and can share that work.

Call them "possible representations" or "possible implementation versions" or something.
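A minimal runnable sketch of that idea (the registry and the selection mechanism here are hypothetical, not spec): alternative sample definitions live side by side, and an implementation's override file picks the one that suits its backend:

    sub gcd-recursive (Int $a, Int $b --> Int) {
        $b ?? gcd-recursive($b, $a % $b) !! $a
    }
    sub gcd-iterative (Int $a is copy, Int $b is copy --> Int) {
        ($a, $b) = $b, $a % $b while $b;
        $a
    }
    my %alternatives = gcd => { recursive => &gcd-recursive,
                                iterative => &gcd-iterative };
    my &gcd = %alternatives<gcd><iterative>;   # the implementation's pick
    say gcd(48, 18);                           # 6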

To some extent I think Perl 6 already has what we need, in a different fashion, such as where you can declare a class or attribute and indicate an implementation type like Opaque vs Hash or what-have-you.
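One concrete spelling of that knob, as Rakudo implements it, is the 'is repr' trait; a minimal sketch (the Complex64 class itself is hypothetical):

    class Complex64 is repr('CStruct') {
        has num64 $.re;        # laid out as a flat C struct rather than
        has num64 $.im;        # the default P6opaque representation
    }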

> P.S.  I did this sort of thing once -- a Forth prelude that attempted to
> minimize the primitive set, and it *was* very nice from an abstract
> perspective.  Unfortunately, it also made some operations take millions
> of cycles that would take no more than one assembly instruction on just
> about every CPU known to man.  It's a REALLY easy trap to fall into.

That may be because you wrote in terms of a few low-level operations rather than a few high-level ones; what if you had done the reverse?

As for me, I am in the process of doing this now with my Muldis D language, which is fairly high-level and should run on everything Perl 6 does, though with stronger implied support for languages that support the functional paradigm, and it also runs over SQL. At the same time, each implementation can choose to leave out some features as it sees fit; some are more important to include than others.

In my implementation of Muldis D over Perl (Muldis Rosetta's default engine), most built-in types and operators are defined as users would define them, in terms of other ones, save for its only 4 truly primitive types: Int (an opaque unlimited-size scalar), String (a dense array of Int), QTuple (a heterogeneous collection), and QRelation (a homogeneous collection). However, the Perl class/object representing a user-defined (or as-if-user-defined) type is subclassed for various other built-in types like Bool, Blob, Text, Rat, Set, Maybe, Array, Bag, etc., and those subclasses override implementation details to use the Perl primitives directly.

My point here is that most of the Muldis D implementation I have to write is itself written in Muldis D, so that part can generally be shared between Perl 5, Perl 6, PIR, Haskell, etc. By design, all Muldis D code is translated to equivalent Perl/PIR/Haskell/etc. code first anyway, letting those compilers do the real work, so overriding a definition with a non-translated Perl or Haskell routine is easy to do.

Bringing this back on topic: the large body of Muldis D built-ins written in Muldis D itself is analogous to the Perl 6 Prelude. As long as the Perl 6 Prelude is written in a sufficiently high-level fashion, it should be effectively reusable in the same way.
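To make that subclass-override pattern concrete, a hedged Perl 6 sketch (all names hypothetical): a portable, Prelude-style definition of a type, plus an engine-specific subclass that overrides the internals to use native primitives directly:

    class GenericBag {
        has %!counts;                                # portable representation
        method insert ($item) { %!counts{$item}++ }
        method count ($item)  { %!counts{$item} // 0 }
    }
    class EngineBag is GenericBag {
        # an engine with a native multiset primitive would override
        # insert and count here to call that primitive directly
    }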

-- Darren Duncan
