HaloO Damian,

you wrote:
No. If the dispatch behaviour changes under a Manhattan metric, then it only ever changes to a more specific variant.

This statement contradicts itself. A metric chooses the *closest*
target, not the most specific one. Take e.g. the three-argument cases
7 == 1+2+4 == 0+0+7 == 2+2+3, which are all outperformed by 6 == 2+2+2.
But is that more specific? If so, why?
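To make the arithmetic above concrete, here is a small Python sketch (illustrative only, not Perl 6 internals): each candidate variant is scored by the sum of its per-argument derivation distances, and the smallest sum wins, regardless of whether any single argument matches exactly. The variant names are hypothetical.

```python
def manhattan_winner(candidates):
    """candidates: mapping of variant name -> per-argument distances.
    The Manhattan metric picks the smallest total distance."""
    return min(candidates, key=lambda name: sum(candidates[name]))

variants = {
    "v1": (1, 2, 4),   # total 7, exact on no argument
    "v2": (0, 0, 7),   # total 7, exact on two arguments
    "v3": (2, 2, 3),   # total 7
    "v4": (2, 2, 2),   # total 6 -- wins, though exact on nothing
}

print(manhattan_winner(variants))  # -> v4
```

Note that "v2" is exact on two of three arguments yet still loses to "v4", which is exact on none; that is precisely the question raised above.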


Since MMD is all about choosing the most specific variant, that's entirely appropriate, in the light of the new information. If you change type relationships, any semantics based on type relationships must naturally change.

Again: do you identify 'most specific' with closest? The point of
pure MMD is that specificity relations are not definable for all
combinations of types. That is, the types form only a partial order.


On the other hand, under a pure ordering scheme, if you change type relationships, any semantics based on type relationships immediately *break*. That's not a graceful response to additional information. Under pure ordering, if you make a change to the type hierarchy, you have to *keep* making changes until the dispatch semantics stabilize again. For most developers that will mean a bout of "evolutionary programming", where they try adding extra types in a semi-random fashion until they seem to get the result they want. :-(

Well, designing a type hierarchy is a difficult task. But we shouldn't
mix it up with *using* types! My view is that common folks will write
classes which are either untyped---which basically means type Any---or
conform to a built-in type like Str or Num. That's the whole point of
having a type system. What is the benefit of re-inventing a
module-specific Int?


Perhaps I've made this argument before, but let me just ask a
question:  if B derives from A, C derives from A, and D derives from
C, is it sensible to say that D is "more derived" from A than B is? Now consider the following definitions:

    class A { }
    class B is A {
        method foo () { 1 }
        method bar () { 2 }
        method baz () { 3 }
    }
    class C is A {
        method foo () { 1 }
    }
    class D is C {
        method bar () { 2 }
    }

Now it looks like B is more derived than D is.  But that is, of
course, impossible to tell.  Basically I'm saying that you can't tell
the relative relationship of D and B when talking about A.  They're
both derived by some "amount" that is impossible for a compiler to
detect.  What you *can* say is that D is more derived than C.


Huh. I don't understand this at all.

My understanding is that Luke tries to express that the metric distances
are D -> A == 2 and B -> A == 1, and that this naturally leads a
programmer to think of B as more "specific" than D, because in multis
where instances of both classes are applicable---that is, ones with
formal parameters of type A---B contributes smaller summands to the total
distance. E.g. calling a multi (A,A,A) with (B,B,B) gives distance 3,
but distance 6 for (D,D,D).
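These distances can be read straight off the inheritance chain. A rough Python sketch of the assumed semantics (distance = number of inheritance steps, taken from the method resolution order; this is not how Perl 6 necessarily implements it):

```python
# Mirror of the hierarchy in Luke's example: B and C derive from A,
# D derives from C.
class A: pass
class B(A): pass
class C(A): pass
class D(C): pass

def distance(arg_type, param_type):
    """Inheritance steps from arg_type up to param_type,
    or None if param_type is not an ancestor."""
    mro = arg_type.__mro__
    return mro.index(param_type) if param_type in mro else None

# Calling a multi with signature (A, A, A):
print(sum(distance(t, A) for t in (B, B, B)))  # -> 3
print(sum(distance(t, A) for t in (D, D, D)))  # -> 6
```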



In MMD you have an argument of a given type and you're trying to find the most specifically compatible parameter. That means you only ever look upwards in a hierarchy. If your argument is of type D, then you can unequivocally say that C is more compatible than A (because they share more common components), and you can also say that B is not compatible at all. The relative derivation distances of B and D *never* matter since they can never be in competition, when viewed from the perspective of a particular argument.
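The "only ever look upwards" rule can be sketched as follows (a Python illustration under the same assumed hierarchy, not Perl 6 code): for an argument of type D, the viable parameter types are exactly D's ancestors, so B is filtered out before any distances are compared.

```python
class A: pass
class B(A): pass
class C(A): pass
class D(C): pass

def compatible_params(arg_type, param_types):
    """Keep only parameter types the argument can bind to,
    ordered closest-first along the inheritance chain."""
    mro = arg_type.__mro__
    viable = [p for p in param_types if p in mro]
    return sorted(viable, key=mro.index)

print([t.__name__ for t in compatible_params(D, [A, B, C, D])])
# -> ['D', 'C', 'A']  (B never competes with D at all)
```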

Nice description of methods defined in classes. BTW, can there be
methods *outside* of class definitions? These would effectively be
multi subs with a single invocant, but without privileged access.


What we're really talking about here is how do we *combine* the compatibility measures of two or more arguments to determine the best overall fit. Pure Ordering does it in a "take my bat and go home" manner, Manhattan distance does it by weighing all arguments equally.

We should first agree on *what* the dispatch actually works on: the
class hierarchy or the type hierarchy. In the former case I think
it can't be allowed to insert multi targets from unconnected
hierarchies, while in the latter the privileged access to the
implementing class's environment cannot be granted at all. But this
might actually be the distinction between multi methods and multi subs!

The thing the pure approach spares the designer is the doubt that
there might be parallel targets that are more type-specific on *some*
of the parameters. That doubt might give rise to introspection orgies
and low-level programming.


In conclusion, the reason that manhattan distance scares me so, and
the reason that I'm not satisfied with "use mmd 'pure'" is that for
the builtins that heavily use MMD, we require *precision rather than
dwimmyness*.  A module author who /inserts/ a type in the standard
hierarchy can change the semantics of things that aren't aware that
that type even exists.  If you're going to go messing with the
standard types, you'd better be clear about your abstractions, and if
you're not, the program deserves to die, not "dwim" around it.

I second this. The dwimminess would lead to half-cooked handling
whenever the dispatch doesn't hit the most specific handler. Or in
other words, the bar is raised higher for modules that want to add
types to the standard hierarchy.


That *might* be an argument that builtins ought to do "pure ordering" dispatch, but it isn't an argument in the more general case. Most people won't be futzing around with the standard type hierarchy, except at the leaves, where it doesn't matter. Most people will be using MMD for applications development, on their own type hierarchies. Someone who's (for example) building an image processing library wants to be able to add/merge/mask any two image types using the most specific available method. Adding a new image type (or supertype) might change which method is most specific, but won't change the image processor's desire. Indeed, changing the dispatch to a now more-specific method *is* the image processor's desire.

Ohh, no. Take e.g. alpha blending. If you add that to a former
implementation that handled monochrome and RGB, you don't want to
be unsure whether it happens or not just because three Grays outperformed
one Alpha in the style of 0+1+1+1 > 1+0+0+0. Changes to the type
hierarchy *are* major changes. A type declares *what* is done, while
a class defines *how* it is done. And with roles it is quite easy to
get more than one class implementing the same type.


So, in counter-conclusion, Pure Ordering MMD maximizes ambiguity and thereby imposes extra development costs to achieve (generally) the same effect as Manhattan MMD achieves automatically. Both schemes can be made to detect ambiguities, and under both schemes those ambiguities can be resolved by the judicious addition of either variants or interim classes. However, by default, when POMMD detects an ambiguity, it just gives up and makes that ambiguity fatal, whereas MMMD always automatically resolves ambiguities in a plausible manner. I know which approach I'd prefer.

That sounds to me like the easier approach of C's implicit type
conversions, where C++ requires at least a brutal cast to get it
through the compiler. Also, CLOS's and Dylan's approach to MMD by
means of class precedence lists is not considered a success by many.
Unfortunately I have no large-project, long-term experience with
these languages.

Well, MMMD also gives up if there is more than one target
with the same distance. Would you also argue in favour of
secondary disambiguation? E.g. by comparing the summands
such that 2+2+2 is better than 1+1+4?
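The tie case can be sketched in Python (assumed semantics throughout; the secondary tie-breaker is purely hypothetical, preferring the more "balanced" candidate by the spread of its summands):

```python
def dispatch(candidates):
    """candidates: mapping of variant name -> per-argument distances.
    Manhattan winner, with a hypothetical spread-based tie-breaker."""
    totals = {name: sum(d) for name, d in candidates.items()}
    best = min(totals.values())
    tied = [name for name, t in totals.items() if t == best]
    if len(tied) == 1:
        return tied[0]
    # Hypothetical secondary metric: smaller spread of summands wins,
    # so 2+2+2 beats 1+1+4 even though both total 6.
    def spread(name):
        ds = candidates[name]
        mean = sum(ds) / len(ds)
        return sum((d - mean) ** 2 for d in ds)
    return min(tied, key=spread)

print(dispatch({"even": (2, 2, 2), "skewed": (1, 1, 4)}))  # -> even
```

Without such a secondary rule, both candidates total 6 and plain Manhattan MMD would have to report an ambiguity, exactly as described above.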


I think we should discuss the intentions of classes, roles and
constraints and how they relate to the type system.

If Perl 6 wants to live up to the claim of (optional) strong typing,
then dispatch must happen first on the type lattice and then on the
class hierarchy. The folks who don't want to adhere to typing might
avoid the type dispatch and appear to the type dispatchers as Anys
or as some scoped package or module type.

Regards,
--
TSa (Thomas Sandlaß)

