> Doesn't it also make it easier on new users that things with
> different semantics have different names?

The main benefit of overloading is (in my view) to support constrained
polymorphism: an operation (or value) can be used for all types for
which there are certain (other) operations (or values) defined.

Usual example: it is easy (even for new users) to define

      square x = x * x

and think of square as a function that performs x * x for all types
for which * is defined. Having to define things like

      square_int :: Int -> Int
      square_int x = x * x

      square_float :: Float -> Float
      square_float x = x * x

(or perhaps not making "*" overloaded?) is perhaps not so easy to
justify (for new users). Or perhaps it is, in a polymorphic language
without constrained polymorphism...

Type classes allow definitions such as square x = x * x, but
unfortunately they require programmers to specify (anticipate) the
"reasonable" most general type of "*". The type a->a->a may adequately
cover all possible uses of "*", but in other cases some possible uses
of an overloaded symbol might be excluded simply because the
programmer did not anticipate them. In fact, this a priori
anticipation is not necessary at all: the type of square may be
inferred from the definitions of "*" that occur in the relevant
context. (With separate compilation, this relevant context is
determined by the import interface of a compilation unit.) (See also
"Type Inference for Overloading without Restrictions, Declarations or
Annotations", http://www.dcc.ufmg.br/~camarao.)

In my view this is no minor issue... Similar remarks apply to
overloading monadic operations, allowing, for example, the definition
of modular monadic interpreters (and, by the way, without the wrapping
and unwrapping with newtypes as well...).
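For readers unfamiliar with the newtype issue alluded to above, here
is a minimal sketch of the status quo in standard Haskell: because
class instances attach to named types, even the trivial identity monad
must be declared as a newtype, and every use involves explicit
wrapping (Identity) and unwrapping (runIdentity).

```haskell
newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
  fmap f (Identity x) = Identity (f x)

instance Applicative Identity where
  pure = Identity
  Identity f <*> Identity x = Identity (f x)

instance Monad Identity where
  Identity x >>= f = f x

-- Wrapping on the way in, unwrapping on the way out:
example :: Int
example = runIdentity (do x <- Identity 20
                          return (x + 22))

main :: IO ()
main = print example  -- 42
```

The suggestion in the text is that with inference-driven overloading
this administrative layer would not be needed.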

Carlos





