Re: [Haskell-cafe] recursive matrix computations

2006-04-21 Thread Brian Boutel


On 19/04/2006, at 10:32 PM, Andrew U. Frank wrote:



it appears possible to define matrix operations like LU-decomposition,
SVD, inverse etc. in a recursive fashion (see transpose in the prelude).

is this possible? has anybody code in haskell which does this (not
asking for high performance here, more instructive value).

thank you!




I recall Paul Hudak coming up with a version of LU-decomposition in
Haskell using Doolittle's method. This was in response to a challenge
by (I think) Arvind, at a meeting in Mystic CT, in 1989.
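In the spirit of the original question, here is a minimal sketch of a recursive LU decomposition (Doolittle style) on lists of lists, echoing the Prelude's recursive transpose. The names and the list-of-lists representation are my own choices, not from Hudak's version, and there is no pivoting, so leading elements are assumed non-zero.

```haskell
type Matrix = [[Double]]

-- Recursive Doolittle LU: peel off the first row and column, eliminate,
-- then recurse on the (n-1) x (n-1) minor.
lu :: Matrix -> (Matrix, Matrix)
lu [] = ([], [])
lu ((p:r):rows) = (l, u)
  where
    col      = [ x / p | (x:_) <- rows ]           -- multipliers below the pivot
    minor    = [ zipWith (\x e -> x - m * e) xs r  -- eliminate the first column
               | (m, _:xs) <- zip col rows ]
    (l', u') = lu minor                            -- recurse on the minor
    l        = (1 : map (const 0) r) : zipWith (:) col l'
    u        = (p : r) : map (0 :) u'
lu _ = error "lu: empty row"
```

For example, lu [[4,3],[6,3]] gives L = [[1,0],[1.5,1]] and U = [[4,3],[0,-1.5]], whose product is the original matrix.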


--brian


Brian Boutel

Wellington
New Zealand

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Beta Reduction, undefined vs Type Classes

2003-11-10 Thread Brian Boutel
Jared Warren wrote:

Consider:

class Thing t where
  thing :: t
instance Thing Int where
  thing = 0
instance Thing Char where
  thing = 'a'

Can someone please explain why

fst (1,thing)

...gets an 'ambiguous type variable' error, but

fst (1,undefined)

...doesn't?

Perhaps because "thing" has an ambiguous type (it's either Int or Char),
but "undefined" doesn't (it's forall a. a).

Remember that type inference in the Haskell type system does not assume
knowledge of the semantics of functions. In order to deduce the type of
an application of "fst", you need to determine the type of its argument
- you can't discard one of the tuple components just because you know
you will lose it when you apply a projection function. Type checking is
done before any evaluation, compile-time or run-time.

You might argue that it would be a good idea to apply some program
transformations early to avoid this kind of unnecessary ambiguity, but I
have doubts about its general usefulness.

And is there anything I can change to make the former work
as well? Even modifying fst doesn't work:

fst' :: Thing b => (a,b) -> a
fst' (~a,~b) = a

You should not expect it to work. The problem is with the type
ambiguity, not with the semantics of fst.
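A sketch of the fix being implied here: once the class-constrained type variable is pinned down by an annotation, the ambiguity disappears. The definitions repeat those in the question; ok1 and ok2 are my own illustrative names.

```haskell
class Thing t where
  thing :: t

instance Thing Int  where thing = 0
instance Thing Char where thing = 'a'

ok1 :: Int
ok1 = fst (1, thing :: Char)  -- annotation selects the Char instance

ok2 :: Int
ok2 = fst (1, undefined)      -- undefined :: forall a. a, no instance needed
```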

--brian



--
Brian Boutel
Wellington New Zealand
Note the NOSPAM



Re: character syntax

2002-02-07 Thread brian boutel

I would prefer that a language syntax is designed to be good for users, 
even if that means it presents problems for implementors.

You have not argued that these issues indicate bad design from the point 
of view of an application programmer.

If you think about languages that have been designed to be easy to 
parse, are these really languages that you would want to use?

Ian Zimmerman wrote:

> itz> All this taken together, I mean, _really_, is the lexical
> itz> structure of Haskell a botch, or what?
> 
> Jon> No. Innovative. All the problems described in this thread reflect
> Jon> unwarranted assumptions inherited in emacs. It's plainly possible
> Jon> to parse Haskell, and not hard either.
> 
> First, parsing of a complete program (eg. by a compiler) is quite
> different from parsing a buffer that is being edited by a human.  The
> latter is hard, even for fairly well-specified languages.
> Irregularities only make it harder.
> 
> Second, this argument would be easier to accept if there in fact were
> an equally innovative tool capable of providing all the editing
> goodies Emacs normally does, for Haskell.  But I don't know of one,
> even now, 10 years or so after Haskell's birth.
> 
> 

--brian




Re: Haskell 98 - Standard Prelude - Floating Class

2001-10-16 Thread Brian Boutel

Kent Karlsson wrote:
> 
> Default definitions may be inefficient, but in my opinion, default definitions
> for approximate operations should not give drastically lower accuracy, and
> should certainly not violate any other reasonable expectations (like that sin x
> returns x for x close to 0).
> 

I agree. 

The idea of giving defaults was that class methods are not always
independent, so no harm is done by defining some in terms of others, but
there is no obligation to supply them. This thinking did not take into
account problems of numerical accuracy with types with fixed-size
floating-point representations.

If a default definition can have seriously bad numerical behaviour then
it should never be used, and I see no point in supplying it. To do so
would just encourage users to accept the definition and get bad results.

--brian




Re: Reasons behind the "one instance per type" limitation

2001-10-09 Thread Brian Boutel



"Iavor S. Diatchki" wrote:
> 
> hello,
> 
> > Why aren't instance declarations handled by the module system like
> > every other symbol, so that a module can decide when to import an
> > instance declaration and when to export it? Are there technical
> > difficulties with this approach?
> 
> i believe the reason is that instances don't have names and the
> module system deals with names.  on the other hand i don't think
> this is a very good reason and completely agree with you that
> it is a problem.  i think some work is being done on introducing
> named instances to Haskell (there was a paper in the Haskell workshop
> i think).

This is actually quite messy. The first point that needs to be made is
that instance code is invoked silently. Writing x==y invokes instance
code, but there is no way to say which instance should be used, so it is
important that there is precisely one instance declaration in scope for
that class and type. The current definition of Haskell achieves this by
insisting that there is only one such declaration on the whole program,
although earlier versions achieved it by more complex, and less
convenient rules, that had the advantage that they could be checked
locally, at module compile time. One could envisage other means, such as
defining, by some new syntactic feature, which of the possible (named)
instance declarations was to be used in a particular scope.

Having different instance definitions in scope at different places can
cause problems, as code would just use whichever was in scope where it
was invoked. This is like the dynamic scope rules of early Lisp, which
is not considered good design. An alternative is to treat instances as
part of the static environment at declaration time, and include them in
closures when functions are passed around or exported. This ensures that
a named function always uses the same instance code for the same type,
but still has problems. In a language with referential transparency, one
should be able to replace a function call by its body code, with
substitution of arguments, (I'm assuming top-level functions with no
global variables). Instances behave like global variables, so calling an
imported function which brings its instances with it will in general
give different results from substitution of body code for the function,
and getting whichever instance is in scope at the point of call.
Personally, I don't like this prospect.

--brian




Re: let/where

2001-09-19 Thread Brian Boutel



Norman Ramsey wrote:

> Was it not considered to forbid such ambiguous expressions, requiring
> explicit parentheses to resolve the ambiguity?
> 

Yes. At one time some expressions using infix operators at the same
precedence had to be parenthesised if there was no "natural"
associativity familiar from mathematics. 
From memory I think a `mod` b * c is an example. This caused some
complaints and was quietly dropped.

For let/where, my recollection is that the issue was raised and the
solution agreed on the same day at one of the Glasgow meetings in '88.
After that it was outside the domain of potentially ambiguous expression
constructs and was not considered further.

--brian




Re: let/where

2001-09-19 Thread Brian Boutel

John Hughes wrote:
> 
> That's the reason Haskell has two binding constructions.
> 

It's a reason why it was a good decision to have both constructs.

There was some debate about which to have - it's a matter of personal
preference and style. The problem is that if you have both let and where
in the expression syntax, then

let a in b where c

is ambiguous unless you make an arbitrary decision about the precedence,
and that kind of arbitrariness is an opportunity for error - the
programmer mis-remembers the rule and gets a syntactically valid program
that does not do what was intended.

I always thought that the resolution, allowing both but making where
part of the declaration syntax, thus both giving a justification for
the parsing order and creating a means to write local definitions that
span guarded equations, was a very fine decision.
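The point about spanning guarded equations can be illustrated with a small example of my own: a where clause scopes over all the guards of an equation, which no expression-level let can do.

```haskell
classify :: Int -> String
classify n
  | n > limit = "big"    -- both guards see the same binding
  | otherwise = "small"
  where
    limit = 10           -- scopes over every guard of this equation
```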

--brian




Re: "Lambda Dance", Haskell polemic,...

2001-04-02 Thread Brian Boutel

Jerzy Karczmarczuk wrote:

> 2. You neglect, and I suspect that you do it on purpose, that the main
>driving force behind the evolution of Haskell is *RESEARCH*. An issue
>absent from many "popular" languages which are meant to be
> immediately
>exploitable with very flat learning curve. 

This is not what is said in the Preface to the Haskell Report.

"It was decided that a committee should be formed to design such a
language, providing faster communication of new ideas, a stable
foundation for real applications development, and a vehicle through
which others would be encouraged to use functional languages."

And the first goal:

"1.It should be suitable for teaching, research, and applications,
including building large systems."

I think it is fair to say that Haskell has not been as successful in
achieving its goals as we would have liked. The points made about
libraries are good ones. The problem seems to be the lack of
well-coordinated, well-funded development resources - and this is a
problem with most open-source, volunteer-staffed efforts. Some of these
are nevertheless very successful, because, despite their weaknesses,
they do the job better than the proprietary competition. Why then has
Haskell not done as well as sendmail or apache? Perhaps because the
battleground is different. To get people to adopt Haskell (or any other
new language) is an issue of availability of people-skills, and
subjective quasi-religious arguments about the merits of one language
against another, not about which of two products does a well-defined
task better. To win that battle you need massive resources and apparent
commitment, as was the case with Sun and Java. Why did Sun do that? Did
it have anything to do with the real merits of Java?

--brian




Re: Linearity Requirement for Patterns

2001-03-19 Thread brian boutel



Michael Hanus wrote:
> 
> Brian Boutel wrote:
> > In any case, the same effect can be obtained using guards, where the
> > extra power of expression syntax solves the problem. Instead of
> > f x x = e1
> > f (illegal pattern) = e2
> > ...
> >
> > you can write the single equation
> >
> > f x y | x == y = e1
> >       | x /= y = e2
> 
> It depends on the meaning of pattern matching wrt. non-linear
> patterns. The standard notion (e.g., term rewriting) matches
> a pattern by finding a substitution for the pattern variables
> so that the instantiated pattern is identical to the function
> call. In this case, it can also match partially defined
> functions in contrast to your transformation. For instance,
> consider the definitions
> 
>   g z = 1 + g z
>   f x x = 0
> 
> Then a call like (f (g 0) (g 0)) is reducible to 0
> (wrt. standard notion of pattern matching using the instantiation
> {x |-> (g 0)}) whereas it is undefined if we transform this definition to
> 
>   f x y | x==y = 0
> 
> (which requires the evaluation of both arguments).
> Thus, I would say that non-linear patterns gives you
> extra power.
> 

I don't think so.

Haskell is not a term rewriting system, and lacks the notion of
syntactic identity of expressions that would be required for your
scheme. If non-linear patterns were to be added, I do not think this
would change, and there would be no increase in power.

Moreover, sometimes your scheme would be less powerful. Consider
f x x = 0
f x y = f x (y+1)

and call f (factorial 5) (15 * 8)

This will be undefined unless you evaluate both arguments before
matching - which you say above that you wish to avoid. And if you do not
evaluate both, you lose referential transparency, for if I substitute
the argument values, and write f 120 120, you would then match the first
case and get 0 instead of bottom.
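The guard translation Brian recommends can be written out directly (non-linear patterns are not legal Haskell, so the repeated-variable clause becomes an equality test; the Int type and the name f are illustrative). Here (==) evaluates both arguments, so the call terminates and substituting the argument values does not change the result.

```haskell
f :: Int -> Int -> Int
f x y | x == y    = 0            -- the guard replaces the pattern f x x
      | otherwise = f x (y + 1)  -- count up until the arguments agree
```

For example, f (product [1..5]) (15 * 8) evaluates both sides to 120 and returns 0, exactly as f 120 120 does.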

--brian




Re: Linearity Requirement for Patterns

2001-03-17 Thread Brian Boutel

Dylan Thurston wrote:
> 
>
> 
> Only numeric literals actually use the '==' operator, correct?  
> (As opposed to built-in equality for data constructors and characters.)
> 

I should have been more specific. Not that it affects the point being
made, and I didn't want to lose that in the detail.

Pattern matching for numeric literals uses whatever (==) has been
defined for the type, except, of course, for the literals in n+k
patterns, where (>=) is used.

The Character type is treated as a set of data constructors. Data
constructors match if they are the same constructor, and fail to match
otherwise. This is not the same as equality, where you are free to
define any result when comparing values, but it is the same as the
equality that would be derived for a type that consisted of a set of
nullary constructors. 

String literals are lists of characters. Consequently, string matching
will use constructor matching for the List constructors and for the
characters.

So, yes, (==) is used when matching most numeric literals, and not used
for other literals.
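The desugaring being described can be made concrete (function names are mine): matching a numeric literal is an equality test, so these two definitions behave identically.

```haskell
g :: Int -> String
g 0 = "zero"          -- literal pattern: an implicit (==) test
g _ = "other"

g' :: Int -> String
g' n | n == fromInteger 0 = "zero"   -- the explicit form of the same test
     | otherwise          = "other"
```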

--brian




Re: Linearity Requirement for Patterns

2001-03-16 Thread Brian Boutel

"Samuel E. Moelius III" wrote:
> 
> What is the reasoning for the requirement that patterns be linear?  Am I
> correct to say that if non-linear patterns were allowed, it would require
> equality to be defined for all data types?  Is this the reason, or are there
> others?
>

If that were the reason, then to be consistent, there would be no
literals in patterns, as these are tested using equality.

Here is a reason. Some people believe that the equations that form a
function definition should be disjoint. A good reason for this view is
that fold/unfold transformations can be then performed without reference
to the other cases in the definition - if it matches, it is correct to
use it.  Although Haskell does not require disjoint cases, and defines
the semantics in terms of a top-to-bottom testing order, the rejection
of non-linear patterns was largely because it is not generally possible
to specify, as a pattern, the complement of the set of values that match
a non-linear pattern, and so not possible to write a set of disjoint
case equations if non-linear patterns occur.

In any case, the same effect can be obtained using guards, where the
extra power of expression syntax solves the problem. Instead of 
f x x = e1
f (illegal pattern) = e2
...

you can write the single equation

f x y | x == y = e1
      | x /= y = e2


--brian




Re: A sample revised prelude for numeric classes

2001-02-11 Thread Brian Boutel

Dylan Thurston wrote:
> 
> I've started writing up a more concrete proposal for what I'd like the
> Prelude to look like in terms of numeric classes.  Please find it
> attached below.  It's still a draft and rather incomplete, but please
> let me know any comments, questions, or suggestions.
> 
>

This is a good basis for discussion, and it helps to see something
concrete. 

Here are a few comments:

> Thus these laws should be interpreted as guidelines rather than
> absolute rules.  In particular, the compiler is not allowed to use
> them.  Unless stated otherwise, default definitions should also be
> taken as laws.

Including laws was discussed very early in the development of the
language, but was rejected. IIRC Miranda had them. The argument against
laws was that their presence might mislead users into the assumption
that they did hold, yet if they were not enforcable then they might not
hold and that could have serious consequences. Also, some laws do not
hold in domains with bottom, e.g. a + (negate a) === 0 is only true if a
is not bottom. 


> class (Additive a) => Num a where
> (*) :: a -> a -> a
> one :: a
> fromInteger :: Integer -> a
>
>   -- Minimal definition: (*), one
> fromInteger 0 = zero
> fromInteger n | n < 0 = negate (fromInteger (-n))
> fromInteger n | n > 0 = reduceRepeated (+) one n

This definition requires both Eq and Ord!!!



As does this one:

> class (Num a, Additive b) => Powerful a b where
> (^) :: a -> b -> a
> instance (Num a) => Powerful a (Positive Integer) where
> a ^ 0 = one
> a ^ n = reduceRepeated (*) a n
> instance (Fractional a) => Powerful a Integer where
> a ^ n | n < 0 = recip (a ^ (negate n))
> a ^ n = a ^ (positive n)


and several others further down. 


> (4) In some cases, the hierarchy is not finely-grained enough:
> operations that are often defined independently are lumped
> together.  For instance, in a financial application one might want
> a type "Dollar", or in a graphics application one might want a
> type "Vector".  It is reasonable to add two Vectors or Dollars,
> but not, in general, reasonable to multiply them.  But the
> programmer is currently forced to define a method for (*) when she
> defines a method for (+).

Why do you stop at allowing addition on Dollars and not include
multiplication by a scalar? Division is also readily defined on Dollar
values, with a scalar result, but this, too, is not available in the
proposal. 

Having Units as types, with the idea of preventing adding Apples to
Oranges, or Dollars to Roubles, is a venerable idea, but is not in
widespread use in actual programming languages. Why not?

Vectors, too, can be multiplied, producing both scalar- and
vector-products.
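The operations Brian notes, which the proposal's overloaded symbols cannot express, are easy to write as ordinary functions (the Dollar type and the names are my own sketch): Dollars add, scale by a number, and divide to give a scalar.

```haskell
newtype Dollar = Dollar Double deriving (Eq, Show)

addD :: Dollar -> Dollar -> Dollar
addD (Dollar a) (Dollar b) = Dollar (a + b)

scaleD :: Double -> Dollar -> Dollar      -- scalar times a Dollar
scaleD k (Dollar a) = Dollar (k * a)

ratioD :: Dollar -> Dollar -> Double      -- Dollar / Dollar gives a scalar
ratioD (Dollar a) (Dollar b) = a / b
```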

It seems that you are content with going as far as the proposal permits,
though you cannot define, even within the revised Class system, all the
common and useful operations on these types. This is the same situation
as with Haskell as it stands. The question is whether the (IMHO)
marginal increase in flexibility is worth the cost.

This is not an argument for not separating Additive from Num, but it
does weaken the argument for doing it.

--brian




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-11 Thread Brian Boutel

Marcin 'Qrczak' Kowalczyk wrote:


> I'm against removing Eq from the numeric hierarchy, against making Num
> instances for functions, but I would probably remove Show. I haven't
> seen a sensible proposal of a replacement of the whole hierarchy.
> 

Then we probably are in agreement. 

--brian




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-10 Thread Brian Boutel

Fergus Henderson wrote:
> 
> On 09-Feb-2001, Brian Boutel <[EMAIL PROTECTED]> wrote:
> > Patrik Jansson wrote:
> > >
> > > The fact that equality can be trivially defined as bottom does not imply
> > > that it should be a superclass of Num, it only explains that there is an
> > > ugly way of working around the problem.
> ...
> >
> > There is nothing trivial or ugly about a definition that reflects
> > reality and bottoms only where equality is undefined.
> 
> I disagree.  Haskell is a statically typed language, and having errors
> which could easily be detected at compile instead being deferred to
> run time is ugly in a statically typed language.

There may be some misunderstanding here. If you are talking about type
for which equality is always undefined, then I agree with you, but that
is not what I was talking about. I was thinking about types where
equality is defined for some pairs of argument values and undefined for
others - I think the original example was some kind of arbitrary
precision reals. My remark about "a definition that reflects reality and
bottoms only where equality is undefined" was referring to this
situation.

Returning to the basic issue, I understood the desire to remove Eq as a
superclass of Num was so that people were not required to implement
equality if they did not need it, not that there were significant
numbers of useful numeric types for which equality was not meaningful. 

Whichever of these was meant, I feel strongly that accommodating this and
other similar changes by weakening the constraints on what Num in
Haskell implies, is going too far. It devalues the Class structure in
Haskell to the point where its purpose, to control ad hoc polymorphism
in a way that ensures that operators are overloaded only on closely
related types, is lost, and one might as well abandon Classes and allow
arbitrary overloading.

--brian








Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-10 Thread Brian Boutel

Marcin 'Qrczak' Kowalczyk wrote:
> 
> Sat, 10 Feb 2001 14:09:59 +1300, Brian Boutel <[EMAIL PROTECTED]> writes:
> 
> > Can you demonstrate a revised hierarchy without Eq? What would happen to
> > Ord, and the numeric classes that require Eq because they need signum?
> 
> signum doesn't require Eq. You can use signum without having Eq, and
> you can sometimes define signum without having Eq (e.g. on functions).
> Sometimes you do require (==) to define signum, but it has nothing to
> do with superclasses.
> 

Let me restate my question more carefully:

Can you demonstrate a revised hierarchy without Eq? What would happen to
Ord and the numeric classes with default class method definitions that
use (==) either explicitly or in pattern matching against numeric
literals? Both Integral and RealFrac do this to compare or test the
value of signum.

In an instance declaration, if a method requires operations of another
class which is not a superclass of the class being instanced, it is
sufficient to place the requirement in the context, but for default
class method definitions, all class methods used must belong to the
class being defined or its superclasses.
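The rule can be illustrated with a hypothetical class (Num' and its methods are my own names, not the real Num): a default method may only use methods of its own class and its superclasses, so the default for isZero below is legal only because Eq is a superclass.

```haskell
class Eq a => Num' a where
  zero'   :: a
  signum' :: a -> a
  isZero  :: a -> Bool
  isZero x = signum' x == zero'   -- (==) is available via the Eq superclass

instance Num' Int where
  zero'   = 0
  signum' = signum                -- default isZero is inherited
```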


--brian




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-09 Thread Brian Boutel

Ketil Malde wrote:
> 
> Brian Boutel <[EMAIL PROTECTED]> writes:
> 
> > - Having a class hierarchy at all (or making any design decision)
> > implies compromise.
> 
> I think the argument is that we should move Eq and Show *out* of the
> Num hierarchy.  Less hierarchy - less compromise.


Can you demonstrate a revised hierarchy without Eq? What would happen to
Ord, and the numeric classes that require Eq because they need signum? 


> 
> > - The current hierarchy (and its predecessors) represent a reasonable
> > compromise that meets most needs.
> 
> Obviously a lot of people seem to think we could find compromises that
> are more reasonable.

I would put this differently. "A particular group of people want to
change the language to make it more convenient for their special
interests."

> 
> > - Users have a choice: either work within the class hierarchy and
> > accept the pain of having to define things you don't need in order
> > to get the things that come for free,
> 
> Isn't it a good idea to reduce the amount of pain?

Not always.

> 
> > or omit the instance declarations and work outside the hierarchy. In
> > that case you will not be able to use the overloaded operator
> > symbols of the class, but that is just a matter of concrete syntax,
> > and ultimately unimportant.
> 
> I don't think syntax is unimportant.
>

I wrote that *concrete* syntax is ultimately unimportant, not *syntax*.
There is a big difference. In particular, *lexical syntax*, the choice
of marks on paper used to represent a language element, is not
important, although it does give rise to arguments, as do all matters
of taste and style.

There are not enough usable operator symbols to go round, so they get
overloaded. Mathematicians have overloaded common symbols like (+) and
(*) for concepts that may have some affinity with addition and
multiplication in arithmetic, but which are actually quite different.
That's fine, because, in context, expert human readers can distinguish
what is meant. From a software engineering point of view, though, such
free overloading is dangerous, because readers may assume, incorrectly,
that an operator has properties that are typically associated with
operators using that symbol. This may not matter in a private world
where the program writer is the only person who will see and use the
code, and no mission-critical decisions depend on the results, but it
should not be the fate of Haskell to be confined to such use.

Haskell could have allowed free ad hoc overloading, but one of the first
major decisions made by the Haskell Committee in 1988 was not to do so.
Instead, it adopted John Hughes' proposal to introduce type classes to
control overloading. A symbol could only be overloaded if the whole of a
group of related symbols (the Class) was overloaded with it, and the
class hierarchy provided an even stronger constraint by restricting
overloading of the class operators to cases where other classes,
intended to be closely related, were also overloaded. This tended to
ensure that the new type at which the classes were overloaded had strong
resemblances to the standard types. Simplifying the hierarchy weakens
these constraints and so should be approached with extreme caution. Of
course, the details of the classes and the hierarchy have changed over
the years - there is, always has been and always will be pressure to
make changes to meet particular needs - but the essence is still there,
and the essence is of a general-purpose language, not a domain-specific
language for some branches of mathematics.

A consequence of this is that certain uses of overloaded symbols are
inconvenient, because they are too far from the mainstream intended
meaning. If you have such a use, and you want to write in Haskell, you
have to choose other lexical symbols to represent your operators. You
make your choice.

--brian




Re: Revamping the numeric classes

2001-02-08 Thread Brian Boutel

William Lee Irwin III wrote:
> 
> 
> The Standard Prelude serves its purpose well and accommodates the
> largest cross-section of users. Perhaps a Geek Prelude could
> accommodate the few of us who do need these sorts of schenanigans.
> 
>

Amen.

--brian




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-08 Thread Brian Boutel

Patrik Jansson wrote:
>
> On Wed, 7 Feb 2001, Brian Boutel wrote:
> > * Haskell equality is a defined operation, not a primitive, and may not
> > be decidable. It does not always define equivalence classes, because
> > a==a may be Bottom, so what's the problem? It would be a problem,
> > though, to have to explain to a beginner why they can't print the result
> > of a computation.
> 
> The fact that equality can be trivially defined as bottom does not imply
> that it should be a superclass of Num, it only explains that there is an
> ugly way of working around the problem. Neither is the argument that the
> beginner should be able to print the result of a computation a good
> argument for having Show as a superclass.
> 

There is nothing trivial or ugly about a definition that reflects
reality and bottoms only where equality is undefined.

Of course, if you do not need to apply equality to your "numeric" type
then having to define it is a waste of time, but consider this:

- Having a class hierarchy at all (or making any design decision)
implies compromise.
- The current hierarchy (and its predecessors) represent a reasonable
compromise that meets most needs.
- Users have a choice: either work within the class hierarchy and accept
the pain of having to define things you don't need in order to get
the things that come for free, or omit the instance declarations and
work outside the hierarchy. In that case you will not be able to use the
overloaded operator symbols of the class, but that is just a matter of
concrete syntax, and ultimately unimportant.

--brian




Re: Revamping the numeric classes

2001-02-07 Thread Brian Boutel

Dylan Thurston wrote:
> 
> 
> Why doesn't your argument show that all types should by instances of
> Eq and Show?  Why are numeric types special?
> 

Why do you think it does? I certainly don't think so.

The point about Eq was that an objection was raised to Num being a
subclass of Eq because, for some numeric types, equality is undecidable.
I suggested that Haskell equality could be undecidable, so (==) on those
types could reflect the real situation. One would expect that it could
do so in a natural way, producing a value of True or False when
possible, and diverging otherwise. Thus no convincing argument has been
given for removing Eq as a superclass of Num.
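A sketch of such an instance, "producing a value of True or False when possible, and diverging otherwise": comparing two infinite digit streams answers False as soon as they differ, and diverges when they are equal, where no finite computation can decide. The CReal type and its digit-stream representation are purely illustrative, not a serious exact-real implementation.

```haskell
newtype CReal = CReal [Int]   -- hypothetical infinite digit stream

instance Eq CReal where
  CReal xs == CReal ys = go xs ys
    where
      go (a:as) (b:bs)
        | a /= b    = False    -- a difference decides the comparison
        | otherwise = go as bs -- agreement so far: keep looking (may diverge)
      go _ _ = error "CReal: streams are assumed infinite"
```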


In general, if you fine-grain the class hierarchy too much, the picture
gets very complicated. If you need to define separate subclasses of Num
for those types which have both Eq and Show, those that only have Eq,
those that only have Show and those that have neither, not to mention
those that have Ord as well as Eq and those that don't, and then for all
the other distinctions that will be suggested, my guess is that Haskell
will become the preserve of a few mathematicians and everyone else will
give up in disgust. Then the likely result is that no-one will be
interested in maintaining and developing Haskell and it will die.



--brian
