Re: Revamping the numeric classes

2001-02-11 Thread Tom Pledger

Marcin 'Qrczak' Kowalczyk writes:
 | Fri, 9 Feb 2001 17:29:09 +1300, Tom Pledger [EMAIL PROTECTED] writes:
 | 
 |  (x + y) + z
 |  
 |  we know from the explicit type signature (in your question that I was
 |  responding to) that x,y::Int and z::Double.  Type inference does not
 |  need to treat x or y up, because it can take the first (+) to be Int
 |  addition.  However, it must treat the result (x + y) up to the most
 |  specific supertype which can be added to a Double.
 | 
 | Approach it differently. z is Double, (x+y) is added to it, so
 | (x+y) must have type Double.

That's a restriction I'd like to avoid.  Instead: ...so the most
specific common supertype of Double and (x+y)'s type must support
addition.

 | This means that x and y must have type Double.  This is OK, because
 | they are Ints now, which can be converted to Double.
 | 
 | Why is your approach better than mine?

It used a definition of (+) which was a closer fit for the types of x
and y.

 :
 |  h :: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool
 | 
 | This type is ambiguous: the type variable b is needed in the
 | context but not present in the type itself, so it can never be
 | determined from the usage of h.

Yes, I rashly glossed over the importance of having well-defined most
specific common supertype (MSCS) and least specific common subtype
(LSCS) operators in a subtype lattice.  Here's a more respectable
version:

h :: Eq (MSCS a Int) => (Int -> a) -> Bool
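
For concreteness, here is one way an MSCS operator and a Subtype class
could be rendered in present-day GHC, with a toy two-point lattice.
This is only a sketch; none of the names or instances below come from
the proposal itself.

    {-# LANGUAGE TypeFamilies, MultiParamTypeClasses, FlexibleInstances,
                 FlexibleContexts, ScopedTypeVariables #-}

    -- Hypothetical subtyping class: upcast treats a value up.
    class Subtype sub super where
      upcast :: sub -> super

    instance Subtype Int Double where
      upcast = fromIntegral

    instance Subtype Double Double where
      upcast = id

    -- MSCS a b: the most specific common supertype of a and b
    -- (only the entry needed for the example is given).
    type family MSCS a b
    type instance MSCS Double Int = Double

    -- The constraint above, spelled out: apply the argument to 1 and
    -- compare with 2, treating both results up to the common supertype.
    h :: forall a. (Subtype a (MSCS a Int), Subtype Int (MSCS a Int),
                    Eq (MSCS a Int))
      => (Int -> a) -> Bool
    h f = (upcast (f 1) :: MSCS a Int) == upcast (2 :: Int)

    -- h accepts an Int -> Double argument, as intended:
    example :: Bool
    example = h (fromIntegral :: Int -> Double)

With only these instances the lattice is tiny, but it is enough to see
the constraint discharge at a concrete type.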

 |  That can be inferred by following the structure of the term.
 |  Function terms do seem prone to an accumulation of deferred
 |  subtype constraints.
 | 
 | When function application generates a constraint, the language gets
 | ambiguous as hell. Applications are found everywhere through the
 | program! Very often the type of the argument or result of an
 | internal application does not appear in the type of the whole
 | function being defined, which makes it ambiguous.
 | 
 | Not to mention that there would be *LOTS* of these constraints.
 | Application is used everywhere. It's important to have its typing
 | rule simple and cheap. Generating a constraint for every
 | application is not an option.

These constraints tend to get discharged whenever the result of an
application is not another function.  The hellish ambiguities can be
substantially tamed by insisting on a properly constructed subtype
lattice.

Anyway, since neither of us is about to have a change of mind, and
nobody else is showing an interest in this branch of the discussion,
it appears that the most constructive thing for me to do is return to
try-to-keep-quiet-about-subtyping-until-I've-done-it-in-THIH mode.

Regards,
Tom




Re: Revamping the numeric classes

2001-02-09 Thread Fergus Henderson

On 08-Feb-2001, Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] wrote:
 
 I don't like the idea of treating the case "no explicit definitions
 were given because all have default definitions which are OK"
 differently than "some explicit definitions were given".

I don't really like it that much either, but...

 When there is a superclass, it must have an instance defined, so if
 we permit such thing at all, I would let it implicitly define all
 superclass instances not defined explicitly, or something like that.
 At least when all methods have default definitions. Yes, I know that
 they can be mutually recursive and thus all will be bottoms...

... that is the problem I was trying to solve.

 So maybe there should be a way to specify that default definitions
 are cyclic and some of them must be defined?

I agree 100%.

 It is usually written in comments anyway, because it is not immediately
 visible in the definitions.

Yes.  Much better to make it part of the language, so that the compiler
can check it.
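
For what it's worth, this is essentially the declaration GHC later
grew as the MINIMAL pragma. A sketch with a made-up class (operators
renamed so they don't clash with the Prelude):

    -- Cyclic defaults, plus a machine-checked statement that at least
    -- one of the two methods must be given explicitly.
    class MyEq a where
      (===), (/==) :: a -> a -> Bool
      x === y = not (x /== y)
      x /== y = not (x === y)
      {-# MINIMAL (===) | (/==) #-}

    data Colour = Red | Green

    instance MyEq Colour where
      Red   === Red   = True
      Green === Green = True
      _     === _     = False
      -- (/==) comes from the default; omitting (===) as well would now
      -- draw a compiler warning instead of producing silent bottoms.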

 (now any method definition
 can be omitted even if it has no default!),

Yeah, that one really sucks.

-- 
Fergus Henderson [EMAIL PROTECTED]  |  "I have always known that the pursuit
|  of excellence is a lethal habit"
WWW: http://www.cs.mu.oz.au/~fjh  | -- the last words of T. S. Garp.




Re: Revamping the numeric classes

2001-02-08 Thread Marcin 'Qrczak' Kowalczyk

On Thu, 8 Feb 2001, Tom Pledger wrote:

 nice answer: give the numeric literal 10 the range type 10..10, which
 is defined implicitly and is a subtype of both -128..127 (Int8) and
 0..255 (Word8).

What are the inferred types for
f = map (\x -> x+10)
g l = l ++ f l
? I hope I can use them as [Int] -> [Int].
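
For reference, standard Haskell (without any subtyping) already gives
these the hoped-for generality, once signatures are added to sidestep
the monomorphism restriction:

    f :: Num a => [a] -> [a]
    f = map (\x -> x + 10)

    g :: Num a => [a] -> [a]
    g l = l ++ f l

    -- both specialise to [Int] -> [Int]:
    ints :: [Int]
    ints = g [1, 2, 3]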

 x + y + z                  -- as above
 
 --> (x + y) + z            -- left-associativity of (+)
 
 --> realToFrac (x + y) + z -- injection (or treating up) done
                            -- conservatively, i.e. only where needed

What does "where needed" mean? Type inference does not proceed
inside-out. What about this?
h f = f (1::Int) == (2::Int)
Can I apply f to a function of type Int->Double? If no, then it's a
pity, because I could inline it (the comparison would be done on Doubles).
If yes, then what is the inferred type for h? Note that Int->Double is not
a subtype of Int->Int, so if h :: (Int->Int)->Bool, then I can't imagine
how h can be applied to something :: Int->Double.

-- 
Marcin 'Qrczak' Kowalczyk





Re: Revamping the numeric classes

2001-02-08 Thread Jerzy Karczmarczuk

First, a general remark which has nothing to do with Num.

PLEASE WATCH YOUR DESTINATION ADDRESSES
People regularly send their postings to haskell-cafe with
several private recipient addresses, which is a bit annoying
when you click "reply all"...


Brian Boutel after Dylan Thurston:

  Why doesn't your argument show that all types should by instances of
  Eq and Show?  Why are numeric types special?
 
 Why do you think it does? I certainly don't think so.
 
 The point about Eq was that an objection was raised to Num being a
 subclass of Eq because, for some numeric types, equality is undecidable.
 I suggested that Haskell equality could be undecidable, so (==) on those
 types could reflect the real situation. One would expect that it could
 do so in a natural way, producing a value of True or False when
 possible, and diverging otherwise. Thus no convincing argument has been
 given for removing Eq as a superclass of Num.
 
 In general, if you fine-grain the class hierarchy too much, the picture
 gets very complicated. If you need to define separate subclasses of Num
 for those types which have both Eq and Show, those that only have Eq,
 those that only have Show and those that have neither, not to mention
 those that have Ord as well as Eq and those that don't, and then for all
 the other distinctions that will be suggested, my guess is that Haskell
 will become the preserve of a few mathematicians and everyone else will
 give up in disgust. Then the likely result is that no-one will be
 interested in maintaining and developing Haskell and it will die.

Strange, but from the objectives mentioned in the last part of this 
posting (even if a little demagogic [insert smiley here if you wish])
I draw opposite conclusions.

The fact that the number of cases is quite large suggests that Eq, Show
and arithmetic should be treated as *orthogonal* issues, and treated
independently. 

If somebody needs Show for his favourite data type, he is free to
arrange this himself. I repeat what I have already said: I work with
functional objects as mathematical entities. I want to add parametric
surfaces, to rotate trajectories. Also, to handle gracefully and
legibly, for those simpletons who call themselves 'theoretical
physicists', the arithmetic of un-truncated lazy streams representing
power series, or infinite-dimensional differential algebra elements.
Perhaps those are not convincing arguments for Brian Boutel. They are
certainly so for me.

Num, with this forced marriage of (+) and (*), violates the principle
of orthogonality. The Eq and Show constraints make it worse.

===

And, last, but very high on my check-list:

The implicit coercion of numeric constants: 3.14 -=- (fromDouble 3.14)
etc. is sick. (Or was; I still haven't installed the latest version of
GHC, and with Hugs it is bad.) The decision is taken by the compiler
internally, and it doesn't care at all about the fact that in my
prelude I have eliminated the Num class and redefined fromDouble,
fromInt, etc.

+

Dylan Thurston terminates his previous posting about Num with:

 Footnotes:
 [1]  Except for the lack of abs and signum, which should be in some
 other class.  I have to think about their semantics before I can say
 where they belong.

Now, signum and abs seem to be quite distinct beasts. Signum seems to
require Ord (and a generic zero...).

Abs from the mathematical point of view constitutes a *norm*. Now,
frankly, I haven't the slightest idea how to cast this concept into the
Haskell class hierarchy in a sufficiently general way...

I'll tell you anyway that if you try to "sanitize" the numeric
classes, if you separate additive structures and the multiplication,
if you finally define abstract Vectors over some field of scalars,
and if you demand the existence of a generic normalization for your
vectors, then *most probably* you will need multiparametric classes
with dependencies. 
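
For illustration only, this is roughly the shape such multiparametric
classes with dependencies take; every name below is made up, and the
pair instance merely stands in for a real vector type:

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                 FlexibleInstances #-}

    -- Vectors over a field of scalars; the dependency says the scalar
    -- type is determined by the vector type.
    class VectorSpace v s | v -> s where
      zeroV :: v
      (^+^) :: v -> v -> v
      (*^)  :: s -> v -> v

    -- A generic normalization needs a norm into the scalar field.
    class VectorSpace v s => Normed v s | v -> s where
      norm :: v -> s

    normalize :: (Normed v s, Fractional s) => v -> v
    normalize v = (1 / norm v) *^ v

    instance VectorSpace (Double, Double) Double where
      zeroV = (0, 0)
      (a, b) ^+^ (c, d) = (a + c, b + d)
      s *^ (a, b) = (s * a, s * b)

    instance Normed (Double, Double) Double where
      norm (a, b) = sqrt (a * a + b * b)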


Jerzy Karczmarczuk
Caen, France




Re: Revamping the numeric classes

2001-02-08 Thread Dylan Thurston

On Thu, Feb 08, 2001 at 11:24:49AM +, Jerzy Karczmarczuk wrote:
 First, a general remark which has nothing to do with Num.
 
 PLEASE WATCH YOUR DESTINATION ADDRESSES
 People regularly send their postings to haskell-cafe with
 several private recipient addresses, which is a bit annoying
 when you click "reply all"...

Yes, apologies.  The way the lists do the headers makes it very easy to
reply to individuals, and hard to reply to the list.

 And, last, but very high on my check-list:
 
 The implicit coercion of numeric constants: 3.14 -=- (fromDouble 3.14)
 etc. is sick. (Or was; I still haven't installed the latest version of
 GHC, and with Hugs it is bad.) The decision is taken by the compiler
 internally, and it doesn't care at all about the fact that in my
 prelude I have eliminated the Num class and redefined fromDouble,
 fromInt, etc.

Can't you just put "default ()" at the top of each module?

I suppose you still have the problem that a numeric literal "5" means
"Prelude.fromInteger 5".  Can't you define your types to be instances
of Prelude.Num, with no operations defined except Prelude.fromInteger?
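
A sketch of what that combination looks like (the module and type
names are made up):

    module MyNumeric where

    default ()   -- switch off numeric defaulting in this module

    newtype MyNumber = MyNumber Integer
      deriving (Eq, Show)

    -- The only operation defined is fromInteger, so that a literal
    -- like 5 can still be used at type MyNumber.
    instance Num MyNumber where
      fromInteger = MyNumber
      -- the remaining methods are deliberately left undefined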

 Dylan Thurston terminates his previous posting about Num with:
 
  Footnotes:
  [1]  Except for the lack of abs and signum, which should be in some
  other class.  I have to think about their semantics before I can say
  where they belong.
 
 Now, signum and abs seem to be quite distinct beasts. Signum seems to
 require Ord (and a generic zero...).
 
 Abs from the mathematical point of view constitutes a *norm*. Now,
 frankly, I haven't the slightest idea how to cast this concept into the
 Haskell class hierarchy in a sufficiently general way...

This was one thing I liked with the Haskell hierarchy: the observation
that "signum" of real numbers is very much like "argument" of complex
numbers.  abs and signum in Haskell satisfy an implicit law:
   abs x * signum x = x  [1]
So signum can be defined anywhere you can define abs (except that it's
not a continuous function, so is not terribly well-defined).  A
default definition for signum x might read
   signum x = let a = abs x in if (a == 0) then 0 else x / abs x
(Possibly signum is the wrong name.  What is the standard name for
this operation for, e.g., matrices?)  [Er, on second thoughts, it's
not as well-defined as I thought.  Abs x needs to be in a field for
the definition above to work.]
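
Written out as a standalone function, with the zero guard made
explicit (a sketch, not Prelude code):

    import Data.Complex (Complex(..))

    -- "Unit vector in the direction of x": division is needed, hence
    -- Fractional; Eq is needed for the zero test.
    signumF :: (Eq a, Fractional a) => a -> a
    signumF x = let a = abs x in if a == 0 then 0 else x / a

    -- For complex numbers this is exactly the "argument" view:
    demo :: Complex Double
    demo = signumF (3 :+ 4)   -- 0.6 :+ 0.8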

 I'll tell you anyway that if you try to "sanitize" the numeric
 classes, if you separate additive structures and the multiplication,
 if you finally define abstract Vectors over some field of scalars,
 and if you demand the existence of a generic normalization for your
 vectors, then *most probably* you will need multiparametric classes
 with dependencies. 

Multiparametric classes, certainly (for Vectors, at least).
Fortunately, they will be in Haskell 2 with high probability.  I'm not
convinced about dependencies yet.

 Jerzy Karczmarczuk
 Caen, France

Best,
Dylan Thurston

Footnotes: 
[1]  I'm not sure what I mean by "=" there, since I do not believe
these should be forced to be instances of Eq.  For clearer cases,
consider the various Monad laws, e.g.,
   join . join = join . map join
(Hope I got that right.)  What does "=" mean there?  Some sort of
denotational equality, I suppose.






Re: Revamping the numeric classes

2001-02-08 Thread Marcin 'Qrczak' Kowalczyk

Thu, 8 Feb 2001 10:51:58 -0500, Peter Douglass [EMAIL PROTECTED] writes:

 The first part of my question (not contained in your reply) is
 whether it is feasible to disable a developer's access to the
 "unsafe" numerical operations.

import Prelude hiding (quot, rem, (/) {- etc. -})
import YourPrelude -- which defines substitutes

You can "disable" it now. You cannot disable them entirely - anyone can
define present functions in terms of your functions if he really wants.

 Whether or not an individual developer chooses to do so is another
 matter.

Why only quot? There are many other ways to write bottom:
head []
(\(x:xs) -> (x,xs)) []
let x = x in x
log (-1)
asin 2
error "foo"

 If you "know" the value is non-zero before run-time, then that is
 statically determined.

I know but the compiler does not know, and I have no way to convince it.

 It is possible that the developer writes a function which returns a
 nonZeroNumeric value which actually has a value of zero.  However,
 the value of requiring division to have a nonZeroNumeric denominator
 is to catch at compile time the "error" of failing to scrutinize
 (correctly or incorrectly) for zero.

IMHO it would be more painful than useful.

 For most commercial software, the quality of run-time error messages
 is far less important than their absence.

It would not avoid them if the interface does not give a place to
report the error:
average xs = sum xs / case checkZero (length xs) of
                        Just notZero -> notZero
                        Nothing      -> error "This should never happen"
is not any more safe than
average xs = sum xs / length xs

and I can report bad input without trouble now:
average xs = case length xs of
               0 -> Nothing
               l -> Just (sum xs / fromIntegral l)

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-08 Thread Marcin 'Qrczak' Kowalczyk

Thu, 8 Feb 2001 21:41:56 +1100, Fergus Henderson [EMAIL PROTECTED] writes:

 Should this define an instance for `foo T'?
 (I think not.)
 
 How about if the instance declaration is changed to
 
 instance bar T where
   f = 41
   -- no definition for f2
   b = 42
 
 ?
 (In that case, I think it should.)

I don't like the idea of treating the case "no explicit definitions
were given because all have default definitions which are OK"
differently than "some explicit definitions were given".

When there is a superclass, it must have an instance defined, so if
we permit such thing at all, I would let it implicitly define all
superclass instances not defined explicitly, or something like that.
At least when all methods have default definitions. Yes, I know that
they can be mutually recursive and thus all will be bottoms...

So maybe there should be a way to specify that default definitions
are cyclic and some of them must be defined? It is usually written
in comments anyway, because it is not immediately visible in the
definitions. If not formally in the language (now any method definition
can be omitted even if it has no default!), then perhaps the compiler
could detect most cases when methods are defined in terms of one
another and give a warning.

Generally the compiler could warn if the programmer has written bottom
in an unusual way. For example
f x = g some_expression
g x = f some_expression
is almost certainly a programmer error.

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-08 Thread Marcin 'Qrczak' Kowalczyk

Thu, 08 Feb 2001 11:24:49 +, Jerzy Karczmarczuk [EMAIL PROTECTED] writes:

 The implicit coercion of numeric constants: 3.14 -=- (fromDouble 3.14)
 etc. is sick.

What do you propose instead?

(BTW, it's fromRational, to keep arbitrarily large precision.)

 Now, signum and abs seem to be quite distinct beasts. Signum seems
 to require Ord (and a generic zero...).

Signum doesn't require Ord.
signum z = z / abs z
for complex numbers.

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-08 Thread William Lee Irwin III

On Thu, Feb 08, 2001 at 08:30:31PM +, Marcin 'Qrczak' Kowalczyk wrote:
 Signum doesn't require Ord.
 signum z = z / abs z
 for complex numbers.

I'd be careful here.

\begin{code}
signum 0 = 0
signum z = z / abs z
\end{code}

This is, perhaps, neither precise nor general enough.

The signum/abs pair seem to represent direction and magnitude.
According to the line of reasoning in some of the earlier posts in this
flamewar, the following constraints:

(1) z = signum z * abs z where * is appropriately defined
(2) abs $ signum z = 1

should be enforced, if possible, by the type system. This suggests
that for any type having a vector space structure over Fractional
(or whatever the hierarchy you're brewing up uses for rings with
a division partial function on them), the result type of signum
lives in a more restricted universe, perhaps even one with a different
structure (operations defined on it, set of elements) than the argument
type, and it seems more than possible to parametrize it on the argument
type. The abs is in fact a norm, and the signum projects V^n -> V^n / V.
Attempts to define these things on Gaussian integers, p-adic numbers,
polynomial rings, and rational points on elliptic curves will quickly
reveal limitations of the stock class hierarchy.
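
As a sketch of "the result type of signum is parametrized on the
argument type" (purely illustrative names, using associated type
synonyms):

    {-# LANGUAGE TypeFamilies #-}

    -- abs lands in a magnitude type and signum in a direction type,
    -- both chosen per argument type instead of being forced to equal it.
    class Directed a where
      type Magnitude a
      type Direction a
      absD :: a -> Magnitude a
      sgnD :: a -> Direction a

    -- For plain doubles both components collapse back to Double.
    instance Directed Double where
      type Magnitude Double = Double
      type Direction Double = Double
      absD = abs
      sgnD = signum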

Now, whether it's actually desirable to scare newcomers to the language
into math phobia, wetting their pants, and running screaming with
subtleties like this suggests perhaps that one or more "alternative
Preludes" may be desirable to have. There is a standard Prelude, why not
a nonstandard one or two? We have the source. The needs of the geek do
not outweigh the needs of the many. Hence, we can cook up a few Preludes
or so on our own, and certainly if we can tinker enough to spam the list
with counterexamples and suggestions of what we'd like the Prelude to
have, we can compile up a Prelude for ourselves with our "suggested
changes" included and perhaps one day knock together something which can
actually be used and has been tested, no?

The Standard Prelude serves its purpose well and accommodates the
largest cross-section of users. Perhaps a Geek Prelude could
accommodate the few of us who do need these sorts of shenanigans.


Cheers,
Bill
-- 
j0][nD33R:#math Excel/Spreadsheet Q: What is the formula for finding
out the time passed between two dates and or two times in the same day?
MatroiDN:#math excel/spreadsheet? Hmm, this is math? Is there a GTM on
excel or maybe an article in annals about spreadsheets or maybe
there's a link from wolfram to doing your own computer work, eh?
danprime:#math jeeem, haven't you seen "Introduction to Algebraic Excel"?
danprime:#math or "Spreadsheet Space Embeddings in 2-Manifolds"
brouwer:#math i got my phd in spreadsheet theory
brouwer:#math i did my thesis on the spreadsheet conjecture




Re: Revamping the numeric classes

2001-02-08 Thread Tom Pledger

Marcin 'Qrczak' Kowalczyk writes:
 | On Thu, 8 Feb 2001, Tom Pledger wrote:
 | 
 |  nice answer: give the numeric literal 10 the range type 10..10, which
 |  is defined implicitly and is a subtype of both -128..127 (Int8) and
 |  0..255 (Word8).
 | 
 | What are the inferred types for
 | f = map (\x -> x+10)
 | g l = l ++ f l
 | ? I hope I can use them as [Int] -> [Int].

f, g :: (Subtype a b, Subtype 10..10 b, Num b) => [a] -> [b]
Yes, because of the substitution {Int/a, Int/b}.

 |  x + y + z                  -- as above
 | 
 |  --> (x + y) + z            -- left-associativity of (+)
 | 
 |  --> realToFrac (x + y) + z -- injection (or treating up) done
 |                             -- conservatively, i.e. only where needed
 | 
 | What does "where needed" mean? Type inference does not proceed
 | inside-out.

In the expression

(x + y) + z

we know from the explicit type signature (in your question that I was
responding to) that x,y::Int and z::Double.  Type inference does not
need to treat x or y up, because it can take the first (+) to be Int
addition.  However, it must treat the result (x + y) up to the most
specific supertype which can be added to a Double.

 | What about this?
 | h f = f (1::Int) == (2::Int)
 | Can I apply f

h?

 | to a function of type Int->Double?

Yes.

 | If no, then it's a pity, because I could inline it (the comparison
 | would be done on Doubles).  If yes, then what is the inferred type
 | for h? Note that Int->Double is not a subtype of Int->Int, so if h
 | :: (Int->Int)->Bool, then I can't imagine how h can be applied to
 | something :: Int->Double.

There's no explicit type signature for the result of applying f to
(1::Int), so...

h :: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool

That can be inferred by following the structure of the term.  Function
terms do seem prone to an accumulation of deferred subtype
constraints.

Regards,
Tom




Re: Revamping the numeric classes

2001-02-08 Thread Brian Boutel

William Lee Irwin III wrote:
 
 
 The Standard Prelude serves its purpose well and accommodates the
 largest cross-section of users. Perhaps a Geek Prelude could
 accommodate the few of us who do need these sorts of shenanigans.
 


Amen.

--brian




Re: Revamping the numeric classes

2001-02-07 Thread Ch. A. Herrmann

moved to haskell-cafe

Ketil E.g. way back, I wrote a simple differential equation solver.
Ketil Now, the same function *could* have been applied to vector
Ketil functions, except that I'd have to decide on how to implement
Ketil all the "Num" stuff that really didn't fit well.  Ideally, a
Ketil nice class design would infer, or at least allow me to
Ketil specify, the mathematical constraints inherent in an
Ketil algorithm, and let my implementation work with any data
Ketil satisfying those constraints.

the problem is that the --majority, I suppose?-- of mathematicians
tend to overload operators. They use "*" for matrix-matrix
multiplication as well as for matrix-vector multiplication etc.

Therefore, a quick solution that implements groups, monoids, Abelian
groups, rings, Euclidean rings, fields, etc. will not be sufficient.

I don't think that it is acceptable for a language like Haskell
to permit the user to overload predefined operators, like "*".

A cheap solution could be to define a type MathObject and operators like
   (:*:) :: MathObject -> MathObject -> MathObject
Then, the user can implement:

a :*: b = case (a,b) of
            (Matrix x, Matrix y) -> foo
            (Matrix x, Vector y) -> bar
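
A sketch filling that in. Note that an operator beginning with a colon
is reserved for data constructors in Haskell, so the function is named
(.*.) here; the constructors and payload types are made up:

    import Data.List (transpose)

    data MathObject = Matrix [[Double]]
                    | Vector [Double]
      deriving Show

    (.*.) :: MathObject -> MathObject -> MathObject
    Matrix xs .*. Vector v  =
      Vector [ sum (zipWith (*) row v) | row <- xs ]
    Matrix xs .*. Matrix ys =
      Matrix [ [ sum (zipWith (*) row col) | col <- transpose ys ]
             | row <- xs ]
    _         .*. _         =
      error "(.*.): combination not supported"
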
-- 
 Christoph Herrmann
 E-mail:  [EMAIL PROTECTED]
 WWW: http://brahms.fmi.uni-passau.de/cl/staff/herrmann.html




Re: Revamping the numeric classes

2001-02-07 Thread Marcin 'Qrczak' Kowalczyk

07 Feb 2001 11:47:11 +0100, Ketil Malde [EMAIL PROTECTED] writes:

 If it is useful to have a fine granularity of classes, you can
 imagine doing:
 
 class Multiplicative a b c where
 (*) :: a -> b -> c

Then a*b*c is ambiguous no matter what the types of a, b, c and the
result are. Sorry, this does not work. Too general is too bad; it's
impossible to have everything at once.
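
To make the ambiguity concrete (a sketch; the operator is renamed so
it does not clash with the Prelude's (*)):

    {-# LANGUAGE MultiParamTypeClasses #-}

    class Multiplicative a b c where
      (.*) :: a -> b -> c

    -- x .* y .* z parses as (x .* y) .* z and elaborates to
    --
    --   (Multiplicative a b t, Multiplicative t c d) => d
    --
    -- The intermediate type t never appears in the result, so no use of
    -- the expression can determine it: the ambiguity is built in.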

-- 
 __("  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTPCZA
QRCZAK





Re: Revamping the numeric classes

2001-02-07 Thread Dylan Thurston

Other people have been making great points for me.  (I particularly
liked the example of Dollars as a type with addition but not
multiplication.)  One point that has not been made: given a class
setup like

class Additive a where
  (+) :: a -> a -> a
  (-) :: a -> a -> a
  negate :: a -> a
  zero :: a

class Multiplicative a where
  (*) :: a -> a -> a
  one :: a

class (Additive a, Multiplicative a) => Num a where
  fromInteger :: Integer -> a

then naive users can continue to use (Num a) in contexts, and the same
programs will continue to work.[1]
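
A self-contained version of that claim, as a sketch (the Prelude's own
names are hidden so the new classes can reuse them; the Integer
instances just delegate to the Prelude):

    module SplitNum where

    import Prelude hiding (Num, (+), (-), (*), negate, fromInteger)
    import qualified Prelude as P

    infixl 6 +, -
    infixl 7 *

    class Additive a where
      (+), (-) :: a -> a -> a
      negate   :: a -> a
      zero     :: a

    class Multiplicative a where
      (*) :: a -> a -> a
      one :: a

    class (Additive a, Multiplicative a) => Num a where
      fromInteger :: Integer -> a

    instance Additive Integer where
      (+) = (P.+)
      (-) = (P.-)
      negate = P.negate
      zero = 0

    instance Multiplicative Integer where
      (*) = (P.*)
      one = 1

    instance Num Integer where
      fromInteger = id

    -- Code written against the old interface keeps its old context:
    double :: Num a => a -> a
    double x = x + x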

(A question in the above context is whether the literal '0' should be
interpreted as 'fromInteger (0::Integer)' or as 'zero'.  Opinions?)

On Wed, Feb 07, 2001 at 06:27:02PM +1300, Brian Boutel wrote:
 * Haskell equality is a defined operation, not a primitive, and may not
 be decidable. It does not always define equivalence classes, because
 a==a may be Bottom, so what's the problem? It would be a problem,
 though, to have to explain to a beginner why they can't print the result
 of a computation.

Why doesn't your argument show that all types should by instances of
Eq and Show?  Why are numeric types special?

Best,
Dylan Thurston

Footnotes: 
[1]  Except for the lack of abs and signum, which should be in some
other class.  I have to think about their semantics before I can say
where they belong.






Re: Revamping the numeric classes

2001-02-07 Thread andrew

On Wed, Feb 07, 2001 at 11:47:11AM +0100, Ketil Malde wrote:
 "Ch. A. Herrmann" [EMAIL PROTECTED] writes:
[...]
  the problem is that the --majority, I suppose?-- of mathematicians
  tend to overload operators. They use "*" for matrix-matrix
  multiplication as well as for matrix-vector multiplication etc.
 Yes, obviously.  On the other hand, I think you could get far by
 defining (+) as an operator in a Group, (*) in a Ring, and so forth.

As a complete newbie can I add a few points?  They may be misguided,
but they may also help identify what appears obvious only through
use...

- understanding the hierarchy of classes (i.e. constantly referring to
Fig 5 in the report) takes a fair amount of effort.  It would have
been much clearer for me to have classes that simply listed the
required super classes (as suggested in an earlier post).

- even for me, no great mathematician, I found the forced inclusion of
certain classes irritating (in my case - effectively implementing
arithmetic on tuples - Enum made little sense and ordering is hacked
in order to be total; why do I need to define either to overload "+"?)

- what's the deal with fmap and map?

 Another problem is that the mathematical constructs include properties
 not easily encoded in Haskell, like commutativity, associativity, etc.
 
  I don't think that it is acceptable for a language like Haskell
  to permit the user to overload predefined operators, like "*".

Do you mean that the numeric classes should be dropped or are you
talking about some other overloading procedure?

Isn't one popular use of Haskell to define/extend it to support small
domain-specific languages?  In those cases, overloading operators via
the class mechanism is very useful - you can give the user concise,
but still understandable, syntax for the problem domain.

I can see that overloading operators is not good in general purpose
libraries, unless carefully controlled, but that doesn't mean it is
always bad, or should always be strictly controlled.  Maybe the
programmer could decide what is appropriate, faced with a particular
problem, rather than a language designer, from more general
considerations?  Balance, as ever, is the key :-)

[...]
 From experience, I guess there are probably issues that haven't
 crossed my mind.   :-)

This is certainly true in my case - I presumed there was some deep
reason for the complex hierarchy that exists at the moment.  It was a
surprise to see it questioned here.

Sorry if I've used the wrong terminology anywhere.  Hope the above
makes some sense.

Andrew

-- 
http://www.andrewcooke.free-online.co.uk/index.html






RE: Revamping the numeric classes

2001-02-07 Thread Peter Douglass

 I have some questions about how Haskell's numeric classes might be
revamped.

 Is it possible in Haskell to circumscribe the availability of certain
"unsafe" numeric operations such as div, /, mod?  If this is not possible
already, could a compiler flag "-noUnsafeDivide" perhaps be added to
impose such a restriction?

 What I have in mind is to make division by zero an untypable
expression.  The idea is to require div, /, mod to take NonZeroNumeric
values in their second argument.  NonZeroNumeric values could be created by
functions of type:
  Number a => a -> Maybe NonZeroNumeric
or something similar.
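
One way to spell the idea out (a sketch; every name below is made up,
and note that it moves the zero check to the caller rather than
removing it):

    newtype NonZero a = NonZero a

    checkZero :: (Eq a, Num a) => a -> Maybe (NonZero a)
    checkZero 0 = Nothing
    checkZero x = Just (NonZero x)

    -- Division can only be given a denominator that passed the check.
    safeDiv :: Fractional a => a -> NonZero a -> a
    safeDiv x (NonZero y) = x / y

    -- The caller still has to decide what Nothing means:
    average :: [Double] -> Maybe Double
    average xs =
      fmap (safeDiv (sum xs)) (checkZero (fromIntegral (length xs)))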

  Has this been tried and failed?  I'm curious as to what problems there
might be with such an approach.

--PeterD  




Re: Revamping the numeric classes

2001-02-07 Thread Tom Pledger

Dylan Thurston writes:
 :
 | (A question in the above context is whether the literal '0' should
 | be interpreted as 'fromInteger (0::Integer)' or as 'zero'.
 | Opinions?)

Opinions?  Be careful what you wish for.  ;-)

In a similar discussion last year, I was making wistful noises about
subtyping, and one of Marcin's questions

http://www.mail-archive.com/haskell-cafe@haskell.org/msg00125.html

was whether the numeric literal 10 should have type Int8 (2's
complement octet) or Word8 (unsigned octet).  At the time I couldn't
give a wholly satisfactory answer.  Since then I've read the oft-cited
paper "On Understanding Types, Data Abstraction, and Polymorphism"
(Cardelli & Wegner, ACM Computing Surveys, Dec 1985), which suggests a
nice answer: give the numeric literal 10 the range type 10..10, which
is defined implicitly and is a subtype of both -128..127 (Int8) and
0..255 (Word8).

The differences in arithmetic on certain important range types could
be represented by multiple primitive functions (or perhaps foreign
functions, through the FFI):

primAdd   :: Integer -> Integer -> Integer  -- arbitrary precision
primAdd8s :: Int8    -> Int8    -> Int8     -- overflow at -129, 128
primAdd8u :: Word8   -> Word8   -> Word8    -- overflow at -1, 256
-- etc.

instance Additive Integer where
zero = 0
(+)  = primAdd

...with similar instances for the integer subrange types which may
overflow.  These other instances would belong outside the standard
Prelude, so that the ambiguity questions don't trouble people (such as
beginners) who don't care about the space and time advantages of fixed
precision integers.

Subtyping offers an alternative approach to handling arithmetic
overflows:
  - Use only arbitrary precision arithmetic.
  - When a calculated result *really* needs to be packed into a fixed
    precision format, project it (or treat it down, etc., whatever your
    preferred name is), so that overflows are represented as Nothing,
    as sketched below.
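
A minimal sketch of that projection for Int8 (the function name is
mine):

    import Data.Int (Int8)

    projectInt8 :: Integer -> Maybe Int8
    projectInt8 n
      | n >= (-128) && n <= 127 = Just (fromInteger n)
      | otherwise               = Nothing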

For references to other uses of  class Subtype  see:

http://www.mail-archive.com/haskell@haskell.org/msg07303.html

For a reference to some unification-driven rewrites, see:

http://www.mail-archive.com/haskell@haskell.org/msg07327.html

Marcin 'Qrczak' Kowalczyk writes:
 :
 | Assuming that Ints can be implicitly converted to Doubles, is the
 | function
 | f :: Int -> Int -> Double -> Double
 | f x y z = x + y + z
 | ambiguous? Because there are two interpretations:
 | f x y z = realToFrac x + realToFrac y + z
 | f x y z = realToFrac (x + y) + z
 | 
 | Making this and similar case ambiguous means inserting lots of explicit
 | type signatures to disambiguate subexpressions.
 | 
 | Again, arbitrarily choosing one of the alternatives basing on some
 | set of weighting rules is dangerous,

I don't think the following disambiguation is too arbitrary:

x + y + z                  -- as above

--> (x + y) + z            -- left-associativity of (+)

--> realToFrac (x + y) + z -- injection (or treating up) done
                           -- conservatively, i.e. only where needed

Regards,
Tom
