Re: A sample revised prelude for numeric classes

2001-02-12 Thread Bjorn Lisper

Tom Pledger:
Brian Boutel writes:
 :
 | Having Units as types, with the idea of preventing adding Apples to
 | Oranges, or Dollars to Roubles, is a venerable idea, but is not in
 | widespread use in actual programming languages. Why not?

There was a pointer to some good papers on this in a previous
discussion of units and dimensions:

http://www.mail-archive.com/haskell@haskell.org/msg04490.html

The main complication is that the type system needs to deal with
integer exponents of dimensions, if it's to do the job well.

Andrew Kennedy has basically solved this for higher-order languages with HM
type inference: he extended the ML type system with dimensional analysis a
couple of years back. Sorry, I don't have the references at hand, but he had
a paper in ESOP, I think.
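
To give the flavour of the simple Apples-vs-Oranges case, a phantom type
parameter on the quantity already goes a long way. This is only an
illustrative sketch (the names are made up, and it does not handle products
or integer exponents of dimensions, which is exactly the part Kennedy's
inference solves):

data Apple  = Apple
data Orange = Orange
newtype Quantity unit = Quantity Double deriving (Eq, Show)

-- Addition is only defined between quantities carrying the same unit tag.
add :: Quantity u -> Quantity u -> Quantity u
add (Quantity x) (Quantity y) = Quantity (x + y)

-- add (Quantity 1 :: Quantity Apple) (Quantity 2 :: Quantity Orange)
-- is rejected by the type checker; same-unit sums are accepted.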

I think the real place for dimension and unit inference is in modelling
languages, where you can specify physical systems through differential
equations and simulate them numerically. Such languages are being
increasingly used in the "real world" now. 

It would be quite interesting to have a version of Haskell that would allow
the specification of differential equations, so one could make use of all
the good features of Haskell for this. This would allow the unified
specification of systems that consist both of physical and computational
components. This niche is now being filled by a mix of special-purpose
modeling languages like Modelica and Matlab/Simulink for the physical part,
and SDL and UML for control parts. The result is likely to be a mess, in
particular when these specifications are to be combined into full system
descriptions.

Björn Lisper




Dimensions of the World (was: A sample revised prelude)

2001-02-12 Thread Jerzy Karczmarczuk

Ashley Yakeley after Tom Pledger:
 
 The main complication is that the type system needs to deal with
 integer exponents of dimensions, if it's to do the job well.
 
 Very occasionally non-integer or 'fractal' exponents of dimensions are
 useful. For instance, geographic coastlines can be measured in km ^ n,
 where 1 <= n < 2. This doesn't stop the CIA world factbook listing all
 coastline lengths in straight kilometres, however.
 
 More unit weirdness occurs with logarithms. For instance, if y and x are
 distances, log (y/x) = log y - log x. Note that 'log x' is some number +
 log (metre). Strange, huh?

When, a week ago, I mentioned those dollars which are difficult to multiply
(although some people spend their lives doing it...), and some dimensional
quantities which should have focused people's attention on the differences
between (*) and (+), I never thought the discussion would go so far.

Dimensional quantities *are* a can of worms.
From the practical point of view they are very useful for avoiding silly
programming errors. I have applied them several times while coding some
computer algebra expressions. The dimensions were "just symbols", but with
"reasonable" mathematical properties (concerning (*) and (/)), so factorizing
out this symbolic part was an easy way to check whether I had produced some
illegal combinations.

Sometimes they are really "dimensionless" scaling factors! In TeX/MetaFont
the units such as mm, cm, in etc. exist and function very nicely as
conversion factors.
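
A tiny sketch of that usage, with the standard TeX conversions (1 in =
72.27 pt, 1 in = 2.54 cm); the names here are of course ad hoc:

pt, inch, cm, mm :: Double
pt   = 1              -- measure everything in points
inch = 72.27 * pt
cm   = inch / 2.54
mm   = cm / 10

lineWidth :: Double
lineWidth = 120 * mm  -- reads like "120 mm", but it is just a scaled Double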

W.L.I.III asks:

 If you (or anyone else) could comment on what sorts of units would be
 appropriate for the result type of a logarithm operation, I'd be glad to
 hear it. I don't know what the result type of this example is supposed
 to be if the units of a number are encoded in the type.

Actually, the logarithm example would be considered spurious by almost
all "practical" mathematicians (e.g., physicists). A formula is sane if
the argument of the logarithm is dimensionless (if in x/y both elements
share the same dimension). Then adding and subtracting the same
log(GHmSmurf) is irrelevant.

==

But in general mathematical physics (and in geometry, which encompasses the
major part of the former) there are some delicate issues, which sometimes
involve fractality, and sometimes the necessity of "religious acts", such
as the renormalization schemes in Quantum Field Theory.
In this case we have the "dimensional transmutation" phenomenon: the gluon
coupling constant, which is dimensionless, acquires a dimension, and
conditions the hadronic mass scale, i.e. the masses of elementary particles.
[[[Yes, I know, you serious comp. scientists won't bother about it, but I
will try anyway to tell you in two words why. A way of making a singular
theory finite is to put it on a discrete lattice which represents the
physical space. There is a dimensional object here: the lattice constant.
Then you go to zero with it, in order to retrieve the physical space-time.
When you reach this zero, you lose this constant, and this is one of the
reasons why the theory explodes. So, it must be introduced elsewhere... In
other words: a physical correlation length L between objects is finite. If
the lattice constant c is finite, L = N*c. But if c goes to zero... Now,
programming all this, Haskell or not, is another issue.]]]

==

Fractals are seen not only in geography but everywhere, as Mandelbrot and
his followers duly recognized. You will need them doing computations in
colloid physics, in galaxy statistics, and in the metabolism of the human
body [[if you think that your energy expenditure is proportional to your
volume, you are dead wrong; most interesting processes take place within
membranes. You are much flatter than you think, folks, ladies included.]].

Actually, ALL THIS was one of the major driving forces behind my interest in
functional programming. I found an approach to programming which did not
target "symbolic manipulations" but "normal computing", so it could compete
practically with Fortran etc. Yet it had the potential to deal in a serious,
formal manner with the mathematical properties of the manipulated objects.

That's why I suffer seeing random, ad hoc numerics.

Björn Lisper mentions some approach to dimensions:

 Andrew Kennedy has basically solved this for higher order languages 
 with HM type inference. He made an extension of the ML type system 
 with dimensional analysis a couple of years back. Sorry I don't have 
 the references at hand but he had a paper in ESOP I think.
 
 I think the real place for dimension and unit inference is in modelling
 languages, where you can specify physical systems through differential
 equations and simulate them numerically. Such languages are being
 increasingly used in the "real world" now. 

Andrew Kennedy. Dimension Types. ESOP '94, pp. 348-362.
There are other articles:
Jean Goubault. Inférence d'unités physiques en ML;
Mitchell Wand and Patrick O'Keefe. Automatic Dimensional Inference;
and

Re: A sample revised prelude for numeric classes

2001-02-12 Thread Ketil Malde

[EMAIL PROTECTED] (Marcin 'Qrczak' Kowalczyk) writes:

 Why do you stop at allowing addition on Dollars and not include
 multiplication by a scalar?

 Perhaps because there is no good universal type for (*).
 Sorry, it would have to have a different symbol.

Is this ubiquitous enough that we should have a *standardized*
different symbol?   Any candidates?

 Having Units as types, with the idea of preventing adding Apples to
 Oranges, or Dollars to Roubles, is a venerable idea, but is not in
 widespread use in actual programming languages. Why not?

 It does not scale to more general cases. (m/s) / (s) = (m/s^2),
 so (/) would have to have the type (...) => a -> b -> c, which is not
 generally usable because of ambiguities. Haskell's classes are not
 powerful enough to define full algebra of units.

While it may not be in the language, nothing's stopping you from - and
some will probably encourage you to - implementing e.g. financial
libraries with different data types for different currencies.

Which I think is a better way to handle it, since whether you want m to
be divisible by s is rather application dependent.
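
A minimal sketch of that style (class and type names are made up here, with
no claim about how a real financial library would slice it):

newtype Dollar = Dollar Double deriving (Eq, Show)
newtype Rouble = Rouble Double deriving (Eq, Show)

class Additive a where
  (+.) :: a -> a -> a

instance Additive Dollar where Dollar x +. Dollar y = Dollar (x + y)
instance Additive Rouble where Rouble x +. Rouble y = Rouble (x + y)

-- Dollar 1 +. Rouble 2 is a type error; scaling by a plain Double
-- (the application-dependent part) can then be added per currency.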

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants




Re: A sample revised prelude for numeric classes

2001-02-12 Thread Jon Fairbairn

On 12 Feb 2001, Ketil Malde wrote:

 [EMAIL PROTECTED] (Marcin 'Qrczak' Kowalczyk) writes:
 
  Why do you stop at allowing addition on Dollars and not include
  multiplication by a scalar?
 
  Perhaps because there is no good universal type for (*).
  Sorry, it would have to have a different symbol.
 
 Is this ubiquitous enough that we should have a *standardized*
 different symbol?  

I'd think so.

 Any candidates?

.* *. [and .*.] ?

where the "." is on the side of the scalar

-- 
Jón Fairbairn [EMAIL PROTECTED]
31  Chalmers Road[EMAIL PROTECTED]
Cambridge CB1 3SZ  +44 1223 570179 (pm only, please)





Re: Scalable and Continuous

2001-02-12 Thread Marcin 'Qrczak' Kowalczyk

On Mon, 12 Feb 2001, Ashley Yakeley wrote:

 class (Additive a) => Scalable a where
  scale :: Real -> a -> a -- equivalent to * (not sure of name for Real type)

Or times, which would require multiparameter classes.
5 `times` "--" == "----------"
5 `times` (\x -> x+1) === (\x -> x+5)
But this would suggest separating out Monoid from Additive - ugh. It makes
sense to have zero and (+) for lists and functions a->a, but not negation.
There is a class Monoid for ghc's nonstandard MonadWriter class. We would
have (++) unified with (+) and concat unified with sum.
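
For what it's worth, a small sketch of that Monoid reading of `times`,
fixing the count to Int to sidestep the multiparameter issue (class and
operator names here are ad hoc, not ghc's):

{-# LANGUAGE FlexibleInstances #-}

class Monoid' a where
  unit  :: a
  (+++) :: a -> a -> a

instance Monoid' [x] where
  unit  = []
  (+++) = (++)

instance Monoid' (a -> a) where   -- endomorphisms, under composition
  unit  = id
  (+++) = (.)

times :: Monoid' a => Int -> a -> a
times n x = foldr (+++) unit (replicate n x)

-- times 5 "--"          == "----------"
-- times 5 (\x -> x+1) 0 == 5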

I'm afraid of making too many small classes. But it would perhaps be not
so bad if one could define a superclass's methods in subclasses, so that one
could forget about the exact structure of the classes and treat a bunch of
classes as a single class if one wishes. It would have to be combined with
compiler-inferred warnings about mutual definitions giving bottoms.

-- 
Marcin 'Qrczak' Kowalczyk





Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread William Lee Irwin III

In a later posting Marcin Kowalczyk says:
 If (+) can be implicitly lifted to functions, then why not signum?
 Note that I would lift neither signum nor (+). I don't feel the need.
  ...

On Mon, Feb 12, 2001 at 09:33:03AM +, Jerzy Karczmarczuk wrote:
 I not only feel the need, but I feel that this is important that the
 additive structure in the codomain is inherited by functions. In a more
 specific context: the fact that linear functionals over a vector space
 form also a vector space, is simply *fundamental* for the quantum 
 mechanics, for the crystallography, etc. You don't need to be a Royal
 Abstractor to see this. 

I see this in a somewhat different light, though I'm in general agreement.

What I'd like to do is to be able to effectively model module structures
in the type system, and furthermore be able to simultaneously impose
distinct module structures on a particular type. For instance, complex
n-vectors are simultaneously C-modules and R-modules, and an arbitrary
commutative ring R is at once a Z-module and an R-module. Linear
functionals, which seem like common beasts (try a partially applied
inner product) live in the mathematical structure Hom_R(M,R) which is once
again an R-module, and, perhaps, by inheriting structure on R, an R'
module from various R'. So how does this affect Prelude design? Examining
a small bit of code could be helpful:

-- The group must be Abelian. I suppose anyone could think of this.
class (AdditiveGroup g, Ring r) => LeftModule g r where
    (<>) :: r -> g -> g

instance AdditiveGroup g => LeftModule g Integer where
    n <> x | n == 0 = zero
           | n < 0  = negate ((-n) <> x)
           | n > 0  = x + (n-1) <> x

... and we naturally acquire the sort of structure we're looking for.
But this only shows a possible outcome, and doesn't motivate the
implementation. What _will_ motivate the implementation is the sort
of impact this has on various sorts of code:

(1) The fact that R is an AdditiveGroup immediately makes it a
Z-module, so we have mixed-mode arithmetic by a different
means from the usual implicit coercion.

(2) This sort of business handles vectors quite handily.

(3) The following tidbit of code immediately handles curried innerprods:

instance (AdditiveGroup group, Ring ring) => LeftModule (group -> ring) ring
    where
    r <> g = \g' -> r <> g g'

(4) Why would we want to curry innerprods? I envision:

type SurfaceAPoles foo = SomeGraph (SomeVector foo)

and then

surface :: SurfaceAPoles bar
innerprod v `fmap` normalsOf faces where faces = facesOf surface

(5) Why would we want to do arithmetic on these beasts now that
we think we might need them at all?

If we're doing things like determining the light reflected off of the
various surfaces we will want to scale and add together the various
beasties. Deferring the innerprod operation so we can do this is inelegant
and perhaps inflexible compared to:

lightSources :: [(SomeVector foo -> Intensity foo, Position)]
lightSources = getLightSources boundingSomething
reflection = sum $ map (\(f,p) -> getSourceWeight p * f) lightSources
reflection `fmap` normalsOf faces where faces = facesOf surface

and now in the lightSources perhaps ambient light can be represented
very conveniently, or at least the function type serves to abstract out
the manner in which the orientation of a surface determines the amount
of light reflected off it.

(My apologies for whatever inaccuracies are happening with the optics
here, it's quite far removed from my direct experience.)

Furthermore, within things like small interpreters, it is perhaps
convenient to represent the semantic values of various expressions by
function types. If one should care to define arithmetic on vectors and
vector functions in the interpreted language, support in the source
language allows a more direct approach. This would arise within solid
modelling and graphics once again, as little languages are often used
to describe objects, images, and the like.

How can we anticipate all the possible usages of pretty-looking vector
and matrix algebra? I suspect graphics isn't the only place where
linear algebra could arise. All sorts of differential equation models
of physical phenomena, Markov models of state transition systems, even
economic models at some point require linear algebra in their
computational methods.  It's something I at least regard as a fairly
fundamental and important aspect of computation. And to me, that means
that the full power of the language should be applied toward beautifying,
simplifying, and otherwise enabling linear algebraic computations.


Cheers,
Bill
P.S.:   Please forgive the harangue-like nature of the post, it's the best
I could do at 3AM.




Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread Marcin 'Qrczak' Kowalczyk

On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:

 I not only feel the need, but I feel that this is important that the
 additive structure in the codomain is inherited by functions.

It could support only the basic arithmetic. It would not automatically
lift an expression which uses (<) and if. It would be inconsistent to
provide a shortcut for a specific case, where generally it must be
explicitly lifted anyway. Note that it does make sense to lift (<) and if,
only the type system does not permit it implicitly because a type is fixed
to Bool.

Lifting is so easy to do manually that I would definitely not constrain
the whole Prelude class system only to have convenient lifting of basic
arithmetic. When it happens that an instance of an otherwise sane class
for functions makes sense, then OK, but nothing more.

-- 
Marcin 'Qrczak' Kowalczyk





Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread Marcin 'Qrczak' Kowalczyk

On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:

 I want to be *able* to define mathematical operations upon objects
 which by their intrinsic nature permit so!

You can't do it in Haskell as it stands now, no matter what the Prelude
would be.

For example I would say that with the definition
abs x = if x >= 0 then x else -x
it's obvious how to obtain abs :: ([Int]->Int) -> ([Int]->Int): apply the
definition pointwise.

But it will never work in Haskell, unless we changed the type rules for if
and the type of the result of (>=).

You are asking for letting
abs x = max x (-x)
work on functions. OK, in this particular case it can be made to work by
making appropriate instances, but it's because this is a special case
where all intermediate types are appropriately polymorphic.

This technique cannot work in general, as the previous example shows. So
IMHO it's better not to try to pretend that functions can be implicitly
lifted. Better to provide as convenient a way as possible of manually
lifting arbitrary functions, so it doesn't matter whether they have Integer
fixed in the result or not.
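
By way of illustration, that manual lifting can at least be packaged up once
and for all (a sketch; the names lift1 and lift2 are arbitrary):

lift1 :: (b -> c) -> (a -> b) -> (a -> c)
lift1 = (.)

lift2 :: (b -> c -> d) -> (a -> b) -> (a -> c) -> (a -> d)
lift2 op f g = \x -> f x `op` g x

-- The abs from above, lifted explicitly rather than implicitly:
absF :: Num b => (a -> b) -> (a -> b)
absF = lift1 abs

-- and e.g.  lift2 max f (lift1 negate f)  ==  \x -> max (f x) (negate (f x))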

You are asking for an impossible thing.

 I defined a hundred times some special functions to add lists or
 records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes
 (.*), I have in principle nothing against, but this operator is used
 elsewhere, in other languages, CAML and Matlab; I use (*)).

Please show a concrete proposal how Prelude classes could be improved.

-- 
Marcin 'Qrczak' Kowalczyk





Re: A sample revised prelude for numeric classes

2001-02-12 Thread Marcin 'Qrczak' Kowalczyk

Mon, 12 Feb 2001 12:04:39 +0100 (CET), Marcin 'Qrczak' Kowalczyk 
[EMAIL PROTECTED] writes:

 This is my bet.

I changed my mind:

class Eq a => PartialOrd a where -- or Ord
    (<), (>), (<=), (>=) :: a -> a -> Bool
    -- Minimal definition: (<) or (<=).
    -- For partial order (<=) is required.
    -- For total order (<) is recommended for efficiency.
    a <  b = a <= b && a /= b
    a >  b = b < a
    a <= b = not (b < a)
    a >= b = b <= a
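
As an illustration of the genuinely partial case, a made-up instance
(assuming the class above is compiled with the standard Ord methods hidden):

-- Divisibility on positive integers is a partial order: Divides 2 <= Divides 6,
-- but Divides 4 and Divides 6 are incomparable (neither <= holds).
newtype Divides = Divides Integer deriving (Eq, Show)

instance PartialOrd Divides where
  Divides a <= Divides b = b `mod` a == 0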

-- 
 __("<  Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/
  ^^  SYGNATURA ZASTĘPCZA
QRCZAK





Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread Jerzy Karczmarczuk

Marcin Kowalczyk continues:

 On Mon, 12 Feb 2001, Jerzy Karczmarczuk wrote:
 
  I want to be *able* to define mathematical operations upon objects
  which by their intrinsic nature permit so!
 
 You can't do it in Haskell as it stands now, no matter what the Prelude
 would be.
 
 For example I would say that with the definition
 abs x = if x >= 0 then x else -x
 it's obvious how to obtain abs :: ([Int]->Int) -> ([Int]->Int): apply the
 definition pointwise.
 
 But it will never work in Haskell, unless we changed the type rules for if
 and the type of the result of (>=).
 
 You are asking for letting
 abs x = max x (-x)
 work on functions. OK, in this particular case it can be made to work 
 

Why don't you try, from time to time, to understand what other people
want? And wait, say, 2 hours before responding?

I DON'T WANT max TO WORK ON FUNCTIONS. I never did. I will soon need it
(because I am writing a graphical package where max serves to intersect
implicit graphical objects), but only for very specific functions which
represent textures, NOT in general.

I repeat for the last time that I want to have those operations which
are *implied* by the mathematical properties. And anyway, if you replace
x >= 0 by x >= zero with an appropriate zero, this should work as well.
I only want the Prelude to avoid spurious dependencies.

This is the way I program in Clean, where there is no Num, and (+), (*),
zero, abs, etc. constitute classes by themselves. So, when you say:

 You are asking for an impossible thing.

My impression is that what is impossible is your way of interpreting/
understanding the statements (and/or desiderata) of other people.

  I defined a hundred times some special functions to add lists or
  records, to multiply a tree by a scalar (btw.: Jón Fairbairn proposes
  (.*), I have in principle nothing against, but this operator is used
  elsewhere, in other languages, CAML and Matlab; I use (*)).
 
 Please show a concrete proposal how Prelude classes could be improved.

(Why do you precede your query by this citation? What do you have to say
here about the syntax proposed by Jón Fairbairn, or whatever??)

I am a Haskell USER. I have no ambition to save the world. The "proposal"
was presented in 1995 in Nijmegen (FP in Education). Actually, it wasn't:
I concentrated on lazy power series etc., and the math-oriented prelude
was only mentioned casually. Jeroen Fokker presented similar ideas,
implemented differently.
If you have nothing else to do (but only in this case!) you may find 
the modified prelude called math.hs for Hugs (which needs a modified 
prelude.hs exporting primitives) in 

http://users.info.unicaen.fr/~karczma/humat/

This is NOT a "public proposal" and I *don't want* your public comments
on it. If you want to be nice, show me some of *your* Haskell programs.

Jerzy Karczmarczuk
Caen, France




Clean numeric system?

2001-02-12 Thread Dylan Thurston

On Mon, Feb 12, 2001 at 04:40:06PM +, Jerzy Karczmarczuk wrote:
 This is the way I program in Clean, where there is no Num, and (+), (*),
 zero, abs, etc. constitute classes by themselves. ...

I've heard Clean mentioned before in this context, but I haven't found
the Clean numeric class system described yet.  Can you send me a
pointer to their class system, or just give me a description?

Does each operation really have its own class?  That seems slightly
silly.  Are the (/) and 'recip' equivalents independent, and
independent of (*) as well?

Best,
Dylan Thurston




Re: A sample revised prelude for numeric classes

2001-02-12 Thread Dylan Thurston

On Mon, Feb 12, 2001 at 07:24:31AM +, Marcin 'Qrczak' Kowalczyk wrote:
 Sun, 11 Feb 2001 22:27:53 -0500, Dylan Thurston [EMAIL PROTECTED] writes:
  Reading this, it occurred to me that you could explicitly declare an
  instance of Powerful Integer Integer and have everything else work.
 No, because it overlaps with Powerful a Integer (the constraint on a
 doesn't matter for determining if it overlaps).

Point.  Thanks.  Slightly annoying.

   Then the second argument of (^) is always arbitrary RealIntegral,
  
  Nit: the second argument should be an Integer, not an arbitrary
  RealIntegral.
 
 Of course not. (2 :: Integer) ^ (i :: Int) makes perfect sense.

But for arbitrary RealIntegrals it need not make sense.

Please do not assume that
  toInteger :: RealIntegral a => a -> Integer
  toInteger n | n < 0 = negate (toInteger (negate n))
  toInteger 0 = 0
  toInteger n | n > 0 = 1 + toInteger (n-1)
(or the more efficient version using 'even') terminates (in principle)
for all RealIntegrals, at least with the definition as it stands in my
proposal.  Possibly toInteger should be added; then (^) could have the
type you suggest.  For usability issues, I suppose it should.  (E.g.,
users will want to use Int ^ Int.)

OK, I'm convinced of the necessity of toInteger (or an equivalent).
I'll fit it in.

Best,
Dylan Thurston




Re: A sample revised prelude for numeric classes

2001-02-12 Thread Dylan Thurston

On Sun, Feb 11, 2001 at 09:17:53PM -0800, William Lee Irwin III wrote:
 Consider taking of the residue of a truly infinite member of Z[[x]]
 mod an ideal generated by a polynomial, e.g. 1/(1-x) mod (1+x^2).
 You can take the residue of each term of 1/(1-x), so x^(2n) -> (-1)^n
 and x^(2n+1) -> (-1)^n x, but you end up with an infinite number of
 (nonzero!) residues to add up and hence encounter the troubles with
 processes not being finite that I mentioned.

Sorry, isn't (1+x^2) invertible in Z[[x]]?
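
It is: any series whose constant term is a unit of Z (i.e. 1 or -1) is
invertible in Z[[x]]. A small lazy-list sketch of the usual computation,
with ad hoc names and coefficients listed from degree 0 upwards:

-- A power series in Z[[x]] as its list of coefficients, lowest degree first.
type Series = [Integer]

-- Invert a series with constant term 1:  (1 + x*f) * g = 1
-- gives  g = 1 - x*(f*g),  which a lazy list can compute directly.
invert :: Series -> Series
invert (1:f) = g where g = 1 : map negate (f `mul` g)
invert _     = error "invert: this sketch only handles constant term 1"

-- Series multiplication (either argument may be infinite).
mul :: Series -> Series -> Series
mul []     _  = []
mul (a:as) bs = zipAdd (map (a *) bs) (0 : mul as bs)
  where zipAdd (x:xs) (y:ys) = x + y : zipAdd xs ys
        zipAdd xs     []     = xs
        zipAdd []     ys     = ys

-- take 8 (invert (1 : 0 : 1 : repeat 0)) == [1,0,-1,0,1,0,-1,0],
-- i.e. 1/(1+x^2) = 1 - x^2 + x^4 - x^6 + ... in Z[[x]].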

 I think it's nice to have the Cauchy principal value versions of things
 floating around.  I know at least that I've had call for using the CPV
 of exponentiation (and it's not hard to contrive an implementation),
 but I'm almost definitely an atypical user. (Note, (**) does this today.)

Does Cauchy Principal Value have a specific definition I should know?
The Haskell report refers to the APL language report; do you mean that
definition?

For the Complex class, that should be the choice.

 I neglected here to add in the assumption that (<=) was a total relation,
 I had in mind antisymmetry of (<=) in posets so that element isomorphism
 implies equality. Introducing a Poset class where elements may be
 incomparable appears to butt against some of the bits where Bool is
 hardwired into the language, at least where one might attempt to use a
 trinary logical type in place of Bool to denote the result of an
 attempted comparison.

I'm still agnostic on the Poset issue, but as an aside, let me mention
that "Maybe Bool" works very well as a trinary logical type.  "liftM2
" does the correct trinary and, for instance.

 On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
  But to define <= in terms of meet and join you already need Eq!
  
x <= y === meet x y == y
 
 I don't usually see this definition of (<=), and it doesn't seem like
 the natural way to go about defining it on most machines. The notion
 of the partial (possibly total) ordering (<=) seems to be logically
 prior to that of the meet to me. The containment usually goes:

It may be logically prior, but computationally it's not...  Note that
the axioms for lattices can be stated either in terms of the partial
ordering, or in terms of meet and join.

(In a completely fine-grained ordering hierarchy, I would have the
equation I gave above as a default definition for <=, with the
expectation that most users would want to override it.  Compare my
fromInteger default definition.)

Best,
Dylan Thurston




Deja vu: Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread Laszlo Nemeth


[incomprehensible (not necessarily wrong!) stuff about polynomials,
 rings, modules over Z and complaints about the current prelude nuked]

--- Marcin 'Qrczak' Kowalczyk pisze ---

 Please show a concrete proposal how Prelude classes could be improved.

--- Jerzy Karczmarczuk repondre ---

 I am Haskell USER. I have no ambition to save the world. The "proposal"
 has been presented in 1995 in Nijmegen (FP in education). Actually, it
 hasn't, I concentrated on lazy power series etc., and the math oriented
 prelude has been mentioned casually. Jeroen Fokker presented similar
 ideas, implemented differently. 

I'm afraid all this discussion reminds me of the one we had a year or two
ago. At that time the mathematically inclined side was led by Sergei,
who to his credit developed the Basic Algebra Proposal, which I don't
understand, but which many people seemed to be happy about at the time. And
then of course nothing happened, because no Haskell implementor has
bitten the bullet and implemented the proposal. This is understandable,
as supporting Sergei's proposal seems to be a lot of work, most of which
would be incompatible with current implementations. And no one wants to
maintain *two* Haskell compilers within one.

Even if this discussion continues and another brave soul develops
another algebra proposal, I am prepared to bet both of you a year's
supply of Ben and Jerry's (not Jerzy's :)!) ice cream that nothing
will happen on the implementors' side. It is simply too much work for
an *untested* (in practice, for teaching, etc.) alternative prelude.

So instead of wasting time, why don't you guys ask the implementors to
provide a flag '-IDontWantYourStinkingPrelude' which would give you a
bare metal compiler with no predefined types, functions, classes, no
derived instances, no fancy stuff and build and test your proposals
with it?

I guess the RULES pragma (in GHC) could be abused to allow access to
the primitive operations (on Ints), but you are still likely to lose
much of the elegance, conciseness and perhaps even some efficiency of
Haskell (e.g. list comprehensions). Still, this should allow us to gain
experience in what sort of support is essential for providing
alternative prelude(s). Once we have learnt how to decouple the prelude
from the compiler, and gained experience with alternative preludes,
implementors would have no excuse not to provide the possibility
(unless it turns out to be completely impossible or impractical, in
which case we have learnt something genuinely useful).

So, Marcin (as you are one of the GHC implementors), how much work
would it be to disable the disputed Prelude stuff within the compiler,
and what would be lost?

Laszlo

[Disclaimer: Just my 10 wons. This message is not in disagreement or
 agreement with any of the previous messages]




RE: Revamping the numeric HUMAN ATTITUDE

2001-02-12 Thread Simon Peyton-Jones

| I'm seeing a bit of this now, and the error messages GHC spits out
| are hilarious! e.g.
| 
| My brain just exploded.
| I can't handle pattern bindings for 
| existentially-quantified constructors.
| 
| and
| 
| Couldn't match `Bool' against `Bool'
| Expected type: Bool
| Inferred type: Bool
| 

The first of these is defensible, I think.  It's not at all clear 
(to me anyway) what pattern bindings for existentially-quantified 
constructors should mean.  

The second is plain bogus.  GHC should never give a message like
that.  Which version of the compiler are you using?   If you
can send a small example I'll try it on the latest compiler.

Simon

